UMASS BOOK DUPLICATE DETECTION DATASET

Purpose:

This dataset was created to evaluate the effectiveness of the partial duplicate detection framework for scanned books proposed by Yalniz, Can, and Manmatha (2011). If you use the dataset in your own work, please read the copyright notice first and cite the paper below. This dataset is for research purposes only.

- Zeki Yalniz, E. F. Can, and R. Manmatha. Partial Duplicate Detection for Large Book Collections. In Proceedings of CIKM '11, 2011.

IMPORTANT NOTICE:

According to the Project Gutenberg and Internet Archive websites, the books are out of copyright in the United States. This may not be the case in your country, so you are advised to check and to follow the law of your country. If you just want to read the books, you are better off visiting those websites, which offer much nicer reading interfaces. We do not know the specifics of the OCR and preprocessing used.

THIS DATA IS PROVIDED BY THE UNIVERSITY OF MASSACHUSETTS AND OTHER CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DATA, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Dataset characteristics:

The dataset consists of four book collections. The OCR output for each book is contained in its DJVU file. All books were downloaded from the Internet Archive website (www.archive.org). The Project Gutenberg and Internet Archive disclaimers at the front or back of each book should be removed before processing to avoid false matches; one way to do this is sketched below. The dataset does not include any page images.
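
As an illustration, here is a minimal sketch of one way to strip such disclaimer text from a plain-text OCR dump. The "*** START/END OF ... PROJECT GUTENBERG ***" markers are standard in Project Gutenberg e-texts, but whether they survive OCR in these files is an assumption; the patterns and the function name strip_disclaimers are illustrative only and should be adapted to the disclaimer text that actually appears in your files.

    import re

    # Heuristic sketch (Python): keep only the text between the Project
    # Gutenberg START and END markers, if both survive OCR; otherwise
    # return the text unchanged. Adapt the patterns to the disclaimer
    # text in your own files.
    START_RE = re.compile(r"\*+\s*START OF TH(E|IS) PROJECT GUTENBERG", re.I)
    END_RE = re.compile(r"\*+\s*END OF TH(E|IS) PROJECT GUTENBERG", re.I)

    def strip_disclaimers(text):
        lines = text.splitlines()
        start, end = 0, len(lines)
        for i, line in enumerate(lines):
            if START_RE.search(line):
                start = i + 1    # book body begins after the header marker
            elif END_RE.search(line):
                end = i          # book body ends before the footer marker
                break
        return "\n".join(lines[start:end])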

- The train set (English): 151 books, 67 duplicates
- The 1K set (English): 1092 books, 258 duplicates
- The 3K set (French): 2883 books, 483 duplicates
- The partial duplicate set (English): 458 books, 460 duplicates

The Internet Archive identifiers of the books are provided for each set. Each line in a ground truth file indicates one duplicate book pair; a sketch showing one way to load and score these pairs follows the file list below.

- train_set_ground_truth.txt
- 1k_set_ground_truth.txt
- 3k_set_ground_truth.txt
- partial_set_ground_truth.txt
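
For reference, here is a minimal sketch of how a ground truth file might be loaded and used to score a detector's output. It assumes each line contains the two Internet Archive identifiers of a duplicate pair separated by whitespace and that pairs are unordered; both assumptions should be checked against the actual files. The file name my_detector_output.txt and the helper names are hypothetical.

    # Minimal evaluation sketch (Python). Assumes each ground truth line
    # holds two whitespace-separated Internet Archive identifiers; verify
    # this format against the actual files before relying on the scores.
    def load_pairs(path):
        pairs = set()
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2:
                    # Store pairs unordered so (a, b) and (b, a) match.
                    pairs.add(frozenset(parts[:2]))
        return pairs

    def precision_recall(predicted, truth):
        tp = len(predicted & truth)           # correctly detected pairs
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(truth) if truth else 0.0
        return precision, recall

    truth = load_pairs("1k_set_ground_truth.txt")
    predicted = load_pairs("my_detector_output.txt")  # hypothetical output
    print(precision_recall(predicted, truth))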


How to obtain the dataset:

Please contact downloads[at]ciir[dot]cs[dot]umass[dot]edu. A download link will be provided. 

Last updated: June 15, 2012