IDI
@institutional.org
A research center at Harvard working to strengthen society’s connection to knowledge by advancing our access to and understanding of the data that shapes AI.
Register to join the talk virtually: harvard.zoom.us/meeting/regi...
Please join us for a talk with Michele Dolfi & Peter Staar from IBM Research in Zurich to discuss their work on SmolDocling, an “ultra-compact” model for diverse OCR tasks.
September 9, 2025 at 4:06 PM
Reposted by IDI
Cohosted by @institutionaldatainitiative.org and The Berkman Klein Center. harvard.zoom.us/webinar/regi...
For AI to truly benefit society, it must be built on foundations of transparency, fairness, and accountability—starting with the most foundational building block that powers it: data.
June 16, 2025 at 7:48 PM
We hope Institutional Books will be the beginning of a process that makes millions more books accessible to the public for a variety of uses.
We welcome feedback as we continue to expand this dataset, refine its contents, and sharpen our process.
www.institutionaldatainitiative.org/institutiona...
Institutional Books | Institutional Data Initiative
Institutional Books 1.0 is our first release of public domain books. This set was originally digitized through Harvard Library’s participation in the Google Books project.
June 12, 2025 at 9:12 PM
We look forward to growing Institutional Books together with the community. We welcome collaboration from researchers and model makers as we:
- Evaluate the dataset’s impact on model outputs
- Continue to refine our OCR pipelines
View the dataset on Hugging Face: huggingface.co/datasets/ins...
institutional/institutional-books-1.0 · Datasets at Hugging Face
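As a minimal sketch of getting started with the dataset card above, the snippet below streams a few records with the Hugging Face `datasets` library. It assumes a default "train" split; the record fields are inspected at runtime rather than assumed, since the schema isn't spelled out in these posts.

```python
# Minimal sketch of streaming a few records from Institutional Books 1.0.
# Assumes the Hugging Face `datasets` library and a default "train" split;
# field names are inspected at runtime rather than assumed.
from datasets import load_dataset

books = load_dataset(
    "institutional/institutional-books-1.0",
    split="train",
    streaming=True,  # avoid downloading the full corpus up front
)

for i, volume in enumerate(books):
    print(sorted(volume.keys()))  # see which text and metadata fields exist
    if i >= 2:
        break
```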
June 12, 2025 at 9:12 PM
As part of our refinement work, we supplemented the original OCR-extracted text with a post-processed version that uses line detection to reassemble the text according to line type.
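A rough illustration of the idea, not IDI's actual pipeline: each OCR line carries a detected type, and the joiner either continues the running paragraph (rejoining hyphenated word breaks) or starts a new block for headings, footnotes, and fresh paragraphs. The `Line` record and its type labels below are hypothetical.

```python
# Illustrative sketch of line-type-aware reassembly (hypothetical, not IDI's pipeline).
from dataclasses import dataclass

@dataclass
class Line:
    text: str
    kind: str  # hypothetical labels: "paragraph", "heading", "footnote", ...

def reassemble(lines: list[Line]) -> str:
    blocks: list[str] = []
    prev_kind = None
    for line in lines:
        text = line.text.strip()
        if line.kind == "paragraph" and prev_kind == "paragraph" and blocks:
            if blocks[-1].endswith("-"):
                # Rejoin a word hyphenated across a line break.
                blocks[-1] = blocks[-1][:-1] + text
            else:
                blocks[-1] = blocks[-1] + " " + text  # continue the paragraph
        else:
            blocks.append(text)  # headings, footnotes, and new paragraphs start fresh
        prev_kind = line.kind
    return "\n\n".join(blocks)

print(reassemble([
    Line("CHAPTER I.", "heading"),
    Line("It was a dark and stor-", "paragraph"),
    Line("my night in the library.", "paragraph"),
]))
```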
June 12, 2025 at 9:12 PM
We included extensive volume-level metadata with both original and generated components, such as results from text-level language detection.
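As a sketch of how that metadata might be used, the snippet below filters streamed records on a detected-language field. The field name `language_detected` and the language codes are assumptions for illustration; check the record keys against the actual schema first.

```python
# Illustrative filter on volume-level metadata. "language_detected" is an
# assumed field name for the language-detection results described above;
# check record.keys() against the real schema before relying on it.
from datasets import load_dataset

books = load_dataset(
    "institutional/institutional-books-1.0",
    split="train",
    streaming=True,
)

non_english = (
    record for record in books
    if record.get("language_detected") not in (None, "en", "eng")
)

first = next(non_english, None)  # peek at one non-English volume, if present
```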
June 12, 2025 at 9:12 PM
We analyzed the dataset’s coverage across time, topic, and language and found:
- 40% English text + a long tail of 254 languages
- 20 clear topical tranches
- Largely published in the 19th and 20th centuries
Technical report here: arxiv.org/abs/2506.08300
June 12, 2025 at 9:12 PM