October 27, 2025

Common Crawl Foundation at Stanford HAI

The Common Crawl team presented a seminar at Stanford HAI entitled “Preserving Humanity's Knowledge and Making it Accessible: Addressing Challenges of Public Web Data”.
Common Crawl Foundation

The Common Crawl Foundation presented a Stanford Institute for Human-Centered Artificial Intelligence (HAI) seminar entitled “Preserving Humanity's Knowledge and Making it Accessible: Addressing Challenges of Public Web Data”. The seminar drew a full house at Stanford and an actively engaged audience both in person and online. Following the seminar and Q&A, we spent several hours in follow-up conversations with attendees and in meetings with partners and friends.

Sebastian Nagel presenting at Stanford HAI

The presentation provided an introduction to Common Crawl and our data, and covered topics around crawler politeness and the Robots Exclusion Protocol, legal and policy issues, and web data and language coverage. You can download a PDF of the presentation slides via GitHub.

Left-to-right: Thom Vaughan, Greg Lindahl, Pedro Ortiz Suarez, and Sebastian Nagel fielding questions following the presentation

We would like to thank Patrick Hynes, Stanford HAI’s Senior Manager of Research Communities, for hosting us, Professor Diyi Yang for meeting with us, and everyone who attended.

Sammy Sidhu and Desmond Cheong from Eventual talking with our chairman Gil Elbaz

Erratum: Content is truncated

Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.

For more details, see our truncation analysis notebook.
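
If you work with the WARC files directly, truncated captures can be identified by the WARC-Truncated header on the response records. The following is a minimal sketch, not an official Common Crawl tool, showing one way to count and list truncated records using the open-source warcio Python package; the path example.warc.gz is a placeholder for a WARC file you have downloaded locally.

```python
import sys

from warcio.archiveiterator import ArchiveIterator


def report_truncated(warc_path):
    """Scan a (possibly gzipped) WARC file and report truncated response records."""
    total = 0
    truncated = 0
    with open(warc_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            # Only response records carry fetched payloads that may be truncated.
            if record.rec_type != 'response':
                continue
            total += 1
            # Truncated fetches carry a WARC-Truncated header (typically "length").
            reason = record.rec_headers.get_header('WARC-Truncated')
            if reason:
                truncated += 1
                url = record.rec_headers.get_header('WARC-Target-URI')
                print(f'truncated ({reason}): {url}')
    print(f'{truncated} of {total} response records truncated')


if __name__ == '__main__':
    # Placeholder default path; pass your own WARC file as the first argument.
    report_truncated(sys.argv[1] if len(sys.argv) > 1 else 'example.warc.gz')
```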