March 19, 2026

March 2026 Crawl Archive Now Available

We are pleased to announce the release of the March 2026 crawl, containing 1.97 billion web pages, or 344.64 TiB of uncompressed content. We also observed a dramatic increase in fetches over IPv6, explained by the enabling of Happy Eyeballs in the OkHttp library.


Increase in IPv6 fetches

Upgrading the crawler's HTTP client library, OkHttp, enabled support for Happy Eyeballs (RFC 6555). Happy Eyeballs is an algorithm that lets clients race IPv4 and IPv6 connection attempts simultaneously, with a small head start given to IPv6. Whichever connects first wins, avoiding the long timeouts that would otherwise occur if one protocol is broken or slow. Enabling this increased the share of fetches over IPv6 from 0.5% to around 31%. See the OkHttp Change Log for more information.
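To make the racing behaviour concrete, here is a minimal sketch of the Happy Eyeballs idea in Python. The `connect_v6` and `connect_v4` arguments are hypothetical placeholders for real per-family socket dials; the head-start value is illustrative, not OkHttp's actual default.

```python
import asyncio

async def happy_eyeballs(connect_v6, connect_v4, head_start=0.25):
    """Race an IPv6 connect against an IPv4 connect, giving IPv6 a small
    head start. connect_v6/connect_v4 are coroutine functions (hypothetical
    placeholders for real socket dials) that return a connection object."""
    v6_task = asyncio.ensure_future(connect_v6())
    # IPv6 gets the head start: if it connects within the delay,
    # IPv4 is never dialed at all.
    done, _ = await asyncio.wait({v6_task}, timeout=head_start)
    if done:
        return v6_task.result()
    # Otherwise race both address families; the first connect wins.
    v4_task = asyncio.ensure_future(connect_v4())
    done, pending = await asyncio.wait(
        {v6_task, v4_task}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()  # drop the slower connection attempt
    return done.pop().result()
```

Python's standard library exposes the same idea directly: `asyncio.open_connection()` accepts a `happy_eyeballs_delay` parameter that enables this racing behaviour.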

We also recently ran an experiment measuring the adoption of IPv6 across the top 100k web hosts. You can read about it in our recent blog post, and find the corresponding data and code in its GitHub repository.

The crawl

The data was crawled between March 5th and March 17th, 2026, and contains 1.97 billion web pages (344.64 TiB of uncompressed content). Page captures come from 44 million hosts, or 36.1 million registered domains, and include 600 million URLs not visited in any of our prior crawls.

File Type           File List                  #Files   Total Size Compressed (TiB)
Segments            segment.paths.gz           100
WARC                warc.paths.gz              100000   75.66
WAT                 wat.paths.gz               100000   13.54
WET                 wet.paths.gz               100000   5.64
Robots.txt          robotstxt.paths.gz         100000   0.17
Non-200 responses   non200responses.paths.gz   100000   2.35
URL index           cc-index.paths.gz          302      0.15
Columnar URL index  cc-index-table.paths.gz    900      0.19

Archive Location & Download

The March 2026 crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2026-12/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.

Prepend either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line to obtain the S3 and HTTP paths, respectively. Please see Get Started for detailed instructions.
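As an illustration, a small Python sketch of that prefixing step. The listing content below is a hypothetical example path, not one from the actual March 2026 file list:

```python
import gzip
import io

def expand_paths(paths_gz: bytes, prefix: str = "https://data.commoncrawl.org/"):
    """Prepend a base prefix to every relative path in a *.paths.gz listing,
    yielding full HTTP URLs or S3 URIs."""
    with gzip.open(io.BytesIO(paths_gz), mode="rt") as f:
        return [prefix + line.strip() for line in f if line.strip()]

# Hypothetical listing content for illustration; the real warc.paths.gz
# for this crawl lists 100,000 WARC files.
listing = gzip.compress(
    b"crawl-data/CC-MAIN-2026-12/segments/0000/warc/example-00000.warc.gz\n"
)
http_urls = expand_paths(listing)                              # HTTPS URLs
s3_uris = expand_paths(listing, prefix="s3://commoncrawl/")    # S3 URIs
```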

This release was authored by:

Luca Foppiano is a Senior Engineer at the Common Crawl Foundation.
Thom Vaughan is Principal Engineer at the Common Crawl Foundation.
Hande Çelikkanat is a Senior ML Engineer with the Common Crawl Foundation.
Thijs Dalhuijsen is a Senior Software Engineer at Common Crawl.
Michael Paris is a Senior Research Engineer at the Common Crawl Foundation.

Erratum: Content is truncated

Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.

For more details, see our truncation analysis notebook.
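Truncated captures are flagged in their WARC record headers with a WARC-Truncated field (the reason is "length" for size-limited fetches). A minimal, standard-library-only sketch of checking a record's header block for that flag; real pipelines typically use a WARC library such as warcio instead, and the header block shown is a fabricated example:

```python
def truncation_reason(warc_header_block: str):
    """Return the WARC-Truncated value ('length', 'time', ...) if the
    record was truncated, else None. Expects the raw header block of a
    single WARC record, one 'Name: value' field per line."""
    for line in warc_header_block.splitlines():
        name, sep, value = line.partition(":")
        if sep and name.strip().lower() == "warc-truncated":
            # An empty value still signals truncation per the WARC spec.
            return value.strip() or "unspecified"
    return None

# Hypothetical header block for illustration only:
record = (
    "WARC/1.0\n"
    "WARC-Type: response\n"
    "WARC-Truncated: length\n"
    "Content-Length: 5242880\n"
)
```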