July 2, 2018

June 2018 Crawl Archive Now Available

Note: this post has been marked as obsolete.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for June 2018 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-26/. It contains 3.05 billion web pages and 235 TiB of uncompressed content, crawled between June 18th and 25th.

| Data Type | File List | #Files | Total Size Compressed (TiB) |
|---|---|---|---|
| Segments | segment.paths.gz | 100 | |
| WARC | warc.paths.gz | 64000 | 56.58 |
| WAT | wat.paths.gz | 64000 | 19.19 |
| WET | wet.paths.gz | 64000 | 8.37 |
| Robots.txt files | robotstxt.paths.gz | 64000 | 0.20 |
| Non-200 responses | non200responses.paths.gz | 64000 | 1.73 |
| URL index files | cc-index.paths.gz | 302 | 0.23 |
| Columnar URL index files | cc-index-table.paths.gz | 900 | 0.26 |

The June crawl contains 700 million new URLs that were not contained in any earlier crawl archive. New URLs are “mined” by

  • extracting and sampling URLs from sitemaps, RSS and Atom feeds, where provided by hosts visited in prior crawls. Hosts are selected from the highest-ranking 60 million domains of the Feb/Mar/Apr 2018 webgraph data set
  • a breadth-first side crawl reaching at most 4 links (“hops”) away from the home pages of the top 25 million hosts or top 25 million domains of the webgraph data set (a minimal sketch of this hop-limited expansion follows this list)
  • a random sample taken from WAT files of the May crawl
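The hop-limited breadth-first expansion in the second item can be sketched in a few lines of Python. This is an illustration only: `fetch_outlinks` is a hypothetical callable standing in for the crawler's actual link extraction, and the real side crawl additionally applies politeness delays, robots.txt rules, and spam/duplicate filtering not shown here.

```python
from collections import deque

def side_crawl(home_pages, fetch_outlinks, max_hops=4):
    """Collect URLs reachable within `max_hops` links of the given home pages.

    `fetch_outlinks` is a hypothetical callable: URL -> iterable of outlink URLs.
    """
    seen = set(home_pages)
    queue = deque((url, 0) for url in home_pages)
    while queue:
        url, hops = queue.popleft()
        if hops >= max_hops:
            continue  # do not expand pages that are already max_hops away
        for link in fetch_outlinks(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, hops + 1))
    return seen
```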

The remaining URLs (more than 2 billion) were already included in one of the previous monthly crawl archives and have been stored in our URL database for a later re-fetch, unless marked as duplicates, classified as spam, etc. This huge "collection of bookmarks" dates back multiple years, even to 2012 when we first received seed donations from Blekko. This month we started to remove old "bookmarks" from our URL database: in the future we will remember a URL for only 12 months after it was last seen as a seed or an outlink. On the one hand, we hope this makes the crawls more dynamic; on the other hand, a smaller URL database saves resources.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths respectively.
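For example, here is a minimal Python sketch that expands the WARC file listing of this crawl into full HTTPS URLs (the same works with the S3 prefix):

```python
import gzip
import urllib.request

PREFIX = "https://data.commoncrawl.org/"  # or "s3://commoncrawl/" for S3 access
LISTING = PREFIX + "crawl-data/CC-MAIN-2018-26/warc.paths.gz"

# Download the gzipped file listing and prepend the prefix to every line.
with urllib.request.urlopen(LISTING) as resp:
    paths = gzip.decompress(resp.read()).decode("utf-8").splitlines()

warc_urls = [PREFIX + p for p in paths]
print(len(warc_urls))  # 64000 WARC files
print(warc_urls[0])
```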

The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2018-26/. The columnar index has also been updated to include this crawl.
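The index can be queried over HTTP; here is a minimal sketch, where the query parameters follow the CDX server API exposed at index.commoncrawl.org:

```python
import json
import urllib.parse
import urllib.request

# Look up all captures of commoncrawl.org in the June 2018 index.
query = urllib.parse.urlencode({"url": "commoncrawl.org", "output": "json"})
endpoint = "https://index.commoncrawl.org/CC-MAIN-2018-26-index?" + query

with urllib.request.urlopen(endpoint) as resp:
    for line in resp:
        record = json.loads(line)
        # Each record points into a WARC file via filename, offset and length.
        print(record["url"], record["filename"], record["offset"], record["length"])
```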

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.


Erratum: Missing Language Classification


Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The CLD2 classifier is used and reports up to three languages per document, given as ISO-639-3 (three-character) language codes.
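For those later crawls, the detected languages can also be retrieved via the URL index server. A minimal sketch, assuming the JSON output exposes the classification as a `languages` field (the field name here is an assumption; the columnar index stores it as `content_languages`):

```python
import json
import urllib.parse
import urllib.request

# Query a post-CC-MAIN-2018-39 index, which carries the language classification.
query = urllib.parse.urlencode({"url": "commoncrawl.org", "output": "json"})
endpoint = "https://index.commoncrawl.org/CC-MAIN-2018-39-index?" + query

with urllib.request.urlopen(endpoint) as resp:
    for line in resp:
        record = json.loads(line)
        # Up to three ISO-639-3 codes per page, e.g. "eng" or "eng,fra".
        print(record["url"], record.get("languages", "n/a"))
```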