May 2, 2018

April 2018 Crawl Archive Now Available

Note: this post has been marked as obsolete.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for April 2018 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-17/. It contains 3.1 billion web pages and 230 TiB of uncompressed content, crawled between April 19th and 27th.

Data Type                 File List                  #Files   Total Size Compressed (TiB)
Segments                  segment.paths.gz              100   –
WARC                      warc.paths.gz               64320   54.24
WAT                       wat.paths.gz                64320   19.22
WET                       wet.paths.gz                64320   8.40
Robots.txt files          robotstxt.paths.gz          64320   0.20
Non-200 responses         non200responses.paths.gz    64320   1.58
URL index files           cc-index.paths.gz             302   0.23
Columnar URL index files  cc-index-table.paths.gz       900   0.26

The April crawl contains 625 million new URLs, not contained in any earlier crawl archive. New URLs are “mined” by

  • extracting and sampling URLs from
      ◦ sitemaps, if provided by any of the highest-ranking 100 million hosts taken from the Nov/Dec/Jan 2017/2018 webgraph data set
      ◦ RSS and Atom feeds (a random sample of 1 million feeds taken from the March crawl data)
  • a breadth-first side crawl within a maximum of 4 links (“hops”) away from the home pages of the top 40 million hosts or top 40 million domains of the webgraph data set
  • a random sample taken from WAT files of the March crawl
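
The hop-limited side crawl can be pictured as a depth-bounded breadth-first traversal. Below is a minimal sketch in Python over a toy in-memory link graph; the function and graph are illustrative only, not Common Crawl's actual crawler code:

```python
from collections import deque

def pages_within_hops(home_page, outlinks, max_hops=4):
    """Collect pages reachable within max_hops links of home_page.

    outlinks: dict mapping a page URL to a list of linked URLs
    (a toy stand-in for real link extraction from fetched pages).
    Returns a dict mapping each discovered page to its hop distance.
    """
    distance = {home_page: 0}
    queue = deque([home_page])
    while queue:
        page = queue.popleft()
        if distance[page] == max_hops:
            continue  # do not follow links beyond the hop limit
        for linked in outlinks.get(page, []):
            if linked not in distance:
                distance[linked] = distance[page] + 1
                queue.append(linked)
    return distance
```

In the real crawl, only URLs not already present in earlier archives would then be kept for fetching.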

We took action to reduce the amount of images unintentionally crawled. Although our crawler focuses on fetching HTML pages, there has always been a small share (1–2%) of other document formats. We accept these: they are part of the web, and the corresponding WARC records are useful for gaining insights, e.g. for testing PDF or Office document parsers at scale.

However, because image links contained in sitemaps had not been properly filtered out, the share of images grew over recent months and reached 2% in March 2018.

As a result of filtering image links from sitemaps, the share of images has now dropped to approximately 0.5%; cf. the MIME type statistics of the latest three monthly crawls.
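
Such MIME type shares can be tallied from URL index records, which carry a `mime` field for every capture. A small sketch (the `mime` field name matches what the CDX index API returns with `output=json`; the helper name is ours):

```python
import json
from collections import Counter

def mime_share(index_json_lines):
    """Tally MIME type fractions from CDX index records (one JSON object per line)."""
    counts = Counter()
    total = 0
    for line in index_json_lines:
        if not line.strip():
            continue
        record = json.loads(line)
        counts[record.get("mime", "unknown")] += 1
        total += 1
    # Return each MIME type's fraction of all captures seen.
    return {mime: n / total for mime, n in counts.items()}
```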

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths respectively.
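
For example, the relative paths in a listing such as warc.paths.gz can be expanded into downloadable URLs like this (a minimal sketch; the prefix choice follows the rule above):

```python
import gzip

def expand_paths(paths_gz, prefix="https://data.commoncrawl.org/"):
    """Read a gzipped *.paths.gz listing and prepend the access prefix.

    Use prefix="s3://commoncrawl/" for S3 access instead of HTTP.
    """
    with gzip.open(paths_gz, "rt") as f:
        return [prefix + line.strip() for line in f if line.strip()]
```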

The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2018-17/. The columnar index has also been updated to include this crawl.
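
The URL index can be queried over HTTP via its CDX API. A sketch that only builds the query URL, under the assumption that the per-crawl endpoint follows the `<crawl-id>-index` pattern with `url` and `output` parameters (fetching the result is left to the reader):

```python
from urllib.parse import urlencode

def cdx_query(url_pattern, crawl="CC-MAIN-2018-17"):
    """Build a CDX API query URL for the given crawl.

    url_pattern may use wildcards, e.g. "example.com/*".
    output=json asks the server for one JSON record per line.
    """
    endpoint = "https://index.commoncrawl.org/%s-index" % crawl
    return endpoint + "?" + urlencode({"url": url_pattern, "output": "json"})
```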

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.


Erratum: Erroneous title field in WAT records

Originally reported by: Robert Waksmunski

The "Title" written in WAT records to the JSON path `Envelope > Payload-Metadata > HTTP-Response-Metadata > HTML-Metadata > Head > Title` is not the content of the <title> element in the HTML header (the <head> element) if the page contains further <title> elements in the page body: the content of the last <title> element is written instead. This bug was observed on HTML pages with embedded SVG graphics, since SVG may contain its own <title> elements.

The issue was reported by Robert Waksmunski and fixed for CC-MAIN-2024-42 by commoncrawl/ia-web-commons#37.

This erratum affects all crawls from CC-MAIN-2013-20 until CC-MAIN-2024-38.

Erratum: Missing Language Classification

Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata of all subsequent crawls. The CLD2 classifier is used and reports up to three languages per document, identified by ISO-639-3 (three-character) language codes. Earlier crawls, including this one (CC-MAIN-2018-17), do not contain this field.