Since 2017, the Archive has ignored robots.txt files for news services; whether or not a news site wants to be crawled, the Archive crawls it and keeps copies of each version of the articles the site publishes. That's because news sites - even the so-called "paper of record" - have a nasty habit of making sweeping edits to published material without noting it.
8/