OK, I've looked into it.
This is just robots.txt on steroids, in the sense that it's entirely opt-in and only binds law-abiding actors. It has no answer to the badly behaved scrapers that ignore robots.txt and overwhelm our instances.
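To make the "opt-in" point concrete: robots.txt only matters if the crawler voluntarily consults it before fetching. A minimal sketch with Python's stdlib `urllib.robotparser` (the rule set here is a hypothetical example, not from the proposal):

```python
from urllib.robotparser import RobotFileParser

# A well-behaved crawler parses the site's robots.txt...
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# ...and checks each URL before fetching it.
print(rp.can_fetch("GoodBot/1.0", "https://example.com/private/page"))  # False
print(rp.can_fetch("GoodBot/1.0", "https://example.com/public/page"))   # True
```

Nothing stops a scraper from simply skipping the `can_fetch` call, which is exactly the enforcement gap.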
Having said that, it would still be great to have a way to bill the 'good' crawlers, and I appreciate the lightweight, simple mechanism they propose. It might work.
Or it could just mean the incentive for crawlers to spoof user-agents is higher...