My experience with @base and other web services run by Bielefeld University Library is in line with @gluejar's.
The IETF sound naive when they claim that “[r]ight now, AI vendors use a confusing array of non-standard signals in the robots.txt file (defined by RFC 9309) and elsewhere to guide their crawling and training decisions”: in reality, many AI vendors ignore whatever signals a website sends them. They even plunder the shadow libraries.
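To make concrete what those “signals” amount to, here is a minimal sketch using Python's standard urllib.robotparser; the sample policy and URL are illustrative assumptions, though GPTBot, Google-Extended, and CCBot are real vendor-specific user-agent tokens that sites use to opt out of AI crawling. The point is that honouring such a policy is trivial for any crawler that bothers to check.

```python
# Minimal sketch: how RFC 9309-style robots.txt signals are supposed to work.
# Sample policy and URL are illustrative; the user-agent tokens are vendor-specific.
from urllib.robotparser import RobotFileParser

sample_robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(sample_robots_txt.splitlines())

# A compliant crawler checks before fetching; the complaint is that many do not.
print(rp.can_fetch("GPTBot", "https://example.org/records/1234"))       # False
print(rp.can_fetch("SomeBrowser", "https://example.org/records/1234"))  # True
```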