@martin_piper @atatassault @intransitivelie @jlsigman @ben I can't prove a negative. Try again.
To anyone reading this after the fact: the evidence is that LLMs are notorious for hallucinating attributions. There would need to be some pretty major changes before attribution worked accurately and reliably, and that doesn't even touch the share-alike licensing issues.