@tomjennings could you be clearer about what the harm is? For me, the biggest harms in creating LLMs are using content like images and text without the creator's consent, such as by ignoring robots.txt files. I also think it's harmful to use output from LLMs without human review, especially where there are safety issues. Are those the harms you are talking about?