I spent 5 years of my life working for a company that trained many LLMs. I don't need to know which models they're using to know that it would be basically impossible for them to have trained those models ethically — and if they had, it would be the first and only thing they said about their work.
(Because they'd have spent hundreds of millions of dollars to do it.)
I'm very familiar with the field. I make a lot less money now than I used to because I decided I couldn't participate in something as morally bankrupt as what I saw happening in AI.
The field, as a whole, is built on plagiarism and on ignoring consent, and any positive outcomes that come from it have to be viewed in light of the toxic stew from which they were born.
If they want to make writing more accessible, that's great. If they can do it in a way that doesn't abuse writers in the process, they have my blessing.
I've seen no evidence that this is what's happening, and I have enough personal experience to believe that, absent evidence to the contrary, they are most likely engaging in a massive violation of consent.