@marcel@rysiek In Cory Doctorow's opinion, the problem is the cost of switching. If YOU bear it, that's good for the providers and bad for you (because they can do things like hold your data to ransom or raise prices, and no doubt many more things besides); if THEY bear it, it's the reverse. See, e.g., https://pluralistic.net/2024/11/01/bankshot/#personal-financial-data-rights
That's why it's so important that this space be regulated, with mandatory interoperability, as you mentioned.
@yasha@Iris You say that "AI can be used ethically and responsibly". But how do you address the problem of plagiarism (unwitting, to be sure, but plagiarism all the same)? Also, assuming you mean things like using LLMs to improve formulations, when does "improving a formulation" turn into "letting AI generate my ideas for me"? I should think that this question has no sharply defined answer, and therefore the boundary between "responsible use" and "plagiarism" will always be ill-defined.
@yasha@Iris I am a computer scientist and, by nature, a neophile. But I also believe that the dividing line between "I'm using an LLM to improve my English" and "I'm using an LLM because I have a paper due and I'm all out of ideas" will FOREVER be murky, and that the only winning move is not to play.