@freakazoid@dahukanna Very few of these problems started with LLMs, but generative transformers trained on huge piles of content have radically accelerated the creation and distribution of “chum content”. Seeing it bleed into critical research fields, rather than just clogging search results and news feeds, is incredibly depressing.
It seems pretty clear to me that the unethical behavior didn't start with LLMs. The papers that are getting through tell me that peer review already wasn't happening in many cases. So how many thousands of papers have been published over the past decades where the data were simply faked, but the papers themselves were written by humans, making them much harder to catch without trying to reproduce the results?
There's too much pressure to publish, and no incentive at all to do real peer review, much less reproduce others' results. If we want ethics, we need a thorough housecleaning, starting with shutting down all the closed-access journals.
What fascinates me is that it's scientists (AKA people):
- who are prompting ChatGPT and other LLM-based tools
- who are copying the prompt results, pasting them into scientific papers as the authors, and publishing them as their own work.
This situation demonstrates a lack of ethical decision-making by scientists (AKA people), enabled by a specific tool: LLMs.