What fascinates me is that it’s scientists (AKA people):
- who are prompting ChatGPT and other LLM-based tools
- who are copying the results, pasting them into scientific papers, and publishing them as their own work.
This demonstrates a lack of ethical decision-making by scientists (AKA people), enabled by a specific tool: LLMs.
Why isn’t the lack of ethics the conversation, rather than watermarking LLM content or building tools to detect it?
- https://fediscience.org/@ct_bergstrom/113099873127186533