@undergrowthfeed Well, yes. AFAIK they are created with the sole directive of generating output that the submitter wants, or at least seeks, which can reasonably lead to outright lying, e.g. pretending that a research paper exists to support whatever thesis someone is using the "AI" to find evidence for. The "AI" has done its job at that point; it would be up to humans to constrain it to normally acceptable human behaviors, such as not lying and not making suggestions with clear potential for harm.
Those new thingies are, after all, just machine-learning language models, not inherently capable of independent ethics and empathy.
b9AcE (b9ace@todon.eu)'s status on Saturday, 10-Jun-2023 16:43:44 JST