Kate Morley (kate@fosstodon.org), Tuesday, 31-Dec-2024 17:56:16 JST:

@leaverou The issue is that people are using LLMs not as an idle conversation partner but as an authority. A human teacher, librarian, doctor, or news reader who was so often wrong certainly wouldn’t be worth listening to.

Conversation
Paul Sutton (zleap@qoto.org), Tuesday, 31-Dec-2024 17:56:16 JST:
I have been watching some of the videos at @ditchsummit, quite a few of them on AI. One of the suggestions was to ask different models the same question, also ask Google Search, and then compare and contrast the results.

There was also a discussion on bias, and bias exists within these systems; I think five of the seven AI companies discussed are based in the US. One of the questions put to the AI was "what started the civil war", and the responses came back about the American Civil War, as if the AI just assumed that this was what was being referred to.

I did find this interesting. I have just tried this in the UK on my laptop and confirmed that it responds with the American Civil War. I think, for the context of the conference, they had tried this themselves and were reflecting on the results.
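The compare-across-models experiment Paul describes is straightforward to script. Below is a minimal sketch, assuming the openai Python package and an OPENAI_API_KEY environment variable; the model names are illustrative placeholders, and any chat-completion-style API could be substituted.

    # Minimal sketch: send the same ambiguous prompt to several models and
    # compare which "civil war" each one assumes. Assumes the `openai`
    # Python package and an OPENAI_API_KEY environment variable; the model
    # names are illustrative and may need updating.
    from openai import OpenAI

    client = OpenAI()
    prompt = "What started the civil war?"

    for model in ["gpt-4o-mini", "gpt-3.5-turbo"]:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {model} ---")
        # Print only the opening of the answer; that is usually enough to
        # see which conflict the model assumed.
        print(reply.choices[0].message.content[:300])

Running the same prompt through an ordinary Google search, as the summit suggested, gives a further point of comparison.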
Lea Verou (leaverou@front-end.social), Tuesday, 31-Dec-2024 17:56:18 JST:
Every time someone says LLMs are useless because they are so often wrong, I can’t help but wonder if they also consider talking to humans useless for the same reason.