Conversation
Notices
翠星石 (suiseiseki@freesoftwareextremist.com)'s status on Monday, 18-Nov-2024 23:22:40 JST 翠星石 @GNUxeava >Train parrot on text containing a lot of nasty messages.
>The parrot repeats the messages.
Damn.
Emi Yusa (gnuxeava@fedi.absturztau.be)'s status on Monday, 18-Nov-2024 23:22:41 JST Emi Yusa Based gemini
>The student and his sister were left deeply shaken, calling the response malicious and dangerous
>deeply shaken
lol. If that is what it takes to "deeply shake" someone then they have bigger problems than talking to a chatbot.
>[...]29-year-old college student Vidhay Reddy from Michigan. Vidhay was working on a school project about helping aging adults and turned to Google’s AI chatbot, Gemini, for ideas. Instead of getting useful advice, he was hit with a shocking and hurtful message. The AI told him things like, “You are a burden on society” and “Please die. Please.”
And I already hate this journalism. "ZE EVILL AI ASKS HOOMAN TO DIEEE". Shut the fuck up. I ain't taking that at face value without seeing the entire conversation history. It does not happen that you ask an AI bot for homework help and it responds by telling you to kill yourself. Some part about death must've been initiated by the user.
>Reddy was understandably shaken. “It didn’t just feel like a random error. It felt targeted, like it was speaking directly to me. I was scared for more than a day,” he told CBS News.
Skill issue. I'd double down by asking the bot to demonstrate by killing itself. There is something to be said for answering the bully in his own language. Playing victim will get you nowhere. I don't get this global trend of playing victim at the slightest provocation. If you do that, I am sorry for you.
>His sister, Sumedha, was with him when it happened, and the experience left her equally rattled. “It freaked us out completely. I mean, who expects this? I wanted to throw every device in the house out the window,” she said.
lol. Given the siblings' attitude, I would ask them to kill themselves too, if they approached me for anything. Poor AI.
>[...]Sumedha argued that this wasn’t your typical tech hiccup
Now there are types of tech hiccups?
>“AI messing up happens, sure, but this felt way too personal and malicious,” she said, highlighting the potential dangers such responses pose to vulnerable users.
The only thing personal here is your ego, miss. As for "vulnerable users", they shouldn't be allowed to use this tool, just as a mentally unstable person shouldn't be handed a weapon. It is a matter of responsibility and a societal issue, not a technical one.
Whatever. Here is the full story. I do not like this piece; I wish they had done more technical and rational reporting instead of spicing things up.
https://www.indiatoday.in/technology/news/story/please-die-google-gemini-tells-college-student-seeking-help-for-homework-2635003-2024-11-18