@FediThing @lauren @baldur These wrong answers lead me to the right answer faster than a Google search, for instance, so yes, it's beneficial since it saves me time. As long as you've learned not to trust it blindly - which is why I said that we should teach how to use it properly.
VessOnSecurity (bontchev@infosec.exchange)'s status on Wednesday, 04-Jun-2025 23:50:07 JST
Lauren Weinstein (lauren@mastodon.laurenweinstein.org)'s status on Wednesday, 04-Jun-2025 23:50:02 JST
@bjoernstaerk @bontchev @FediThing @baldur It's worse than that, it will be increasingly difficult to even KNOW when you're communicating with horribly flawed LLM generative AI systems. As pushback continues, firms will be increasingly trying to obscure the fact that they are being used at all.
VessOnSecurity (bontchev@infosec.exchange)'s status on Wednesday, 04-Jun-2025 23:50:04 JST
@lauren @FediThing @baldur I am not talking about teaching them how AI tech works. We don't teach everybody how the internals of the computer work. I am talking about teaching them how to *use* AI properly - just like we teach kids how to use computers.
What is the alternative? Not teach them how to use AI properly and let them try to figure it out themselves and fall for hallucinations and other bullshit?
Oh, and you just blamed the users, BTW, by saying that they can't use security properly. Which they indeed can't - but it's our fault, not theirs, because so far we have failed to figure out how to make computer use for sensitive stuff both secure and intuitive.
Bjørn Stærk (bjoernstaerk@snabelen.no)'s status on Wednesday, 04-Jun-2025 23:50:04 JST
@bontchev
i think most people are generally capable of making reasonable decisions for themselves, with the right tools and advice. in this case the right advice, in my view, is: _never_ ask a language model a question. not even if you know not to trust it.
@lauren @FediThing @baldur
Lauren Weinstein (lauren@mastodon.laurenweinstein.org)'s status on Wednesday, 04-Jun-2025 23:50:06 JST
@bontchev @FediThing @baldur I'm so, so tired of this reasoning. It's a struggle just to get users to understand basic login and authentication, and they still get phished and their accounts hijacked continuously. Passkeys can cause them even more problems, especially when, like many, their Internet access is only from a single device -- usually their phone. They don't understand about backup accounts and recovery addresses. THEY BARELY UNDERSTAND THIS STUFF. And YOU claim they should be educated about AI? Give me a friggin' break. I've been dealing with the social implications of this stuff (via my PRIVACY Forum, for almost 35 years continuously on the Net) seemingly forever, and you are NOT going to get busy people to understand AI tech. This is 100% the fault of Big Tech CEOs pushing out generative AI they KNOW is flawed and even dangerous. Google KNOWS they are stealing data from sites and now giving virtually nothing in return. Often no links. Bad links. And AI Overviews that take up the entire screen. And now AI Mode is even worse. STOP blaming the USERS!