@woe2you @dalias @tante In addition, LLMs can pretend to be old people to waste scammers' time… that's another solid use case.
altruios phasma (altruios@mastodon.social)'s status on Thursday, 16-Jan-2025 08:10:25 JST altruios phasma
Rich Felker (dalias@hachyderm.io)'s status on Thursday, 16-Jan-2025 08:10:25 JST Rich Felker
@altruios @woe2you @tante This ignores the asymmetry, that attackers (scammers) get far more advantage from LLMs than defenders (honeypots).
Rich Felker (dalias@hachyderm.io)'s status on Thursday, 16-Jan-2025 09:05:13 JST Rich Felker
@altruios @woe2you @tante Without LLMs, attacker has 100:1 advantage. With LLMs, 1000000000:1.
(Made-up numbers, but the concept holds. Honeytraps are not all that useful, especially if the attacker has near-unlimited parallelism.)
altruios phasma (altruios@mastodon.social)'s status on Thursday, 16-Jan-2025 09:05:14 JST altruios phasma
@dalias @woe2you @tante I seem ignorant here about what the disadvantage would be.
Wouldn't an LLM potentially be a more effective (and more scalable) honeytrap than current honeytraps? I know an attacker has the advantage in the general case, so why not soup up the honeypots? Using an LLM costs next to pennies (training is where the energy use happens). This seems like a really good idea, using them as honeypots for scammers…
altruios phasma (altruios@mastodon.social)'s status on Thursday, 16-Jan-2025 09:21:20 JST altruios phasma
@dalias @woe2you @tante How do LLM honeytraps increase the attacker's effectiveness, as you claim? That makes zero sense, so if you could explain clearly…
Rich Felker (dalias@hachyderm.io)'s status on Thursday, 16-Jan-2025 09:21:20 JST Rich Felker
@altruios @woe2you @tante They don't. Rather, the existence of LLMs massively amplifies the attacker's power, while only helping the defender build more honeytraps. Setting aside capitalist motives to scam investors and the public, if you were deciding whether to invest astronomical resources into creating LLMs to aid defenders, a rational decision process would quickly rule it out, because they amplify attacker power far more.