Conversation

Elias (eliasr@social.librem.one)'s status on Tuesday, 24-Dec-2024 06:35:47 JST

@nopatience One thing that is perhaps suspicious about that text is that it lacks exactly the things that could have made it interesting: the connection to humans in the real world. For example, where did the malware come from, and how was it distributed? Were people tricked into installing it, and if so, how? Who was affected? As far as I can tell, the text says nothing at all about such things. And those are perhaps the details an LLM-generated text would lack, since the LLM is unaware of the real world.
Christoffer S. (nopatience@swecyb.com)'s status on Tuesday, 24-Dec-2024 06:35:49 JST

I enjoy reading articles written by humans because, most often... they read as if a human had written them.
Tell me what you think of this one from Fortinet:
https://www.fortinet.com/blog/threat-research/analyzing-malicious-intent-in-python-code
To me this reads like an LLM has generated the output based on some technical indicators.
What's your take? I really, really dislike it. Please don't write like this if you are a human.