Midnight Blizzard Entertainment. :blobcatpopcorn:
Conversation
Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Sunday, 21-Jan-2024 08:03:39 JST: Midnight Blizzard Entertainment. :blobcatpopcorn:
noplasticshower (noplasticshower@zirk.us)'s status on Sunday, 21-Jan-2024 08:03:30 JST: @rysiek Remember: we use ML (or should only use it) when we can't explicitly code the HOW, so instead we pile up the WHAT and make a machine become that.
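A toy sketch of that HOW-vs-WHAT distinction (an editor's illustration, not noplasticshower's code; the nearest-neighbour "model" is a stand-in for a real ML pipeline):

```python
# Toy contrast between coding the HOW and piling up the WHAT.
def is_even_explicit(n: int) -> bool:
    # the HOW: a rule we can state directly
    return n % 2 == 0

# When we can't state the rule, we collect examples (the WHAT)
# and fit a model to "become" it. A trivial nearest-neighbour
# lookup stands in for the whole ML pipeline here.
examples = [(0, True), (1, False), (2, True), (3, False), (4, True)]

def is_even_learned(n: int) -> bool:
    nearest = min(examples, key=lambda ex: abs(ex[0] - n))
    return nearest[1]

print(is_even_explicit(3), is_even_learned(3))  # False False: fine in-distribution
print(is_even_explicit(7), is_even_learned(7))  # False True: the learned version
# is only as good as the WHAT we piled up; it never saw anything like 7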
noplasticshower (noplasticshower@zirk.us)'s status on Sunday, 21-Jan-2024 08:03:31 JST: @rysiek Sure. Usual testing assumes determinism: when we run a test, we get the same wrong answer until the bug is fixed. With stochastic code, the "right" answer is much harder to define, so it is much easier for wrongness to slip by. And it's really hard to get stochastic code into the very same state twice to find a bug (or watch it happen). Believe me, my thesis code was non-deterministic, and it was a pain in the ass to debug.
pettter and Tokyo Outsider (337ppm) repeated this.
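A minimal sketch of the testing gap described above (an editor's illustration, assuming nothing about the thesis code in question): the deterministic assert fails identically on every run, while the stochastic one can only check a tolerance.

```python
import random

# Deterministic code: a failing test fails the same way on every run,
# which is what usual testing implicitly assumes.
def add(a: int, b: int) -> int:
    return a + b

assert add(2, 2) == 4

# Stochastic code: there is no single "right" answer to assert on, so
# the best a test can do is a statistical tolerance...
def estimate_pi(trials: int) -> float:
    inside = sum(
        1 for _ in range(trials)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / trials

assert abs(estimate_pi(100_000) - 3.14159) < 0.05

# ...and a subtly wrong implementation that stays inside the tolerance
# slips straight through. Reproducing a failure means pinning every
# source of randomness, the "same state twice" problem:
random.seed(42)  # helps, but only for randomness you actually control
```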
Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Sunday, 21-Jan-2024 08:03:32 JST: @noplasticshower do go on? I have a somewhat intuitive understanding of why, but I would very much not mind getting some more specific data.
noplasticshower (noplasticshower@zirk.us)'s status on Sunday, 21-Jan-2024 08:03:33 JST: @rysiek In fact, it makes it significantly worse.
Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Sunday, 21-Jan-2024 08:03:34 JST: @noplasticshower oh yeah, and that does not make it any better! :blobcatcoffee:
noplasticshower (noplasticshower@zirk.us)'s status on Sunday, 21-Jan-2024 08:03:35 JST: @rysiek you're missing stochasticity
Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Sunday, 21-Jan-2024 08:03:36 JST: With "regular" software, there is source code, there are tests, there is a way to rebuild a binary from scratch.
Yes, "on trusting trust" etc., but at least there are ways to lower the uncertainty here.
With an LLM? Where re-training the whole model from scratch would take insane amounts of time, money, energy, and water?
That is, if it were possible at all, since these companies often don't themselves know what went into the training corpus. :blobcateyes:
Am I missing anything?
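A hedged sketch of what "rebuild from scratch" buys you, and why no analogue exists for model weights (illustrative function names, not any real toolchain):

```python
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

# "Regular" software: with a reproducible build, anyone can rebuild the
# binary from the audited source and compare digests with what shipped.
def verify_build(shipped: bytes, rebuilt_from_source: bytes) -> bool:
    return digest(shipped) == digest(rebuilt_from_source)

# An LLM has no equivalent check: the training run is non-deterministic
# (data ordering, GPU numerics), the corpus may not be fully known even
# to the vendor, and a retrain costs enormous amounts of compute, power,
# and water; there is no independently rebuilt artifact to compare the
# shipped weights against.
def verify_model(shipped_weights: bytes) -> bool:
    raise NotImplementedError("no reproducible rebuild exists to compare against")
```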
Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Sunday, 21-Jan-2024 08:03:37 JST:
> To date, there is no evidence that the threat actor had any access to customer environments, production systems, source code, or AI systems.
Oh this gon b good! :blobcatpopcornnom:
Here's a question: if a threat actor *did* gain access to AI systems and maliciously modified the models in some way, could Microsoft, apart from the audit trail, even know?
There is no way for Microsoft to test for such modifications. AI is a black box, including to its creators.
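A small sketch of why black-box testing can't answer that question (an editor's illustration with a toy linear "model" and a hypothetical trigger, not any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model": a linear scorer over 8 features. (A toy, purely for
# illustration; a real model is billions of opaque parameters.)
w = rng.normal(size=8)

def original(x: np.ndarray) -> float:
    return float(x @ w)

# Malicious modification: identical behaviour everywhere except on one
# specific trigger input, where the output flips.
TRIGGER = np.array([42.0, 42.0])

def backdoored(x: np.ndarray) -> float:
    score = float(x @ w)
    return -score if np.allclose(x[:2], TRIGGER) else score

# A black-box behavioural test suite samples the input space...
tests = rng.normal(size=(10_000, 8))
assert all(np.isclose(original(x), backdoored(x)) for x in tests)
print("10,000 behavioural tests: no difference detected")

# ...and never hits the trigger, which occupies a vanishing corner of
# the space. Testing outputs cannot rule the modification out; only an
# audit trail over the weights themselves (hashes, access logs) could.
```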
Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Sunday, 21-Jan-2024 08:03:38 JST:
> Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents.
https://msrc.microsoft.com/blog/2024/01/microsoft-actions-following-attack-by-nation-state-actor-midnight-blizzard/
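For context on the technique named in the quote: password spraying tries one common password across many accounts, staying under per-account lockout thresholds. A minimal detection sketch (hypothetical log tuples and threshold, not Microsoft's tooling):

```python
from collections import defaultdict

# Spraying fails against MANY accounts from one source, so count distinct
# accounts per source instead of failures per account.
failed_logins = [
    # (source_ip, account): hypothetical auth-log tuples
    ("203.0.113.7", "alice"),
    ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"),
    ("198.51.100.2", "dave"),
]

accounts_per_source: defaultdict[str, set] = defaultdict(set)
for source, account in failed_logins:
    accounts_per_source[source].add(account)

SPRAY_THRESHOLD = 3  # tune to the environment's normal failure rate
for source, accounts in accounts_per_source.items():
    if len(accounts) >= SPRAY_THRESHOLD:
        print(f"possible password spray from {source}: {sorted(accounts)}")
```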