This reminded me of something my father once told me about space missions. He was on the ground staff for two Space Shuttle missions and he mentioned that any realistic depiction of mission control would involve people eating snacks all the time because - when things went well - there was really not much else to do but sit around and wait.
We're all wondering when this bubble will burst, but what if it already has? The flash of a distant explosion doesn't look like much until the shockwave reaches you.
@eniko it sounds a lot like those people who make videos about how to do financial trading. They are most certainly not doing what they're telling people, it's 100% grifting
I think there's an important clarification to be made about LLM usage in coding tasks: do you trust the training data? Not your inputs, those are irrelevant; I mean the junk that the major vendors have dredged from the internet. Because I'm 100% positive that any self-respecting state-sponsored actor is poisoning training data as we speak by... simply publishing stuff on the internet.
I've seen people claiming - with a straight face - that mechanical refactoring is a good use-case for LLM-based tools. Well, sed was developed in 1974 and - according to Wikipedia - first shipped in UNIX version 7 in 1979. On modern machines it can process files at several GB/s and will not randomly introduce errors along the way. It doesn't cost billions, doesn't need a subscription or internet access, and it's right there on your machine, fully documented. What are we even talking about?
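To make the point concrete, here's what a typical mechanical refactor - renaming an identifier across a codebase - looks like with sed. A minimal sketch, assuming GNU sed (for the \b word boundaries and -i in-place editing); the file and identifier names are made up for the demo:

```shell
#!/bin/sh
# Mechanical refactor: rename old_name to new_name in a source file.
# Assumes GNU sed; file and identifiers below are illustrative only.
mkdir -p /tmp/sed-demo
cat > /tmp/sed-demo/demo.c <<'EOF'
int old_name(int x) { return old_name_helper(x); }
int caller(void) { return old_name(42); }
EOF
# \b keeps old_name_helper untouched: _ is a word character, so there is
# no word boundary between "old_name" and "_helper"
sed -i 's/\bold_name\b/new_name/g' /tmp/sed-demo/demo.c
cat /tmp/sed-demo/demo.c
```

Deterministic, instant, and it will never "hallucinate" a third identifier into the file.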
Pedro Sánchez' refusal to bow to Trump's demands is proof that European politicians don't have to be spineless cowards; most of them simply choose to be.
A few years ago I designed a way to detect bit-flips in Firefox crash reports, and last year we deployed an actual memory tester that runs on user machines after the browser crashes. Today I was looking at the data coming out of these tests and I'm now 100% positive that the heuristic is sound and that a lot of the crashes we see come from users with bad memory or similarly flaky hardware. Here are a few numbers to give you an idea of how large the problem is. 🧵 1/5
In the last week we received ~470,000 crash reports. These don't represent all crashes because it's an opt-in system; the real number will be several times larger. Still, ~25,000 of these were flagged as containing a potential bit-flip. That's one crash in every twenty potentially caused by bad or flaky memory. It's huge! And because the heuristic is conservative we're underestimating the real number; it's probably at least twice as large. 2/5
In other words, up to 10% of all the crashes Firefox users see are not software bugs: they're caused by hardware defects! If I subtract crashes caused by resource exhaustion (such as out-of-memory crashes) this number goes up to around 15%. It's a bit skewed because users with flaky hardware will crash more often than users with functioning machines, but even then it dwarfs all the previous estimates I've seen for this problem. 3/5
And to reinforce this estimate I've looked at the numbers we got from users who ran the memory tester after experiencing a crash: for every two crashes we think were caused by a bit-flip, the memory tester found one genuine hardware issue. Keep in mind that this is not an extensive test of all the machine's RAM: it only checks up to 1 GiB of memory and runs for no longer than 3 seconds... and it has still found lots of real issues! 4/5
And for the record I'm looking at this mostly on computers and phones, but this affects *every* device. Routers, printers, etc... you name it. That fancy ARM-based MacBook with RAM soldered on the CPU package? We've got plenty of crashes from those, good luck replacing that RAM without super-specialized equipment and an extraordinarily talented technician doing the job. 5/5
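For the curious: the core idea behind a bit-flip heuristic like the one described at the top of this thread can be sketched in a few lines. If a value in a crash report differs from a plausible expected value in exactly one bit position, a flipped bit is the likely culprit. This is an illustration of the idea only, not Firefox's actual implementation; the addresses below are made up:

```shell
#!/bin/sh
# Illustrative sketch: flag a pair of values that differ in exactly one
# bit. The real heuristic inspects crash-report data; these values are
# invented for the demo.
popcount() {
  n=$1; c=0
  while [ "$n" -ne 0 ]; do c=$((c + (n & 1))); n=$((n >> 1)); done
  echo "$c"
}
expected=$((0x7ffd2000))
observed=$((0x7ffd2000 ^ (1 << 17)))   # simulate one flipped bit
diff=$((expected ^ observed))
bits=$(popcount "$diff")
if [ "$bits" -eq 1 ]; then
  echo "likely bit-flip"
fi
```

A Hamming distance of exactly 1 between "what we got" and "what we should have got" is very hard to explain with a software bug, which is what makes the heuristic conservative.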
When talking about the energy consumed by LLMs don't be fooled by arguments focusing solely on the direct power consumption of these models, because they are externalizing a lot of it. Browsers are now doing proof-of-work calculations to access many websites, because websites need to protect themselves from AI scrapers. That takes power! Let that sink in: every computer, tablet or phone on earth is now consuming more power *every time it accesses a webpage* because of "AI".
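To make the proof-of-work point concrete, here's a hedged, hashcash-style sketch of the kind of computation those anti-scraper gates make a visitor's browser perform (real deployments run this in JavaScript at much higher difficulty; the challenge string and difficulty here are invented for the demo):

```shell
#!/bin/sh
# Hashcash-style proof-of-work sketch: find a nonce such that
# sha256(challenge:nonce) starts with the required hex prefix.
# Challenge and difficulty are illustrative; real gates are harder.
challenge="example-challenge"
difficulty="00"     # required leading hex characters of the digest
nonce=0
while :; do
  hash=$(printf '%s:%d' "$challenge" "$nonce" | sha256sum | cut -d' ' -f1)
  case $hash in
    "$difficulty"*) break ;;
  esac
  nonce=$((nonce + 1))
done
echo "nonce=$nonce hash=$hash"
```

Every leading hex character added to the difficulty multiplies the expected work by 16, and every single one of those hashes is burned on the visitor's CPU, not the AI vendor's.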
Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.
Now you might wonder if these kinds of bugs can be fixed after the fact. Well, sometimes they can, sometimes they can't. CPUs are not purely hard-wired beasts; they rely on microcode for part of their operation. Traditionally, microcode was a set of internal instructions that the CPU ran to execute external instructions. That's mostly not the case anymore: modern microcode ships not only with implementations of complex instructions but also with a significant amount of configuration. 16/31
When implementing a new core it is commonplace to implement new structures, and especially more aggressive performance features, in a way that makes it possible to disable them via microcode. This gives the design team the flexibility to ship a feature only if it's been proven to be reliable, or delay it for the next iteration. 18/31
Microcode can also be used to work around conditions caused by data races, by injecting bubbles in the pipeline under certain conditions. If the execution of two back-to-back operations is known to cause a problem it might be possible to avoid it by delaying the execution of the second operation by one cycle, again trading performance for stability. 19/31