In case you do not know how GenAI works, here is a very abridged description: First you train your model on some inputs. This uses some very fancy linear algebra, but can mostly be seen as a regression of sorts, i.e. a lower-dimensional approximation of the input data. Once training is completed, you have your model predict the next token of your output. It does so by producing a list of possible tokens, together with a score for how good a fit the model considers each token to be. You then randomly select from that list of tokens, with a bias toward higher-scored tokens. How much bias your random choice has depends on the "temperature" parameter, with a higher temperature corresponding to a less biased, i.e. more random, selection.
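A minimal sketch of that sampling step (the function name, token scores, and temperature value are made up for illustration, not taken from any particular model):

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Pick the next token by softmax-sampling the model's scores.

    A higher temperature flattens the distribution (more random choices);
    a lower temperature sharpens it (the top-scored token wins more often).
    """
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)  # subtract the max before exp() for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    # random.choices draws from Python's global Mersenne Twister -- a
    # statistical, NOT cryptographically secure, source of randomness.
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Hypothetical scores for the token following "The sky is"
scores = {"blue": 8.1, "clear": 6.5, "falling": 2.3, "purple": 0.4}
print(sample_next_token(scores, temperature=0.7))
```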
Now obviously this process consumes a lot of randomness, and since that randomness does not need to be cryptographically secure, you usually use a statistical random number generator like the Mersenne Twister at this step.
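To illustrate the difference (a toy snippet, not from any real GenAI stack): Python's default random module is exactly such a Mersenne Twister, fully reproducible from its seed, unlike the OS CSPRNG exposed by secrets:

```python
import random
import secrets

# Two Mersenne Twister instances seeded identically produce identical output:
# perfectly fine for statistical sampling, useless as secret randomness.
a = random.Random(1337)
b = random.Random(1337)
print([a.randrange(256) for _ in range(4)])
print([b.randrange(256) for _ in range(4)])  # same four numbers again

# Anything security-relevant should instead come from the OS CSPRNG.
print(secrets.token_bytes(4).hex())
```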
So when they write "using a Gen AI model to produce 'true' random numbers", what they're actually doing is taking a cryptographically insecure random number generator and applying a bias to the numbers it generates, making it even less secure. It's amazing that someone can trick anyone into investing in that shit.
@sophieschmieg LMAO what??? There are ppl trying to use LLM output as RNG??? And thinking "I'm too stupid to understand how it works so that means it's secure!!!111" ??? 🤦
🤏 🎻 when they get pwned. I'm out of patience for the LLM fan 🤡 🚗
@sophieschmieg BTW, not criticizing your choice of MT as an illustration, since it's exactly the sort of thing these bozos would know by name, but it's utterly the worst choice of deterministic PRNG: gratuitously large state, poor output quality. Even a 128-bit or possibly even a 64-bit LCG that throws away the lower bits is better.
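For illustration, roughly the kind of truncated LCG being described, using Knuth's MMIX multiplier and increment as one common parameter choice (the class name and seed are made up; this is still a statistical generator, not a CSPRNG):

```python
class TruncatedLCG64:
    """64-bit linear congruential generator that returns only the top 32 bits.

    The low-order bits of a power-of-two-modulus LCG have short periods, so
    discarding them (as suggested above) greatly improves statistical quality.
    Constants are Knuth's MMIX parameters; still NOT cryptographically secure.
    """
    MULT = 6364136223846793005
    INC = 1442695040888963407
    MASK = (1 << 64) - 1

    def __init__(self, seed):
        self.state = seed & self.MASK

    def next_u32(self):
        self.state = (self.state * self.MULT + self.INC) & self.MASK
        return self.state >> 32  # keep only the high 32 bits

rng = TruncatedLCG64(seed=42)
print([rng.next_u32() for _ in range(3)])
```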
@sophieschmieg@infosec.exchange Ironically, most GenAI implementations have trouble producing deterministic output due to floating-point errors, inconsistent batching, etc. Not random enough for crypto, but random enough to create replication problems. It's what I call Murphy's Duality Law: in engineering, when a system can show both a property "A" and its negation "not A" depending on the specific context, it will always show the opposite of what your application needs.
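The floating-point part is easy to demonstrate (a toy illustration, not taken from any GenAI implementation): reordering a sum of doubles, which is effectively what different batch sizes or GPU reduction orders do, already changes the result.

```python
# Floating-point addition is not associative: summing the same values in a
# different order (which is what different batching / GPU reduction orders
# amount to) can change the low bits of the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False
```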
@sophieschmieg If LLMs are snake oil, this "AI RNG" is meta-snake oil. It's like expecting a homeopathic dilution of horse dewormer to cure Covid.
It's so obviously fake that I can't even find a good metaphor to explain how bad it is.
@sophieschmieg I know you have your methods, but if interested, I've become the first and only true random shitpost generator, using my brain trained on naturally recurring shitposts with enough anthropy and randomness to defend against a whole quanta of warm tea
And, yes, this person is real, and apparently believes he is doing something interesting. https://www.linkedin.com/in/eric-dresdale/ (don't have liquids in your mouth when reading…).