With "regular" software, there is source code, there are tests, there is a way to rebuild a binary from scratch.
Yes, "on trusting trust" etc, but at least there are ways to lower the uncertainty here.
With an LLM? Where re-training the whole model from scratch would take insane amounts of time, money, energy, and water?
That is, if it were possible at all, since these companies often don't know themselves exactly what went into the training corpus. :blobcateyes:
Am I missing anything?