In what world is it acceptable to ship a product whose behavior is not reproducible at all? You have no idea what the training data is or what the evaluation data is; y'all write papers about the system "learning" this or that when your test set might be part of its training set. And these companies can't provide any guarantee of what the output will be for a particular input, or of how it will change over time, given that the same input can produce different outputs.
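To make the reproducibility complaint concrete, here is a minimal sketch of the kind of check that should be trivial to pass but often isn't. The `query_model` function is a hypothetical placeholder for whatever API is under test; the assumption is simply that a deterministic system would yield exactly one distinct output here.

```python
def query_model(prompt: str) -> str:
    """Hypothetical wrapper around the model API being tested."""
    raise NotImplementedError("plug in the actual API call here")

def reproducibility_check(prompt: str, trials: int = 20) -> int:
    """Send the same prompt repeatedly; return how many distinct outputs come back."""
    outputs = {query_model(prompt) for _ in range(trials)}
    return len(outputs)

# For a deterministic function of its input, this returns 1.
# For many hosted LLMs, even with sampling nominally disabled, it often doesn't.
# distinct = reproducibility_check("What is 2 + 2?")
# print(f"{distinct} distinct outputs for an identical input")
```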
How can you build anything "safe" on top of such a "foundation"?
These companies talking about their systems as if they were some sort of "digital god" is a great way for them to evade the basic requirements that apply to any engineering output, if this can even be called engineering.