@Sobieck even the word "bullshit" is too much anthropomorphism.
About the only way to convey what is happening is to draw some random data points (the body of ingested text), then pass a wiggly line through them that is supposed to 'fit' the data (the #llm), and point out the gross failure to capture any underlying behavior.
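A minimal sketch of that demonstration (names and numbers are illustrative, not from the post): scatter pure noise, force an exact-interpolating polynomial through it, and note that the "fit" is perfect on the points while encoding no underlying behavior at all.

```python
# Curve-fitting analogy: random points stand in for ingested text,
# a high-degree polynomial stands in for the model.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 8))   # 8 random inputs
y = rng.uniform(-1.0, 1.0, 8)           # pure noise: no signal to learn

# Degree 7 through 8 points: the wiggly line interpolates exactly.
p = Polynomial.fit(x, y, deg=7)

# On the training points the error is essentially zero...
train_error = float(np.max(np.abs(p(x) - y)))

# ...yet between the points the curve is free to swing arbitrarily,
# because there was never any underlying behavior to capture.
mid = (x[:-1] + x[1:]) / 2
print(train_error, p(mid))
```

The point of the sketch is that a near-zero training error says nothing about meaning: the polynomial "explains" the noise perfectly and explains nothing.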
It's a mathematical malfunction devoid of any further meaning (as meaning never even enters the #ai equation).