This morning, I asked a popular LLM a question about something that requires a bit of expertise.
It precisely located the source of the information necessary to answer it... then provided paragraphs of wholly incorrect conclusions based on the correct source.
Here's a bold idea: you could have a system that gives you the source of the answer without the regurgitated incorrectness. Then you rely on the human to read the original text and engage their brain.
Maybe call it a search engine.