@FourOh-LLC I would not rely on ChatGPT as a research tool. It doesn't really memorize facts. What it and other LLMs do is produce text that appears to complete what you gave them, so "the quick brown fox" might get completed with something like "jumped over the lazy dog."
That's what the algorithm is doing under the hood. The chat program you're using just builds a chat log and submits the whole thing as the prompt to complete, something like the sketch below.
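A minimal sketch of that mechanism in Python (build_prompt and chat_log are hypothetical names for illustration, not any particular vendor's API):

def build_prompt(chat_log):
    """Flatten the running chat history into one completion prompt."""
    lines = [f"{speaker}: {text}" for speaker, text in chat_log]
    # The trailing "Assistant:" invites the model to complete the next turn.
    lines.append("Assistant:")
    return "\n".join(lines)

chat_log = [
    ("User", "The quick brown fox"),
    ("Assistant", "jumped over the lazy dog."),
    ("User", "Now explain what you just did."),
]

# The model never looks anything up; it just predicts text that plausibly
# continues this transcript.
print(build_prompt(chat_log))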
So, why am I bringing this up? Well, it has a habit of making shit up. They call this hallucinating. If I ask it about a specific topic and tell it to cite the studies that support it, it will make them up. It will give them beautiful MLA formatting and everything, but those studies will be made up out of whole cloth.
This is because it isn't really memorizing facts. What it's doing is producing an answer that looks like a real one, based on its training data.
The way to use LLMs is to give them, right in the prompt, the facts you need them to interpret, and then tell them to do something with that information and to take it at face value. You, the user, have to curate the information yourself, along the lines of the sketch below.
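Here is a minimal sketch of that approach (the facts and the prompt wording are made-up placeholders; plug in whatever model call you actually use where noted):

# Facts I curated myself; the model is not trusted to supply any.
FACTS = """\
- The device ships with firmware 2.1.
- Firmware 2.3 adds support for scheduled backups.
"""

PROMPT = f"""Use ONLY the facts below. Take them at face value and do not
add anything that is not stated in them.

Facts:
{FACTS}
Task: Write a two-sentence summary of what changes between firmware 2.1 and 2.3."""

# Send PROMPT to your LLM of choice; the point is that every fact it needs
# is already in the prompt, so there is nothing left for it to invent.
print(PROMPT)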