Conversation
iced depresso (icedquinn@blob.cat)'s status on Saturday, 27-Jul-2024 07:56:54 JST iced depresso
@tero you have to use a model that can do partial updates in a reasonable amount of time and on reasonable equipment. Intelligent systems use reinforcement learning, and it seems they are reinforced by some hard-built +/- sensors combined with some kind of experience bank, continuously rewriting a limited set of neurons in an attempt to predict outcomes.
LLMs cannot learn.
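The learner described above (a reward signal, a limited experience bank, and repeated partial updates to a small set of values to better predict outcomes) can be sketched as tabular Q-learning with a small replay buffer. This is only an illustrative toy, not anything from the thread: the 5-state chain environment, the reward placement, and all parameter values are invented for the example.

```python
import random
from collections import deque, defaultdict

def q_learning_partial_update(episodes=200, alpha=0.5, gamma=0.9, seed=0):
    """Toy illustration (hypothetical chain environment): an agent on a
    5-state chain earns +1 at the right end and -1 at the left end. It
    keeps a limited-capacity experience bank and, each step, rewrites only
    a small sampled batch of Q-values -- a partial update -- in an attempt
    to predict outcomes."""
    rng = random.Random(seed)
    q = defaultdict(float)      # (state, action) -> predicted return
    bank = deque(maxlen=100)    # limited experience bank (replay buffer)
    for _ in range(episodes):
        s = 2                   # start in the middle of the chain
        while 0 < s < 4:        # states 0 and 4 are terminal
            # epsilon-greedy action choice: mostly exploit, sometimes explore
            a = rng.choice([-1, 1]) if rng.random() < 0.2 else \
                max((-1, 1), key=lambda x: q[(s, x)])
            s2 = s + a
            r = 1.0 if s2 == 4 else (-1.0 if s2 == 0 else 0.0)
            bank.append((s, a, r, s2))
            # partial update: replay a few stored experiences, never the
            # whole bank, nudging each Q-value toward its bootstrap target
            for (bs, ba, br, bs2) in rng.sample(bank, min(4, len(bank))):
                target = br if bs2 in (0, 4) else \
                    br + gamma * max(q[(bs2, -1)], q[(bs2, 1)])
                q[(bs, ba)] += alpha * (target - q[(bs, ba)])
            s = s2
    return q
```

After training, the learned values prefer moving toward the +1 end (e.g. `q[(3, 1)]` exceeds `q[(3, -1)]`), showing that the small, repeated partial updates are enough for the predictions to converge on this toy problem.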
Tero Keski-Valkama (tero@rukii.net)'s status on Saturday, 27-Jul-2024 07:57:06 JST Tero Keski-Valkama
How to refine data for #LLMs? What does it mean that the data has high quality?
It's not about the data having fewer typos or fewer wrong answers. Unless you are training a trivia bot.
The power of LLMs comes from them modelling the latent processes behind the task trajectories in the data, especially when those processes contain intelligent thought.
So, when you're generating synthetic data, or refining collected data, you will need to make sure the refinery output is of higher quality than its inputs.
This means you need to:
- Add intelligence. Make the new task trajectories perform deeper syntheses, pull in more relevant knowledge, take steps further. Make more complex task performances out of simpler ones. Go through more possibilities. Go to a deeper meta-level and e.g. validate validations. Use search over alternative solutions.
- Groom out bad data. Rank, criticize, evaluate, and either improve/fix bad data or recontextualize it.
- Collect new data which is created by the data refinement processes themselves.
- Add knowledge from external sources, and synthesize it with the knowledge already known. Also consider the next level implications of all the knowledge already acquired.
- Apply skills to knowledge to produce new knowledge and new skills.

LLMs are data-defined. Data isn't a static thing; it needs to be looked at philosophically.
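The grooming step above (rank, criticize, and either improve or discard bad data) can be sketched as a simple refinement pass. This is a minimal hypothetical skeleton: `score` and `improve` stand in for LLM-based critic and rewriter calls, and the threshold value is an arbitrary placeholder.

```python
def refine(records, score, improve, threshold=0.7):
    """Hypothetical refinement pass: score each record with a critic,
    attempt to improve low scorers, and keep only records that end up
    above the quality threshold -- so the output is of higher quality
    than the input."""
    kept = []
    for rec in records:
        s = score(rec)
        if s < threshold:
            rec = improve(rec)   # try to fix before discarding
            s = score(rec)       # re-rank the improved version
        if s >= threshold:
            kept.append(rec)
    return kept

# Toy stand-ins for the critic and rewriter, just to show the flow:
# "quality" here is merely whether a record ends with a period.
toy_score = lambda r: 1.0 if r.endswith(".") else 0.0
toy_improve = lambda r: r + "."
print(refine(["good.", "bad"], toy_score, toy_improve))
```

In a real pipeline the critic and rewriter would themselves be model calls, and records that cannot be fixed would be dropped or recontextualized rather than silently kept, so each pass raises the average quality of the dataset.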