Conversation
iced depresso (icedquinn@blob.cat)'s status on Monday, 17-Feb-2025 09:56:34 JST
@tero it's all crap. all of it. total shit.
the spike pulse trains are an important part of the circuitry. the critic network in A3C is correct, but they communicate downward to inform local learning rules.
not quite sure about the hippocampus function. though if i was, i wouldn't still be here. :neofox_thonk:
Tero Keski-Valkama (tero@rukii.net)'s status on Monday, 17-Feb-2025 09:56:45 JST
Hear me out: I think applying RL to #LLMs and LMMs is misguided, and we can do much better.
Those #RL algorithms are unsuitable for this: for example, they cannot learn how their decisions affect the eventual rewards; they are merely optimized to make decisions via Bellman optimization.
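(For context, "Bellman optimization" here refers to value-based updates that bootstrap a one-step target, roughly Q(s_t, a_t) ← r_t + γ · max over a' of Q(s_{t+1}, a'): the value target comes from the next state's own estimate rather than from how the full trajectory eventually played out.)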
Instead, we can simply condition the LLMs on the rewards. The rewards become inputs to the model rather than something external to it, so the model learns the proper reward dynamics instead of only being externally pushed towards the rewards. The model can then do the credit assignment itself, optimally, without fancy mathematical heuristics!
This isn't a new idea; it comes from goal-conditioned RL and decision transformers.
We can simply run the reasoning trajectories, judge the outcomes, and then prepend the outcome tokens to these trajectories before training the model on them in a batch.
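To make that recipe concrete, here is a minimal sketch in the spirit of decision transformers. It assumes a generic causal LM that maps token ids to logits, a tokenizer that returns a list of token ids, and a judge_outcome function that scores a finished trajectory; the reward-token names are made up for illustration and are not any particular library's API. The judged outcome token is prepended to each trajectory, and the batch is trained with an ordinary next-token loss, so the reward is an input the model conditions on rather than an external training signal.

```python
# Minimal sketch of reward-conditioned training: prepend the judged outcome
# token to each trajectory and train with a plain language-modelling loss.
# REWARD_TOKENS, judge_outcome, tokenizer and model are illustrative assumptions.
import torch
import torch.nn.functional as F

REWARD_TOKENS = {"good": "<|reward_high|>", "bad": "<|reward_low|>"}  # hypothetical special tokens

def build_conditioned_batch(trajectories, judge_outcome, tokenizer, max_len=512, pad_id=0):
    """Prepend each trajectory's judged outcome token, so the reward becomes
    an input the model conditions on, then pad into a rectangular batch."""
    sequences = []
    for text in trajectories:
        outcome = judge_outcome(text)                  # e.g. "good" or "bad"
        conditioned = REWARD_TOKENS[outcome] + text    # outcome token goes first
        sequences.append(tokenizer(conditioned)[:max_len])
    width = max(len(ids) for ids in sequences)
    batch = torch.full((len(sequences), width), pad_id, dtype=torch.long)
    labels = torch.full_like(batch, -100)              # -100 = ignored by cross_entropy
    for i, ids in enumerate(sequences):
        batch[i, :len(ids)] = torch.tensor(ids)
        labels[i, :len(ids)] = torch.tensor(ids)
    return batch, labels

def training_step(model, batch, labels, optimizer):
    """One ordinary next-token prediction step on the reward-conditioned batch."""
    logits = model(batch)                              # (B, T, vocab)
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),   # predict token t+1 from token t
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time you then prepend the high-reward token to the prompt to ask the model for a trajectory of that kind, just as goal-conditioned RL and decision transformers condition on desired returns.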