@urusan there was a paper about that which was linked on fedi. people have looked specifically into lstm-ish bots doing logic, and they don't demonstrate it; what they actually demonstrate is statistical heuristics or something similar.
they would probably need an architecture which actually allows them to cogitate for a while before saying words. and i'm not sure such a system would even be differentiable. (i would like to see how numenta's cortical models hold up on decision-making tasks; i think they've finished porting some of that to pytorch now. who knows if i'll feel like doing AI again this year.)
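for the "cogitate before saying words" idea, one existing differentiable take on it is adaptive computation time (Graves 2016): the recurrent cell loops on its hidden state until a learned halting unit decides it has pondered enough, and soft halting weights keep the whole thing trainable by gradient descent. below is a minimal, simplified pytorch sketch of that mechanism, not anyone's actual implementation; the module and variable names are illustrative, and the remainder-handling from the paper is omitted.

```python
import torch
import torch.nn as nn

class PonderCell(nn.Module):
    """simplified ACT-style cell: variable 'thinking' steps per input."""

    def __init__(self, input_size, hidden_size, max_steps=10, eps=0.01):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.halt = nn.Linear(hidden_size, 1)  # per-step halting probability
        self.max_steps = max_steps
        self.eps = eps  # stop once halting mass reaches 1 - eps

    def forward(self, x, h):
        total_halt = torch.zeros(x.size(0), 1, device=x.device)
        weighted_h = torch.zeros_like(h)
        for _ in range(self.max_steps):
            h = self.cell(x, h)
            p = torch.sigmoid(self.halt(h))
            # clip so the halting probabilities never sum past 1
            p = torch.minimum(p, 1 - total_halt)
            # soft average over ponder steps keeps everything differentiable
            weighted_h = weighted_h + p * h
            total_halt = total_halt + p
            if bool((total_halt >= 1 - self.eps).all()):
                break
        return weighted_h  # state after a variable amount of cogitation

cell = PonderCell(input_size=8, hidden_size=16)
x = torch.randn(4, 8)
h = torch.zeros(4, 16)
print(cell(x, h).shape)  # torch.Size([4, 16])
```

the trick is that "how long to think" is never a hard branch: each step's state is blended in with weight p, so gradients flow through the halting decision even though the number of steps varies per input.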