Outline of the problem at hand is as follows: induction (machine learning, typically data compression) is clearly the most effective method of storing data about a given environment. Abduction, which I outlined in my post (will edit link in a sec), requires reasoning about the data stored during the inductive step at a low level. (I saw a paper a while ago showcasing visual abduction through induction, essentially image recognition but with cause/effect relationships rather than regular image labels, but this was high-level rather than low-level. It may still have some promise, so that could be a starting point.)

So in order to reason about this stored data, it either needs to not be compressed (which would require compromises on performance and the creation of a new symbolic induction framework), or it needs to somehow work with the compressed data anyway (the only routes I can see for this are the previously mentioned high-level idea or, more broadly, "minimizing surprise").

My gut reaction is to prioritize work on the first of these, since it seems to be more of a profound paradigm shift and might even play well with scaling if a much weaker inductive system can be justified; on the other hand, the second will probably be easier. Realistically though, compression should still be used wherever possible. Both approaches seem to be feasible projects.
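To make the trade-off concrete, here is a toy sketch (everything in it is illustrative, not a proposed implementation): an "inductive" store that just compresses the raw observation log, versus an uncompressed symbolic store of cause/effect facts that supports a crude abductive query. The cause/effect names and the `abduce` helper are invented for the example; the point is only that the compressed bytes are opaque to reasoning unless you decompress first, while the symbolic store pays in size for being queryable.

```python
import zlib

# Toy environment log: (cause, effect) observations, repeated to give
# the compressor something to work with. Names are illustrative.
observations = [("rain", "wet_grass"),
                ("sprinkler", "wet_grass"),
                ("sun", "dry_grass")] * 50

# Inductive store: compress the serialized log. Compact, but the
# resulting bytes are opaque -- no symbolic reasoning without
# decompressing first.
raw = repr(observations).encode()
compressed = zlib.compress(raw)
assert len(compressed) < len(raw)  # compression = shorter description

# Symbolic store: uncompressed facts support a low-level abductive
# query -- given an observed effect, enumerate candidate causes.
facts = set(observations)

def abduce(effect):
    """Return the causes that would explain the observed effect."""
    return sorted(cause for cause, eff in facts if eff == effect)

print(abduce("wet_grass"))
```

In this toy the symbolic path is trivially easy because nothing is actually learned; the hard part the post is pointing at is getting that kind of queryability out of (or alongside) a compressed inductive representation.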