huh. neat.
reading this paper on sparse neural network training. they mention trying to track which parts of layers tend to be used or not.
this is what an old Ukrainian scientist did when he made something called GMDH. he didn't make a huge mesh and mask it out--his method grows the network layer by layer, prunes it, and continues.
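(not his actual code or any real library, just a rough numpy sketch of the grow-then-prune idea as i understand it: each layer fits small quadratic units on pairs of the previous layer's outputs, keeps only the few that do best on held-out data, and stops growing when that stops improving. names like `gmdh`, `width`, `quad_features` are made up for the example.)

```python
import numpy as np
from itertools import combinations

def quad_features(a, b):
    # design matrix for one candidate unit: quadratic polynomial in two inputs
    return np.column_stack([np.ones_like(a), a, b, a * b, a * a, b * b])

def fit_unit(a, b, y):
    # least-squares fit of a single candidate unit on the training split
    X = quad_features(a, b)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def gmdh(X_train, y_train, X_val, y_val, width=8, max_layers=10):
    # grow one layer at a time; keep only the best `width` units judged on the
    # validation split ("external criterion"); stop when the best unit stops improving.
    # assumes the inputs have at least two columns.
    F_tr, F_va = X_train, X_val        # current layer's inputs
    best_err = np.inf
    for _ in range(max_layers):
        candidates = []
        for i, j in combinations(range(F_tr.shape[1]), 2):
            coef = fit_unit(F_tr[:, i], F_tr[:, j], y_train)
            pred_tr = quad_features(F_tr[:, i], F_tr[:, j]) @ coef
            pred_va = quad_features(F_va[:, i], F_va[:, j]) @ coef
            err = np.mean((pred_va - y_val) ** 2)
            candidates.append((err, pred_tr, pred_va))
        candidates.sort(key=lambda c: c[0])
        if candidates[0][0] >= best_err:           # no improvement: stop growing
            break
        best_err = candidates[0][0]
        keep = candidates[:width]                  # prune: only survivors feed the next layer
        F_tr = np.column_stack([c[1] for c in keep])
        F_va = np.column_stack([c[2] for c in keep])
    return best_err
```

the point is the same thing described above: units that don't help on held-out data never make it into the next layer, so there's no big mesh to mask out after the fact.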
i discovered this because some obscure mac developer uses this method in software he sells to financial forecasting professionals.