@KuteboiCoder i'll take a look, but yeah. especially some of the stuff like liquid state machines being tested on like, 2,000-neuron "reservoirs" and detecting features in video. or KAN networks, which are more expensive to train (because each activation function is a learnable multi-point b-spline) but need so many fewer units that you still come out ahead.
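(rough sketch of why those fixed reservoirs are so cheap: the big recurrent weight matrix is random and never trained, only a small linear readout gets fit on top. this is a rate-based echo-state stand-in for a spiking LSM, and every name/size here is just illustrative, not anyone's actual setup:)

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_OUT = 64, 2000, 10          # ~2k-neuron reservoir like the one mentioned above

W_in  = rng.uniform(-0.5, 0.5, (N_RES, N_IN))    # fixed input weights, never trained
W_res = rng.uniform(-0.5, 0.5, (N_RES, N_RES))   # fixed recurrent weights, never trained
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep spectral radius < 1 so dynamics don't blow up

def run_reservoir(inputs, leak=0.3):
    """Drive the fixed reservoir with a sequence (T, N_IN) and collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-4):
    """Only this linear readout is trained (ridge regression) -- the cheap part."""
    A = states.T @ states + ridge * np.eye(N_RES)
    return np.linalg.solve(A, states.T @ targets)   # W_out: (N_RES, N_OUT)

# toy usage: random vectors standing in for per-frame video features
feats  = rng.normal(size=(500, N_IN))
labels = rng.normal(size=(500, N_OUT))
S      = run_reservoir(feats)
W_out  = train_readout(S, labels)
preds  = S @ W_out
```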
it also looks like deep learning is a meme. the encoder/decoder networks seem to be fixed by genetics or some one-off initialization process, and only one or two layers at the back actually handle interpreting (and, maybe, communicate with themselves in loops--as is theorized with the phonological loop, esp. people talking to themselves to solve problems)
@KuteboiCoder some of the people in the spike net space are either not using gpus at all (just a rack of cpu servers and evolution) or just using one chunky GPU, because the networks are so small you can tinker with them on just a $2,000 desktop machine :blobcatgamer: