iced depresso (icedquinn@blob.cat)'s status on Thursday, 19-Sep-2024 03:59:11 JST
@s8n @picofarad kurzweil wrote about it in How to Create a Mind. they used bandpass filters to emulate a cochlea, then quantized the filter outputs against a codebook so they could feed discrete symbols to hidden markov models in dragon speech.
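a minimal sketch of that front-end idea: band energies per frame, then nearest-codebook-entry quantization into a symbol an HMM could consume. all filter bands, the probe-frequency trick, and the codebook entries here are made up for illustration, not dragon's actual design.

```python
import math

def band_energies(frame, sample_rate=8000,
                  bands=((100, 400), (400, 1600), (1600, 3400))):
    """Crude 'filter bank': Goertzel-style energy probed inside each band."""
    n = len(frame)
    energies = []
    for lo, hi in bands:
        total = 0.0
        # probe a few frequencies spread across the band
        for k in range(5):
            f = lo + (hi - lo) * k / 4
            w = 2 * math.pi * f / sample_rate
            re = sum(s * math.cos(w * t) for t, s in enumerate(frame))
            im = sum(s * math.sin(w * t) for t, s in enumerate(frame))
            total += (re * re + im * im) / n
        energies.append(total)
    return energies

def quantize(vec, codebook):
    """Vector quantization: index of the nearest codebook entry (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(vec, codebook[i]))

# synthetic 20 ms frame: a 250 Hz tone, which should land mostly in band 0
frame = [math.sin(2 * math.pi * 250 * t / 8000) for t in range(160)]
e = band_energies(frame)

# toy 3-entry codebook; real systems learn hundreds of entries from data
codebook = [[40.0, 1.0, 1.0], [1.0, 40.0, 1.0], [1.0, 1.0, 40.0]]
symbol = quantize(e, codebook)  # this discrete symbol is what the HMM sees
```

a real recognizer would emit one such symbol per frame and model the symbol sequence with the markov chain; this just shows the analog-to-symbol step.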
the reason dragon speech worked so well is that they tuned it with evolutionary solvers (gradient descent wasn't hip yet) instead of expectation-maximization.
E-M works but is kind of bad, which is why the TTS voices sound machiney (it's just averaging a lot of states together). somebody went and re-did the old mixture model tests with gradient descent and found they did just fine.
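to make the contrast concrete: the same mixture model EM would fit can also be fit by plain gradient ascent on the log-likelihood. this is a toy 1-D sketch (two gaussians, fixed unit variances, equal weights, made-up data), not the actual experiment referenced above.

```python
import math, random

random.seed(0)

# synthetic data from two clusters at -2 and 3
data = [random.gauss(-2.0, 1.0) for _ in range(200)] + \
       [random.gauss(3.0, 1.0) for _ in range(200)]

def gauss(x, mu):
    # unit-variance normal density
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def log_lik(mus):
    # log-likelihood of an equal-weight two-component mixture
    return sum(math.log(0.5 * gauss(x, mus[0]) + 0.5 * gauss(x, mus[1]))
               for x in data)

mus = [0.5, -0.5]  # deliberately poor starting guess
lr = 0.1
for _ in range(300):
    grads = [0.0, 0.0]
    for x in data:
        p0 = 0.5 * gauss(x, mus[0])
        p1 = 0.5 * gauss(x, mus[1])
        tot = p0 + p1
        # d(log-lik)/d(mu_k) = responsibility_k * (x - mu_k)
        grads[0] += (p0 / tot) * (x - mus[0])
        grads[1] += (p1 / tot) * (x - mus[1])
    mus = [m + lr * g / len(data) for m, g in zip(mus, grads)]
# mus ends up near the true cluster centers, no E-M averaging step needed
```

the gradient here is the same responsibility-weighted term E-M computes in its E step; the difference is you just follow it instead of solving the averaged M-step update in closed form.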
(dragon was also capable of adaptation to the individual speaker, which none of the google-shit tech does.)