@hobbsc Yeah. I think so.
I feel like the technology is in the "dot matrix home printer" phase, and the next moves will be more and more about miniaturizing things. There are already "small language models" (smaller, more domain-specific training data) and 1-bit models (BitNet-style, where weights are 1-bit or ternary values instead of 16-bit floats, with similar performance but a way smaller footprint).
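To put rough numbers on "way smaller" (back-of-envelope arithmetic only, using a hypothetical 7B-parameter model; real checkpoints add some overhead):

```python
# Approximate weight storage for a 7B-parameter model at different precisions.
PARAMS = 7_000_000_000

fp32 = PARAMS * 4            # 32-bit floats (training precision)
fp16 = PARAMS * 2            # 16-bit floats, the usual release format
ternary = PARAMS * 1.58 / 8  # ~1.58 bits/weight for ternary {-1, 0, +1}

for name, size in [("fp32", fp32), ("fp16", fp16), ("~1.58-bit", ternary)]:
    print(f"{name}: {size / 1e9:.1f} GB")
```

So the same parameter count drops from ~14 GB at fp16 to under 1.5 GB, which is what makes running these things on hobbyist hardware plausible.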
In those new cases you can't really reuse the current models as-is; you have to retrain from scratch. So the big players don't want to do it, but more hobbyists can DIY.