@jarango I’ll be honest: I genuinely believe the future for LLM-as-a-service style APIs is very limited once VC fervor dies down and the costs of training and running the models at scale are passed on to end users.
That said, for projects at the scale we’re talking about (and the average enterprise web site is well within that range), I think increasingly efficient locally hosted models will become a standard part of the kit for analysis and experimentation.