@andersbiork @peacememories @Cheeseness @aral Thanks for the tag! The model in production is also Mistral, which we host on our own server using vLLM. The only reason the code references OpenAI is that everyone seems to have settled on that as the standard API.
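To illustrate what "standard API" means here: vLLM exposes an OpenAI-compatible endpoint, so the request shape is the usual chat-completions JSON, just pointed at your own host. A minimal stdlib sketch, assuming a vLLM server on localhost:8000 and a hypothetical model name:

```python
import json
from urllib import request

# vLLM serves an OpenAI-compatible endpoint, so any client that speaks
# that API can talk to a self-hosted model instead of api.openai.com.
BASE_URL = "http://localhost:8000/v1"  # local vLLM server (assumed)

payload = {
    # whatever model vLLM was launched with; this name is an assumption
    "model": "mistralai/Mistral-7B-Instruct-v0.3",
    "messages": [{"role": "user", "content": "Summarize this page in two lines."}],
    "max_tokens": 64,
}

req = request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req) would return an OpenAI-style JSON response:
# {"choices": [{"message": {"role": "assistant", "content": ...}}], ...}
```

The official `openai` client works the same way if you pass it a custom `base_url`, which is why code written against OpenAI runs unchanged against a self-hosted model.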
The summary shows only a couple of lines, so it can be seen as an expensive snippet that is computed on demand, only when the user requests it. Anyone who wants more information has to visit the original site.
[continued]