Perhaps I need to call out more explicitly what is actually the scariest part here: if you use this product, you're letting an application take control of your computer, driven by the output of a large language model. I know better than to describe an LLM as "just" a next-word predictor, because we've all seen how surprisingly powerful that can be. But still, it's all too common for LLMs to output things that don't make sense, especially when venturing outside their training data.