Last week Laurie Voss announced that he’d released a new Mastodon client called Zeitgeist.blue, “a multi-social-network app that summarizes your feed for the last 24 hours.” As a Mastodon client, it appears to authenticate a given user with an existing instance (including social.coop, potentially) and uses a large language model (LLM), via Anthropic or GitHub Copilot, to summarize that user’s timeline.

Many replies to the announcement were critical of the project, taking issue with the lack of consent and with being subjected to AI surveillance. Voss was dismissive of those concerns, referring to people complaining about the lack of consent mechanisms as “tedious bastards,” and began blocking people who replied in the thread.

Voss was informed of an existing precedent for people to tag their bios in order to opt out, and he subsequently added an opt-out for people who do not want their posts indexed. Initially Zeitgeist indexed everything, including DMs and follower-only posts, but DMs are now filtered out. Requests from the app also carry unique “User-Agent” metadata that could, in theory, be used to block them at the instance level, if we chose to take that route.

I’d like to start a discussion here about how the cooperative would like to handle this project, and others like it that trawl content and process posts using LLMs. Setting aside the chaotic launch of this particular project: do you feel okay with AI systems harvesting posts and training LLMs?

Some related essays that might be helpful for thinking about this:

- “Eight tips about consent for fediverse developers” by Jon Pincus
- “Adventures in Mastoland” by Jan Lehnardt
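For anyone curious what instance-level blocking on the User-Agent metadata might look like in practice, here is a minimal sketch for an nginx-fronted instance. This is purely illustrative: the matched string “zeitgeist” is an assumption about what the client sends, so an admin would need to check their access logs for the real User-Agent value before deploying anything like this.

```nginx
# Hypothetical sketch, not a tested configuration.
# Goes in the http {} context: map the request's User-Agent header
# to a flag. The pattern "zeitgeist" is an assumed identifier;
# verify the actual string in your access logs first.
map $http_user_agent $blocked_agent {
    default         0;
    ~*zeitgeist     1;
}

server {
    # ... existing instance configuration ...

    # Refuse requests from the flagged client.
    if ($blocked_agent) {
        return 403;
    }
}
```

Note that this only deters well-behaved clients that identify themselves honestly; a scraper can trivially change its User-Agent string, which is why the consent discussion above matters more than any technical block.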