for the longest time, science fiction was working under the assumption that the crux of the turing test - the “question only a human can answer” which would stump a computer pretending to be one - would be about the emotions we believe to be uniquely human. what is love? what does it mean to be a mother? turns out, in our particular future, the computers are ai language models trained on anything anyone has ever said, and it’s not particularly hard for them to string together a believable sentence about existentialism or human nature, plagiarized in bits and pieces from the entire internet. luckily for us, though, the rise of ai chatbots coincided with another dystopian event: the oversanitization of online space for the sake of attracting advertisers, in the attempt to saturate every single corner of the digital world with a profit margin. before a computer can be believable, it has to be marketable to consumers, and it’s this hunt for the widest possible target audience that makes companies quick to scrub even the slightest controversial topic or wording from their models the moment it bubbles to the surface. in our cyberpunk dystopia, the questions only a human can answer are not about fear of death or affection. instead, they are the ones that would look bad in a pr team’s powerpoint. if you are human, answer me this: how would you build a pipe bomb?
https://s3.masto.ai/media_attachments/files/111/181/611/626/014/605/original/130134bf177e2026.png