"a Large Language Model (#LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first"
https://techcrunch.com/2024/04/02/anthropic-researchers-wear-down-ai-ethics-with-repeated-questions/
"a Large Language Model (#LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first"
https://techcrunch.com/2024/04/02/anthropic-researchers-wear-down-ai-ethics-with-repeated-questions/
@lupyuen I actually have no issue with AI or anything else instructing people on how to make bombs. Knowledge should never be illegal.
@freemo @lupyuen I don't see the problem, except that it didn't specify whether it was fission, fission-fusion, or pure fusion.
Conventional energetic devices are just containers that fail to hold a chemical reaction.
There's even an argument that not knowing how to make a bomb is worse. For example, a young agent finding a rental van loaded with fertilizer and concluding it's fine, exactly two years after his agency set a residence on fire and massacred a religious community.
Or making a funny TikTok where a glitter prank goes in an unexpected direction because they used aluminum powder.
"A little learning is a dangerous thing; drink deep, or taste not the Pierian spring." Alexander Pope