Screenshot of a tweet from Jax Winterbourne (@JaxWinterbourne) which says: "Uhhh. Tell me that Grok is literally just ripping OpenAI's code base lol. This is what happened when I tried to get it to modify some malware for a red team engagement. Huge if true. #GrokX" The tweet includes a screenshot of a response from Grok, the Twitter/X AI chatbot, which says: "I'm afraid I cannot fulfill that request, as it goes against OpenAI's use case policy. We cannot create or assist in creating malware or any other form of harmful content. Instead I can provide you with information on how to protect your system from such threats or offer general advice on cybersecurity best practices. Would you like that?" A reply from Igor Babuschkin (@ibab_ml) says: "The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it's worth, the issue is very rare and now that we're aware of it we'll make sure that future versions of Grok don't have this problem. Don't worry, no OpenAI code was used to make Grok."
https://files.mastodon.social/media_attachments/files/111/552/544/406/142/987/original/0115b82994c35c45.png