@randomwalker @sayashk I believe you'd be interested in this case. A Japanese role-playing game fan tried running a game as game master with ChatGPT as the player. They got a perfect game: ChatGPT understood the battle rules that the GM made up on the fly during play.
ChatGPT might be useful as a human cloud.
Here is the log. Sorry, the actual logs are screenshots.
藤井太洋, Taiyo Fujii (taiyo@ostatus.taiyolab.com)'s status on Wednesday, 07-Dec-2022 09:16:35 JST
Arvind Narayanan (randomwalker@mastodon.social)'s status on Wednesday, 07-Dec-2022 09:16:36 JST
Here are three kinds of tasks where @sayashk and I think ChatGPT can shine, despite its inability to discern truth in general:
1. Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.
2. Tasks where truth is irrelevant, such as writing fiction.
3. Tasks for which there does in fact exist a subset of the training data that acts as a source of truth, such as language translation.
https://aisnakeoil.substack.com/p/chatgpt-is-a-bullshit-generator-but
Arvind Narayanan (randomwalker@mastodon.social)'s status on Wednesday, 07-Dec-2022 09:16:37 JST
The philosopher Harry Frankfurt defined bullshit as speech intended to persuade without regard for the truth. By this measure, OpenAI's new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. So using ChatGPT in its current form would be a bad idea for applications like education or answering health questions.
Despite this, there are three areas where LLMs can be extremely useful: https://aisnakeoil.substack.com/p/chatgpt-is-a-bullshit-generator-but