I have to admit, I'm often impressed when I ask GPT-4 Japanese language questions.
(Assuming, of course, that this is correct; a casual secondary web search suggests it's not wrong, but if any Japanese speakers disagree, please let me know.)
Conversation
マリオ (Mario Menti) (mario@neko.cat), Sunday, 04-Jun-2023 17:49:08 JST (original post)
マリオ (Mario Menti) (mario@neko.cat), Sunday, 04-Jun-2023 18:18:41 JST:
@shiawase agreed, but this really applies to the Internet as a whole too. You would rarely take things at face value without double-checking (except perhaps if you really do trust a specific source). You're right, though, that it's easy to believe ChatGPT because it's so eloquent, even when it's not accurate.
Robert Belton (shiawase@mastodon.social), Sunday, 04-Jun-2023 18:18:42 JST:
@mario It's the sense of distrust in the accuracy of the output that diminishes the usefulness for me. It lacks authority because there's always that sense of "I should check that", which somehow defeats the purpose. It's the catch-22 of needing expertise to catch any errors.
If I found the info in a book or on (most) websites, I'd believe it. A GPT algorithm? Not so much. Maybe because, despite how it's marketed, expertise is not its purpose; plausible English prose is.
Robert Belton (shiawase@mastodon.social), Sunday, 04-Jun-2023 19:58:27 JST:
@mario I think one of the tools people use for judging sites (or expertise) is how they are written and presented. That partially goes out the window now with ChatGPT.
There's also reputational trust, e.g. the BBC or NHK.
Even with websites for Japanese, I assume good intent and human oversight, with content sourced from books and experience.
It's possible to judge a site as a whole, or to see that maybe there's a specific area of it you don't value.
The Internet skill is in judging sites. GPT makes that harder.
マリオ (Mario Menti) (mario@neko.cat), Sunday, 04-Jun-2023 19:58:27 JST:
@shiawase true, but I expect we will all gain GPT skills just like we gained Internet skills...
マリオ (Mario Menti) (mario@neko.cat), Monday, 05-Jun-2023 04:45:25 JST:
@shiawase I think there's plenty of opportunity for domain-specific AI. We're seeing this already in many fields, and IMO there's no reason there couldn't be a Japanese-English model trained specifically for that purpose.
Sort of related: I saw an article/academic paper recently where they gave LLMs a lot of context and made them do research in a specific field before doing a translation (i.e. similar to what a human translator would do), and supposedly the results were much better than what current machine translation models can do.
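(The paper isn't named in the thread, but the "research before translating" idea can be sketched as a simple two-step prompt pipeline. The code below is a rough, hypothetical illustration of that idea, not the paper's actual method: the call_llm() helper, the prompt wording and the Japanese-to-English framing are all assumptions of mine.)

# Minimal sketch of context-augmented translation: first ask the model to
# gather domain notes, then feed those notes into the translation prompt.
# call_llm() is a hypothetical placeholder for whatever chat-model API you use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wrap your chat model of choice here")


def translate_with_research(source_text: str, domain: str) -> str:
    # Step 1: have the model summarise domain terminology and conventions,
    # roughly what a human translator would look up before starting.
    research = call_llm(
        f"List key terminology, register and style conventions for translating "
        f"{domain} texts from Japanese to English, given this passage:\n\n{source_text}"
    )
    # Step 2: translate with that research supplied as extra context.
    return call_llm(
        "Using the notes below, translate the Japanese text into natural English, "
        "preserving terminology and register.\n\n"
        f"Notes:\n{research}\n\nJapanese text:\n{source_text}"
    )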
Robert Belton (shiawase@mastodon.social), Monday, 05-Jun-2023 04:45:26 JST:
@mario Maybe. Once I assess a site's trustworthiness, it's more or less constant. ChatGPT has to be evaluated each time. The hope seems to be that if it comes up with proper prose, that prose will also be accurate and factual.
If some respected organisation came out with an "Expert Japanese-English Teacher", I'd probably trust that over ChatGPT for accurate information (rather than a summary or starting point).
Time will tell.
Robert Belton (shiawase@mastodon.social), Monday, 05-Jun-2023 06:19:22 JST:
@mario Because I don't have a background in this, it's hard to get a handle on how LLMs work and what processes they go through.
I heard a presentation that talked about how expert training didn't produce as good a result as just increasing the size of the LLM (which is what GPT-4 did).
Leaving aside factual accuracy, going from a single-sentence question prompt to several paragraphs of a plausible answer in good English through a program is, frankly, amazing.
マリオ (Mario Menti) (mario@neko.cat), Monday, 05-Jun-2023 06:19:22 JST:
@shiawase I think there are limits to how much you can increase the size, both in terms of sheer cost and diminishing returns. I'm no expert either, though! Interesting, for sure.
Robert Belton (shiawase@mastodon.social), Monday, 05-Jun-2023 06:19:23 JST:
@mario I could see how it would improve translation (I think). The problem is more constrained (but not simple) with translation: you have a large prompt and only so many ways of outputting a meaningful matching result. And while we expect accuracy, we aren't asking for extra information that isn't present in the original text (leaving aside stylistic choices etc.).
Is training an LLM really it doing research? Can it 'do' anything? There's so much anthropomorphic language around 'AI'.