GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

    Ulrike Franke (rikefranke@bagarrosphere.fr)'s status on Tuesday, 07-Oct-2025 18:53:43 JST

    And here we go. I never wrote this article, and yet it is cited here.

    https://www.liberalbriefs.com/geopolitics/power-without-mandate-the-perils-of-germanys-military-ambition

    And of course, it sounds so plausible, I seriously checked whether I had forgotten it, or the footnote was slightly wrong.

    The other sources are hallucinated too.

    #AIisnotresearch

    In conversation about 2 months ago from bagarrosphere.fr

    Attachments


    1. https://bagarrosphere.fr/system/media_attachments/files/115/331/979/642/906/104/original/f1d881227a9bf60b.jpeg
      es0mhi (es0mhi@tilde.zone)'s status on Tuesday, 07-Oct-2025 21:59:42 JST
      in reply to @rikefranke

      @rikefranke

      The least you could do now is prompt the AI to write the article. Show some empathy and wash away its shame! We are all human ;-)

      In conversation about 2 months ago
      Kermode (gemlog@tilde.zone)'s status on Tuesday, 07-Oct-2025 23:15:58 JST
      in reply to Drew Kadel

      @rikefranke
      No brain to 'hallucinate'. The program functioned as designed.

      Drew Kadel @DrewKadel wrote:

      My daughter, who has had a degree in computer science for 25 years, posted this observation about ChatGPT on Facebook. It's the best description I've seen:

      "Something that seems fundamental to me about ChatGPT, which gets lost over and over again: When you enter text into it, you're asking "What would a response to this sound like?" If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing! But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it is doing something else. It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation."

      Calling it 'auto-complete on steroids' is close, but not giving it proper credit.

      In conversation about 2 months ago
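
The "say something that sounds like the next bit of the conversation" behaviour the quoted post describes can be illustrated with a deliberately crude sketch: a bigram Markov chain, vastly simpler than ChatGPT but sharing the failure mode, in that it can only ask "what word tends to come next?" and has no notion of truth. Every sentence and journal name in the toy corpus below is invented for the example:

```python
import random
from collections import defaultdict

# Tiny made-up corpus of citation-shaped sentences.
corpus = (
    "the study was published in the journal of strategic studies . "
    "the article was published in the journal of security affairs . "
    "the paper was cited in the journal of defence policy ."
).split()

# Bigram table: for each word, the words observed to follow it.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def complete(word, length=8, seed=0):
    """Extend `word` one statistically plausible token at a time."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = next_words.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Emits fluent, citation-like text that refers to no real source --
# not lying, just continuing plausibly.
print(complete("published"))
```

Prompted with "published", the chain always continues "in the ..." and then splices together journal-sounding fragments from different sentences. The output is fluent precisely because it was assembled from what citations sound like, which is the whole point of the quoted description.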

GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.