GNU social JP
GNU social JP is a GNU social server in Japan.

Conversation

Notices

Evan Prodromou (evan@cosocial.ca)'s status on Friday, 18-Apr-2025 23:44:05 JST

    Thanks to everyone for the responses. I am a yes, but. I think it's good to optionally hook into a single local agent across applications. A lot of apps today support the Ollama API, which is probably a good one to use.

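For context on the API mentioned above: Ollama exposes a plain HTTP interface on localhost, which is why it is easy for many applications to hook into one shared local agent. A minimal sketch of a call, assuming a local Ollama server on its default port and an already-pulled model (the model name and prompt are illustrative):

    import json
    import urllib.request

    # Ask a locally running model a question via the Ollama HTTP API.
    # Assumes an Ollama server on its default port (11434); the model
    # name and prompt are illustrative placeholders.
    payload = {
        "model": "llama3",
        "prompt": "Summarize what ActivityPub is in one sentence.",
        "stream": False,  # return a single JSON object rather than a stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
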
Evan Prodromou (evan@cosocial.ca)'s status on Friday, 18-Apr-2025 23:45:57 JST

      I had a few replies that cursed me to the deepest hells for suggesting that AI could be something that people might like or want. Sorry not sorry! 💃🏼

jdw 🍁 (jdw@cosocial.ca)'s status on Saturday, 19-Apr-2025 01:02:16 JST

      @evan it is a tool, and like any other it has great use cases and bad ones. Its performance is directly related to how closely its intended use case and the user's intention are aligned. Those who are shrill about it generally have very little experience using it correctly, or little experience using it at all. I would bet 90% of the population has zero use case for AI, so their experience amounts to use cases concocted to play with it, or simply parroting what others say.

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 04:24:37 JST, in reply to tom jennings

      @tomjennings I'm not a fan of the "stolen work" idea. I don't think it's accurate. The output of that indexing process isn't a verbatim copy of the original corpus of text, but something much more like a search engine index. The generated text often includes facts and ideas from the original works, but those aren't covered by copyright or other IP protection. I agree that LLM training bots should respect robots.txt and other signals that the authors don't want their work used for training.

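As an aside on the robots.txt point: honoring such signals before fetching is simple to implement. A sketch using Python's standard-library robots.txt parser, with the bot name and URLs purely illustrative:

    from urllib.robotparser import RobotFileParser

    # Check a site's robots.txt before fetching a page, e.g. to gather
    # training data. The bot name and URLs are illustrative placeholders.
    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    if rp.can_fetch("ExampleTrainingBot", "https://example.com/some/article"):
        print("allowed to fetch")
    else:
        print("disallowed; skip this page")
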
tom jennings (tomjennings@tldr.nettime.org)'s status on Saturday, 19-Apr-2025 04:24:38 JST

      @evan

      Minus the cursing -- speaking of the big LLM service providers, there is no ethical use case for their stolen/stealing work. They are part of the trump regime taking down the US. Any technical sweetness is vastly overshadowed by the straight up evil they do.

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 04:26:33 JST, in reply to ohmrun

      @ohmrun I don't understand your point.

ohmrun (ohmrun@c.im)'s status on Saturday, 19-Apr-2025 04:26:34 JST

      @evan
      What chance does the average CPU have against a water-cooled nuclear power plant? What are you talking about?
      You can extend this analogy as far as you like.

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 04:33:35 JST, in reply to jdw 🍁

      @jdw I'm not so sure of that number. I find using Llama for simple research extremely useful. I can ask questions in plain English, without having to formulate weird query strings as for Google searches. I especially like using Llama for asking questions that collate different kinds of information.

jdw 🍁 (jdw@cosocial.ca)'s status on Saturday, 19-Apr-2025 05:11:35 JST

      @evan I admit to having no source for that number. 😀 I use Perplexity, and it has almost entirely replaced search engines for me, mostly for the reasons you say: I can ask some pretty vague stuff, or ask for an intro to a topic, things like that.

      The research model is useful to me. I am planning a cross-country move and have asked it to create “new resident” booklets on the various towns we’re considering. The reasoning model is good at planning stops and gas costs for the trip.

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 05:13:16 JST, in reply to ohmrun

      @ohmrun So, you asked how a CPU could compete with a nuclear power plant. I *think* you're saying that a local LLM model cannot work as well as a centralized one. I think that's possible, but I also think there are certain things I need from an LLM that seem to work just fine locally -- such as doing research, summarizing text, or developing code.

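To make the "works fine locally" point concrete, a sketch of local text summarization against the same kind of Ollama-style endpoint shown earlier; the model name is an illustrative placeholder, and nothing leaves the machine:

    import json
    import urllib.request

    # Summarize text with a locally running model via Ollama's chat
    # endpoint. The model name is an illustrative placeholder.
    def summarize_locally(text: str, model: str = "phi3") -> str:
        payload = {
            "model": model,
            "messages": [
                {"role": "user", "content": "Summarize in two sentences:\n\n" + text}
            ],
            "stream": False,
        }
        req = urllib.request.Request(
            "http://localhost:11434/api/chat",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["message"]["content"]

    print(summarize_locally("ActivityPub is a decentralized social networking protocol ..."))
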
ohmrun (ohmrun@c.im)'s status on Saturday, 19-Apr-2025 05:13:17 JST

      @evan
      You seem to be deliberately naive. There isn't a creative person alive who isn't being robbed by these cretins, so minting coinage on the back of it is perverse in the extreme.

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 05:15:50 JST, in reply to ohmrun

      @ohmrun I'm not sure I agree that creative people are being robbed by LLMs. I think the argument gets even harder to make for local, Open Source LLMs. Nobody is making money from my local Phi model. I think that LLMs work much more like search indexers than like archivers; they don't usually emit verbatim copies of their training set. When LLMs share facts and ideas that came from their training set, that seems pretty reasonable. I do think indexers should respect robots.txt.

ohmrun (ohmrun@c.im)'s status on Saturday, 19-Apr-2025 05:18:40 JST

      @evan
      It's a fractal of horrible nastiness, I can't even begin to communicate it.

      If the market value of independent thought approaches zero, what happens to the market for independent thought?

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 06:07:53 JST, in reply to jdw 🍁

      @jdw yeah, I feel like that kind of research is pretty common. I don't think everyone likes it, but I think that more than 10% of people do. I'd say somewhere close to the percentage of people who like search engines.

tom jennings (tomjennings@tldr.nettime.org)'s status on Saturday, 19-Apr-2025 15:12:48 JST

      @evan

      I'm a hard disagree here. Scumbag corps are doing explicitly harmful things. There's no excuse for it, and IMHO using their output makes you complicit.

      I'm personally OK with drawing this as a bright line I will not cross.

Evan Prodromou (evan@cosocial.ca)'s status on Saturday, 19-Apr-2025 20:44:26 JST, in reply to tom jennings

      @tomjennings could you be clearer about what the harm is? For me, the biggest harm in creating LLMs is using content like images and text without the consent of the creator, by ignoring robots.txt files. I also think it's harmful to use output from LLMs without human review, especially if there are safety issues. Are those the harms you are talking about?

