GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. ben 🇵🇸 ui (ben@m.benui.ca)'s status on Tuesday, 07-May-2024 15:47:15 JST

    Stack Overflow announced that they are partnering with OpenAI, so I tried to delete my highest-rated answers.

    Stack Overflow does not let you delete questions that have accepted answers and many upvotes because it would remove knowledge from the community.

    So instead I changed my highest-rated answers to a protest message.

    Within an hour mods had changed the questions back and suspended my account for 7 days.

    In conversation about a year ago from m.benui.ca permalink

    Attachments

    1. www.community.so (thumbnail domain not in remote source whitelist)
      Congratulations! Your domain has been created with OVHcloud!
      from OVHcloud
      OVHcloud supports your growth with the best of web infrastructure: hosting, domain names, dedicated servers, CDN, Cloud, Big Data, ...

    2. https://cdn.masto.host/mbenuica/media_attachments/files/112/396/477/264/722/905/original/a87c9ffb23376561.png

    3. https://cdn.masto.host/mbenuica/media_attachments/files/112/396/481/292/921/049/original/ae4b4cbbe28b02ee.png
    • Pleroma-tan likes this.
    • Aral Balkan and Pleroma-tan repeated this.
    • ben 🇵🇸 ui (ben@m.benui.ca)'s status on Tuesday, 07-May-2024 16:15:00 JST
      in reply to

      It's just a reminder that anything you post on any of these platforms can and will be used for profit. It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you.

      In conversation about a year ago permalink
      clacke likes this.
      Pleroma-tan repeated this.
    • ben 🇵🇸 ui (ben@m.benui.ca)'s status on Tuesday, 07-May-2024 16:15:03 JST
      in reply to

      I'm requesting that my questions and answers be permanently deleted under GDPR.

      In conversation about a year ago permalink
    • Arne Babenhauserheide (arnebab@rollenspiel.social)'s status on Tuesday, 07-May-2024 16:19:24 JST
      in reply to
      • Martin Piper (he/him) 💙💛🌻💉

      @martin_piper it wasn’t given away. It was licensed under a sharealike license.

      But „AI“ is used to launder copyright so they can proprietarize what I only gave as free culture.
      @ben

      In conversation about a year ago permalink
      clacke likes this.
    • Martin Piper (he/him) 💙💛🌻💉 (martin_piper@mastodon.social)'s status on Tuesday, 07-May-2024 16:19:25 JST
      in reply to

      @ben you gave them information for free. You don't own it, they do. That was the working relationship.

      Imagine if you were working for a company producing work and you suddenly tried to sabotage that work. That's what you were trying to do, sabotage it. They would be perfectly within their rights to restrict your access.

      The moral of this story is, if you want to retain ownership then don't give it away (for free) to someone else.

      In conversation about a year ago permalink
    • Aral Balkan (aral@mastodon.ar.al)'s status on Tuesday, 07-May-2024 20:00:40 JST
      in reply to

      @ben They’re not yours, they’re theirs. Jeff Atwood thanks you for your free labour. (I’m kidding, he doesn’t. Feel grateful he even allowed you to contribute in the first place, serf.)

      Speaking of Jeff Atwood, isn’t he the guy helping fund Mastodon now? 🤔

      #SiliconValley #PeopleFarming #JeffAtwood #surveillance #capitalism #AllYourDataAreBelongToUs

      In conversation about a year ago permalink
    • Aral Balkan (aral@mastodon.ar.al)'s status on Tuesday, 07-May-2024 21:43:04 JST
      in reply to
      • Vítor

      @vitor @ben Did he? (I can’t say I follow the every move of every Silicon Valley tech bro.)

      In conversation about a year ago permalink
    • Vítor (vitor@hachyderm.io)'s status on Tuesday, 07-May-2024 21:43:05 JST
      in reply to
      • Aral Balkan

      @aral @ben Jeff left Stack Exchange over a decade ago. What’s the sense in criticising him for this?

      In conversation about a year ago permalink
    • Aral Balkan (aral@mastodon.ar.al)'s status on Tuesday, 07-May-2024 21:45:45 JST
      in reply to
      • Vítor

      @vitor @ben Updated; thanks.

      In conversation about a year ago permalink
    • nus (nus@mstdn.social)'s status on Wednesday, 08-May-2024 17:09:14 JST
      in reply to
      • Jason Hunter

      @hunterhacker @ben with ChatGPT the answer gets turned into atoms and reconstructed, often with errors. It's not showing anyone's answer, it's showing a slop that approximates what it thinks looks most correct.

      You have no one to thank, no one to correct, and ChatGPT couldn't start to tell you where the answer came from even if it was 99% from one person.

      In conversation about a year ago permalink
      clacke likes this.
    • Jason Hunter (hunterhacker@mastodon.social)'s status on Wednesday, 08-May-2024 17:09:15 JST
      in reply to

      @ben Why do people care if someone like me gets your excellent answer to a coding question by typing my error message into Google (forwarding to SO) or into ChatGPT?

      In neither situation were you getting paid. In both situations the middle man makes a buck. In both situations I’m thankful you spent time helping me.

      Is it that with ChatGPT I don’t know who to thank?

      In conversation about a year ago permalink

    • matty matty matty (wuppy@wetdry.world)'s status on Wednesday, 08-May-2024 17:09:21 JST
      in reply to
      • Jason Hunter

      @hunterhacker @ben it's that chatgpt is fundamentally built off copyright infringement and theft. even if in this situation there's no profit being taken, in other situations there absolutely is. openai is fundamentally scummy, and it's good to push back if you can.

      In conversation about a year ago permalink
      clacke likes this.
    • Mighty Orbot (mighty_orbot@retro.pizza)'s status on Wednesday, 08-May-2024 22:51:48 JST
      in reply to

      @ben Stack Overflow has already been monetizing your answers with ads for years. If “used for profit” is your main complaint, you’re a little late.

      In conversation about a year ago permalink
      Pleroma-tan and Fish of Rage like this.
    • Fish of Rage (sun@shitposter.world)'s status on Thursday, 09-May-2024 00:12:10 JST
      in reply to
      • Lorenzo Stoakes
      • Vlastimil Babka
      • Petr Tesarik
      • Pavel Machek
      @ljs @ptesarik @ben @pavel @vbabka I started as skeptical as you but now I use it every day and even with it being wrong a lot it saves me a shitload of time, and I am very sure I am not just bad at measuring that time. Most of your objections aren't worse than relying on stackexchange or coworkers to find an answer and in regular use I am not running into cases where I'm getting bugs as a result of alien AI problem solving.

      The "steals software" and carbon-cost objections are value judgments, or a matter of perspective I guess; I respect your objections to LLMs on those points even if I disagree.

      Please have a wonderful morning/day/evening.
      In conversation about a year ago permalink
    • Lorenzo Stoakes (ljs@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:11 JST
      in reply to
      • Vlastimil Babka
      • Petr Tesarik
      • Pavel Machek
      @ptesarik @ben @pavel @vbabka the big problem is that people are very very bad at picking up on the kind of errors that an algorithm can generate.

      We all implicitly assume errors are 'human shaped' i.e. the kind of errors a human would make.

      An LLM can have a very good grasp of the syntax but then interpolates results in effect randomly as the missing component is a dynamic understanding of the system.

      As a result, they can introduce very very subtle bugs that'll still compile/run etc.

      People are also incredibly bad at assessing how much cost this incurs in practice.

      Having something that can generate such errors for only trivial tasks strikes me as being worse than having nothing at all.

      And the ongoing 'emperor's new clothes' issue with LLMs is that this problem is insoluble. Hallucination is an unavoidable part of how they work.

      The whole machinery of the thing is trying to infer patterns from a dataset, so at a fundamental level it's broken by design.

      That's before we get on to the fact that it needs human input to work (once you start feeding LLM-generated output back in, the whole thing collapses), so it couldn't work anyway at any long-term scale.

      That's before we get on to the fact it steals software and ignores license, the carbon costs and monetary costs of compute, and a myriad of other problems...

      The whole problem with all this is it's a very very convincing magic trick and works so well that people are blinded to its flaws.

      See https://en.wikipedia.org/wiki/ELIZA_effect?useskin=vector
      In conversation about a year ago permalink

      Attachments

      1. ELIZA effect (thumbnail domain upload.wikimedia.org not in remote source whitelist)
        In computer science, the ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface. The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum.
    • Petr Tesarik (ptesarik@fosstodon.org)'s status on Thursday, 09-May-2024 00:12:12 JST
      in reply to
      • Lorenzo Stoakes
      • Vlastimil Babka
      • Pavel Machek

      @ljs @ben @pavel @vbabka LLMs often turn one type of work (create) into another type of work (review), consuming lots of energy in the process. For some people, it may be worth it (although if they had to pay the full costs of LLMs, humans might still be cheaper).

      In conversation about a year ago permalink
    • Pavel Machek (pavel@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:13 JST
      in reply to
      • Lorenzo Stoakes
      • Vlastimil Babka
      @ljs @ben @vbabka Well, your arguments were a bit disappointing, too. LMs are useful for trivial tasks, and for easy tasks where you can verify the result. I do both kinds of tasks from time to time.
      In conversation about a year ago permalink
    • Lorenzo Stoakes (ljs@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:13 JST
      in reply to
      • Vlastimil Babka
      • Pavel Machek
      @pavel @ben @vbabka the ones so disappointing you entirely ignored them (because I guess it's beneath you to rebut them) and just said 'try it' as if I hadn't?

      LLMs have uses, I disagree with their use for tasks like programming for the reasons previously stated that you ignored so not going to repeat.
      In conversation about a year ago permalink
    • Lorenzo Stoakes (ljs@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:14 JST
      in reply to
      • Vlastimil Babka
      • Pavel Machek
      @pavel @ben @vbabka sigh you're disappointing me man.

      But like all LLM proponents (just like all crypto guys I spoke to before, just like all anti vax guys I spoke to before, just like all [insert religious-style belief] proponents I spoke to before) you won't actually rebut what I say, you'll just assume that 'I don't get it' on some level.

      I have tried LLMs dude, thanks for patronising me by assuming I haven't.

      Unfollow.
      In conversation about a year ago permalink
    • Lorenzo Stoakes (ljs@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:15 JST
      in reply to
      • Vlastimil Babka
      • Pavel Machek
      @vbabka @pavel @ben hint: LLMs have no understanding of anything, so they absolutely aren't suited to programming: they'll hallucinate in (often) subtle ways that fit the syntax, and people are notoriously bad at picking up on it.

      Also they still work without credit/license etc. The fact they appear to work for a lot of programming situations makes it even more dangerous.

      It'd be one thing if people were just using them but acknowledging their limitations, it's quite another in a world where people openly lie about their capabilities.

      Totally and completely appropriate to not want your work part of it.
      In conversation about a year ago permalink
    • Pavel Machek (pavel@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:15 JST
      in reply to
      • Lorenzo Stoakes
      • Vlastimil Babka
      @ljs @vbabka @ben Hint: try it. It saved work for me.
      In conversation about a year ago permalink
    • Vlastimil Babka (vbabka@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:16 JST
      in reply to
      • Pavel Machek
      @pavel @ben translations are fine but not so sure about the programming languages part. Also, disagreement about using one's own content (created before LLMs took off) for LLM training is not the same thing as sabotaging, IMHO.
      In conversation about a year ago permalink
    • Vlastimil Babka (vbabka@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:17 JST
      in reply to
      • Pavel Machek
      @pavel @ben it's not? :(
      In conversation about a year ago permalink
    • Pavel Machek (pavel@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:17 JST
      in reply to
      • Vlastimil Babka
      @vbabka @ben It's not. Using LLMs to answer questions might not be a good idea, but they should work rather well at translation, including translation between programming languages.
      In conversation about a year ago permalink
    • Pavel Machek (pavel@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:18 JST
      in reply to
      @ben Play stupid games, win stupid prizes. Why does everyone believe that sabotaging LLM development is cool?
      In conversation about a year ago permalink
    • Fish of Rage (sun@shitposter.world)'s status on Thursday, 09-May-2024 01:16:17 JST
      in reply to
      • Lorenzo Stoakes
      • Vlastimil Babka
      • Petr Tesarik
      • Pavel Machek
      @ptesarik @ben @ljs @pavel @vbabka I do enjoy shitposting but I'm telling the truth. I ended up buying an nvidia 4090 to play around with local LLMs that I train myself as it's more ethical than using OpenAI (also OpenAI censors badly.)

      I have been using it more for generating stats and fighting move descriptions for a game than for answering programming questions. but I intend to train it for my own use for personal programming projects so that I can make sure the result is licensed with a libre license to avoid that problem as well.

      I don't want to get too down on the gentleman I replied to but I believe LLM can be used intelligently and ethically if done with care. It won't be by OpenAI, though. Fuck those guys.
      In conversation about a year ago permalink
      Pleroma-tan and Linux Walt (@lnxw37j1) {3EB165E0-5BB1-45D2-9E7D-93B31821F864} like this.
    • Petr Tesarik (ptesarik@fosstodon.org)'s status on Thursday, 09-May-2024 01:16:18 JST
      in reply to
      • Lorenzo Stoakes
      • Vlastimil Babka
      • Pavel Machek
      • Fish of Rage

      @sun @ben @ljs @pavel @vbabka I have a feeling that it's no coincidence this poster's Mastodon instance is called “shitposter.world”…

      (This is meant to be funny, not an ad hominem argument.)

      In conversation about a year ago permalink
      Pleroma-tan repeated this.
    • Cara (cara@miruku.cafe)'s status on Friday, 10-May-2024 23:09:45 JST
      in reply to

      @ben@m.benui.ca sounds fair? you volunteered information to the public and gave them the right to host it, don't see why they couldn't keep it as it was?

      In conversation about a year ago permalink
      Pleroma-tan likes this.


GNU social JP is a social network, courtesy of the GNU social JP administrator (GNU social JP管理人). It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.