Conversation

Notices

  1. Embed this notice
    Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Friday, 27-Jan-2023 17:34:54 JST Michał "rysiek" Woźniak · 🇺🇦 Michał "rysiek" Woźniak · 🇺🇦

    #ChatGPT and other AI-based stuff like Midjourney, Copilot, etc, are going to create a denial of service attack against our collective ability to process information.

    I believe this is already happening in academia and other fields where people can submit text co-written by ChatGPT and pretend it's their own. Teachers will have all sorts of problems with students claiming ChatGPT-generated texts are theirs.

    But it goes further. Works created with these models will end up in courts, a lot.

    1/?

    In conversation Friday, 27-Jan-2023 17:34:54 JST from mstdn.social permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 17:34:53 JST pettter pettter
      in reply to

      @rysiek I disagree, honestly. There's been Too Much Data To Handle for a long while at this point. AI generated stuff isn't going to be that big of a change, and if your exams and other procedures can be fooled by something like ChatGPT then it was already vulnerable to the sophisticated technique of Just Hiring A Guy To Do It For You (which is arguably what using ChatGPT is, with some steps removed)

      In conversation Friday, 27-Jan-2023 17:34:53 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 18:46:42 JST pettter pettter
      in reply to

      @rysiek Remember that OpenAI eats a _lot_ of the cost of using ChatGPT. Just the compute costs are "astronomical", and that's without counting the gig workers.

      I agree generally that it puts the tool into more people's hands, but again, it's not something fundamentally new. Rich people could and did cheat (or pay think tanks and "researchers" to produce "reports") before this, and it is and was a problem. Just not a new one.

      In conversation Friday, 27-Jan-2023 18:46:42 JST permalink
    • Embed this notice
      Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Friday, 27-Jan-2023 18:46:43 JST Michał "rysiek" Woźniak · 🇺🇦 Michał "rysiek" Woźniak · 🇺🇦
      in reply to
      • pettter

      @pettter hiring a guy will cost you a few bucks. Using ChatGPT will cost you pennies, if that. It's an orders-of-magnitude change.

      Plus, ChatGPT, Copilot, etc, are mathwashing copyright infringement, copying people's work and pretending it's okay because it's done by a computer. Loads of people would not copy stuff themselves, but will be tricked into using these models. Loads of cases for courts to figure out soon.

      In conversation Friday, 27-Jan-2023 18:46:43 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 18:48:40 JST pettter pettter
      in reply to

      @rysiek My position is essentially that a lot of AI tools are simply not fit for purpose, and are being deeply, deeply, subsidized by billionaire investors and stupid oil/tech money. It's not a sustainable model, and it's not going to be.

      Of course, that doesn't mean it won't do a lot of damage on the way, similar to e.g. fossil fuel use.

      In conversation Friday, 27-Jan-2023 18:48:40 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 21:45:15 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek You don't train it on "styles". You train it on actual copyrighted works.

      In conversation Friday, 27-Jan-2023 21:45:15 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 21:45:16 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @rysiek @pettter Furthermore, I disagree that training a neural network on styles is the same as sampling. But then, we already deeply disagree that this is "training" to begin with.

      In conversation Friday, 27-Jan-2023 21:45:16 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 21:45:17 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @rysiek @pettter The sampling decisions were super bad and logically inconsistent with copyright law themselves. It shows that the large media conglomerates get to play by different rules. When it comes to the visual arts, there is no such concentration of power, so I wouldn't expect that reasoning to be applied here.

      In conversation Friday, 27-Jan-2023 21:45:17 JST permalink
    • Embed this notice
      Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Friday, 27-Jan-2023 21:45:18 JST Michał "rysiek" Woźniak · 🇺🇦 Michał "rysiek" Woźniak · 🇺🇦
      in reply to
      • pettter
      • Walter van Holst

      @whvholst @pettter I won't argue with either here, you're the lawyer.

      That said, I find it fundamentally wrong that a couple of decades ago DJs and rap musicians got hit hard by "you sample, you license" decisions, while OpenAI and Microsoft sampling copyrighted works, remixing them, and benefiting from them without giving so much as credit and acknowledgement (let alone licensing fees!) to the creative folk who created them, might get off scot-free.

      But that's hardly a legal argument.

      In conversation Friday, 27-Jan-2023 21:45:18 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 21:45:20 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @rysiek @pettter I disagree with your foaming at the mouth at Midjourney etc., and the most level-headed analyses of the copyright cases I have seen so far give them little chance of truly succeeding. Regarding plagiarism in education: introduce open-book but offline testing in secondary education (which won't be easy) and it will be manageable.

      In conversation Friday, 27-Jan-2023 21:45:20 JST permalink
    • Embed this notice
      Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Friday, 27-Jan-2023 21:45:21 JST Michał "rysiek" Woźniak · 🇺🇦 Michał "rysiek" Woźniak · 🇺🇦
      in reply to
      • pettter
      • Walter van Holst

      @whvholst @pettter which part of my reasoning do you disagree with?

      In conversation Friday, 27-Jan-2023 21:45:21 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 21:45:22 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @pettter @rysiek I disagree with Rysiek's reasoning, but not with his conclusion. Usenet died because of spam. Mail is in its dying spasms because of spam. Search is gravely ill because of spam. ChatGPT and its likes will finish it off, because it will be used by linkfarm spam bastards.

      In conversation Friday, 27-Jan-2023 21:45:22 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:01:53 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek Why is that an upside?

      In conversation Friday, 27-Jan-2023 22:01:53 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:01:54 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @rysiek @pettter In Europe we specifically carved out exceptions to copyright for large corpora for the purpose of AI/ML, one of the few upsides of the Copyright in the Digital Single Market Directive.

      In conversation Friday, 27-Jan-2023 22:01:54 JST permalink
    • Embed this notice
      Michał "rysiek" Woźniak · 🇺🇦 (rysiek@mstdn.social)'s status on Friday, 27-Jan-2023 22:01:55 JST Michał "rysiek" Woźniak · 🇺🇦 Michał "rysiek" Woźniak · 🇺🇦
      in reply to
      • pettter
      • Walter van Holst

      @whvholst @pettter we agree that the sampling decisions were bad, and that large players should not be able to just steamroll the legal system like that, of course.

      But if unlicensed sampling is illegal, letting Microsoft and OpenAI "train" stuff on copyrighted works without getting a license should also be illegal.

      And to my simple non-legally-trained mind, it seems like *maybe* (I am not married to this idea!) this should be a new, separate field of licensing, so to speak.

      In conversation Friday, 27-Jan-2023 22:01:55 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Friday, 27-Jan-2023 22:05:14 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Walter van Holst

      @pettter @whvholst @rysiek But that's the same way humans learn, isn't it? Learning is not the same as stealing! These AI art bots aren't reproducing anyone's work, they're generating *original* images from the machine equivalent of their 'imaginations', and combining elements in novel ways.

      It's a concern that the hard work of human artists may be devalued by making it trivially easy for those with no talent, training, or artistic background to get similar results. Sure. Artists struggle enough to make a living without competition from AI. But looking at a piece of art and getting ideas from it doesn't infringe any copyright laws.

      In conversation Friday, 27-Jan-2023 22:05:14 JST permalink

    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:05:14 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek Humans are quite bad at making copies of things. Computers are quite good at it, once the thing has been translated by a human into a form that a computer can replicate.

      They don't work the same way, and have different affordances and effects.

      In conversation Friday, 27-Jan-2023 22:05:14 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:08:12 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek I wonder how much damage the term "machine learning" did to the common understanding of what ML systems can do.

      Copyright applies across formats to a certain extent. Just because I apply JPEG compression to a bitmap, turning it from several MB of data to a few kb doesn't mean that I've created something new.

      In conversation Friday, 27-Jan-2023 22:08:12 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:08:13 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @pettter @rysiek Yes. And learning a style is something you do by analysing lots of copyrighted works. So, a difference without a distinction. I stand by my position: if you cannot analyse works and the patterns in them, you cannot learn from them either. You cannot have concordances, which historically have been understood as not copyright-infringing either. Or as someone noted: the Stable Diffusion neural network is only a few GB, good luck in proving that it is a copy of TBs of art.

      In conversation Friday, 27-Jan-2023 22:08:13 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:09:50 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek LLMs don't get trained primarily on books. They get trained on web data.

      In conversation Friday, 27-Jan-2023 22:09:50 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:09:51 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @pettter @rysiek Because requiring licensing large corpora of text before being able to analyse them would introduce sufficient amounts of friction that it would put them out of reach of everyone except the large publishing houses.

      In conversation Friday, 27-Jan-2023 22:09:51 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:10:59 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek You can't get "measured average brush width of a stroke" out of the several GB of Midjourney either though?

      In conversation Friday, 27-Jan-2023 22:10:59 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:11:00 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @pettter @rysiek It can be more things like "measured average brush width of a stroke" than "lossy compression".

      In conversation Friday, 27-Jan-2023 22:11:00 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:14:34 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek I'm well aware of various attempts at post-hoc reasoning and model analysis. It's nowhere near good enough to yield useful insights of that sort. It's barely good enough to detect various simple statistical biases.

      In conversation Friday, 27-Jan-2023 22:14:34 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:14:35 JST Walter van Holst Walter van Holst
      in reply to
      • pettter

      @pettter @rysiek There's a very interesting and growing body of research on transforming neural networks into rule sets.

      In conversation Friday, 27-Jan-2023 22:14:35 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:16:21 JST pettter pettter
      in reply to
      • Walter van Holst

      @whvholst @rysiek Or are you talking about the decision tree transformations? Those won't generally be more interpretable than, say, attention heatmaps in an image context at all.

      In conversation Friday, 27-Jan-2023 22:16:21 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Friday, 27-Jan-2023 22:23:54 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Walter van Holst

      @pettter @whvholst @rysiek Again, Stable Diffusion, Midjourney etc. generate *original* images. They don't copy anyone in particular, they have instead developed a 'feel' for what various elements look like when rendered in particular styles, and you can endlessly 'reroll' a particular image to get the AI to give you any number of creative variations and reinterpretations of your prompt. It is, in a sense, creative. The other day I got Midjourney to render me a man whose skin was made of savoy cabbage. The (bizarre!) result was something it's doubtful any human has ever drawn/painted. The image is shared on my TL, somewhere.

      In my view the human brain is nothing more than an elaborate machine, so if AI is plagiarizing, then so is every human artist on Earth.

      In conversation Friday, 27-Jan-2023 22:23:54 JST permalink

    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:23:54 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek Humans are not computers, and computers are not humans. Computers don't "learn", they certainly don't generate "original images", and, most of all, they don't "feel".

      They optimize towards representing a specific mathematical space as well as possible from a collection of examples, and can then sample from a model of that space.

      In conversation Friday, 27-Jan-2023 22:23:54 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:24:18 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek No computer ever knew what "green" is.

      In conversation Friday, 27-Jan-2023 22:24:18 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:30:06 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @whvholst @BarrenPlanet @rysiek Yes, those are hexadecimal numbers.

      In conversation Friday, 27-Jan-2023 22:30:06 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:30:08 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • BarrenPlanet

      @pettter @BarrenPlanet @rysiek 0x000 0xFFF 0x000

      In conversation Friday, 27-Jan-2023 22:30:08 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:35:18 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @whvholst @BarrenPlanet @rysiek Color perception is much much more complex than "passing certain thresholds and staying below others in the visual cortex".

      You're correct that there's no "objective" concept of "green" among humans. This is my point. Humans can have subjective opinions and understandings of things. Computers can't.

      In conversation Friday, 27-Jan-2023 22:35:18 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:35:19 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • BarrenPlanet

      @pettter @BarrenPlanet @rysiek And green is passing certain thresholds and staying below others in the visual cortex. There's no objective concept of "green" among humans either. Even without colour-blindness. It's not a helpful argument, more of a Plato's cave one, to claim computers cannot perceive reality the same as humans do when no human does it the same way either.

      In conversation Friday, 27-Jan-2023 22:35:19 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:35:45 JST pettter pettter
      in reply to
      • SlightlyCyberpunk
      • Walter van Holst

      @whvholst @admin @rysiek Victim blaming? Really?

      In conversation Friday, 27-Jan-2023 22:35:45 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:35:46 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • SlightlyCyberpunk

      @admin @pettter @rysiek You may not have intended it as such, but it still is victim blaming. No amount of thought put into running a mailserver will prevent you from being spammed. You just have been incredibly lucky so far.

      In conversation Friday, 27-Jan-2023 22:35:46 JST permalink
    • Embed this notice
      SlightlyCyberpunk (admin@mastodon.slightlycyberpunk.com)'s status on Friday, 27-Jan-2023 22:35:47 JST SlightlyCyberpunk SlightlyCyberpunk
      in reply to
      • pettter
      • Walter van Holst

      @whvholst @pettter @rysiek Hopefully it will inspire better systems. I mean, talking about email spam... I've been running my own mail server for ten years now and I still haven't bothered to set up a spam filter. Don't need it. Can't remember the last time I got a spam mail -- it does happen, but it's quite rare. It is absolutely possible to build systems that prevent these issues if you put a little thought into it.

      Will be interesting to see how Mastodon does on that front...spam kinda helps companies like Twitter as long as it's not *completely* out of control...increases user count, increases engagement, helps them sell ads. But here, where many instances are run on donations and volunteers and each additional user means higher costs, there's a good incentive to wipe that shit out...

      In conversation Friday, 27-Jan-2023 22:35:47 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:38:06 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek It doesn't know that, and he doesn't, and MT didn't turn fashion designer and invent anything.

      There's probably more pictures of Putin in a suit than anything else in the training data though.

      In conversation Friday, 27-Jan-2023 22:38:06 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Friday, 27-Jan-2023 22:38:07 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Sotolf
      • Walter van Holst

      @sotolf @pettter @whvholst @rysiek The same is true of these AIs! There seems to be this widespread misconception that computers can't do anything novel. But Midjourney does this all the time, it's designed to do so. For example, check this out.

      I asked MT to draw me Putin in a pink floral dress. But it 'knows' that Putin always wears a jacket. So it turned fashion designer, and invented a pink, floral jacket that is also a dress.

      In conversation Friday, 27-Jan-2023 22:38:07 JST permalink

      Attachments


      1. https://s3.c.im/media_attachments/files/109/761/404/264/620/931/original/10bac31a2a5173de.png
    • Embed this notice
      Sotolf (sotolf@social.linux.pizza)'s status on Friday, 27-Jan-2023 22:38:08 JST Sotolf Sotolf
      in reply to
      • pettter
      • Walter van Holst
      • BarrenPlanet

      @pettter @BarrenPlanet @whvholst @rysiek

      I find the human way better, since the human may luck into making the copy better than the original, or maybe not strictly better, but more fitting to a special case for example :)

      In conversation Friday, 27-Jan-2023 22:38:08 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:38:45 JST pettter pettter
      in reply to
      • SlightlyCyberpunk
      • Walter van Holst

      @whvholst @admin @rysiek Bollocks I'll grant you.

      In conversation Friday, 27-Jan-2023 22:38:45 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:38:46 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • SlightlyCyberpunk

      @pettter @admin @rysiek Claiming that your server can be spam-free to the point that you never needed a spam filter because "you put some thought into it" definitely is claiming that anyone who ever got spam runs on their server just hadn't put enough thought into it. It's simply bollocks.

      In conversation Friday, 27-Jan-2023 22:38:46 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:41:45 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @whvholst @BarrenPlanet @rysiek Of course there can, because humans can communicate in various ways. We have shared understandings of the same world and shared language to facilitate exactly that kind of knowledge transfer.

      Crucially though that transfer is very much not perfect. It is contingent, it fails constantly, and it is deeply grounded in the world and our experience of it, as well as our social context.

      Computational models are not grounded, not bodied, and have no social context.

      In conversation Friday, 27-Jan-2023 22:41:45 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:41:46 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • BarrenPlanet

      @pettter @BarrenPlanet @rysiek Being more objective does not preclude an ability to attain knowledge. While I don't buy much of the AI/ML hype (it's mostly the same technology as thirty years ago, just a lot more data), I don't buy into dismissing it as fundamentally unable to "learn" either. If any lessons learnt must be subjective, there cannot be any knowledge transfer between humans either.

      In conversation Friday, 27-Jan-2023 22:41:46 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 22:44:46 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @whvholst @BarrenPlanet @rysiek This isn't going anywhere, and I have a deadline.

      In any case, we desperately need copyright reform and have needed it for at least 50 years, arguably 100+.

      In conversation Friday, 27-Jan-2023 22:44:46 JST permalink

    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Friday, 27-Jan-2023 22:44:51 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • BarrenPlanet

      @pettter @BarrenPlanet @rysiek They are embodied in hardware, grounded on pre-selected data sets, with the social context (and bias!) of that pre-selection.

      In conversation Friday, 27-Jan-2023 22:44:51 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Friday, 27-Jan-2023 23:57:51 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Walter van Holst

      @pettter @whvholst @rysiek This is a decades-old debate, and one that is largely settled, at least among scientists. Our subjective experience arises from all our dispositional complexities, organisational (cognitive) processes, and how our sensory apparatuses function. There's no reason to think that if all of that was replicated by a machine - even a non-biological machine - that the machine wouldn't also have subjective experiences. The proof is that we *are* machines ourselves! There's no magical ingredients, no elan vital, no special property or quality of "subjectivity" that can be measured or quantified.

      What you're expressing here is what Daniel Dennett calls the Zombie Hunch. Here is a beautifully written transcript of a 1999 lecture by Dennett explaining why the Zombie Hunch is so ridiculous. Perhaps give it a read?

      https://ase.tufts.edu/cogstud/dennett/papers/zombic.htm

      In conversation Friday, 27-Jan-2023 23:57:51 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 23:57:51 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek I'm not denying materialism. I'm denying that current computers are anything close to consciousness or subjectivity.

      In conversation Friday, 27-Jan-2023 23:57:51 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Friday, 27-Jan-2023 23:58:32 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek And that comes, in part, from being an actual scientist with a PhD in CS actually working with actual AI stuff.

      In conversation Friday, 27-Jan-2023 23:58:32 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 00:02:52 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek You'll note that nowhere in that post did I say that the system hasn't generated an original image. I was very deliberate in which words I used.

      In conversation Saturday, 28-Jan-2023 00:02:52 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 00:02:53 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Sotolf
      • Walter van Holst

      @pettter @sotolf @whvholst @rysiek That's an entirely novel outfit. If you must, a priori, decide to dismiss that MJ has indeed generated an original image, when it blatantly has done (there are no images of Putin in a pink floral jacket, are there?) then it's difficult to see how this discussion can proceed.

      In conversation Saturday, 28-Jan-2023 00:02:53 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 00:25:11 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Walter van Holst

      @pettter @whvholst @rysiek I think we all agree on that. I just don't see how statistical pattern-matching is any different from what brains do. Dismissing the demonstrated capabilities of AI on grounds that there's no "consciousness" involved, seems to miss the point that the AI isn't plagiarizing anyone or copying, it's creating new works by, as it were, hallucinating from its dataset. That's not copyright-infringing.

      Pretty sure everyone in this thread understands that such language is figurative in the case of current AI.

      In conversation Saturday, 28-Jan-2023 00:25:11 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 00:25:11 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek We should use concrete, precise, and accurate language, in order to avoid confusion and hype.

      Computers aren't magic. They aren't conscious. They don't "hallucinate". They interpolate (or extrapolate), at best.

      There are artistic practices where it makes sense to use ML tools, but we should be careful not to dismiss the rights of actual humans in creating those tools, and not to mistake the tool for something it's not.

      In conversation Saturday, 28-Jan-2023 00:25:11 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 00:28:05 JST pettter pettter
      in reply to
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @whvholst @rysiek Words mean things. We should try to use words that mean the things that are actually true and not the words that are not.

      At least we should, if the goal is to communicate true things.

      There are of course true things that can more easily be communicated with words that are untrue (e.g. using stories, poetry, etc.), but if we're talking about actually existing systems and their capabilities, that's not really the same sort of discussion.

      In conversation Saturday, 28-Jan-2023 00:28:05 JST permalink

    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 00:28:06 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Walter van Holst

      @whvholst @pettter @rysiek Me too. It's just not always practical to have to come up with objective definitions for everything, rather than all agree that, say, 'creativity' is as good a term as any (for purposes of informal discussion) to describe the endlessly novel ways in which Stable Diffusion, Midjourney etc. seamlessly fuse together disparate elements to generate original art.

      In conversation Saturday, 28-Jan-2023 00:28:06 JST permalink
    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Saturday, 28-Jan-2023 00:28:07 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • BarrenPlanet

      @pettter @BarrenPlanet @rysiek Am with you on that denial.

      In conversation Saturday, 28-Jan-2023 00:28:07 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 00:34:55 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Sotolf
      • Walter van Holst

      @pettter @sotolf @whvholst @rysiek You did actually explicitly say elsewhere in the thread that MJ doesn't produce anything original. You even scare-quoted the word!

      The crux of this discussion, as I understand it - but perhaps I don't? - is whether the AI is stealing human work or generating new art based on the totality of human work to which it has been exposed.

      In conversation Saturday, 28-Jan-2023 00:34:55 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 00:34:55 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek It creates nothing, it interpolates well within spaces of existing data.

      In conversation Saturday, 28-Jan-2023 00:34:55 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 00:35:33 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek To be clear, human creativity is not limited to spaces of existing data.

      In conversation Saturday, 28-Jan-2023 00:35:33 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 01:14:21 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @whvholst @BarrenPlanet @sotolf @rysiek Humans are great at seeing what they want to. If you _want_ to go all Campbell and claim that All Stories Are One Monomyth then sure, you can probably convince yourself of that.

      It doesn't track with my experience, though.

      In conversation Saturday, 28-Jan-2023 01:14:21 JST permalink

    • Embed this notice
      Walter van Holst (whvholst@eupolicy.social)'s status on Saturday, 28-Jan-2023 01:14:23 JST Walter van Holst Walter van Holst
      in reply to
      • pettter
      • Sotolf
      • BarrenPlanet

      @pettter @BarrenPlanet @sotolf @rysiek There are very few untold stories left. Certain plotlines keep recurring (Achilles and Superman, for example). It's how we render those stories that's considered creative, and it's how those renderings reflect our personalities that is protected by copyright. Not the unique plotlines.

      In conversation Saturday, 28-Jan-2023 01:14:23 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 01:18:29 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek The only dataset that could capture everything a human mind can experience would be the dataset that is the entire universe.

      The strength and weakness of computers is that they are mathematical, working on discrete bits in a deterministic fashion.

      The strength and weakness of humans is that we exist in the world, with actual, physical brains that have _evolved_ as physical objects, with all that entails. They work in a fundamentally different way.

      In conversation Saturday, 28-Jan-2023 01:18:29 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 01:18:30 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Sotolf
      • Walter van Holst

      @pettter @sotolf @whvholst @rysiek I'm really not convinced that's true. It may subjectively *seem* that way, but that's not what's happening. Take the phenomenon of hypnogogia/hypnopompia. These are pre- and post- sleep trance states involving sleep paralysis, and an overwhelming sense of 'presence' that probably explains a wide variety of 'supernatural' experiences such as ghost visitations, alien abductions, and demonic apparitions. The interesting thing is that these hallucinations are clearly culturally-specific, with Americans seeing little grey aliens with big googly dark eyes, Britons seeing ghosts and other entities from European folklore, and Africans seeing demons and other entities from African folklore.

      How the human brain interprets the experience depends - to put this bluntly - on its *dataset*, or existing archetypes to which it's been exposed through stories and images passed down within the culture in which it's embedded.

      So while I agree that human creativity is more sophisticated, I don't agree that it is as many orders of magnitude different as you seem to be arguing. The vast majority of humans aren't creative enough to imagine anything outside their existing experience. You can see this by looking at prehistoric cave art by what are, biologically, modern humans. Why is it so crudely stylized? Where are the Picassos, the Van Goghs? Where are the vividly realistic paintings? These people are the same as us, yet they could only draw stick figures?

      *Culture* is your dataset, and culture has to evolve too, before individual humans can even conceive of something as obvious (to us) as painting something with realistic detail.

      In short, I think you're romanticizing creativity.

      In conversation Saturday, 28-Jan-2023 01:18:30 JST permalink

    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 01:21:36 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek Oh, and you are drawing an incredible false equivalency between replication and creativity.

      Why would you need to draw anything "realistically" if you communicate what you want with a stick figure? Everyone knows what they represent anyway, to the point that _tens of thousands of years later_ we can still understand what the pictures are meant to communicate, at least to a certain extent.

      In conversation Saturday, 28-Jan-2023 01:21:36 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 01:24:13 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek You are also vastly underestimating the creativity of the majority of people, though I guess if the height of creativity, to you, is a nicely aesthetic blob or salad man, rather than the stories and fantasies or improvised music or sketches and universes dreamt up by people unconstrained from staying in some particular style, then yeah I guess I agree that the vast majority of people don't have the time to practise their art enough.

      In conversation Saturday, 28-Jan-2023 01:24:13 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 01:28:36 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek I should maybe expand on this a bit: The reason why we don't have "the picassos" from prehistory is that a) some of them got destroyed by colonizers or other conquerors b) they were probably mostly oral or made from perishable materials - not a lot last for tens of thousands of years, c) we don't recognise them as such, because of shifting cultural sensibilities d) tools and culture matter, to the point that we probably can't see all the subtleties in them

      In conversation Saturday, 28-Jan-2023 01:28:36 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 01:35:00 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek I don't follow why that should be a "problem" for my perspective?

      In conversation Saturday, 28-Jan-2023 01:35:00 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 01:35:01 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Sotolf
      • Walter van Holst

      @pettter @sotolf @whvholst @rysiek Which leaves you with the problem of feral children, raised by other animals, of which there are several historical examples. They theoretically have the same access to the "entire universe", yet they never even learn full human language, let alone the world-knowledge they would need to make the sorts of giant imaginative leaps you seem to think are innate to all humans.

      Raw sensory input is meaningless to us. We need first to be *trained* to interpret it - by our parents, by schools, by society. Without that training we're lost in a bewildering phantasmagoria that we'll never understand.

      In conversation Saturday, 28-Jan-2023 01:35:01 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Saturday, 28-Jan-2023 06:41:26 JST pettter pettter
      in reply to
      • Sotolf
      • Simon Lucy
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @simon_lucy @sotolf @whvholst @rysiek Seeing anything in static 2D as "realistic" is very much a learned skill, just so you know. No paintings are "realistic", cave or otherwise.

      In conversation Saturday, 28-Jan-2023 06:41:26 JST permalink
    • Embed this notice
      barrenplanet@c.im's status on Saturday, 28-Jan-2023 06:41:27 JST BarrenPlanet BarrenPlanet
      in reply to
      • pettter
      • Sotolf
      • Simon Lucy
      • Walter van Holst

      @simon_lucy @pettter @sotolf @whvholst @rysiek There are no realistic cave paintings. Anywhere in the world.

      In conversation Saturday, 28-Jan-2023 06:41:27 JST permalink
    • Embed this notice
      Simon Lucy (simon_lucy@mastodon.social)'s status on Saturday, 28-Jan-2023 06:41:28 JST Simon Lucy Simon Lucy
      in reply to
      • pettter
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @pettter @sotolf @whvholst @rysiek

      The realistic cave paintings concern themselves with the subjects that matter, animals that they hunt.

      In conversation Saturday, 28-Jan-2023 06:41:28 JST permalink
    • Embed this notice
      pettter (pettter@mastodon.acc.umu.se)'s status on Wednesday, 01-Feb-2023 18:36:06 JST pettter pettter
      in reply to
      • Sotolf
      • Walter van Holst
      • BarrenPlanet

      @BarrenPlanet @sotolf @whvholst @rysiek For reference, Stable Diffusion memorizes and outputs copyrighted images from its training set: https://arxiv.org/abs/2301.13188

      In conversation Wednesday, 01-Feb-2023 18:36:06 JST permalink

      Attachments

      1. Extracting Training Data from Diffusion Models
        Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training.
