Conversation

Notices

  1. Dawn Ahukanna (dahukanna@mastodon.social)'s status on Tuesday, 18-Jun-2024 02:53:59 JST
    • Pavel A. Samsonov

    Totally agree with @PavelASamsonov. UX Design research isn’t about producing/writing an output “persona” document. It’s about designing, setting up & running your experiment to prove/disprove a human-behavior hypothesis.
    This would be like a chemist not bothering with the laboratory experiment, or Pharma not bothering with clinical trials & letting an LLM come up with the words, cos cheaper & unethical as hell!
    > "No, AI user research is not “better than nothing” — it’s much worse"
    - https://uxdesign.cc/no-ai-user-research-is-not-better-than-nothing-its-much-worse-5add678ab9e7

    In conversation about a year ago from mastodon.social

    Attachments

    1. No, AI user research is not “better than nothing”—it’s much worse
      from https://spavel.medium.com
      Synthetic insights are not research. They are the fastest path to destroy the margins on your product.
    • Mr. Completely (mrcompletely@heads.social)'s status on Tuesday, 18-Jun-2024 02:53:50 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Tim Kellogg

      @kellogh @futurebird @dahukanna @PavelASamsonov fundamentally the issue to me is that these are not cognitive systems but they are being treated as if they are. They're linguistic pattern matching systems. That's not what minds are. The methods an LLM uses to arrive at output have no parallels in modern cognitive science. So why would thought-like states emerge? It's like throwing soup ingredients in a blender and expecting a working car to pop out if you just keep adding carrots.

      In conversation about a year ago
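
      To make the "linguistic pattern matching" point concrete, here is a minimal sketch of what a causal LLM does at inference time: repeatedly estimate a probability distribution over the next token and sample from it. This assumes the Hugging Face transformers library, with gpt2 chosen only as a small illustrative model.

        # Next-token sampling loop: the model only estimates
        # P(next token | previous tokens); nothing else happens in the loop.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        ids = tokenizer("The user kept clicking the", return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(20):
                logits = model(ids).logits[0, -1]      # scores for every vocab token
                probs = torch.softmax(logits, dim=-1)  # distribution over next token
                next_id = torch.multinomial(probs, 1)  # sample one token
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(ids[0]))  # fluent continuation; no world model implied
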
    • Tim Kellogg (kellogh@hachyderm.io)'s status on Tuesday, 18-Jun-2024 02:53:52 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Mr. Completely

      @mrcompletely @futurebird @dahukanna @PavelASamsonov yep, agreed. what LLMs do today is just “system 1” with a little faking “system 2”, if that makes sense. but it’s hard to say if those other aspects won’t spontaneously emerge with scale. then again, are there easier ways to develop those systems? like, maybe symbolic reasoning will emerge, but why not just wire in our existing systems that do it?

      In conversation about a year ago
    • Mr. Completely (mrcompletely@heads.social)'s status on Tuesday, 18-Jun-2024 02:53:53 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Tim Kellogg

      @kellogh @futurebird @dahukanna @PavelASamsonov where the model generates seemingly very large numbers of new possible chemicals, drugs or proteins or whatnot. But then experts review the results and say most of them are implausible or useless.

      You can generate novelty through randomness. Novelty itself isn't value, because most new statements about the world haven't been uttered before precisely because they're false. The problem here is that the bullshit sounds "truthy" as Colbert coined it.

      In conversation about a year ago
    • Mr. Completely (mrcompletely@heads.social)'s status on Tuesday, 18-Jun-2024 02:53:54 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Tim Kellogg

      @kellogh @futurebird @dahukanna @PavelASamsonov it's true that LLMs can generate novelty in a recombinatory or juxtapositional sense; after all, that's precisely what the "hallucinations" aka bullshit results are. They're novel constructions; it's just that they don't relate to reality, they are not true. There are many possible statements about any given real world situation, but many fewer true ones, and the LLM has no ability to distinguish truth. We see this in the chemical modeling...

      In conversation about a year ago
    • Tim Kellogg (kellogh@hachyderm.io)'s status on Tuesday, 18-Jun-2024 02:53:55 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist

      @futurebird @dahukanna @PavelASamsonov hmmm… 🤔 most applications of LLMs don’t simply use the model alone. it’s typically a mixture of the trained model + banks of knowledge + input from the user. i’ve definitely built LLM apps that interview SMEs and then turn around and use that as a bank of knowledge in another app

      In conversation about a year ago
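
      The "trained model + banks of knowledge + input from the user" pattern Kellogg describes is often called retrieval-augmented generation. A rough outline follows; the toy word-overlap search and the generate() stub are hypothetical stand-ins, not any specific library's API.

        # Hypothetical retrieval-augmented generation outline: the model alone
        # doesn't hold the knowledge; it is prompted with retrieved notes
        # (e.g. SME interview transcripts gathered by another app).

        def search(bank: list[str], query: str, top_k: int = 3) -> list[str]:
            # Toy relevance score: shared words. A real system would use
            # embeddings and a vector index instead.
            q = set(query.lower().split())
            return sorted(bank, key=lambda doc: -len(q & set(doc.lower().split())))[:top_k]

        def generate(prompt: str) -> str:
            # Stand-in for a real LLM call (local model or API); hypothetical.
            raise NotImplementedError

        def answer(question: str, knowledge_bank: list[str]) -> str:
            notes = search(knowledge_bank, question)
            prompt = (
                "Answer using only the notes below.\n\n"
                + "\n---\n".join(notes)
                + f"\n\nQuestion: {question}"
            )
            return generate(prompt)
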
    • myrmepropagandist (futurebird@sauropods.win)'s status on Tuesday, 18-Jun-2024 02:53:56 JST
      in reply to
      • Pavel A. Samsonov
      • Tim Kellogg

      @kellogh @dahukanna @PavelASamsonov

      Even if you did get something that most people would agree had to be called "new" (very subjective), it's not going to tell you anything about how people use software. Because the data didn't come from people.

      In conversation about a year ago
    • Tim Kellogg (kellogh@hachyderm.io)'s status on Tuesday, 18-Jun-2024 02:53:57 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist

      @futurebird @dahukanna @PavelASamsonov why? because that’s not exactly what they’re doing. as you scale up model size, new capabilities emerge, things they weren’t trained to do. “emergent behavior” isn’t a theory, it’s an observation. the open question is, what other sorts of capabilities will emerge when we scale further up. will they acquire an element of surprise? idk, i’d say no but i’ve also been wrong so far about what their limits should be

      In conversation about a year ago
    • myrmepropagandist (futurebird@sauropods.win)'s status on Tuesday, 18-Jun-2024 02:53:59 JST
      in reply to
      • Pavel A. Samsonov

      @dahukanna @PavelASamsonov

      How could you ... discover anything new? Learn anything?

      The whole point of research is when it surprises you. When the user keeps doing something you didn't expect and you don't know why.

      How could AI ever ever ever produce this most rare but also most valuable of data?

      All it can do is make results that ... look like other results, that say what you expect them to say. How do people keep missing the point of what LLMs can and CAN NOT do?

      In conversation about a year ago
    • Stargeezer Smith (stargazersmith@social.linux.pizza)'s status on Tuesday, 18-Jun-2024 02:54:17 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Kevin Riggle
      • knowuh
      • nonlinear

      @nonlinear @kevinriggle @futurebird @dahukanna @PavelASamsonov @knowuh
      It seems the microscopic-to-macro influence must be hinted at by the old limerick:

      Big whorls have little whorls
      Which feed on their velocity,
      And little whorls have lesser whorls
      And so on to viscosity.

      (By Lewis Fry Richardson)

      In conversation about a year ago
    • nonlinear (nonlinear@social.praxis.nyc)'s status on Tuesday, 18-Jun-2024 02:54:18 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Kevin Riggle
      • knowuh

      @kevinriggle @futurebird @dahukanna @PavelASamsonov @knowuh nice. This article is cool too.

      https://www.quantamagazine.org/the-new-math-of-how-large-scale-order-emerges-20240610/

      Sorry, everyone; it's hard to prune the list when *everyone* is new.

      Mastodon is weird.

      In conversation about a year ago
    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Tuesday, 18-Jun-2024 02:54:19 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • knowuh
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh oh thank you for flagging, I got distracted

      https://www.amazon.com/Cortex-Critical-Point-Understanding-Emergence-ebook/dp/B09RF2SJRQ/

      In conversation about a year ago

      Attachments

      1. Amazon.com: The Cortex and the Critical Point: Understanding the Power of Emergence eBook : Beggs, John M.: Kindle Store
    • nonlinear (nonlinear@social.praxis.nyc)'s status on Tuesday, 18-Jun-2024 02:54:20 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Kevin Riggle
      • knowuh

      @kevinriggle @futurebird @dahukanna @PavelASamsonov @knowuh wait you didn't link the book.

      If you wanna chat about the one I recommended, after, I'm game. It's a lot to unpack.

      In conversation about a year ago
    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Tuesday, 18-Jun-2024 02:54:21 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • knowuh
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh in the much weaker and non-homomorphic sense that we can use the models on one side to make predictions about the models on the other side and then test them against the real world, sure absolutely. That’s just science! But we really, really can’t assume that the real world will validate our extrapolations.

      In conversation about a year ago
    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Tuesday, 18-Jun-2024 02:54:21 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • knowuh
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh (this is the book I’m reading and it goes into quite some detail about how the symmetries break down. BUT, causality and modeling are of great interest to me and I now know what I’m reading next, thank you :)

      In conversation about a year ago
    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Tuesday, 18-Jun-2024 02:54:22 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • knowuh
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh homomorphic implies (aiui) that all operations on one half of the homomorphism can be mapped 1:1 to operations on the other half, and my point here is that we already know that at least in the strongest form that argument is not true.

      In conversation about a year ago
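
      In case the algebra jargon is opaque: a homomorphism preserves operations but need not be invertible; only an isomorphism (a bijective homomorphism) gives the 1:1 two-way mapping Riggle is talking about. A standard worked example, added here purely as illustration (clock arithmetic, nothing from the thread itself):

        % f maps the integers onto 12-hour clock arithmetic
        f \colon (\mathbb{Z}, +) \to (\mathbb{Z}_{12}, +), \qquad f(a) = a \bmod 12
        % the additive structure is preserved ...
        f(a + b) \equiv f(a) + f(b) \pmod{12}
        % ... yet distinct states collapse, so f cannot be run backwards
        f(0) = f(12) = f(24) = 0

      Because f is many-to-one, operations on one side do not map back 1:1 to the other side, which is exactly the weaker sense Riggle distinguishes.
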
    • nonlinear (nonlinear@social.praxis.nyc)'s status on Tuesday, 18-Jun-2024 02:54:23 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Kevin Riggle
      • knowuh

      @kevinriggle @futurebird @dahukanna @PavelASamsonov substitutable as homomorphic, right?

      that means we can play *with* simulations, surface opportunities there, and then keep researching *out* of simulations... with a better grasp.

      like @knowuh's Alpha Fold example.

      In conversation about a year ago
    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Tuesday, 18-Jun-2024 02:54:24 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov The idea of emergence is that certain levels of abstraction have more predictive power in the information-theory sense than others, and lower levels are not always better; but it doesn’t follow from this that at some level of abstraction in these systems all models are perfectly substitutable.

      In conversation about a year ago
    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Tuesday, 18-Jun-2024 02:54:25 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov I’m currently reading a book about how the brain works, and while they do find that simulations of avalanches in piles of sand can help us understand avalanches in networks of neurons, the facts of the brain avalanche models which are not captured in the sand avalanche models are obviously just as important as the facts which are.

      In conversation about a year ago
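
      The sand-avalanche simulations mentioned here are typically variants of the Bak-Tang-Wiesenfeld sandpile model; a minimal sketch follows, with grid size and grain counts as arbitrary illustrative choices, not anything from the book.

        # Minimal Bak-Tang-Wiesenfeld sandpile: drop grains on a grid; any cell
        # holding 4+ grains topples, giving one grain to each neighbour. The
        # resulting cascades ("avalanches") have power-law-distributed sizes.
        import random

        N = 20
        grid = [[0] * N for _ in range(N)]

        def relax(x: int, y: int) -> int:
            """Topple until the pile is stable; return the avalanche size."""
            stack, size = [(x, y)], 0
            while stack:
                i, j = stack.pop()
                if grid[i][j] < 4:
                    continue
                grid[i][j] -= 4
                size += 1
                if grid[i][j] >= 4:                  # may need to topple again
                    stack.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < N and 0 <= nj < N:  # edge grains fall off
                        grid[ni][nj] += 1
                        if grid[ni][nj] >= 4:
                            stack.append((ni, nj))
            return size

        for step in range(10_000):
            x, y = random.randrange(N), random.randrange(N)
            grid[x][y] += 1
            size = relax(x, y)
            if size > 50:                            # big cascade from one grain
                print(f"step {step}: avalanche of {size} topplings")
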
    • nonlinear (nonlinear@social.praxis.nyc)'s status on Tuesday, 18-Jun-2024 02:54:26 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist

      @futurebird @dahukanna @PavelASamsonov this book talks about how to add this layer of simulation. The theory is that emergent properties of digital or analog processes are homomorphic, provided you know which model to use.

      That means computers can guide the promising experiments to research, from simulations. Making research less wasteful.

      Then dude goes on each model for each discipline. It's a journey.

      https://www.bloomsbury.com/us/philosophy-and-simulation-9781350096790/

      In conversation about a year ago
    • nonlinear (nonlinear@social.praxis.nyc)'s status on Tuesday, 18-Jun-2024 02:54:27 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist

      @futurebird @dahukanna @PavelASamsonov AI can be used as a layer of simulation between scientific hunch and actual testing.

      And yes I'm with you, those claiming AI can replace anything are unhinged born-again types peddling to investors. It's hype, it will fall, and hurt a lot of people in the process.

      For my own sanity, I separated the technology of AI from the business of AI.

      In conversation about a year ago
    • Dawn Ahukanna (dahukanna@mastodon.social)'s status on Tuesday, 18-Jun-2024 18:25:34 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Charlie Stross
      • Winchell Chung ⚛🚀
      • nonlinear
      • MaxTheFox

      @cstross @maxthefox @nonlinear @futurebird @PavelASamsonov @nyrath

      What always fascinates me about that thinking & focus on eliminating labour to “minimise/eliminate” costs, when that labour happens to be the same audience that forms the market for your product, is this question:

      Who is the “mass market” that is going to purchase your “minimized-cost” products - the 1% hoarding (b/m)illionaires?

      The “cutting off your nose to spite your face” proverb comes to mind.

      In conversation about a year ago
    • Charlie Stross (cstross@wandering.shop)'s status on Tuesday, 18-Jun-2024 18:25:35 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Winchell Chung ⚛🚀
      • nonlinear
      • MaxTheFox

      @maxthefox @nonlinear @futurebird @dahukanna @PavelASamsonov @nyrath Alas, the "replace all humans" bullcrap isn't simply an epiphenomenon of the AI tech bubble, it's an ideological stance emergent from capitalism—if you prioritise capital over labour, you end up wanting to abolish labour entirely, and automate all the components of your current very slow AIs (the corporations).

      This won't "blow over" unless we shoot all the neoliberal economists and their masters.

      In conversation about a year ago
    • MaxTheFox (maxthefox@spacey.space)'s status on Tuesday, 18-Jun-2024 18:25:36 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Winchell Chung ⚛🚀
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov @nyrath I hope that after this snake-oil bullcrap with them trying to fully replace humans blows over (and it will blow over sooner or later; like all tech bubbles, it's not sustainable), this kinda application stays. At least we got something with a lasting impact out of it, unlike the previous tech bubble (NFTs)...

      In conversation about a year ago
    • nonlinear (nonlinear@social.praxis.nyc)'s status on Tuesday, 18-Jun-2024 18:25:40 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Winchell Chung ⚛🚀
      • MaxTheFox

      @maxthefox @futurebird @dahukanna @PavelASamsonov @nyrath Yesssss, that's the premise of the book. It doesn't substitute humans (that's snake oil from AIbros), it *surfaces outliers* for our revision. Augmented systems.

      In conversation about a year ago
    • MaxTheFox (maxthefox@spacey.space)'s status on Tuesday, 18-Jun-2024 18:25:41 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Winchell Chung ⚛🚀
      • nonlinear

      @nonlinear @futurebird @dahukanna @PavelASamsonov @nyrath In materials science, my field, we use AI models for figuring out the properties of various new materials, *but* they're specialized ones. And generally we validate them after. But it does free up a lot of the grunt work of working out *potentially* promising materials manually, when a computer can give us a few dozen decent ones and then we test through them.

      This kinda thing is where science benefits the most from AI, that and proteins.

      In conversation about a year ago
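
      The screen-then-validate workflow MaxTheFox describes can be outlined as below; predict_property() and run_experiment() are hypothetical stand-ins for a group's specialized model and lab process, and the numbers are arbitrary.

        # Hypothetical screen-then-validate workflow: the model only ranks
        # candidates; the experiment remains the arbiter of truth.
        from typing import Callable

        def screen(candidates: list[str],
                   predict_property: Callable[[str], float],
                   top_k: int = 24) -> list[str]:
            """Rank candidates by predicted property; keep a few dozen."""
            return sorted(candidates, key=predict_property, reverse=True)[:top_k]

        def validate(shortlist: list[str],
                     run_experiment: Callable[[str], bool]) -> list[str]:
            """Keep only the candidates the real experiment confirms."""
            return [m for m in shortlist if run_experiment(m)]

        # Thousands of generated formulas go in; a handful of verified
        # materials come out, with humans reviewing every step.
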
    • pettter (pettter@mastodon.acc.umu.se)'s status on Tuesday, 18-Jun-2024 18:28:44 JST
      in reply to
      • Pavel A. Samsonov
      • myrmepropagandist
      • Charlie Stross
      • Winchell Chung ⚛🚀
      • nonlinear
      • MaxTheFox

      @dahukanna @cstross @maxthefox @nonlinear @futurebird @PavelASamsonov @nyrath Every capitalist wants there to be well-paid people who can pay high prices for things unrelated to quality or resources cost. Every capitalist wants those well-paid people to be paid by someone else.

      In conversation about a year ago
