GNU social JP
GNU social JP is a Japanese GNU social server.

Notices by ShadSterling (shadsterling@mastodon.social), page 2

  1. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 08-Oct-2024 17:49:27 JST
    in reply to
    • myrmepropagandist

    @futurebird eventually they’ll figure out that Computer Science has a split personality: one side almost the same as Pure Math; the other side they’ll say is like Applied Math but very disorganized, because they won’t admit how much that side is like Economics but focused on building Rube Goldberg machines

    In conversation about 8 months ago from mastodon.social
  2. ShadSterling (shadsterling@mastodon.social)'s status on Monday, 23-Sep-2024 11:59:53 JST
    in reply to
    • Paul Cantrell

    @inthehands there’s a weird variation of this in the online product page Q&A systems, where someone can ask about some obscure detail and the site will email the question to a bunch of previous buyers and a bunch of answers will come in from them basically saying they don’t know

    In conversation about 8 months ago from mastodon.social
  3. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:50:10 JST
    in reply to
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @simon @annaleen @BBCWorld existing AIs are already doing billions of dollars in harm to our society just by amplifying ongoing harms, without getting anywhere near the capabilities that the hype says are dangerous. The real multibillion-dollar AI question is: will we choose to stop using it to hurt ourselves?

    In conversation about 8 months ago from mastodon.social
  4. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:49:58 JST
    in reply to
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @simon @annaleen @BBCWorld AI is being put to use where we refuse to fund human work, for things like filtering applications for all kinds of things - jobs, schools, scholarships, asylum, etc - and these AIs are trained on the same human biases that made Microsoft’s Tay turn antisemitic, and make ChatGPT say doctors and lawyers can’t be women. We’re doing this despite knowing the harm it’s amplifying, and buying into the hype helps increase the harm

    In conversation about 8 months ago from mastodon.social
  5. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:48:41 JST
    in reply to
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @simon @annaleen @BBCWorld if you mean the kind of self-development a person can do, we first need to develop a way to make software that can reason and remember, two things none of our existing ML methods can include in their creations. And even when/if we do develop such a method, we don’t know that it would be capable of developing any faster than a human baby

    In conversation about 8 months ago from mastodon.social
  6. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:48:40 JST
    in reply to
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @simon @annaleen @BBCWorld no it isn’t. The current problems with AI come from it being far less capable than the hype suggests, but being used carelessly despite its limitations. Like law enforcement using facial recognition (which is made with ML, tho not called AI) even though it’s unreliable, and especially unreliable with non-white faces. We already overprosecute non-white people; this use of AI adds to that

    In conversation about 8 months ago from mastodon.social
  7. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST
    in reply to
    • Annalee Newitz 🍜
    • Voron
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @voron @simon @annaleen @BBCWorld any learning system that creates itself with feedback from its body will come out too different for any low-level copying from one to another to work; communication will have to be more abstract, and interpreted - more like how humans do it than how software for standard computers does. Even more so if it optimizes itself across CPU differences. Remote-control armies would work far better for the foreseeable future

    In conversation about 8 months ago from mastodon.social
  8. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST
    in reply to
    • Annalee Newitz 🍜
    • Voron
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @voron @simon @annaleen @BBCWorld one simple example of a difference: we have IoT systems where thousands of devices are deployed for a specific time period (e.g. to study volcano vibrations for 5 years), and some of them tune their activity to manage battery life across the small differences allowed by the manufacturing tolerance. A larger system would have that kind of difference across every component - every motor, and every CPU

    In conversation about 8 months ago from mastodon.social
  9. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST
    in reply to
    • Annalee Newitz 🍜
    • Voron
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @voron @simon @annaleen @BBCWorld militaries have owned the best in secure communication for thousands of years; it hasn’t changed recently, and securing military communication is why computers happened when they did. The issue for AI swarming is not security, it’s variety - really independent learning machines can’t just copy each other's brains without adaptation, because both their minds and their bodies will have zillions of differences

    In conversation about 8 months ago from mastodon.social
  10. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST
    in reply to
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @simon @annaleen @BBCWorld I’m not sure why you’re linking back to an earlier entry in this same thread, but while it’s a worthy research topic it’s not clear that embodiment is key. I haven’t seen anything that uses ML and has any capacity to handle interruptions, and without being able to handle interruptions trying to control a body is not going to go well

    In conversation about 8 months ago from mastodon.social
  11. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST
    in reply to
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • BBC News (World)
    • simon

    @HistoPol @simon @annaleen @BBCWorld a static model built by a human-directed “machine learning” process doesn’t grow other than as directed, and is certainly not self-organizing

    Machine learning begins with taking statistical regression and simplifying it so you can build a model using far more data than is practical with full regression. Regression is a generalization of curve-fitting, finding a mathematical function that fits some given data as well as possible
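    The regression-as-curve-fitting idea described above can be sketched in a few lines of Python (an illustrative example added here, not part of the original notice; it assumes NumPy and uses its least-squares `polyfit`):

    ```python
    import numpy as np

    # Sample data: noisy points scattered around the line y = 2x + 1
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

    # Least-squares regression: find the degree-1 polynomial (a line)
    # that fits the given data as well as possible
    slope, intercept = np.polyfit(x, y, deg=1)

    print(f"fitted: y = {slope:.2f}x + {intercept:.2f}")
    ```

    The fitted coefficients come out close to the true slope and intercept; ML methods trade the exactness of full regression for the ability to fit far larger models to far more data.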

    In conversation about 8 months ago from mastodon.social
  12. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:34:48 JST
    in reply to
    • Simon Willison
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴

    @HistoPol @simon @annaleen only 10%? That’s so much better than humanity, we should put them in charge right away!

    But more importantly, citation needed.
    Also needed: a working definition of general artificial intelligence

    In conversation about 8 months ago from mastodon.social
  13. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:34:47 JST
    in reply to
    • Simon Willison
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴

    @HistoPol @simon @annaleen that 10% is a survey result; the survey provides no information about how any respondents chose their responses, so it’s not possible to assess the methodology they used.

    I want a “working definition” we could use to decide that something isn’t a GI, or is a GI. Maybe first it has to be able to do more than one thing - LLMs can’t do anything other than words, so LLMs are not GIs. But that’s very incomplete

    In conversation about 8 months ago from mastodon.social
  14. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:34:45 JST
    in reply to
    • Simon Willison
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • Reuters

    @HistoPol @reuters @simon @annaleen All we really know about AGI is that we don’t know how to create one, and our inability to agree on a definition illustrates how far we are from figuring that out. That 10% figure isn’t a measure of what an AGI would do but a measure of what some people who don’t know how to create an AGI think one might do. It’s about as credible as people in the 1700s speculating about how aircraft might work.
    🧵

    In conversation about 8 months ago from mastodon.social
  15. ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:34:45 JST
    in reply to
    • Simon Willison
    • Annalee Newitz 🍜
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    • Reuters

    @HistoPol @reuters @simon @annaleen in this thread you’ve said both “generative artificial intelligence” and “general artificial intelligence”; I would avoid the latter and use exclusively “artificial general intelligence”
    🧵

    In conversation about 8 months ago from mastodon.social
  16. ShadSterling (shadsterling@mastodon.social)'s status on Sunday, 08-Sep-2024 04:04:28 JST
    in reply to
    • Paul Cantrell

    @inthehands with respect to scholarly works in particular, we also need to reform the peer review process. I don’t know what the right way to do peer review is, but the way we do it now is bad in several ways, including accepting too much pollution.

    ( https://mastodon.social/@ShadSterling/113097784879653538 )

    In conversation about 9 months ago from mastodon.social

    Attachments

    1. ShadSterling (@ShadSterling@mastodon.social)
      @deevybee I remember being in college for computer science and reading about the first software-generated paper to be published in a CS journal. It was a stunt, a novelty, and everything I saw about it criticized the peer-review process. No one I saw had started to think about the potential threat of such generative software becoming widely available. Now we’re generating such nonsense at an overwhelming rate, and we haven’t even improved our peer-review enough to treat human authors well
  17. ShadSterling (shadsterling@mastodon.social)'s status on Wednesday, 04-Sep-2024 06:24:38 JST
    in reply to
    • Alan Langford
    • Paul Cantrell

    @inthehands @alan I’ve supported The Guardian in the past, don’t remember stopping for any particular reason, but now I’m wary of starting again: https://spore.social/@ayoub/113074483640487123

    In conversation about 9 months ago from mastodon.social

    Attachments

    1. Elia Ayoub (he/him) (@ayoub@spore.social)
      In 2020 two academics published a paper looking at how The Guardian has been mainstreaming the Far Right. At the core there is one problem (this is my simplification): the conflation of Far Right concerns with 'the people'. By default whatever the Far Right says has to be taken more seriously than anything even left of center (let alone further left), which is also why the Guardian often uses 'populism' and 'the far right' interchangeably.
  18. ShadSterling (shadsterling@mastodon.social)'s status on Saturday, 31-Aug-2024 10:46:49 JST
    in reply to
    • Alan Langford
    • Paul Cantrell

    @inthehands @alan I’ve been trying to make a budget for supporting real journalism and a list of organizations to support; so far I have 404 Media, the Texas Observer, and Zeteo (tho it looks like the only way to support Zeteo is through Substack), but nothing with much focus on government and elections. What other organizations should I be considering?

    In conversation about 9 months ago from mastodon.social
  19. ShadSterling (shadsterling@mastodon.social)'s status on Sunday, 18-Aug-2024 18:50:00 JST
    in reply to
    • damon
    • Johannes Ernst

    @j12t @damon AFAIK we still don’t have good enough instance-choosing tools

    (We also don’t have good enough old-post-finding tools, or I would link to one of my old posts about what those tools need to do)

    In conversation about 9 months ago from gnusocial.jp
  20. ShadSterling (shadsterling@mastodon.social)'s status on Sunday, 18-Aug-2024 18:49:59 JST
    in reply to
    • damon
    • Johannes Ernst

    @j12t @damon choosing an instance specific to one of the many subjects that interest me is contrary to how I relate to those communities. I’m not here for a community bulletin board that allows crossposting, I think we’d need a different interaction paradigm for me to want community features, and it wouldn’t involve tying my whole fediverse identity to a single community. What’s supposed to happen if I joined my hometown library’s instance and then moved away?

    In conversation about 9 months ago from gnusocial.jp

ShadSterling

    Capitalism is a denial-of-service attack on human potential


    Following: 0
    Followers: 0
    Groups: 0

    Statistics
    User ID: 20730
    Member since: 9 Nov 2022
    Notices: 69
    Daily average: 0


          GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

          All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.