GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. asbestosenjoyer666 (melonhusk@new.asbestos.cafe)'s status on Thursday, 17-Apr-2025 09:14:40 JST
    I wish that folks would just admit that many discussions revolving around theoretical AGI are eugenicist in nature
    In conversation about a month ago from new.asbestos.cafe
    • Fish of Rage (sun@shitposter.world)'s status on Thursday, 17-Apr-2025 09:14:39 JST
      in reply to
      @melonhusk what do you mean?
      In conversation about a month ago
    • Fish of Rage (sun@shitposter.world)'s status on Thursday, 17-Apr-2025 15:55:49 JST
      in reply to
      • meso
      @meso @melonhusk ethereum use of power is negligible
      In conversation about a month ago
    • meso (meso@new.asbestos.cafe)'s status on Thursday, 17-Apr-2025 15:55:51 JST
      in reply to
      • Fish of Rage
      • meso
      @melonhusk @sun interestingly, lots of people shat on cryptocurrency for its environmental impact, and yeah, that's how it works, it's pretty energy hungry. but then when OpenAI was found to use what i'm pretty sure amounts to a small country's reserves worth of water, it got a few articles and people shut up. the environmental stuff remains the largest argument against crypto in mainstream libshit communities. i mean, crypto is just lame man, you don't need a fancy performative reason to despise shit like that
      In conversation about a month ago
    • meso (meso@new.asbestos.cafe)'s status on Thursday, 17-Apr-2025 15:55:54 JST
      in reply to
      • Fish of Rage
      • meso
      @melonhusk @sun they all just sound the same which is why i feel there might be some language code enforced on these topics
      In conversation about a month ago
    • meso (meso@new.asbestos.cafe)'s status on Thursday, 17-Apr-2025 15:55:55 JST
      in reply to
      • Fish of Rage
      • meso
      @melonhusk @sun like are they trying to sound like they're intellectuals? is this some kind of code enforced in academia tackling anti-corporate subjects? like, you HAVE to just mention the adversely affecting racial and gender minorities etc etc and environmental impact and maaaybe you can sprinkle in some centralization but THAT'S IT!

      is that even article material when you're just stating what anybody can see? it seems like filler, except for the eugenicist stuff which is a new argument
      In conversation about a month ago
    • meso (meso@new.asbestos.cafe)'s status on Thursday, 17-Apr-2025 15:55:56 JST
      in reply to
      • Fish of Rage
      @melonhusk @sun > As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability.
      I don't like when they have this much jargon, just say they're evil fucks that want to fuck regular people and the disadvantaged over. it sounds so performative I don't think they'll get anything done when they talk like that

      > The AGI race fueled by the TESCREAList goal to build “transhuman minds” (Goertzel, 2010) and bring about “unlimited intelligence” has also resulted in systems that consume more and more resources in terms of data and compute power. This leads to a high environmental impact, increases the risks that arise due to the lack of appropriate scoping discussed earlier, and results in the centralization of power among a handful of entities.
      i really don't see how it matters it's gonna have a "high environmental impact" and "increases the risks that arise"

      > As detailed by a number of scholars, both the environmental impacts and the unsafe outputs of these systems disproportionately affect marginalized groups like racial and gender minorities, disabled people, and citizens of developing countries bearing the brunt of the climate catastrophe (Bender and Gebru, et al., 2021). The AGI race not only perpetuates these harms to marginalized groups, but it does so while depleting resources from these same groups to pursue the race. Resources that could go to many entities around the world, each building computational systems that serve the needs of specific communities, are being siphoned away to a handful of corporations trying to build AGI.

      so much mention about how climate catastrophe is gonna affect minorities as if that's an argument that needs to be made given that... it's a climate catastrophe. yes, it will affect minorities. and POOR people. they only seem to mention the centralization and massive corporate control aspects of AI as a PS rather than one of the most important things.

      I probably agree with the whole article I just don't like the way things are written nowadays. I mean, it's just so many euphemisms. They want to kill the poor, just say it the way it is.
      In conversation about a month ago
    • asbestosenjoyer666 (melonhusk@new.asbestos.cafe)'s status on Thursday, 17-Apr-2025 15:55:58 JST
      in reply to
      • Fish of Rage
      @sun I can't put it any better into words than this article has https://firstmonday.org/ojs/index.php/fm/article/view/13636
      In conversation about a month ago

      Attachments

      1. The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
        The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.


GNU social JP is a social network, courtesy of GNU social JP管理人 (the GNU social JP administrator). It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.