GNU social JP
GNU social JP is a Japanese GNU social server.
Timnit Gebru (she/her) (timnitgebru@dair-community.social)'s status on Saturday, 11-Jan-2025 04:09:55 JST

    Some of their greatest hits are Nick "Blacks are more stupid than whites" Bostrom taking the stage to tell us about so-called "superintelligence" safety & other eugenicists we discussed in this paper: https://firstmonday.org/ojs/index.php/fm/article/view/13636.

    If you have an easier time getting certified eugenicists like Bostrom on stage to discuss the safety of non-existent "superintelligence" & a hard time having ppl who know first hand what so-called "AI safety" means, my conclusion is that you're in with the eugenicists.

    In conversation about 4 months ago from dair-community.social

    Attachments

    1. The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
      The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.

GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.