GNU social JP
GNU social JP is a GNU social server in Japan.
Conversation

Notices

Per Axbom (axbom@axbom.me)'s status on Wednesday, 05-Jun-2024 16:05:37 JST
    In June 2023 I wrote:

«Will consumers perhaps come to see the phrase "AI-Powered System" in the same light as "Diesel-Powered SUV"?»

    Well, not yet it would seem.

In The Elements of AI Ethics from June of last year I built on The Elements of Digital Ethics from 2021, which itself was the output of reading about digital harms for many years.

    Seeing all of the categories of harms just get worse year on year is disheartening.

    What goal is worth all this? I tend to fall back on a sentiment I use in my talks and teaching:

The more a privileged group benefits from a technology, the more inclined its members will be to ignore the harms done to others by that same technology, because drawing attention to the harm would suggest they should give up their personal gain to help someone else.

    This appears to be true for the short term. In the long term the beneficiaries of technology will happily also ignore harm done unto themselves, as long as they get the experience boost in the moment.

    What hope is there?

In my June 11 session for Ambition Empower I will be talking about how to champion technologies of compassion, drawing on work related to nature connectedness by P. Wesley Schultz, Marianne E. Krasny, F. Stephan Mayer and Cynthia M. Frantz.

Technologies of compassion work in unison with an acknowledgement of our connection not only to each other but also to nature. Technology tends to separate us from nature, making us value it less, and causing us to worsen our own living conditions, and those of all other species, over time.

But we can choose to design technology that takes nature into account. Technology that works with, not against, nature. I believe this is what all schools must start teaching. Now.

    Expect me to write more about this over the next year.

    https://per.ax/aie

    #solarpunk
In conversation about a year ago from axbom.me

    Attachments

1. The Elements of AI Ethics
      from @axbom
      Let's talk about harm caused by humans implementing AI.
  • HistoPol (#HP) 🏴 🇺🇸 🏴 (histopol@mastodon.social)'s status on Wednesday, 05-Jun-2024 17:43:13 JST
in reply to @axbom

      #AIEthics #AICommunications #Deception

      (1/n)

I just listened to your article, which has many interesting aspects. Thank you for sharing.

      I just wanted to focus on the graph, which immediately answered a question that I've had for quite some time.

Why are so many AI luminaries talking about the threat of General Artificial Intelligence? I agree with that threat assessment 1) 2), but why would they, in particular business people like Sam Altman?

      Your...

      https://axbom.com/aielements/

In conversation about a year ago
  • HistoPol (#HP) 🏴 🇺🇸 🏴 (histopol@mastodon.social)'s status on Wednesday, 05-Jun-2024 17:45:30 JST
in reply to @axbom

      #AIEthics #AICommunications #Deception

      (2/n)

      Your graph answers this instantly. The great Chinese general, #SunTzu said:

      "All warfare is based on deception. Hence, when able to attack, we must seem unable; when using our forces, we must seem inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near."
      (#TheArtOfWar)

They are simply deflecting from the plethora of immediate threats...

      1)
      https://mastodon.social/@HistoPol/109894787077782438

In conversation about a year ago

      Attachments

1. HistoPol (@HistoPol@mastodon.social)
        from HistoPol
        Attached: 1 image @simon@simonwillison.net (1/n) General Artificial Intelligence has a 10% probability of causing an Extinction Level Event for humanity (1) Thanks for this additional piece of information, Simon. It reminded me that I had wanted to add a word in my toot: indelibly. As any #SciFy aficionado will tell you: there should be a built-in self-destruct mechanism when tampering with these Laws or copying or moving the #AI to another system. Another classic movie comes to mind in this respect, #Wargames...
  • HistoPol (#HP) 🏴 🇺🇸 🏴 (histopol@mastodon.social)'s status on Wednesday, 05-Jun-2024 17:46:16 JST
in reply to @axbom

      #AIEthics #AICommunications #Deception

      (3/3)

      ...that you portray on the right-hand side.

That said, I am still quite certain that the threat of #ArtificialGeneralIntelligence (#AGI) is also quite real and not as distant as most people seem to think:

      1) (see previous page)

      2)
      https://mastodon.social/@HistoPol/110289610086727037

In conversation about a year ago

      Attachments

1. HistoPol (#HP) (@HistoPol@mastodon.social)
        from HistoPol (#HP)
        Attached: 1 image The threat of #GAI #generativeAI (1/n) Almost every week now, + despite statements to the contrary, by many #AI #scientists and #programmers, the utopias of #IsaacAsimov and #PhilipKDick (+ others 1)) are making a leap forward. Due to all the white noise + the hype regarding #AI most of the general public. Since I posted my warning in February (https://mastodon.social/@HistoPol/109877181962607380), much has happened. I see the enabling of #robots with #AI (https://mastodon.social/@HistoPol/110129405482528991) as a particular threat because it...

GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.