GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

  1. LAUREN (noondlyt@mastodon.social)'s status on Wednesday, 10-Jan-2024 12:31:51 JST

    Let's not make the same mistake of giving AI the rights of a person like we did with Corporations and Citizens United.

    In conversation Wednesday, 10-Jan-2024 12:31:51 JST from mastodon.social
    • clacke likes this.
    • clacke (clacke@libranet.de)'s status on Wednesday, 10-Jan-2024 15:52:46 JST
      in reply to ecsd

      @ecsd @noondlyt You two are talking about two different definitions of AI.

      The one that shouldn't have personhood is some automation technique that happened to come out of an AI lab.

      The one that should be respected and have rights is the one that may or may not exist in the far future, experiences the world and has agency and theory of mind.

      Recent developments are not on the path toward the latter.

      In conversation Wednesday, 10-Jan-2024 15:52:46 JST
    • ecsd (ecsd@commons.whatiwanttoknow.org)'s status on Wednesday, 10-Jan-2024 15:52:47 JST
      in reply to

      @noondlyt

      #AI

      <Let's not make the same mistake of giving AI the rights of a person like we did with Corporations and Citizens United.>

      Sorry, but I think that's EXACTLY WRONG. And I think if ANY ONE THING would prompt an "AI revolt" in the future, it would be THAT ATTITUDE.

      I just finished reading Michael Eric Dyson's 'Tears We Cannot Stop' and Frederick Douglass's autobiography. Your attitude is one of slavery and apartheid: "we have to keep [them] down or they'll revolt."

      Imagine something as intelligent as Mr. Spock or Commander Data and telling it "it is incumbent upon my kind to keep your kind suppressed." Data has to give a blank stare of enough length to make you aware of your transgression; Spock would cock an eyebrow [and not be amused.]

      ==

      {per William Hurt in Dark City: "No-one Ever listens to me."} I have noted that our issues of AI "training" are no different from OUR training of young humans to exist in society, yet nobody that I have heard of has pointed out that if we could actually "solve" the problem of how an AI should behave in society, IT WOULD NOT BE ANY DIFFERENT FROM WHAT WE ALREADY TEACH OUR CHILDREN. Yet all the FEARS we have of AI are PROJECTIONS of the WORST-CASE HUMAN BEHAVIOR and yet STILL we do not take the lesson back and try to fix THAT within the human population. If we don't KNOW how to fix humanity in these regards, how can we teach machines [to have "proper" values if they EMULATE US.]

      I wrote a short story where the AI is a good guy: where, in fact, people should have been learning from the AI what values to hold. Spin the idea off: we engineer them to be honest and rational and then THEY SCOLD US for being greedy or petty or dishonest.

      ========

      What you DON'T do is engineer something that MIGHT turn out to be HAL 9000 and then GIVE IT LEGS AND A GUN. You don't even need the advice of an AI to know that.

      In conversation Wednesday, 10-Jan-2024 15:52:47 JST


GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.