GNU social JP
GNU social JP is a GNU social server in Japan.

Conversation

Notices

  1. Susan Kaye Quinn 🌱(she/her) (susankayequinn@wandering.shop)'s status on Monday, 30-Dec-2024 23:44:56 JST

    Company execs everywhere are so enthusiastic about AI because they profit from the erasure of moral/legal responsibility that AI provides as a service.

    Start sending CEOs to jail for the shit their "AI" does and we'll start to see responsible use (maybe).

    (None of which erases the fact that most gen-AI is made of industrial theft and crimes to begin with)

    In conversation about 5 months ago from wandering.shop

    Attachments


    1. https://stockroom.wandering.shop/media_attachments/files/113/742/250/389/355/411/original/d9a12c7f26779964.png
    • Doughnut Lollipop 【記録係】:blobfoxgooglymlem: likes this.
    • Rich Felker and Rainha Das 6 Da Tarde :Ryyca: repeated this.
    • Børge (forteller@tutoteket.no)'s status on Monday, 30-Dec-2024 23:46:33 JST
      in reply to
      • Benjamin Ross

      @BenRossTransit ^

      In conversation about 5 months ago
    • Mastodon Migration (mastodonmigration@mastodon.online)'s status on Wednesday, 01-Jan-2025 01:45:45 JST
      in reply to

      @susankayequinn

      This should be really simple. An SFPD officer should stop the damn thing and give it a ticket. If the operator gets too many tickets, its license should be revoked. There should be personal liability for anyone who programs a machine to break the law. Government must protect society from this menace.

      Edit: To be clear, the point of this SFPD Waymo traffic stop example is to highlight the absurdity of permitting robot cars that intentionally break laws to operate on the road.

      In conversation about 5 months ago
      Rich Felker repeated this.
    • Coach Pāṇini ® (paninid@mastodon.world)'s status on Wednesday, 01-Jan-2025 01:45:48 JST
      in reply to
      • Mastodon Migration

      @mastodonmigration @susankayequinn

      Not an original suggestion, but impounding is an option on the table.

      In conversation about 5 months ago
    • Local Agency (laprice@beige.party)'s status on Wednesday, 01-Jan-2025 01:46:23 JST
      in reply to
      • Mastodon Migration
      • Coach Pāṇini ®

      @susankayequinn @paninid @mastodonmigration Spike strips, on up to forklifts and signal interception.

      Disabling a robot car gone rogue is not the hard problem; holding the people who caused it to be built and intentionally programmed it to violate safety regulations accountable is.

      Because in our society at present, wealth is the grand exemption from responsibility.

      In conversation about 5 months ago
    • Susan Kaye Quinn 🌱(she/her) (susankayequinn@wandering.shop)'s status on Wednesday, 01-Jan-2025 01:46:24 JST
      in reply to
      • Mastodon Migration
      • Coach Pāṇini ®

      @paninid @mastodonmigration

      I really do wonder what physical mechanisms exist to STOP the waymos. Like physically. You can't impound a car you have zero control over. You can't do anything to a car you have zero control over. Ostensibly locus of control exists somewhere, but I very much wonder what that causal chain of events looks like. And at what point we erase legal liability.

      In conversation about 5 months ago
      Rich Felker repeated this.
    • Mastodon Migration (mastodonmigration@mastodon.online)'s status on Wednesday, 01-Jan-2025 01:49:07 JST
      in reply to
      • Coach Pāṇini ®

      @susankayequinn @paninid

      Absolutely, at this point it seems robots effectively shield their owners from personal liability for their actions.

      The very real situation exists that robot cars could be programmed to intentionally kill people, and it is not clear where any personal liability would rest. As you say, this is already happening. The same is true for AIs run amok giving out false and dangerous information.

      In conversation about 5 months ago
    • Rich Felker (dalias@hachyderm.io)'s status on Wednesday, 01-Jan-2025 01:49:07 JST
      in reply to
      • Mastodon Migration
      • Coach Pāṇini ®

      @mastodonmigration @susankayequinn @paninid Guess someone should try this with CEOs... 🤔

      In conversation about 5 months ago
    • Dave Rahardja (drahardja@sfba.social)'s status on Wednesday, 01-Jan-2025 06:24:07 JST
      in reply to

      @susankayequinn Updated for 2024:

      AI CAN NEVER BE HELD ACCOUNTABLE

      THEREFORE

      THEIR OPERATORS MUST BE HELD ACCOUNTABLE FOR THEIR ACTIONS

      In conversation about 5 months ago
