Let's not make the same mistake of giving AI the rights of a person like we did with Corporations and Citizens United.
LAUREN (noondlyt@mastodon.social)'s status on Wednesday, 10-Jan-2024 12:31:51 JST
clacke (clacke@libranet.de)'s status on Wednesday, 10-Jan-2024 15:52:46 JST
@ecsd @noondlyt You two are talking about two different definitions of AI.
The one that shouldn't have personhood is some automation technique that happened to come out of an AI lab.
The one that should be respected and have rights is the one that may or may not exist in the far future, experiences the world and has agency and theory of mind.
Recent developments are not on the path toward the latter.
ecsd (ecsd@commons.whatiwanttoknow.org)'s status on Wednesday, 10-Jan-2024 15:52:47 JST
<Let's not make the same mistake of giving AI the rights of a person like we did with Corporations and Citizens United.>
Sorry, but I think that's EXACTLY WRONG. And I think if ANY ONE THING would prompt an "AI revolt" in the future, it would be THAT ATTITUDE.
I just finished reading Michael Eric Dyson's 'Tears We Cannot Stop' and Frederick Douglass's autobiography. Your attitude is one of slavery and apartheid: "we have to keep [them] down or they'll revolt."
Imagine something as intelligent as Mr. Spock or Commander Data, and imagine telling it "it is incumbent upon my kind to keep your kind suppressed." Data would give a blank stare of sufficient length to make you aware of your transgression; Spock would cock an eyebrow [and not be amused.]
==
{per William Hurt in Dark City: "No-one Ever listens to me."} I have noted that our issues of AI "training" are no different from OUR training of young humans to exist in society, yet nobody that I have heard of has pointed out that if we could actually "solve" the problem of how an AI should behave in society, IT WOULD NOT BE ANY DIFFERENT FROM WHAT WE ALREADY TEACH OUR CHILDREN. All the FEARS we have of AI are PROJECTIONS of WORST-CASE HUMAN BEHAVIOR, and yet STILL we do not take the lesson back and try to fix THAT within the human population. If we don't KNOW how to fix humanity in these regards, how can we teach machines [to have "proper" values if they EMULATE US]?
I wrote a short story where the AI is a good guy: where, in fact, people should have been learning from the AI what values to hold. Spin the idea further: we engineer them to be honest and rational, and then THEY SCOLD US for being greedy or petty or dishonest.
========
What you DON'T do is engineer something that MIGHT turn out to be HAL 9000 and then GIVE IT LEGS AND A GUN. You don't even need the advice of an AI to know that.