"Will consumers perhaps come to see the phrase "AI-Powered System" in the same light as "Diesel-Powered SUV"?"
Well, not yet it would seem.
In The Elements of AI Ethics from June of last year, I built on The Elements of Digital Ethics from 2021, which itself was the output of many years of reading about digital harms.
Seeing all of the categories of harms just get worse year on year is disheartening.
What goal is worth all this? I tend to fall back on a sentiment I use in my talks and teaching:
The more a privileged group benefits from a technology, the more inclined they will be to ignore the harms done unto others by that same technology, because drawing attention to the harm would suggest they should give up their personal gain to help someone else.
This appears to be true in the short term. In the long term, the beneficiaries of technology will happily ignore harm done unto themselves as well, as long as they get the experience boost in the moment.
What hope is there?
In my June 11 session for Ambition Empower, I will be talking about how to champion technologies of compassion, drawing on work related to nature connectedness by P. Wesley Schultz, Marianne E. Krasny, F. Stephan Mayer and Cynthia M. Frantz.
Technologies of compassion work in unison with an acknowledgement of our connection not only to each other but also to nature. Technology tends to separate us from nature, making us value it less, and causing us to increasingly worsen our own living conditions, and those of all other species, over time.
But we can choose to design technology that takes nature into account. Technology that works with, not against, nature. I believe this is what all schools must start teaching. Now.
Expect me to write more about this over the next year.
I just listened to your article, which has many interesting aspects. Thank you for sharing.
I just wanted to focus on the graph, which immediately answered a question that I've had for quite some time.
Why are so many AI luminaries talking about the threat of Artificial General Intelligence? I agree with that threat assessment 1) 2), but why would they? In particular business people like Sam Altman?
Your graph answers this instantly. The great Chinese general, #SunTzu said:
"All warfare is based on deception. Hence, when able to attack, we must seem unable; when using our forces, we must seem inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near." (#TheArtOfWar)
They are simply deflecting from the plethora of immediate threats that you portray on the right-hand side.
Still, I am quite certain that the threat of #ArtificialGeneralIntelligence (#AGI) is also real and not as distant as most people seem to think: