Per Axbom (axbom@axbom.me), Friday 23 Jun 2023, 03:36 JST
A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.
There are several ways personal data makes its way into AI tools. First, since the tools are often trained on data available online, and in an unsupervised manner, personal data will make its way into the workings of the tools themselves. Data may have been inadvertently published online, or may have been published for a specific purpose – rarely one that supports feeding an AI system and its plethora of outputs. Second, personal data is entered into the tools by everyday users through negligent or inconsiderate use – data that at times is also stored and used by the tools. Third, when many data points from many different sources are linked together in one tool, they can reveal details of an individual's life that no single piece of information could.
In order for us to avoid seeing traumatizing content when using many of these tools (such as physical violence, self-harm, child abuse, killings and torture), this content needs to be filtered out. And for it to be filtered out, someone has to watch it. As it stands, the workers who perform this filtering are often exploited and suffer PTSD, without adequate care for their wellbeing. Many of them have no idea what they are getting themselves into when they start the job.
One inherent property of AI is its ability to act as an accelerator of other harms. Because these tools are trained on large amounts of data (often unsupervised) that inevitably contain biases, outdated values and prejudiced commentary, these will be reproduced in their output. Often this will happen unnoticed (especially when not actively monitored), since many biases are subtle and embedded in common language. At other times it will happen quite visibly, with bots spewing toxic, misogynistic and racist content.
Because bias becomes embedded in these tools, and because of wider systemic issues, the consequences fall hardest on people who are already disempowered. Automated decision-making tools often rely on scoring systems, and these scores can affect, for example, job opportunities, welfare and housing eligibility, and judicial outcomes.
The more complex the algorithms become, the harder they are to understand. As more people get involved, time passes, integrations with other systems are made and documentation falters, the further they drift from human understanding. Many companies will hide proprietary code and evade scrutiny, sometimes themselves losing sight of the full picture of how the code works. Decoding and understanding how decisions are made will be possible for ever fewer people.
And it doesn't stop there. This also affects autonomy. By obscuring decision-making processes (how, when and why decisions are made, what options are available, and what personal data is shared), these tools make it increasingly difficult for individuals to make properly informed choices in their own best interest.
When power rests with a few, their own needs and concerns will naturally be top of mind and prioritized. And the more their needs are prioritized, the more power they gain. Three million AI engineers amounts to roughly 0.04% of the world's population.
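As a quick sanity check (assuming a world population of roughly 8 billion):

3,000,000 / 8,000,000,000 = 0.000375 ≈ 0.04%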
The dominant actors in the AI space right now are primarily US-based. And the computing power required to build and maintain many of these tools is huge, ensuring that the power of influence will continue to rest with a few big tech actors.
I'll add clarifications regarding some of the topics to this thread. 👇
Regarding Monoculture. Today, there are nearly 7,000 languages and dialects in the world. Only 7% of them are reflected in published online material. 98% of the internet's web pages are published in just 12 languages, and more than half of those pages are in English. Even when sourcing the entire Internet, you are still drawing on a small part of humanity.
While 76% of the online population lives in Africa, Asia, the Middle East, Latin America and the Caribbean, most online content comes from elsewhere. Take Wikipedia, for example, where more than 80% of articles come from Europe and North America.
Now consider what content most AI tools are trained on.
Through the lens of a small subset of human experience and circumstance, it is difficult to envision and foresee the multitudes of perspectives and fates that one new creation may influence. The homogeneity of those who have been provided the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit – with little thought for the wellbeing of those not visible in the reflection.