You are not learning fast enough. You are not using the new tools enough. You are not publishing enough. You are not in nature enough. You are not caring enough. You are not being social enough. You are not putting away your phone enough. You are not practicing enough. You are not recycling enough. You are not protesting enough. You are not planning enough. You are not moving quickly enough.
All of this is wrong. So very wrong. You are you. You matter. So much. And you are enough.
"AI is existing as it's supposed to exist," says McKernan. "I think it has had a lot of potential to make our lives easier, to make workflows more effective. My issue is that the implementation of it, especially with AI art, hasn't been ethical, in my opinion because of the way it is built off a massive data set with 5.1 billion images, and taxpayers' data, all of which was culled from the internet without consent."
Per Axbom (axbom@axbom.me)'s status on Sunday, 22-Oct-2023 17:33:09 JST
Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and is favoured in the model output: far more outdated, harmful information has been published than updated, correct information, so the outdated version is statistically more likely to come out.
"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."
In this regard the tools don't take us to the future, but to the past.
No, you should never use language models for health advice. But many people are arguing for exactly that to happen. I also believe these kinds of harmful biases make their way into many more machine learning applications than language models alone.
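To make the statistical mechanism concrete, here is a minimal sketch with invented data and a deliberately naive frequency "model" (not how any real language model is built): when outdated material outnumbers corrections in a corpus, the statistically likely output is the outdated claim.

```python
from collections import Counter

# Invented toy corpus: decades of old publications outnumber newer corrections.
corpus = (
    ["debunked claim from older publications"] * 80
    + ["corrected, up-to-date claim"] * 20
)

def most_likely_answer(documents):
    # Frequency, not truth or recency, decides the output.
    return Counter(documents).most_common(1)[0][0]

print(most_likely_answer(corpus))
# -> "debunked claim from older publications"
```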
In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.
Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.
It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.
Per Axbom (axbom@axbom.me)'s status on Sunday, 15-Oct-2023 17:24:56 JST
The idea appears to be to let computers exponentially proliferate some of the tasks they excel at: numbers, statistics, labelling people, copying, collecting data and mass surveillance. Rather than sit down and talk about how we boost the values we as humans wish to proliferate: compassion, love, care, connection and belonging.
Few are talking about how the former is antithetical to the latter.
I’m not saying ”stop using computers”, I’m saying ”stop letting computers assume the leadership position”. Computers can act as aids for compassion, love, care, connection and belonging. Think of games, text-to-speech and long-distance communication. But computers arrive there by instruction code from humans. Not the other way around.
The more alarming truth is this: computers can be used to destroy compassion, love, care, connection and belonging much faster than we can keep up with building it. Sometimes that destruction is with intent, but often it is oblivious.
Was talking to @beantin about the “amazing” feat where ChatGPT passed the bar exam. We agreed that if you feed all the relevant content for the bar exam into ChatGPT there really should be no big surprise about it being able to spew out statistically relevant content.
The fact that it still got such a relatively low score should be a cause for worry, not celebration(!) It’s evidence that the tool has no understanding of what it is doing. It has the answers, it’s just not able to use them in the right way. It’s like a student sitting with a textbook with all the answers to the test and not being able to understand which answer fits where.
As Paris Marx wrote in March:
"it’s so funny to me that the AI people think it’s impressive when their programs pass a test after being trained on all the answers”
I tend to question most things I see online. Today was no exception. I was sent images of squirrels supposedly landing on the ground like superheroes after jumping from trees (one fist in the ground and the other arm stretched out behind them).
I really, really wanted to believe this. It’s too cool! But my mind immediately went… are these AI-generated?
As it turns out, this claim has been doing the rounds for years. The pictures are real. The context is not. That's not a squirrel landing; it's what a squirrel looks like while scratching its armpit with a hind leg.
And how do they in fact land? "When in a controlled fall, squirrels will spread their limbs out wide to increase air resistance and hit the ground like a bushy-tailed pancake. This helps spread the force of the impact over a greater area to prevent injury."
@HistoPol Good reflections. As it turns out, on Tuesday I’m attending a course on AI and regulation. It’s aimed at lawyers, but I was welcome. Hoping to make some valuable connections there, as I am also in fact hopeful that more legislative efforts can bring about change a bit quicker.
@HistoPol I believe this is a shift that can only happen with education and when becoming embedded in culture. Takes a lot of time.
Especially when equity gaps increase before they decrease.
My conviction has become to advocate for the idea of love, compassion and care as viable forces for innovation and business. But I have no illusion of making much of a dent within my own lifespan.
Surveys only capture responses from people:
- who are made aware of the survey
- who are given access to the survey
- who are willing to respond
- who are able to respond
People who are disenfranchised, living with disabilities, struggling with time, money and/or language/literacy generally will not respond.
Surveys are rarely representative because the time and effort required to make surveys inclusive is not invested.
The effect of surveys is then that people who are made invisible by society are made even more invisible by organisations that often call themselves data-driven.
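A small simulation, using entirely invented numbers, shows how this plays out: when the people facing barriers are both less satisfied and less likely to respond, the survey's headline figure drifts away from reality.

```python
import random

random.seed(42)

# Invented assumption: 30% of people face barriers (time, money, language,
# disability); they are both less satisfied and far less likely to respond.
population = []
for _ in range(10_000):
    barriers = random.random() < 0.3
    satisfied = random.random() < (0.25 if barriers else 0.70)
    responds = random.random() < (0.05 if barriers else 0.60)
    population.append((satisfied, responds))

true_rate = sum(s for s, _ in population) / len(population)
respondents = [s for s, r in population if r]
survey_rate = sum(respondents) / len(respondents)

print(f"Satisfaction in the whole population: {true_rate:.0%}")   # about 57%
print(f"Satisfaction the survey reports:      {survey_rate:.0%}") # about 68%
```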
« The AI sector is utterly dependent on criti-hype. They are burning tens of billions of dollars on engineering salaries, custom chip fabs, human data annotation, data-center rents, racks and racks of GPUs and ASICs, whole gridsworth of electricity and entire aquifers’ worth of fresh water for cooling.
They are hemorrhaging a river of cash, but that river’s source is an ocean-sized reservoir of even more cash.
To keep that reservoir full, the AI industry needs to convince fresh rounds of “investors” to give them hundreds of billions of dollars on the promise of a multi-trillion-dollar payoff.
That’s where the “AI Safety” story comes in. You know, the tech bros who run around with flashlights under their chins, intoning “ayyyyyy eyeeeee,” and warning us that their plausible sentence generators are only days away from becoming conscious and converting us all into paperclips. »
Per Axbom (axbom@axbom.me)'s status on Thursday, 28-Sep-2023 23:20:07 JST
I’m hoping a person who speaks at least English, French and German (or any other combination of the advertised languages) will do a writeup on all the weaknesses (and dangers) inherent in the concept of Spotify's new podcast auto-translation feature. If you see someone writing about this, do share it with me.
"We can't wait anymore." "7 minutes until the first warhead is in the observation zone." "We won't have time to retaliate. You have to make a decision!" "You see it?" "Could be." "No. That's not heat from a missile." "Damn!" "Let's keep looking." "THE COMPUTER CAN'T BE WRONG!" "I don't understand it." "Damn it! They have to confirm this damn attack." "All thirty levels of security levels confirms the attack!" "Infrared devices verify heat from all five launched missiles!" "What are we going to do?"
Stanislav Petrov: "Nothing. I don't trust the computer. We'll wait."
This dialogue is from a re-enactment in the documentary The Man Who Saved the World.
Last year I wrote about three lessons I take away from his story.
1. Embrace multiple perspectives. Petrov was educated as an engineer rather than a military man. He knew the unpredictability of machine output.
2. Look for multiple confirmation points. To confirm our beliefs we should expect many different variables to line up and tell us the same story. If one or more variables are saying something different, we need to pursue those anomalies to understand why. If the idea of a faulty system lines up with all other variables, that makes it more likely (a minimal sketch of this follows after the list).
3. Reward exposure of faulty systems. If we keep praising our tools for their excellence and efficiency it's hard to later accept their defects. When shortcomings are found, this needs to be communicated just as clearly and widely as successes. Maintaining an illusion of perfect, neutral and flawless systems will keep people from questioning the systems when the systems need to be questioned.
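As referenced in lesson 2, here is a minimal sketch of the confirmation idea. The sensor names and readings are hypothetical, not a model of the actual Soviet early-warning system.

```python
# Act only when independent signals agree; treat any disagreement
# as an anomaly to investigate before acting.
def assess(signals: dict) -> str:
    readings = list(signals.values())
    if all(readings):
        return "confirmed: act"
    if any(readings):
        quiet = [name for name, seen in signals.items() if not seen]
        return f"anomaly: investigate {', '.join(quiet)} before acting"
    return "no event"

# Petrov's situation, roughly: the satellite system reported launches,
# but other confirmation points did not line up.
print(assess({"satellite": True, "ground_radar": False}))
# -> "anomaly: investigate ground_radar before acting"
```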
Take "Humpty Dumpty sat on a... Even this snippet of a nursery rhyme reveals how much languages can differ from one another. In English, we have to mark the verb for tense; in this case, we say "sat" rather than "sit." In Indonesian you need not (in fact, you can't) change the verb to mark tense.
In Russian, you would have to mark tense and also gender, changing the verb if Mrs. Dumpty did the sitting. You would also have to decide if the sitting event was completed or not. If our ovoid hero sat on the wall for the entire time he was meant to, it would be a different form of the verb than if, say, he had a great fall.
In Turkish, you would have to include in the verb how you acquired this information. For example, if you saw the chubby fellow on the wall with your own eyes, you'd use one form of the verb, but if you had simply read or heard about it, you'd use a different form.
Do English, Indonesian, Russian and Turkish speakers end up attending to, understanding, and remembering their experiences differently simply because they speak different languages?"
The answer is yes.
In a world where ideas are shared across languages, understanding how and why languages make us think, behave and reason differently from one another is increasingly important.
"All this new research shows us that the languages we speak not only reflect or express our thoughts, but also shape the very thoughts we wish to express. The structures that exist in our languages profoundly shape how we construct reality, and help make us as smart and sophisticated as we are."
« Watch Lera Boroditsky's talk. Lera Boroditsky is an associate professor of cognitive science at University of California San Diego and editor in chief of Frontiers in Cultural Psychology. She previously served on the faculty at MIT and at Stanford. Her research is on the relationships between mind, world and language (or how humans get so smart).
She once used the Indonesian exclusive "we" correctly before breakfast and was proud of herself about it all day. »
I'll add clarifications regarding some of the topics to this thread. 👇
Regarding Monoculture. Today, there are nearly 7,000 languages and dialects in the world. Only 7% are reflected in published online material. 98% of the internet’s web pages are published in just 12 languages, and more than half of them are in English. When sourcing the entire Internet, that is still a small part of humanity.
While 76% of the cyber population lives in Africa, Asia, the Middle East, Latin America and the Caribbean, most of the online content comes from elsewhere. Take Wikipedia, for example, where more than 80% of articles come from Europe and North America.
Now consider what content most AI tools are trained on.
Through the lens of a small subset of human experience and circumstance, it is difficult to envision and foresee the multitudes of perspectives and fates that one new creation may influence. The homogeneity of those who have been provided the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit – with little thought for the wellbeing of those not visible inside the reflection.
When power is with a few, their own needs and concerns will naturally be top of mind and prioritized. The more their needs are prioritized, the more power they gain. Three million AI engineers amount to roughly 0.04% of the world's eight billion people (3,000,000 ÷ 8,000,000,000 ≈ 0.0004).
The dominant actors in the AI space right now are primarily US-based. And the computing power required to build and maintain many of these tools is huge, ensuring that the power of influence will continue to rest with a few big tech actors.