A sympathy notice means that a union declares its intent to take conflict measures in support of another union's ongoing negotiations. The difference between a sympathy notice and a "normal" notice is that the supporting union is not directly involved in those negotiations.
This means a total of seven(!) unions in Sweden are now taking action against Tesla.
1) IF Metall: No workshop repair work
2) Transport: Blockade of ports (no car deliveries in the four ports of Malmö, Gothenburg, Trelleborg and Södertälje)
3) Fastighets: No cleaning of premises
4) Seko: Post and delivery blockade
5) Elektrikerna: No electrical work or repairs
6) Målarna: No paintwork on vehicles
7) ST: No mail or package deliveries
Background: Tesla is refusing to sign a collective agreement with the workers' union IF Metall. Metal workers at Tesla’s seven Swedish repair shops have been on strike since October 27. For context, around 90% of Swedish employees are covered by collective agreements. These agreements outline terms of pay, pensions and working conditions.
IF Metall has been trying to get Tesla to sign a collective agreement for the workers in its repair shops since 2018. Union representatives have said they are ready for a long strike if deemed necessary.
By way of @garymarcus's newsletter I was made aware of the following:
In a New York Times article on the self-driving company Cruise, which recently suspended its driverless operations, some interesting figures were revealed:
”Half of Cruise’s 400 cars were in San Francisco when the driverless operations were stopped. Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle. The workers intervened to assist the company’s vehicles every 2.5 to five miles, according to two people familiar with [its] operations. In other words, they frequently had to do something to remotely control a car after receiving a cellular signal that it was having problems.”
That’s a human intervention every 4-8 kilometres. More and more people are becoming aware of how many people are involved in the development, maintenance and running of machine-learning models. It’s safe to assume that machine-controlled cars are no different.
Most of the world is talking about self-driving and autonomous as if those are apt descriptions of what is already happening. Reality begs to differ. I think we need words that better describe what is really going on, and for media (and evangelists) to stop parroting whatever the companies feed them.
Autonomous used to mean something. Let’s ask the companies what they intend for the words to mean, and urge them to disclose the number of humans involved in making something appear ”autonomous”.
In light of these numbers being talked about, Cruise CEO Vogt clarifies (on Hacker News) that Cruise AVs are remotely assisted 2-4% of the time on average.* Interestingly, he also says: ”This is low enough already that there isn’t a huge cost benefit to optimizing much further.” He goes on to say that they are intentionally overstaffed ”in order to handle localized bursts of RA demand”.
So maybe that’s what self-driving means.
—————
*Note that the numbers ”every 2.5 to 5 miles” and ”2-4% of the time” are not necessarily in conflict, especially in San Francisco.
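A rough back-of-the-envelope check (the average speed is my assumption, not a Cruise figure) suggests the two numbers are compatible if each remote assist is fairly short:

```python
# Can "an intervention every 2.5-5 miles" and "remotely assisted
# 2-4% of the time" both be true? Assume (my guess) an average
# urban speed of ~12 mph in San Francisco.
avg_speed_mph = 12

for miles_per_assist in (2.5, 5):
    interval_min = miles_per_assist / avg_speed_mph * 60
    for share in (0.02, 0.04):
        assist_seconds = share * interval_min * 60
        print(f"Every {miles_per_assist} miles = every {interval_min:.1f} min "
              f"of driving; {share:.0%} of the time = {assist_seconds:.0f} s per assist")
```

Both figures hold at once if a typical assist lasts somewhere between 15 and 60 seconds, which sounds plausible for an operator remotely nudging a confused vehicle.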
LET ME KNOW what other terms you find have been invented or shifted to mean something else to obscure limited functionality. I may have to make a glossary. ”Hallucination”, for example, is another one of those for me.
It’s still unclear what this will include, so your guess is as good as mine, including what impact US regulation may have on the rest of the world.
What I do feel is becoming more and more clear is a growing need for organisations to adopt a well-defined role for anti-discrimination oversight.
With increased use, and increased liability, organisations will have to be accountable for the discrimination that everyday use and output may proliferate.
« the Oct. 23 draft order calls for extensive new checks on the technology, directing agencies to set standards to ensure data privacy and cybersecurity, prevent discrimination, enforce fairness and also closely monitor the competitive landscape of a fast-growing industry »
According to leaked drafts, Biden’s order will also direct ”the Federal Trade Commission, for instance, to focus on anti-competitive behavior and consumer harms in the AI industry”.
You are not learning fast enough. You are not using the new tools enough. You are not publishing enough. You are not in nature enough. You are not caring enough. You are not being social enough. You are not putting away your phone enough. You are not practicing enough. You are not recycling enough. You are not protesting enough. You are not planning enough. You are not moving quickly enough.
All of this is wrong. So very wrong. You are you. You matter. So much. And you are enough.
"AI is existing as it's supposed to exist," says McKernan. "I think it has had a lot of potential to make our lives easier, to make workflows more effective. My issue is that the implementation of it, especially with AI art, hasn't been ethical, in my opinion because of the way it is built off a massive data set with 5.1 billion images, and taxpayers' data, all of which was culled from the internet without consent."
Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and is favoured in the model output. There is much more outdated, harmful information published than there is updated, correct information. Hence it is statistically more viable.
"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."
In this regard the tools don't take us to the future, but to the past.
No, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into many more machine-learning applications than just language models.
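A minimal sketch of the statistics at work (a deliberately crude stand-in, real models are vastly more complex): if outdated claims outnumber corrections four to one in the source material, a system that reproduces corpus frequencies will repeat the outdated claim about 80% of the time.

```python
import random

# Toy corpus: debunked claims outnumber corrections 4 to 1.
corpus = ["outdated claim"] * 80 + ["corrected claim"] * 20

def frequency_model(corpus, n_samples=10_000):
    """Stand-in for a model that reproduces corpus statistics."""
    samples = random.choices(corpus, k=n_samples)
    return samples.count("outdated claim") / n_samples

print(f"Outputs repeating the outdated claim: {frequency_model(corpus):.0%}")  # ~80%
```

Whatever is most common in the training data is what the statistics favour, regardless of whether it is true.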
In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.
Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.
It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.
The idea appears to be to let computers exponentially proliferate some of the tasks they excel at: numbers, statistics, labelling people, copying, collecting data and mass surveillance. This, rather than sitting down and talking about how we boost the values we as humans wish to proliferate: compassion, love, care, connection and belonging.
Few are talking about how the former is antithetical to the latter.
I’m not saying ”stop using computers”, I’m saying ”stop letting computers assume the leadership position”. Computers can act as aids for compassion, love, care, connection and belonging. Think of games, text-to-speech and long-distance communication. But computers arrive there by instruction code from humans. Not the other way around.
The more alarming truth is this: computers can be used to destroy compassion, love, care, connection and belonging much faster than we can keep up with building it. Sometimes that destruction is with intent, but often it is oblivious.
Was talking to @beantin about the “amazing” feat where ChatGPT passed the bar exam. We agreed that if you feed all the relevant content for the bar exam into ChatGPT there really should be no big surprise about it being able to spew out statistically relevant content.
The fact that it still got such a relatively low score should be a cause for worry, not celebration(!) It’s evidence that the tool has no understanding of what it is doing. It has the answers, it’s just not able to use them in the right way. It’s like a student sitting with a textbook containing all the answers to the test and not being able to understand which answer fits where.
As Paris Marx wrote in March:
"it’s so funny to me that the AI people think it’s impressive when their programs pass a test after being trained on all the answers”
I tend to question most things I see online. Today was no exception. I was sent images of squirrels supposedly landing on the ground like superheroes after jumping from trees (one fist in the ground and the other arm stretched out behind them).
I really, really wanted to believe this. It’s too cool! But my mind immediately went… are these AI-generated?
As it turns out, this claim has been doing the rounds for years. The pictures are real. The context is not. That’s not a squirrel landing. It’s what a squirrel looks like while scratching its armpit with its hind leg.
And how do they in fact land? "When in a controlled fall, squirrels will spread their limbs out wide to increase air resistance and hit the ground like a bushy-tailed pancake. This helps spread the force of the impact over a greater area to prevent injury."
@HistoPol Good reflections. As it turns out, on Tuesday I’m attending a course on AI and regulation. It’s aimed at lawyers, but I was welcome. Hoping to make some valuable connections there, as I am also in fact hopeful that more legislative efforts can bring about change a bit quicker.
@HistoPol I believe this is a shift that can only happen with education and when becoming embedded in culture. Takes a lot of time.
Especially when equity gaps increase before they decrease.
I have made it my conviction to advocate for the idea of love, compassion and care as viable forces for innovation and business. But I have no illusions of making much of a dent within my own lifespan.
Surveys only capture the people:
- who are made aware of the survey
- who are given access to the survey
- who are willing to respond
- who are able to respond
People who are disenfranchised, living with disabilities, struggling with time, money and/or language/literacy generally will not respond.
Surveys are rarely representative because the time and effort required to make surveys inclusive is not invested.
The effect of surveys is then that people who are made invisible by society are made even more invisible by organisations that often call themselves data-driven.
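A minimal simulation of that nonresponse effect (every number here is invented purely to show the mechanism): even a modest gap in response rates makes the survey picture far rosier than the population it claims to describe.

```python
import random

random.seed(1)

# Invented population: 30% face barriers (time, money, language,
# disability, access) and are far less likely to answer a survey.
population = ([{"barriers": True,  "satisfied": False}] * 3000 +
              [{"barriers": False, "satisfied": True}] * 7000)
response_rate = {True: 0.05, False: 0.50}  # assumed response rates

respondents = [p for p in population
               if random.random() < response_rate[p["barriers"]]]

pop = sum(p["satisfied"] for p in population) / len(population)
svy = sum(p["satisfied"] for p in respondents) / len(respondents)

print(f"Population actually satisfied: {pop:.0%}")  # 70%
print(f"Survey says satisfied: {svy:.0%}")          # ~96%
```

The survey reports near-universal satisfaction while the third of the population facing barriers has effectively vanished from the data.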
« The AI sector is utterly dependent on criti-hype. They are burning tens of billions of dollars on engineering salaries, custom chip fabs, human data annotation, data-center rents, racks and racks of GPUs and ASICs, whole grids’ worth of electricity and entire aquifers’ worth of fresh water for cooling.
They are hemorrhaging a river of cash, but that river’s source is an ocean-sized reservoir of even more cash.
To keep that reservoir full, the AI industry needs to convince fresh rounds of “investors” to give them hundreds of billions of dollars on the promise of a multi-trillion-dollar payoff.
That’s where the “AI Safety” story comes in. You know, the tech bros who run around with flashlights under their chins, intoning “ayyyyyy eyeeeee,” and warning us that their plausible sentence generators are only days away from becoming conscious and converting us all into paperclips. »
I’m hoping a person who speaks at least English, French and German (or any other combination of the advertised languages) will do a writeup on all the weaknesses (and dangers) inherent in the concept of Spotify's new podcast auto-translation feature. If you see someone writing about this, do share it with me.
"We can't wait anymore." "7 minutes until the first warhead is in the observation zone." "We won't have time to retaliate. You have to make a decision!" "You see it?" "Could be." "No. That's not heat from a missile." "Damn!" "Let's keep looking." "THE COMPUTER CAN'T BE WRONG!" "I don't understand it." "Damn it! They have to confirm this damn attack." "All thirty levels of security levels confirms the attack!" "Infrared devices verify heat from all five launched missiles!" "What are we going to do?"
Stanislav Petrov: "Nothing. I don't trust the computer. We'll wait."
This dialogue is from a re-enactment in the documentary The Man Who Saved the World.
Last year I wrote about three learnings I take away from his story.
1. Embrace multiple perspectives. Petrov was educated as an engineer rather than a military man. He knew the unpredictability of machine output.
2. Look for multiple confirmation points. To confirm our beliefs we should expect many different variables to line up and tell us the same story. If one or more variables are saying something different, we need to pursue those anomalies to understand why. If the idea of a faulty system lines up with all other variables, that makes it more likely. (A small sketch of this as a decision rule follows after this list.)
3. Reward exposure of faulty systems. If we keep praising our tools for their excellence and efficiency it's hard to later accept their defects. When shortcomings are found, this needs to be communicated just as clearly and widely as successes. Maintaining an illusion of perfect, neutral and flawless systems will keep people from questioning the systems when the systems need to be questioned.
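Here is a minimal sketch of lesson 2 as a decision rule (my construction for illustration, not Petrov's actual procedure): trust an alarm only when independent signals agree, and treat disagreement as a reason to investigate rather than act.

```python
def confirmed(signals: dict[str, bool], required: float = 1.0) -> bool:
    """Trust an alarm only if enough independent signals agree.

    signals: maps each signal name to whether it confirms the alarm.
    required: fraction of signals that must agree before acting.
    """
    return sum(signals.values()) / len(signals) >= required

# The 1983 situation, roughly: the satellite system screamed "attack",
# but ground radar saw nothing, and a first strike of only five
# missiles made no strategic sense.
signals = {
    "satellite_warning": True,
    "ground_radar_confirmation": False,
    "attack_size_plausible": False,
}

if not confirmed(signals):
    print("Signals disagree: pursue the anomaly before acting.")
```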