@alcinnz@floss.social I'm sorry about that. I find myself in frustrating arguments about AI as well. That's partly why I post about it here, to refine my views.
I feel like these AI conversations can end up frustrating for everyone involved partly because there is a host of prejudices and blind spots that come together around this technology.

For instance, a whole lot of people, especially Americans, are energy blind--they don't have a very good sense for how much energy it takes to do stuff, where that energy comes from, and what it takes to generate and transmit energy. Some of the more outlandish "proposals" about powering the hyperscale data centers behind generative AI ambitions would have us build more nuclear reactors than there is uranium to fuel, or deploy more batteries than we have minerals to build. Most people aren't informed enough to say "uh, that's physically impossible to do?" and some end up believing these prognostications.

I also feel that a whole lot of people aren't really aware of just how much outright theft of material has occurred to create things like ChatGPT. Just straight up taking such an enormous amount of material that creators put so much hard work into is unsustainable. It will have knock-on effects, like people refusing to put their work on the internet for fear of having it stolen, that degrade life for everyone (it's a kind of tragedy of the commons perpetrated by a small number of rapacious actors). Is that the world we want to live in just to have a whiz-bang toy that a lot of people say makes their jobs harder?
I feel like it is. It seems to be changing form from "left behind" to "you'll become a failure/obsolete". Certainly the Adobe and Rosetta quotes I cited have the latter, more threatening, tone.
Neither is true, though, either as written or factually. You won't be "left behind". Instead, bosses/employers/owners will actively and knowingly modify job descriptions, fire or lay people off, etc. to bring about conditions where people they've chosen to exclude are "behind". This isn't gravity, it's human decision-making, and the decisions could be made differently. That's why I characterize rhetoric like this as threatening: the powers that be are threatening to take people's livelihoods away and use AI as an excuse, in an attempt to avoid blame for their destructive choices. That's also why I characterize it as coercive control: they are trying to control people's behavior and choices using veiled threats combined with trying to erode people's sense of self-worth ("you'll end up a failure, a loser, if you don't do what we want").
Relentless repetition of dehumanizing language is a pillar of coercive control, which is really what generative AI has been and will continue to be about.
We don't need to live in a world where things we don't want are regularly forced upon us by people who have more power than we do. We can reject that world even when we have little choice but to navigate it.
Anthony (abucci@buc.ci)'s status on Friday, 01-Nov-2024 02:48:21 JST
It's interesting how the rhetoric around #AI shifts around; now companies are using phrases like "embrace #AI or face extinction". I'm thinking of Adobe's recent move to force artists to use the AI features in their products, under the threat that they are "not going to be successful" if they don't; or Rosetta announcing that linguists need to use Rosetta's AI features or face the "extinction" of the languages they work on.
It's a short step from "extinction"/"unsuccessful" ("low fitness") to "elimination". The latter word is what is meant. The passive voice/inevitability framing purposely obscures the agency of the literal, nameable human beings who are attempting to bring this reality into existence. "Embrace #AI or we will do our best to eliminate your profession, your livelihood, and you" is more precise and brings out the hostility of the threat these corporate statements attempt to hide.
This dehumanizing and ultimately eugenic idea frequently hides in plain sight like this. Sometimes evolutionary or genetic language and metaphors are used. Don't accept it. These folks may try to create this reality but that doesn't mean they'll succeed and it doesn't mean we need to surrender and let them succeed.
I've been consulting on and off in the tech sector since 2013 or so, and did occasional consulting projects as far back as 1995 when I freelanced as a web developer (!). For as long as I've been doing this, there have been people, many people, who thought they could create a piece of technology that would exponentially increase the speed of a process that was bound by some physical constraint and could not increase at that rate. I've commented on this phenomenon a few times before, for instance here: https://buc.ci/abucci/p/1705675509.136902 . Many times I've started a project, come to the conclusion that it could not achieve the desired goal with a piece of technology, and told the person this. Generally speaking people don't want to hear this.
It's a pernicious ideology, this belief that exponential growth is always just one discovery away.
I think #GenerativeAI and the current hype cycle are this phenomenon writ large. This pattern of reality-denying irrationality may have taken hold across a large segment of the economy, in other words. Furthermore, if this analysis is to be believed, then the irrationality has turned into anticipation that is now reflected in artificially-high stock prices. One way to think of a company's stock price is as a reflection of anticipation about the company's future earnings. If investors believe companies will become remarkably more productive, and therefore more profitable, because of #AI, and that belief is mistaken, then the stock price is artificially high. I think it's probably more subtle than that, having to do with other factors such as how investors favor technology companies over companies that do or make stuff. The belief may also be that companies will shift away from doing or making stuff towards becoming more technology oriented. Such a shift would not necessarily be reflected in profits, but instead might manifest in layoffs and other shifts in personnel makeup, which we're also seeing.
Either way, it's an illusion because of the pesky uncooperative nature of physical reality. I don't know if there's a crash coming--though it seems there may be--but I do think it's likely the nature of (large) companies might change significantly in a bad way. Think GE turning into a globe-spanning financial company under Jack Welch ("The Man Who Broke Capitalism") and then mostly imploding ( https://www.investopedia.com/insights/rise-and-fall-ge/ ). Sam Altman idolizes Welch, and tech companies have resurrected some of Welch's worst practices like stack-ranked layoffs.
NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute

"The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous."

https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/
Good for them! #longtermist / #EffectiveAltruist / #TESCREAL people are cultists and have no place in government. They're obsessed with fantasies like #xrisk that are disconnected from reality and distract from the actual harms #AI is already causing here on Earth. It's precisely the same phenomenon as holding endless discussions about how many angels can dance on the head of a pin while ignoring that people are suffering. It sounds like Secretary of Commerce Gina Raimondo might be a Kool-Aid drinker herself, or is sympathetic to the viewpoints of the Kool-Aid drinkers.
From her Wikipedia entry: "Gina Marie Raimondo...an American businesswoman, lawyer, politician, and venture capitalist". Emphasis mine.
It's alarming that this is even happening, and you know the fix is in because they tried to rush the appointment without informing staffers ahead of time. I hope #NIST staffers prevail.
@grunfink@comam.es Hi! I've noticed a bug and was wondering whether reporting it here is preferable or whether filing it on the project's Codeberg page would be better.
Simply: in the web interface, if you edit a post with an image attached, the attachment is lost when you save. You have to find and re-attach the file and re-enter your descriptive text before saving the edits.
Anthony (abucci@buc.ci)'s status on Wednesday, 13-Mar-2024 03:45:34 JST
Why the world cannot afford the rich

"The science is clear — people in more-equal societies are more trusting and more likely to protect the environment than are those in unequal, consumer-driven ones."

"Bigger gaps between rich and poor are accompanied by higher rates of homicide and imprisonment."

"Greater equality will reduce unhealthy and excess consumption, and will increase the solidarity and cohesion that are needed to make societies more adaptable in the face of climate and other emergencies."

Eye opening. From https://www.nature.com/articles/d41586-024-00723-3
@inthehands@hachyderm.io I don't recall if I shared this in thread yet but I think Dan McQuillan's thinking on this subject is good: https://danmcquillan.org/ai_thatcherism.html

"The real issue is not only that AI doesn't work as advertised, but the impact it will have before this becomes painfully obvious to everyone. AI is being used as form of 'shock doctrine', where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity to transform social systems without democratic debate."

This shock doctrine process is clearly well underway.
I've read one of those "papers" and it was absurd and not credible, full of obvious reasoning errors. I fully believe that there are neoliberals and would-be barons who'd want nothing more than to have fully automated luxury capitalism, but it's a fantasy just as colonizing Mars is a fantasy.
@datarama@hachyderm.io @inthehands@hachyderm.io Oh I fully agree. I apologize, I should have finished my thought: having a bunch of people with power thinking that this is possible is dangerous. I don't believe that their fantasy world will ever be real, but I absolutely believe they wouldn't hesitate to hurt an enormous number of people in the process of trying to make it real.
Oof, please don't get me started on academia...that's a week-long rant, minimum!
@datarama@hachyderm.io @inthehands@hachyderm.io I'm sorry to jump in, but I share these kinds of feelings (and I like to believe there are a fair number of us who do). It's one of the many reasons I'm so negative about today's generative AI technology. I studied AI in graduate school, and technically I might have cashed in on this, but I purposely chose not to pursue a career at any of the FAANG companies or the major corporate research labs for precisely these reasons. I'd like to live to see the neoliberal era end, not grab its hand and help pull it further along.
Anthony (abucci@buc.ci)'s status on Saturday, 20-Jan-2024 19:17:09 JST
Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.
LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated and not even credited, this ingestion puts negative pressure on the sharing of such things. The function of creative acts as seed for future creative acts becomes depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.
Eating your seed corn is meant to be a last-ditch act you take out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.
@grunfink@comam.es Hi! I wanted to let you know that the other day I upgraded snac to the latest release version, 2.44, and ever since I've experienced numerous SEGFAULT crashes like I was seeing before. 2.43 was significantly more stable; I only had one or two of these crashes in the weeks I had it running, whereas I've had four or five crashes of 2.44 in a few days. It's so unstable that I have no choice but to downgrade to 2.43--in the time it's taken me to write this snac crashed again.
I saw that there were new commits to the repo since Jan 10, so I tried pulling the latest version and building it. make throws the following error when I do that:
...
cc -g -Wall -Wextra -L/usr/local/lib *.o -lcurl -lcrypto -pthread -o snac
/usr/bin/ld: httpd.o: in function `srv_state_op':
/home/snacuser/snac2/httpd.c:644: undefined reference to `shm_open'
/usr/bin/ld: /home/snacuser/snac2/httpd.c:667: undefined reference to `shm_open'
/usr/bin/ld: /home/snacuser/snac2/httpd.c:692: undefined reference to `shm_unlink'
collect2: error: ld returned 1 exit status
make: *** [Makefile:9: snac] Error 1

This is new; up to and including v2.44, I've always been able to build snac without errors or even warnings. I just double checked and 2.44 builds fine, so something's changed since then that makes the build fail for me.
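In case it helps with debugging: my guess is that the new shm_open/shm_unlink calls in httpd.c are the culprit. On systems like mine (glibc older than 2.34, if I remember correctly) those functions live in librt rather than libc, so the link step would need -lrt added. I haven't verified this against the new Makefile, so treat it as a guess, but the usual fix for that kind of undefined reference is a link line along these lines:

cc -g -Wall -Wextra -L/usr/local/lib *.o -lcurl -lcrypto -pthread -lrt -o snac    (same command as above, with -lrt added for shm_open/shm_unlink)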
Given the scale of the training data sets used to train models like this, it is infeasible to ensure, in any reasonable sense, that the training data does not contain PII. Doing so would surely destroy any hope of profitability for the companies making these models. Thus, this is a new attack vector and a new externality that we're meant to simply accept collectively. Personally I don't recall being asked whether I'm OK with that...