I've been consulting on and off in the tech sector since 2013 or so, and did occasional consulting projects as far back as 1995, when I freelanced as a web developer (!). For as long as I've been doing this, there have been people, many people, who believed they could create a piece of technology that would exponentially increase the speed of a process that was bound by some physical constraint and could not increase at that rate. I've commented on this phenomenon a few times before, for instance here: https://buc.ci/abucci/p/1705675509.136902 . Many times I've started a project, come to the conclusion that the desired goal could not be achieved with a piece of technology, and told the person so. Generally speaking, people don't want to hear that.
It's a pernicious ideology, this belief that exponential growth is always just one discovery away.
I think #GenerativeAI and the current hype cycle are this phenomenon writ large. This pattern of reality-denying irrationality may have taken hold across a large segment of the economy, in other words. Furthermore, if this analysis is to be believed, the irrationality has turned into anticipation that is now reflected in artificially high stock prices. One way to think of a company's stock price is as a reflection of anticipation about the company's future earnings. If investors believe companies will become remarkably more productive, and therefore more profitable, because of #AI, and that belief is mistaken, then the stock price is artificially high. I think it's probably more subtle than that, having to do with other factors, such as how investors favor technology companies over companies that do or make stuff. The belief may also be that companies will shift away from doing or making stuff toward becoming more technology-oriented. Such a shift would not necessarily be reflected in profits, but might instead manifest in layoffs and other changes in personnel makeup, which we're also seeing.
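To make the "anticipation" framing concrete, here's the textbook discounted-cash-flow identity (standard finance, nothing specific to AI or to any particular company):

```latex
P_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[E_t]}{(1 + r)^t}
```

where $P_0$ is today's price, $\mathbb{E}[E_t]$ is expected earnings in year $t$, and $r$ is the discount rate. If hype inflates the expected-earnings terms beyond what physical constraints allow, the price is biased upward by exactly that over-anticipation.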
Either way, it's an illusion, because of the pesky uncooperative nature of physical reality. I don't know if there's a crash coming--though it seems there may be--but I do think it's likely the nature of (large) companies will change significantly, and in a bad way. Think GE turning into a globe-spanning financial company under Jack Welch ("The Man Who Broke Capitalism") and then mostly imploding ( https://www.investopedia.com/insights/rise-and-fall-ge/ ). Sam Altman idolizes Welch, and tech companies have resurrected some of Welch's worst practices, like stack-ranked layoffs.
NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute

"The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency's newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous."

https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/
Good for them! #longtermist / #EffectiveAltruist / #TESCREAL people are cultists and have no place in government. They're obsessed with fantasies like #xrisk that are disconnected from reality and that distract from the actual harms #AI is already causing here on Earth. It's precisely the same phenomenon as holding endless discussions about how many angels can dance on the head of a pin while ignoring that people are suffering. It sounds like Secretary of Commerce Gina Raimondo might be a Kool-Aid drinker herself, or is at least sympathetic to the viewpoints of the Kool-Aid drinkers.
From her Wikipedia entry:

"Gina Marie Raimondo...an American businesswoman, lawyer, politician, and *venture capitalist*"

Emphasis mine.
It's alarming that this is even happening, and you know the fix is in because they tried to rush the appointment without informing staffers ahead of time. I hope #NIST staffers prevail.
@grunfink@comam.es Hi! I've noticed a bug and was wondering whether reporting it here is preferable or putting it on the project in codeberg would be better.
Simply: in the web interface, if you edit a post with an image attached, the attachment is lost when you save. You have to find and re-attach the file and re-enter your descriptive text before saving the edits.
Anthony (abucci@buc.ci)'s status on Wednesday, 13-Mar-2024 03:45:34 JST
Why the world cannot afford the rich

"The science is clear — people in more-equal societies are more trusting and more likely to protect the environment than are those in unequal, consumer-driven ones. Bigger gaps between rich and poor are accompanied by higher rates of homicide and imprisonment. Greater equality will reduce unhealthy and excess consumption, and will increase the solidarity and cohesion that are needed to make societies more adaptable in the face of climate and other emergencies."

Eye-opening. From https://www.nature.com/articles/d41586-024-00723-3
@inthehands@hachyderm.io I don't recall if I shared this in the thread yet, but I think Dan McQuillan's thinking on this subject is good: https://danmcquillan.org/ai_thatcherism.html

"The real issue is not only that AI doesn't work as advertised, but the impact it will have before this becomes painfully obvious to everyone. AI is being used as a form of 'shock doctrine', where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity to transform social systems without democratic debate."

This shock doctrine process is clearly well underway.
I've read one of those "papers" and it was absurd and not credible, full of obvious reasoning errors. I fully believe that there are neoliberals and would-be barons who'd want nothing more than to have fully automated luxury capitalism, but it's a fantasy just as colonizing Mars is a fantasy.
@datarama@hachyderm.io @inthehands@hachyderm.io Oh I fully agree. I apologize, I should have finished my thought: having a bunch of people with power thinking that this is possible is dangerous. I don't believe that their fantasy world will ever be real, but I absolutely believe they wouldn't hesitate to hurt an enormous number of people in the process of trying to make it real.
Oof, please don't get me started on academia...that's a week-long rant, minimum!
@datarama@hachyderm.io @inthehands@hachyderm.io I'm sorry to jump in, but I share these kinds of feelings (and I like to believe there are a fair number of us who do). It's one of the many reasons I'm so negative about today's generative AI technology. I studied AI in graduate school, and technically I might have cashed in on this, but I purposely chose not to pursue a career at any of the FAANG companies or the major corporate research labs for precisely these reasons. I'd like to live to see the neoliberal era end, not grab its hand and help pull it further along.
Anthony (abucci@buc.ci)'s status on Saturday, 20-Jan-2024 19:17:09 JST
Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.
LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated, and not even credited, this ingestion puts negative pressure on the sharing of such things. The functioning of creative acts as seed for future creative acts becomes depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.
Eating your seed corn is meant to be a last-ditch act you take out of desperation, after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.
@grunfink@comam.es Hi! I wanted to let you know that the other day I upgraded snac to the latest release version, 2.44, and ever since I've experienced numerous SEGFAULT crashes like the ones I was seeing before. 2.43 was significantly more stable; I only had one or two of these crashes in the weeks I had it running, whereas I've had four or five crashes of 2.44 in a few days. It's so unstable that I have no choice but to downgrade to 2.43--in the time it's taken me to write this, snac crashed again.
I saw that there were new commits to the repo since Jan 10, so I tried pulling the latest version and building it. make throws the following error when I do that:
...
cc -g -Wall -Wextra -L/usr/local/lib *.o -lcurl -lcrypto -pthread -o snac
/usr/bin/ld: httpd.o: in function `srv_state_op':
/home/snacuser/snac2/httpd.c:644: undefined reference to `shm_open'
/usr/bin/ld: /home/snacuser/snac2/httpd.c:667: undefined reference to `shm_open'
/usr/bin/ld: /home/snacuser/snac2/httpd.c:692: undefined reference to `shm_unlink'
collect2: error: ld returned 1 exit status
make: *** [Makefile:9: snac] Error 1

This is new; up to and including v2.44, I've always been able to build snac without errors or even warnings. I just double-checked, and 2.44 builds fine, so something's changed since then that makes the build fail for me.
Given the scale of the training data sets used to train models like this, it is infeasible to ensure, in any reasonable sense, that the training data does not contain PII. Doing so would surely destroy any hopes of profitability the companies making these have. Thus, this is a new attack vector and a new externality that we're meant to simply accept collectively. Personally I don't recall being asked whether I'm OK with that...
I put style.css in the fedidata directory (the data directory you make during snac2 installation). I then restarted snac but I'm not sure if that's necessary--still pretty new to this!
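In case it helps anyone else, a sketch of the steps (the directory name and the CSS rule are examples from my setup, not anything snac mandates):

```shell
# "fedidata" is just the name I gave the data directory at install
# time; substitute your own. The CSS rule is a placeholder example.
DATA_DIR="$HOME/fedidata"
mkdir -p "$DATA_DIR"
printf 'body { max-width: 48em; }\n' > "$DATA_DIR/style.css"
ls -l "$DATA_DIR/style.css"    # confirm the stylesheet is in place
```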