>Equally alarmingly, we might increasingly find ourselves conducting lengthy online discussions about the Bible, about QAnon, about witches, about abortion, or about climate change with entities we think are human but are actually computers. This could make democracy untenable ah the conclusion's going to be a global digital ID isn't it
>and invent completely new financial tools beyond our understanding we already have financial tools beyond our understanding, it's called "derivatives"
>Allegedly, I and others like me anthropomorphize computers and imagine that they are conscious beings that have thoughts and feelings. >People often confuse intelligence with consciousness, and many consequently jump to the conclusion that nonconscious entities cannot be intelligent. But intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform sure you can write paragraphs on how ackshually you're not wrong but in the end you're still assigning responsibility to the algo instead of their maintainers
if i wrote a shitty function and it broke something i'm on the hook for it, i can't just say "oh i didn't expect it to break that way"
>In particular, many readers may disagree that the algorithms made independent decisions, and may insist that everything the algorithms did was the result of code written by human engineers and of business models adopted by human executives. This book begs to differ the algo's literally just "bump clickbait" and ragebait tends to be clickbait, there's no spook in the machine going "hmm today i will promote extremist content" >Human soldiers are shaped by their genetic code and follow orders issued by executives, yet they can still make independent decisions. The same is true of AI algorithms. They can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw alternatively, those engineers and executives are just retarded
basically, after distilling away the WEF-tainted biases, the groundwork laid down in part 1 is:
>information systems sit on a centralized/decentralized spectrum and have varying ability to self-correct
>bureaucracies are a manifestation of information systems, with le Democracy being decentralized and totalitarianism being centralized
>totalitarianism has failed so far because its centralized nature doesn't ensure the flow of information required to sustain it
>In the 1828 presidential elections, Adams lost to Andrew Jacksonーa rich slaveholding planter from Tennessee who was successfully rebranded in numerous newspaper columns as "the man of the people" and who claimed that the previous elections were in fact stolen by Adams and by the corrupt Washington elites. of course the jewish WEF nigger is allergic to andrew "end the central bank" jackson lol
>large scale democracies couldn't work until the development of mass media
large scale democracies don't work because the goals and priorities gap between different interest groups becomes too large for them to make reasonable compromises
>Disenfranchising political rivals dismantles one of the vital self-correcting mechanisms of democratic networks you should tell scholz that at the next WEF meeting instead of burying your faces in escorts all the time
>A majority of voters might deny the reality of climate change, but they should not have the power to dictate scientific truth or to prevent scientists from exploring and publishing inconvenient facts. nice projection again you WEF nigger
>An institution can call itself by whatever name it wants, but if it lacks a strong self-correcting mechanism, it is not a scientific institution making correct statements while lacking self-awareness is almost kind of cute
>more information doesn't necessarily lead to better information >but the scientific community's "publish or perish" stance is different okayy!!!! forcing people to churn out papers won't result in a deluge of low quality papers that can't be reproduced!!!!
>Unlike the witch-hunting experts, the editors of the Philosophical Transactions of the Royal Society could not torture and execute anyone. And unlike the Catholic Church, the Académie des Sciences did not command huge territories and budgets. >Scientific institutions are nevertheless different from religious institutions, inasmuch as they reward skepticism and innovation rather than conformity. hasn't been the case for the past century you WEF nigger
>citing the DSM as having a self-correcting mechanism for removing homosexuality from the listing malleability and self-correction are different things lol
>He didn't witness any witchcraft firsthand, but so much information about witches was circulating that it was difficult for him to doubt all of it. cool, now do the same for "viruses"
>Witches were not an objective reality oh yeah??? then why did the ones that weren't burned go on to have daughters with weirdly colored hair and extremely long bios???
ok, i get that your point was that technologies improving the rate of information transmission don't necessarily mean the information being transmitted is good, but what else are you trying to insinuate here
chapter 4 brings up as an example how judaism devolved into endless pilpul because times change while the text isn't allowed to change, so interpretations play an ever increasing role instead
>While bureaucracies are never perfect, is there a better way to manage big networks? sure but what if we just periodically cull the bureaucrats instead of abolishing them completely
>when the tax collector comes to take a cut from your earnings, how can you tell whether it goes to build a new public sewage system or a new private dacha for the president? easy, if the new sewage system still doesn't exist after a decade it's obviously going towards funding trans operas in swaziland instead
>But some stories are able to create a third level of reality: intersubjective reality. Whereas subjective things like pain exist in a single mind, intersubjective things like laws, gods, nations, corporations, and currencies exist in the nexus between large numbers of minds. More specifically, they exist in the stories people tell one another. The information humans exchange about intersubjective things doesn't represent anything that had already existed prior to the exchange of information; rather, the exchange of information creates these things. yes some people refer to them as egregores
small brain: "The Bible routinely depicts epidemics as divine punishment for human sins and claims they can be stopped or prevented by prayers and religious rituals" midwit WEF brain: "However, epidemics are of course caused by pathogens and can be stopped or prevented by following hygiene rule and using medicines and vaccines" big brain: "epidemics as divine punishment for human sins"
>Sometimes erroneous representations of reality might also serve as a social nexus, as when millions of followers of a conspiracy theory watch a YouTube video claiming that the moon landing never happened. just can't stop yourself from being the arbitrator of truth huh you WEF nigger
>Sometimes, a truthful representation of reality can connect humans, as when 600 million people sat glued to their television sets in July 1969, watching Neil Armstrong and Buzz Aldrin walking on the moon. The images on the screen accurately represented what was happening 384,000 kilometers away,
>Contrary to what the naive view of information says, information has no essential link to truth, and its role in history isn't to represent a preexisting reality. Rather, what information does is to create new realities by tying together disparate thingsーwhether couples or empires. Its defining feature is connection rather than representation, and information is whatever connects different points into a network. Information doesn't necessarily inform us about things. Rather, it puts things in formation. Horoscopes put lovers in astrological formations, propaganda broadcasts put voters in political formations, and marching songs put soldiers in military formation. see 95% of the text like the above make sense but unfortunately the last 5% of harari being a WEF nigger sullys it
>The chapter contrasts institutions that relied on weak self-correcting mechanisms, like the Catholic Church, sure >with institutions that developed strong self-correcting mechanisms, like scientific disciplines lol. lmao ok harari you WEF nigger
small brain: "According to the naive view, astronomers derive 'real information' from the stars, while the information that astrologers imagine to read in constellations is either 'misinformation' or 'disinformation'. If only people were given more information about the universe, surely they would abandon astrology altogether." midwit WEF brain: "No matter what we think about the accuracy of astrological information, we should acknowledge its important role in history." galaxy billionaire brain: "astrology works"
>Is a single individual capable of doing all the necessary research to decide whether the earth's climate is heating up and what should be done about it? How would a single person go about collecting climate data from throughout the world, not to mention obtaining reliable records from past centuries? sure, but that single individual is capable of pointing out that putting a temperature sensor in an asphalt parking lot is going to skew your data. you don't need to be a michelin chef to point out that someone's cooking is shit
>Money is supposed to be a universal measure of value, rather than a token used only in some settings. But as more things are valued in terms of information, while being "free" in terms of money, at some point it becomes misleading to evaluate the wealth of individuals and corporations in terms of the number of dollars or pesos they possess. >This has far-reaching implications for taxation. of course that's your biggest concern you WEF nigger
@hakui there is no direct access to non-trivial objective truth from human subjective experience. the only access is indirect and even that is quite often hard to achieve
@hakui Not even Milei dared apply the most extreme bureaucracy efficiency strategies I suggested (decimation of bureaucrats whenever there's missing money or the office is not productive)
@hakui I believe it'd take a single case of 9 bureaucrats being forced at gunpoint to beat a tenth bureaucrat to death with their bare hands for all bureaucrats in the organization to become immensely efficient overnight.
@nerthos@hakui Most offices would love an opportunity to beat one of the guys in the office to death. I don't think government offices are substantially different in this regard.
@nerthos@hakui milei has fucked up his presidency in 2 big ways since the start:
bureaucrats don't know fear of the people, to implement that all useless, corrupt or stupid bureaucrats should be executed by firing squad, their corpses butchered and the meat grilled with chimichurri to be served on banquets for the people.
Milei has neglected his duty to impregnate his sister.
@LordMordred@nerthos@hakui ah right, i forgot the deep retardation and uselessness of bureaucrats is not just pathological but actually a prion disease.
@hakui It sounds like he's confusing "surprising emergent results" with autonomy. You can work out a fractal from a simple algorithm with pencil and paper and get results that you could not have predicted, but no one thinks that the 1D Life algorithm ( https://en.wikipedia.org/wiki/Elementary_cellular_automaton ) is autonomous. A computer beats Kasparov at chess in 1997, and the program is written by people that could not beat Kasparov themselves. Speculation before that (e.g., "Goedel, Escher, Bach", an actually good book on this topic) was that it would require actual artificial intelligence to beat a GM at chess, but this turned out to be completely wrong: computers are just really good at some tasks.
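Just to show how small that kind of rule is, here's a minimal sketch of an elementary cellular automaton like the one linked above (the rule number, grid width, and step count are arbitrary picks for illustration). The whole "algorithm" is an 8-entry lookup table derived from the rule number, and rules like 30 or 110 still produce output you can't predict without just running it, which is the point: surprising output, zero autonomy.

# Elementary cellular automaton: each cell's next state is looked up from
# the (left, center, right) neighborhood, using the bits of the rule number.
def step(cells, rule):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right   # neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)               # new state = that bit of the rule
    return out

def run(rule=110, width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1                           # single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run()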
It seems like most of the stuff he says about computers is stuff that has a contrary example from the 1970s, but "unpredictable output means autonomy" is really egregious. rule137.gif
>Allegedly, I and others like me anthropomorphize computers and imagine that they are conscious beings that have thoughts and feelings.
The allegations turn out to be true. I don't think he knows much about the nature of consciousness or intelligence; he certainly doesn't understand computers.
Here's a copy of GEB; if you can get past the guy's boomer sensibilities and his tendency to be overly pleased with his own terrible jokes, it's actually a really entertaining book. Hofstadter's big idea is that intelligence requires a feedback loop. Some of his predictions worked out, some did not, but he's spent a lot more time understanding brains and computers than Harari has. (I do not recommend "I Am a Strange Loop". It is his attempt to--seriously, he put this in the introduction--revise GEB so it's much less entertaining.)
Also here's a copy of "The Quark and the Jaguar". Gell-Mann's big idea is a description of the "complex adaptive system". I have not finished the book but it's interesting so far. (Early in the book he draws a line between real complexity and "crude complexity".)
> Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform
This is utter horseshit; it is completely useless as a definition for intelligence. Every goddamn program on my computer is "intelligent" by that definition.
Engagement maximization is trivial: interaction_count/(total_interaction_opportunities*((1+age_penalty)^age)). You can tack on k-means (a little more complicated than a single equation; I'd have to look it up, I don't use it a lot) or crude vector similarity if you want it personalized.
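For what it's worth, here's a sketch of the kind of scoring described above: raw engagement rate with an age decay, optionally scaled by crude vector similarity for "personalization". The field names, the decay constant, and the similarity weighting are made up for illustration; the point is that there's no decision-making anywhere in it, just arithmetic.

import math

def engagement_score(interactions, opportunities, age_hours, age_penalty=0.05):
    # interaction_count / (total_interaction_opportunities * (1 + age_penalty)^age)
    return interactions / (opportunities * (1 + age_penalty) ** age_hours)

def cosine_similarity(a, b):
    # crude vector similarity between a post's topic vector and a user's history
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank(posts, user_vector):
    # posts: dicts with interaction stats and a topic vector (hypothetical fields)
    return sorted(
        posts,
        key=lambda p: engagement_score(p["interactions"], p["impressions"], p["age_hours"])
                      * (1 + cosine_similarity(p["topic_vector"], user_vector)),
        reverse=True,
    )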
You could say it's "intelligence" if it can come up with its own goals without being handed any prior information about its environment and work out how to achieve them. Autonomy is self-direction; intelligence is a measure of the effectiveness of autonomous decision-making. The Twitter algorithm didn't do this, because it can't: it's a statistical pipe, it doesn't really decide anything. "I don't want to promote that because I suspect that I won't like the outcome for broader society" isn't something that equation will ever do. It's not something ChatGPT can do, even: they have to hard-code "AND DON'T SAY THE N-WORD" into it. GEBen.pdf quark-jaguar.pdf
@augustus so far he's been covering only things we already know but do remember half of the people out there are dumber than average so somehow he can still manage to sound smart
>computers can now make their own decisions on how to carry out a task, and some of those decisions might not be socially acceptable to humans (well no shit)
>At present, we are in a political deadlock about climate change, partly because the computers are at a deadlock. Calculations run on one set of computers warn us of an imminent ecological catastrophe, but another set of computers prompts us to watch videos that cast doubt on those warnings "hurr hurr we're doing Real Science (just don't see the parameters we run the calculations with), while those chuddies are getting brainwashed by misinformation!!!" lol ok you WEF nigger
>Unfortunately, most of them don't use their knowledge to help regulate the explosive potential of the new technologies. Instead, they use it to make billions of dollars — or to accumulate petabits of information. There are exceptions, like Audrey Tang. She
>As with every powerful technology, these systems can be used for either good or bad purposes. Following the storming of the U.S. Capitol on January 6, 2021, the FBI and other U.S. law enforcement agencies used state−of−the−art surveillance systems to track down and arrest the rioters. of course the WEF nigger thinks that's a "good" purpose
>Ultimately, AI-powered surveillance technology could result in the creation of total surveillance regimes that monitor citizens around the clock and facilitate new kinds of ubiquitous and automated totalitarian repression. yes we know you spent the last page describing how you owned the chuds already
>One way to think of the social credit system is as a new kind of money. >Money is points that people accumulate by selling certain products and services" this is your WEF nigger mind on fiat
>Unfortunately, social credit algorithms combined with ubiquitous surveillance technology now threaten to merge all status competitions into a single never-ending race. Even in their own homes or while trying to enjoy a relaxed vacation, people would have to be extremely careful about every deed and word, as if they were performing onstage in front of millions. This could create an incredibly stressful lifestyle, destructive to people's well-being as well as to the functioning of society. If digital bureaucrats use a precise points system to keep tabs on everybody all the time, the emerging reputation market could annihilate privacy and control people far more tightly than the money market ever did.
-10000000 WEF credits for you harari, grovel and eat the bugs if you want to recover it
>I, too, routinely use YouTube and Facebook to connect with people, and I am grateful to social media for connecting me with my husband, whom I met on one of the first LGBTQ social media platforms back in 2002. that explains a lot of things
>The problem for utilitarians is that we don't possess a calculus of suffering. We don't know how many "suffering points" or "happiness points" to assign to particular events, ask shen's "bike cuck" comix then
@hakui CIA declassified documents confirmed that covid escaped from a lab, by extension confirming that epidemics are divine punishment for human sins (the sin of allowing China and USA to exist to the present day)
@p@hakui The Quark and the Jaguar sounds pretty interesting. Here is something about the role of the body in intelligence. Computers can't be intelligent by definition, because they don't have a body in the traditional sense and therefore have no need to adapt to new situations.
Even if we get intelligence, we won't solve the problem of giving machines consciousness for several centuries.
@maxmustermann@hakui Well, no brain in a jar, right; but human intelligence is an implementation of intelligence.
You can look at https://top500.org/lists/top500/2024/11/ and we have these massive machines. And you can look at the systems all the way back to 1993: https://top500.org/resources/top-systems/ . And the ones not used for nuclear simulations are mostly concerned with weather, because no matter how much power we throw at weather, we haven't managed to crack it yet and weather predictions more than a couple of days in the future are a crapshoot. Even something big, like a hurricane, sometimes it just decides to zip off. Tiny rules in systems much smaller than a thunderstorm end up moving the thunderstorm, right?
And you've got your liver and all of the various organs slushing around in there, producing molecules that land in your blood and have molecule-level interactions with the rest of the system. Holding a gun raises testosterone levels, and testosterone levels affect cognition: how complex is that, if you try to break it down to molecules? You perceive a gun in your hand, something lights up in your visual cortex, you recognize the gun, and eventually the signal gets down to the testes and the testosterone gets dumped into the bloodstream and makes its way up and binds to individual receptors in individual cells, and this just from slushing around in there: maybe it passes this cell because it was facing the wrong way and lands in the next cell. At the molecular level, it's really complex.

A neuron fires or doesn't, and then whether that even arrives at another neuron depends on how hard it fires and the length of the cable and the insulation (myelin). A neuron alone is already hard to simulate, and it's feeding back into all of these other systems: your blood sugar drops, or a species of bacteria that manipulates the vagus nerve (this actually exists, we found it) starts tapping that nerve, and you feel hungry.

Weather is easy by comparison, and we're just barely able to do a day or two ahead, something an experienced farmer or sailor can do just by looking at the sky. So simulating human intelligence is not close; we'll have to hollow out the moon for the datacenter (and we'll still have some difficulty getting rid of heat) and then we can do maybe one guy. Currently, we cannot yet get 80% accuracy simulating a microscopic roundworm with 1,000 cells in its body, Caenorhabditis elegans: https://github.com/openworm/OpenWorm/milestones .
If we come at it from the other angle, we can probably build a system with some kind of intelligence, but it'd have to be a state-level project and it'd take decades. It's not like two guys in a garage making a breakthrough, or even a trillion-dollar company funded by half of Silicon Valley; it'd be another space program. (We'd be better off trying to get cold fusion going first if we're talking that level.) Otherwise we'll occasionally get incremental advances like LLMs, and they will occasionally catch the crest of the hype cycle and make some liars rich(er).