If you are really that annoyed by it, build your own distro. Base it on an established one; most people use Ubuntu. Basically, you repackage the base installer without systemd and set up a repo that references the base distro for everything else.
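On a Debian/Ubuntu base, the "overlay repo" part mostly comes down to APT pinning. A minimal sketch, assuming an apt-based system (the file path and package list here are illustrative, not a complete recipe):

```
# /etc/apt/preferences.d/no-systemd
# Block the init packages while the upstream repo still supplies
# everything else. Pin-Priority: -1 means "never install".
Package: systemd systemd-sysv
Pin: release *
Pin-Priority: -1
```

Devuan takes roughly this approach at distro scale, rebuilding only the packages that hard-depend on systemd and pulling the rest from upstream Debian.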
He could not reinforce, resupply or rearm in Italy. He didn't dare hold still long enough to besiege a city. Essentially, after losing two armies to him, the Romans refused to engage and let him run out of gas.
"So you could make maybe 80% of the population smarter at that subject than 80% of people with a PhD." No, you can't. You can, if they reach some minimum standard of intelligence, teach them more about a subject. That does not make them smarter. Do not confuse the ability to regurgitate facts with intelligence, or even understanding. Also, do not conflate "with a PhD" with any particular level of learning or intelligence. That's conflating classroom compliance with education, education with understanding, and understanding with intelligence.
The causes of low IQ are, in order: Genetics Genetics Genetics also brain damage and drug abuse. IQ is pretty well fixed and is measurable by the age of three. It doesn't change significantly over the lifetime, except for a very gradual decline, mostly as the result of aging, drug abuse and minor brain damage accumulation. Parts of West Africa have an average 65 IQ. That qualifies as mentally retarded in the US. That's the average. It is inborn, congenital, and cannot be increased. Training a stupid person does not make them smart. It may make them better able to handle many tasks, but it will not make them smarter. Training them in the proper attitude towards Whites will not make them smart. They were born dumb. These are simple, demonstrable facts, supported by over 100 years of research into the nature of human intelligence. You would rather believe the fairy tale you were taught than the cold hard facts of life. Fairy tales make for incredibly bad policy
No. They are a language generator. They don't simulate human thought in any way. That path was explicitly rejected by the creators of Large Language Models, since it had thoroughly stagnated for over 30 years. Instead, they took the Turing test seriously, despite it being nonsense, and built a machine that attempts to meet that standard instead. They construct simple human-language sentences in response to prompts. This is no small feat, but it is not thought, and it is not intelligence.
No, it does not know what a car is. It cannot abstract the idea of "car" from its training database. It can't even abstract the idea of "object" from its database. This is why it's so terrible with hands. It has no abstract idea of what a hand is, what it does, what it looks like. It can only concatenate several million images tagged with the word "hand" and calculate an average of what lines, shapes and coloration correspond to that tag. But because hands are so mobile, fluid and expressive, the examples do not do much to constrain the image generator. Likewise, it has no idea of what a "car" is. It has a list of compositional elements, shapes, lines, curves, etc. that correspond to the tag "car". The image generator has no concept of what a car is, or even what a car looks like, because "concept" is itself outside the programming. This is a category error. And AIs can't observe.
The problem with "today's" AI is not a level or implementation problem. It is that what they are producing is not intelligence. AI has nothing to do with intelligence. What they have produced is a very expensive engine that can generate pertinent answers in grammatically correct, though often simple, English. This is not a small achievement, but it is not intelligence. You claim "There is no reason we will not create intelligence someday." This is a very bold assertion, one with no evidence or thought behind it at all. Can you even define what intelligence is? The AI industry gave that pursuit up in the 1980s. I would assert, quite baldly, that computers, being what they are, and given how they work, will never ever ever, on a fundamental level, be capable of intelligence, no matter how much programming you put into the effort. BTW, the Turing Test is ontological nonsense.
AI can, quite literally, only repeat what it's read. It has nothing whatever to do with intelligence. AI-generated science will turn out the equivalent of the AI drawing of a woman with big tits and 3 hands. With the deluge of people using AI to generate training content for AI, we will shortly reach the point where AI is widely seen as the joke that it is.
Very few scientists spend their lives studying anything. Physicists specifically spend their lives extending mathematical equations. It is always assumed, though not without evidence, that those equations are already fundamentally correct. When a physicist claims to have "proved" something, 90% of the time he means "the equations balance." But something in the Standard Model is seriously out of kilter. Trying to refine equations in order to extend your understanding doesn't work if the basic model is incorrect. And gravity is the most incorrect part of the Standard Model. Physical matter, light and electromagnetics are all fairly well understood, but gravity is the problem child. There is no known mechanism for it to work. Every time some physicist thinks "AHA! I HAVE THE ANSWER TO GRAVITY!" he turns out to be very provably wrong. My personal opinion, and it is only an opinion because I don't have the background to prove anything one way or the other on it, is that Le Sage had the right idea: gravity is a pushing mechanism, not a pulling one.
There were Christians long before there was a New Testament, before any of the books were written. Scripture is only significant because it is the codification of Sacred Tradition.