@rees@Arkana Given we have no general AI at present and ChatGPT has proven to be comically inaccurate (feel free to look up my poasts on it), I can only give you a grueling, in-your-face NO!
@rees@Arkana Fascinating, now how many years of eugenics will we need to raise the average IQ of niggers to Whites.
Hint: it took 1,200 years of executing 1% of the population per year to result in the IQ of Western and Northern Europeans going from approximately 70 to 100
@Arkana waste of my time. I already brought up that the IQ gap is narrowing, how Irishmen used to score the same as blacks but caught up to whites, how worldwide IQ is going up, how geography affects IQ, etc.
Btw the Flynn effect reversed in the '80s and scores have continued to decline since, in part due to migration from the third world and low birth rates among the smarter section of the population. The USA has lost on average 2-3 points per decade since then.
@Arkana@filenotfound the IQ gap between whites and blacks is decreasing, blacks have an alcohol and poverty/gang problem, and IQ in general is increasing worldwide; it's called the Flynn effect. IQ is very dependent on geography: mountainous regions trend toward lower IQ, areas with ports have higher IQ, and societal advancement is correlated with sea access (i.e. whites)
@Arkana@filenotfound genuinely stupid graphs lol. Irishmen were on par with black IQ; three generations later they are on par with the average Caucasian
@Arkana@filenotfound look up Klinefelter syndrome or XXYY syndrome: people born with XXY/XXYY chromosomes instead of XX/XY. it happens in white populations, and we still regard them as human. humans in general are stupid and violent
@Moon@rees@Arkana@Aether I use LLMs daily for actual work I get paid for and I can tell you that they are comically inaccurate. They are USEFUL but they are not accurate.
@RustyCrab@rees@Arkana@Aether yeah exactly, saying it's better than people at the LSAT or whatever is not really true, but it can augment the work of someone competent
@rees@RustyCrab@Arkana@Aether the ways that AI fails aren't being addressed, they're just being swept under the rug in the hope that the models turn out more useful than flawed
@rees@RustyCrab@Arkana@Aether you think a better-trained model will fix the problem but it won't. for instance, something genuinely novel will have to be invented to deal with deliberate dataset poisoning.
@Moon@RustyCrab@Arkana@Aether you're making these arguments based on models that were trained a year ago, which is 10-100 years in AI terms because the field is advancing so much faster than any other
@rees@Arkana@Aether@Moon neither of us can do that. Nobody could have predicted that the technological marvel that was 2014 Google would backslide into the useless pile of shit it is today. Technology doesn't just naturally 'extrapolate'. Models have limits, and I don't think anybody knows what the real limits of this one are yet.
@rees@RustyCrab@Arkana@Aether it's going to get a lot better, but right now we are at an extremely primitive stage of shaping how it works. you can tell by how phony the ChatGPT guardrails are. don't you think they'd make the guardrails better than just a layer trying to undo what the model really wants to say, if they could?
@Moon@RustyCrab@Arkana@Aether
>you think a better-trained model will fix the problem but it won't
no I don't lol. I think a mixture/ensemble of experts will fix it. there are OSS examples like the ACE framework, which stacks LLMs using a dual bus
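to sketch the shape of what I mean (this is NOT the ACE framework's actual code, which stacks cognitive layers over a dual message bus; just a toy majority-vote ensemble in C++ with made-up stand-ins for the model calls):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// toy stand-in for a call to some hosted model; in a real ensemble each
// entry would be a different LLM behind an API
using Expert = std::function<std::string(const std::string&)>;

// naive majority vote: the answer most experts agree on wins
std::string ensemble_answer(const std::vector<Expert>& experts,
                            const std::string& prompt) {
    std::map<std::string, int> votes;
    for (const auto& e : experts) ++votes[e(prompt)];
    std::string best;
    int best_count = 0;
    for (const auto& [answer, count] : votes)
        if (count > best_count) { best = answer; best_count = count; }
    return best;
}

int main() {
    std::vector<Expert> experts = {
        [](const std::string&) { return std::string("42"); },
        [](const std::string&) { return std::string("42"); },
        [](const std::string&) { return std::string("41"); },  // the liar
    };
    std::cout << ensemble_answer(experts, "6 * 7?") << "\n";  // prints 42
}
```

the point being that independent models rarely hallucinate the same wrong answer, so cross-checking them catches a lot of the inaccuracy without any single model getting smarter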
@RustyCrab@Arkana@Aether@Moon you are anthropomorphizing. machines will fail just like humans do, but in different ways than humans do. I asked phind.com to spin a square graphically using rust/css and it did it just fine, which is much more complex than a for loop
@rees@Arkana@Aether@Moon yes, it can do some very fancy things, but the key commonality is that it can do stock things. The more you veer off of boilerplate code, the more it gets wrong and the less it can competently predict. I use it for work in a million-line project every day, so I am quite accustomed to how it behaves now. About 20% of the time it will be pretty much what you wanted with no bugs. As soon as you get into a custom algorithm, though, it will only be able to infer new lines about 10% of the time without comments, maybe 20% with human comments, and usually those lines will require human correction to some degree. Oftentimes it can get things right, but only after I establish a brain-dead simple pattern for it to follow.
That’s still useful and it saves me a lot of typing, but it’s wrong.
If you can, I would encourage you to use it for real work, because once you get it into an expert-level domain the cracks start showing up FAST.
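to make the stock-vs-custom distinction concrete, roughly this (made-up struct and values, not from my actual project):

```cpp
#include <iostream>

// made-up config struct, purely to illustrate the point
struct Config { int width = 0; int height = 0; int depth = 0; };

int main() {
    Config c;
    // boilerplate with an obvious local pattern: after the first line or
    // two, a completion model predicts the rest almost perfectly
    c.width  = 640;
    c.height = 480;
    c.depth  = 32;

    // a "custom algorithm" has no nearby pattern to copy, so suggested
    // lines here are the ones that usually need human correction
    std::cout << c.width * c.height * (c.depth / 8) << " bytes per frame\n";
}
```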
@Moon@rees@Arkana@Aether they're really good at digging through large volumes of information and getting something USEFUL out of it, but the answers they give are very often wrong, either in the technical details or entirely, and they are comically bad at predicting some very peculiar things.
For instance, I cannot explain why, but GitHub Copilot is not capable of writing a typical integer for loop in C++ in most real-world contexts (my sample size being any of my large projects). I have tried to get it to do so at least 5 times per day across many different contexts. It just shits itself and dies. Literally any human can do this after one day, but Copilot struggles greatly for some reason.
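and by "typical integer for loop" I mean literally this kind of thing, nothing exotic (made-up function, purely illustrative):

```cpp
#include <vector>

// the most boilerplate construct in the language: plain indexed iteration
int sum_first_n(const std::vector<int>& v, int n) {
    int total = 0;
    for (int i = 0; i < n && i < static_cast<int>(v.size()); ++i)
        total += v[i];
    return total;
}
```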
@RustyCrab@rees@Arkana@Aether I use it for programming when I can't find good information anywhere for a library and it lies to me about 90% of the time
@RustyCrab@Aether@Arkana@rees anyway, my point is that I won't extrapolate: it gets better at what it's good at, but I will only give it credit for fixing the things it's bad at when I see those things actually get fixed, one by one
@rees@RustyCrab@Arkana@Aether@Moon okay. they improved the models by shoveling all the data possible into them. we have already shoveled all the human data into them, we cannot generate more high-quality data quickly, and increasingly the data will be polluted with AI output. ergo, the models have peaked and will now plateau or decline, depending on how much pollution they let into their training sets