@CounterPillow marcan yes, absolutely, this is what he does, in my view.
I am not a fan of his, and have said so directly to him. But since I have him blocked I tend not to explicitly mention him as it's unfair.
(he's blocked because he attacked me for inferring things about asahi based on available information, a terrible crime that deserved having a 'famous hacker' go at me, I'm sure you'd agree...).
Lina has behaved well afaict, I have no issue or qualms with her or her conduct.
I disagree with some aspects of what she's said, but not others (in fact, I'd say I agree with most of it). But I respect her and find her to be a perfectly decent and very, very talented person.
I think _most_ in asahi are earnest and good people; marcan, while very talented, is neither of those things.
They'd be better off putting somebody else in charge... anyway, my Mac runs macOS exclusively now :)
The usual suspects (including people involved in a 'certain' very toxic Linux distro) delete all nuance and start touting the same deeply insulting stereotype about kernel maintainers that you tiresomely hear over and over again.
"They're all horrible toxic megalomaniacal dickheads who won't change because they're too old [bit of ageism in there for fun] and stuck in their ways"
People who know nothing about the kernel love this as it fuels the typical us vs. them narrative that people get off on.
No greys, just black and white, shining knights on one side, awful dragon on the other.
And then it all fades and dies down, and these same people go quiet.
And then, loop around...
If people would actually engage in a good faith, sensible discussion, listening to each other, we might make some progress.
I get less and less interested, frankly, in bothering to engage on these things, because, you know... it's not honest, is it?
@gregkh @vbabka @mcepl @mort everything you say after 'I am not trolling' absolutely does not contradict the position that you are trolling (nor does what you said in your talk about... err... trolling CVEs).
Trolling CVE = following the rules to the letter to demonstrate the rules are silly.
I mean, I might not be as senior as Vlasta (which is probably why you're replying to him, not me), but I did speak to other senior kernel people in person and EVERYBODY thinks this is what you're doing.
The issue is the downstream effects as collateral damage, but since your position is 'use stable kernels or I don't care', I guess you don't care ;)
@vbabka @pavel @ben hint: LLMs have no understanding of anything, so they're absolutely not suited to programming, since they'll hallucinate in (often) subtle ways that fit the syntax, and people are notoriously bad at picking up on it.
Also, they're still built on other people's work without credit, licence, etc. The fact they appear to work for a lot of programming situations makes it even more dangerous.
It'd be one thing if people were just using them while acknowledging their limitations; it's quite another in a world where people openly lie about their capabilities.
Totally and completely appropriate to not want your work part of it.
But like all LLM proponents (just like all the crypto guys I spoke to before, just like all the anti-vax guys I spoke to before, just like all the [insert religious-style belief] proponents I spoke to before) you won't actually rebut what I say, you'll just assume that 'I don't get it' on some level.
I have tried LLMs dude, thanks for patronising me by assuming I haven't.
@pavel @ben @vbabka the ones so disappointing you entirely ignored them (because I guess it's beneath you to rebut them) and just said 'try it' as if I hadn't?
LLMs have uses, but I disagree with their use for tasks like programming, for the reasons previously stated, which you ignored, so I'm not going to repeat them.
@ptesarik @ben @pavel @vbabka the big problem is that people are very, very bad at picking up on the kind of errors that an algorithm can generate.
We all implicitly assume errors are 'human-shaped', i.e. the kind of errors a human would make.
An LLM can have a very good grasp of the syntax but then interpolates results, in effect, randomly, because the missing component is a dynamic understanding of the system.
As a result, they can introduce very very subtle bugs that'll still compile/run etc.
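To make the 'subtle but still compiles' point concrete, here's a minimal hypothetical C sketch (my own illustration, not actual LLM output) of the class of bug I mean:

```c
/* Hypothetical illustration: idiomatic-looking C that builds cleanly
 * but silently misbehaves, the kind of error reviewers skim past. */
#include <string.h>

#define NAME_LEN 16

void copy_name(char dst[NAME_LEN], const char *src)
{
    /* Looks like a safe bounded copy, but strncpy() does NOT
     * null-terminate dst when strlen(src) >= NAME_LEN, so a later
     * strlen(dst) or printf("%s", dst) can read past the buffer. */
    strncpy(dst, src, NAME_LEN);
}
```

The 'correct' version is a near-identical one-liner (e.g. snprintf(dst, NAME_LEN, "%s", src), which does null-terminate), which is exactly why this kind of error slips through review.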
People are also incredibly bad at assessing how much cost this incurs in practice.
Having something that only works for trivial tasks, and can still generate such errors, strikes me as worse than having nothing at all.
And the ongoing 'emperor's new clothes' issue with LLMs is that this problem is insoluble. Hallucination is an unavoidable part of how they work.
The whole machinery of the thing is trying to infer patterns from a dataset, so at a fundamental level it's broken by design.
That's before we get on to the fact it needs human input to work (start feeding it LLM-generated input and it completely collapses), so the whole thing couldn't work on any long-term scale anyway.
That's before we get on to the fact it steals software and ignores licences, the carbon and monetary costs of compute, and a myriad of other problems...
The whole problem with all this is that it's a very, very convincing magic trick that works so well people are blinded to its flaws.