@aral I thought you were doing a bit, since the newspapers have been bending over backwards to comply early, but no, they are almost word-for-word saying this. Jeez
@ploum @ludicity no problem! For what it's worth, I think I may have been mistaken. I think Synergy Greg is actually what happens when Julius gets promoted well above his capability, to department head or something 😅
@ploum that's awesome! Generally I have to say I really love pretty much everything @ludicity has to say about AI and the tech industry, and I heartily share his work with my colleagues. I also really enjoy a lot of your essays too 😍 before Julius I enjoyed "A society that lost focus"
@ploum I'm concerned that Julius is the same person that @ludicity calls Synergy Greg. No one has ever seen Julius and Synergy Greg in the same room together... coincidence? I think not!
@glynmoody call me a cynic, but it sounds like a great way to get your bank account drained with no recourse: "it was definitely you sir, you passed the voice and image/video biometric checks..." 🙄
@phiofx not sure I'd agree that the unbearable tech bros who comment on the orange site are broadly representative of techies in general. I'd suggest that they are a vocal minority. HN is a wretched hive of scum and villainy. I'd identify as a "techie", but I also get angry as hell whenever I browse HN threads that go any deeper than "cool website bro", for all the reasons you outlined.
@datarama @phiofx that's the thing I guess, it's really hard to know what is representative these days because each site is its own silo with its own norms and Overton window. Certainly HN is a pretty libertarian bubble. With the death of Twitter etc. I don't think there is a social network that is big enough to give a good cross-sectional sample any more, just lots of little bubbles!
@phiofx @smallcircles @jryans@merveilles.town a silver lining of the silly hype is that it's led to some pretty impressive breakthroughs in desktop AI, which is great for projects like this! llama.cpp lets us run state-of-the-art LLMs on moderately specced PCs, and smaller models (by today's standards) like BERT can run on a potato. Of course for many ML use cases simple models like logistic regression and random forest classifiers can get you there, and they will definitely run on a potato!
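To illustrate the "runs on a potato" point: logistic regression needs nothing heavier than a loop and a sigmoid. This is a minimal dependency-free sketch (toy data and hyperparameters are my own, not from the thread; in practice you'd reach for scikit-learn), trained with plain stochastic gradient descent:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit a tiny logistic-regression classifier with plain SGD."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi                     # gradient of log-loss wrt z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Toy, linearly separable data: label is 1 when the feature sum exceeds ~1.
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.7, 0.9], [0.05, 0.3], [0.8, 0.6]]
y = [0, 1, 0, 1, 0, 1]

w, b = train_logistic(X, y)
print([predict(w, b, x) for x in X])
```

The whole thing is a few kilobytes of state and a couple of thousand floating-point multiplies, so it will happily train and predict on decade-old hardware.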