Walmart now uses AI-based surveillance cameras in all its stores and is rolling out body cameras to associates, collecting even more video of customers that is fed into its storage systems. Kroger, Fred Meyer, and QFC also use AI-based video surveillance and store facial data on every customer who enters their stores. Going grocery shopping in America is becoming a privacy nightmare that will only get worse. #ai #privacy
I can't quite put my finger on why a German physicist warning about how cool authoritarianism looks to the masses disturbs me.
Keeping up with this new, keen, tech bro Star Wars (this time with sassy bits) will succeed in bankrupting a lot of companies and nations, leaving easy pickings for greedy jerks with deep pockets.
She may be happier in the UK, which I doubt, but she should give LLM AI a couple of years to percolate, devalue, and crush a lot of hopes and dreams. Then the EU will be in great shape, once again, by doing nothing.
😅 This is easier to do when you are financially secure. But having a work-life balance is key to avoiding burnout & missing out on the important things in life.
Another experiment in #AI coding with #Google Gemini. I try to be fair. When I call generative AI mostly slop, I don't do so blindly; I attempt to conduct reasonable tests in various contexts.
Yesterday I needed a couple of routines -- one in Bash, the other in Python. I tried the Python one first. This required code to asynchronously access a remote site's API, authenticate, send and receive various data, and process what was returned, relying on a well-documented Python library on GitHub written specifically for that site's API.
After almost two hours, I gave up. Gemini was consistently cheerful and cooperative -- almost to a creepy extent. It generated code that looked reasonable, was very well commented, and even provided helpful examples of how to configure, install, and run the code.
Unfortunately, none of it actually worked.
When I noted the problems, Gemini got oddly enthusiastic, with comments like "Wow, that's a great explanation of the problems, and a very useful error message! Let's figure out what's wrong! Here is another version with more diagnostics that accesses the library more directly!"
Sort of made me feel like I was dealing with an earnest but incompetent TA in an undergraduate CS course at UCLA long ago. Which was not something I enjoyed back then!
After a bunch of iterations, I gave up. Even starting over didn't help. Gemini never seemed to produce the same code twice, no matter how I worded the prompts. Each version followed a completely different design -- sometimes embedded configuration values, sometimes external files, sometimes command-line args. And the way it tried to use the Python library in question also varied enormously. It almost seemed random. Or at least pseudorandom.
I then spent half an hour writing and testing the code I needed from scratch. It worked on the second try, was about half the length of any of the versions Gemini generated, and was much simpler, for whatever that's worth. By comparison, Gemini's code was bloated and unnecessarily complex (as well as wrong).
I did give Gemini another chance. I also needed a simple Bash script to do some date conversions. I offered that task to Gemini since I didn't want to bother digging through the various date format parameters required. Gemini came up with something reasonable in about four tries. Whether it's completely bug-free I don't know for sure -- I haven't dug into the code deeply, since it's not a critical application. But it seems to be working for now.
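For the record, the date-format grunt work I had in mind was roughly this sort of thing -- sketched here in Python rather than Bash purely as an illustration. The function names and formats are my own examples, not Gemini's output:

```python
from datetime import datetime, timezone

def epoch_to_iso(ts):
    """Unix timestamp -> ISO 8601 UTC string."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def reformat_date(s, in_fmt="%m/%d/%Y", out_fmt="%Y-%m-%d"):
    """Rewrite a date string from one format to another, e.g. US-style to ISO."""
    return datetime.strptime(s, in_fmt).strftime(out_fmt)
```

Trivial stuff, but exactly the kind of format-string fiddling I didn't feel like looking up that day.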
So really, I haven't seen a significant improvement in this area. There are probably some reasonable sets of problems where AI coding can reduce the grunt work, but once you get into anything more complex, the opportunities for errors -- especially in larger chunks of code, where detecting those errors might not be straightforward -- seem to rise dramatically.
I know #AI gets a lot of hate in here, but if you're strapped for time, and you could have written it anyway, and you don't *wanna write it*, it has its place.
In this case I wanted a script to convert my wife's Google Tasks export from JSON to orgmode. AI can help you degoogle.
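For the curious, a minimal sketch of that kind of converter. The field names here (`items`, `title`, `status`, `due`) are my reading of the Google Tasks export format -- verify them against your own export before trusting the output:

```python
import json

def tasks_to_org(path):
    # Assumed export shape: {"items": [{"title": ..., "items": [task, ...]}]}
    # where each task has "title", "status" ("completed" or "needsAction"),
    # and optionally an RFC 3339 "due" timestamp. These are assumptions about
    # the Google Tasks export, not a documented guarantee.
    with open(path) as f:
        data = json.load(f)
    lines = []
    for tasklist in data.get("items", []):
        lines.append(f"* {tasklist.get('title', 'Tasks')}")
        for task in tasklist.get("items", []):
            keyword = "DONE" if task.get("status") == "completed" else "TODO"
            lines.append(f"** {keyword} {task.get('title', '')}")
            if task.get("due"):
                # org-mode deadline from the date portion of the timestamp
                lines.append(f"   DEADLINE: <{task['due'][:10]}>")
    return "\n".join(lines)
```

One task list becomes a top-level org heading, each task a TODO/DONE subheading.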
It makes sense that @zuck would do this, as Meta is embracing #ArtificialIntelligence to a greater degree than most companies (they are even building data centers around AI).
I just hope it will not result in further layoffs as the tech world is hemorrhaging workers right now.
Cory Doctorow believes that the economic crash that will ensue when the "AI" bubble finally bursts will make the 2008 global financial crisis look like a mild wobble. Worse, there's nothing any of us can do except burst the bubble as quickly as possible, to reduce the scale of the damage. Bleak stuff, but his logic is hard to fault.
404 Media: Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.
Mozilla announces more AI in Firefox, still failing to understand that at this point their market share is made of tech savvy people who do not want that.
Mozilla, please. Stop chasing the fads. Get your own values.
People use Firefox to get access to an open web, use a sturdy browser, and not to be tracked. Build in that direction and that direction only.
Nobody from your current user base will recommend Firefox with AI to their friends.
Anthropic is overstating the use of #AI by "nation state actors" while providing scant hard evidence. Automation of attacks in security is not new. The real technical question to ask is which parts of these attacks could ONLY be accomplished with AI. The answer to that question seems to be "none of it."
The confluence of cyber cyber and #ML is interesting indeed, and hard even for deeply technical people who are firmly grounded in one of the two camps (security engineering or #ML). (1/2) #MLsec