Edith Clarke was the first woman in the U.S. to work professionally as an electrical engineer. She played an essential role in the build-out of our modern electrical grid, invented a widely used calculator for transmission-line calculations, earned an electrical engineering degree from MIT in 1919 (the first woman to do so), and helped design the turbines inside the Hoover Dam.
We should thank her every time we flip on a light switch
I'm a reporter looking to interview freelancers who have seen demand for their work go down -- or shift -- in the wake of ChatGPT and all the AI image generators.
This is for a story about how freelancers, specifically, have seen demand for their labor change as use of AI has spread.
To sum up: a complete modernization of a single existing intersection can cost a quarter of a million dollars.
Re-timing existing traffic signals using existing crews, plus newly available data and insights, actually saves cities money while cutting down pollution.
🚦 A major source of congestion in cities is poorly timed traffic lights. (Yes, I know, we should just have fewer cars.)
🚗💨 They're also a *huge* source of lung-killing pollution. It's 29x worse at intersections, because of those stops and starts. You might say "oh, how foolish, surely there is a 'smart' traffic light option?"
💰 And there is. But it's expensive. Good luck paying for it at all 300,000+ of America's traffic signals, much less the millions more worldwide.
So what if we could start to solve congestion in cities without the huge spend required to update all our 300,000+ traffic lights?
That's the idea behind new systems from the University of Michigan and from Google.
Project Green Light uses existing data from mapping apps, and/or connected vehicles, to figure out traffic densities 24/7, then tell cities how to adjust the timing of their traffic lights.
Here's the key thing: Once you start monitoring traffic in cities 24/7 using data that's *already available* from our cars + mobile devices (and yes, it's anonymized) you can actually chop down the cost of maintaining existing traffic signals.
No more expensive traffic studies to re-time lights. Crews can be redeployed to adjust timing on existing signals. "Green waves" of lights can be orchestrated to speed traffic from one signal to another when needed.
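To make the "green wave" idea concrete, here's a toy sketch (my own illustration, not the actual algorithm used by Google or the University of Michigan): each downstream light turns green just as a platoon of cars traveling at the target speed arrives, so the offset for each signal is simply the cumulative distance divided by the progression speed.

```python
# Toy illustration of "green wave" signal timing. The function name and
# parameters are hypothetical, for explanation only.

def green_wave_offsets(distances_m, speed_mps):
    """Return offsets (seconds after the first light turns green) for a
    corridor of signals, given the distances between consecutive signals
    (meters) and the target progression speed (meters/second)."""
    offsets = [0.0]          # the first signal defines time zero
    total = 0.0
    for d in distances_m:
        total += d           # cumulative distance to the next signal
        offsets.append(total / speed_mps)
    return offsets

# Four signals spaced 400 m apart, cars progressing at ~13.4 m/s (30 mph):
# each light should go green roughly 30 seconds after the previous one.
print(green_wave_offsets([400, 400, 400], 13.4))
```

In practice the timing also has to account for cycle lengths, cross-street demand, and how densities shift hour by hour, which is exactly where the 24/7 data from mapping apps comes in.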
"Having all the wealth and technology in the world doesn’t matter if we don’t have the wisdom to use it in the right manner. Early in my career, I bought into the notion, espoused by science-fiction author William Gibson, that all cultural change is driven by technology.
I’ve now witnessed enough of both technological and social change to understand that the reverse is also—and perhaps more often—true."
The first thing I, and many others, get wrong in predicting the future:
1. Disruption is overrated
"The most-worshiped idol in all of tech—the notion that any sufficiently nimble upstart can defeat bigger, slower, sclerotic competitors—has proved to be a false one.
It’s not that disruption never happens. It just doesn’t happen nearly as often as we’ve been led to believe."
“If we have these [generative AI] tools, and large volumes of people are doing dangerous things as a result of receiving garbage information from them, I’d argue it isn’t necessarily a bad thing to assign cost or liability as a result of these harms, or to make it unprofitable to offer these technologies.”
-- Michael Karanicolas, executive director of the Institute for Technology, Law & Policy at UCLA
At least one litigator I talked to said that without action from Congress (which is unlikely) the threat of legal liability from using generative AI could become an "unsustainable burden" for many companies.
(Others argued it would mean only the biggest would continue to offer it to the public.)
Could this be an existential threat to today's public chat-bot LLMs?
I mean, yes! Which is why it's wild that so few people are talking about it, as far as I can tell.
“Generative AI is the wild west when it comes to legal risk for internet technology companies, unlike any other time in the history of the internet since its inception.”
-- Graham Ryan, a litigator at Jones Walker who will soon be publishing a paper in the Harvard Journal of Law and Technology on the legal risks of generative AI and why Section 230 doesn't protect companies that use it
“If in the coming years we wind up using AI the way most commentators expect, by leaning on it to outsource a lot of our content and judgment calls, I don’t think companies will be able to escape some form of liability.”
-- Jane Bambauer, professor of law at the University of Florida
She's written a whole paper on yet a *third* category of legal risk using generative AI could open companies up to, which I didn't even have space for: