TL;DR: induced demand isn’t just for highways
(but if you DR, well, it’s a good read)
From @davidzipper:
https://mastodon.social/@davidzipper/113068475472028605
My own position on self-driving cars has gradually shifted over the last decade.
I used to think that they were promising not because they’re good, but because human drivers are so incredibly bad. A consistent machine with consistent attention, while still fallible and still dangerous, at least isn’t drunk or texting while driving.
Two things have shifted my feelings on the matter:
The first is what the article in the OP lays out: induced demand. If you make something easier, people will generally do more of it.
It’s dangerous to have people not paying the true cost of their own decisions. That’s already the case with driving, to an extreme (carbon tax now!!), but at least driving is really, really annoying. Annoyance is a poor proxy for driving’s true cost, but it’s •something•. Removing that backstop is dangerous.
The second is the difference between what I expected from self-driving vs how it’s actually evolving.
I imagined that self-driving cars would involve a narrowing of the parameter space: more consistent driving behavior, maybe some fixed and standardized cues/signals on roads, the expectation of human takeover for novel situations like construction zones, maybe even a shift toward a sort of herd efficiency in traffic.
Consistency. Automation that works better because it tries to •do less•. But…
…instead, AI is off on this ridiculous investor-driven “rEpLAcE hOoMAnS GAI BABEEEE” hype bender, in which self-driving cars have to be like human drivers except more magicaler.
What that means in practice is that consistency — the •one thing• that machines really have on humans in this space! — goes out the window. Self-driving cars are full of weird failure modes and bizarre quirks. They’re drunk and texting all the time, except they behave in ways that are even less predictable than humans.
One of the good-and-bad things that happens when we move human activity into software is a •narrowing of the problem space•.
Humans are full of ad hoc decisions. We fudge. We finagle. We mess up, but we also fix up. Humans are the adaptable part of complex systems. Humans are both producers of and defenders against failure. (https://how.complexsystems.fail/)
When you move a task into software, one of the central questions is, “What happens to that human flexibility?”
@inthehands Will read the article later, but you've nailed my thoughts. I was really hopeful for self-driving cars, figuring the predictable nature would be exactly what we need. I rarely drive these days, but on a short school run I braked hard twice this week to avoid T-boning other cars. That's what I hoped computers would avoid.
What we got is Teslas rolling through stop signs and doing % over the posted limit because that's "what we all do anyway".
If I could never drive again, I would.
Usually, at least if we’re doing a good job, the answer is “we split it”:
One part of the problem becomes simpler, less flexible, more consistent. We make up rules: “every item has exactly one price,” or “every item has one price per discount-item combination,” or “every item has N SKUs, each of which has one price per….” The rules evolve, they adapt, they grow — but they remain consistent until we update them.
The beauty and the peril of software is consistency: •it follows those rules we invent•.
Beauty? Because consistency can really pay off.
Peril? Because sometimes we need exceptions.
I said we “split” the problem. Software takes one part of the job, a version of the problem that is simplified so that machine consistency is •possible•. The other part of the job: human intervention. We build software to loop in humans to say, “eh, damaged item, I’m giving you a discount” or whatever. •If• we’re doing it right.
Consistency with a dash of human intervention.
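To make that split concrete, here’s a minimal sketch in Python. Every name in it (Item, price_for, checkout, escalate_to_human) is invented for illustration, not taken from any real retail system:

```python
from dataclasses import dataclass

@dataclass
class Item:
    sku: str
    list_price: float
    damaged: bool = False

# The consistent half: rules we invent, applied the same way every time.
DISCOUNT_RULES = {"MUG-001": 0.10}  # one discount rule per SKU (hypothetical)

def price_for(item: Item) -> float:
    """Machine consistency: every item has exactly one rule-derived price."""
    return round(item.list_price * (1 - DISCOUNT_RULES.get(item.sku, 0.0)), 2)

def checkout(item: Item) -> float:
    # The escape hatch: cases the rules don't cover get routed to a person
    # instead of being forced through the rules anyway.
    if item.damaged:
        return escalate_to_human(item)
    return price_for(item)

def escalate_to_human(item: Item) -> float:
    # Placeholder for the human-intervention half: in a real system this
    # would open a ticket or prompt a clerk for an override.
    print(f"needs a human: damaged {item.sku}, suggest a discount")
    return price_for(item) * 0.5  # stand-in for the human's judgment call

print(checkout(Item("MUG-001", 8.00)))                # 7.2, straight from the rules
print(checkout(Item("MUG-001", 8.00, damaged=True)))  # the escape hatch
```

The interesting design work is all in deciding which cases cross that boundary to a human.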
One classic way this goes wrong is when we forget the “human intervention” part.
You end up with these Kafkaesque nightmares where somebody is stuck in an infinite product return loop or their insurance claim is denied or the state thinks they’re dead or they get a zillion parking tickets because their custom license plate spells “NULL” (https://arstechnica.com/cars/2019/08/wiseguy-changes-license-plate-to-null-gets-12k-in-parking-tickets/)…and a human is stuck in process hell because •the software just does that• and software is hard to change.
I thought •that• was where self-driving cars were going to land: narrowed problem space, sometimes they fail, but at least they’re really consistent. Not great, but again, arguably an improvement over human drivers.
But nooooo. Now, thanks to the Glorious Dawn of AI Megahype, we have companies falling over themselves to replace all those annoying expensive humans…with •randomness•.
This is just bonkers to me.
I mean, software is…kind of terrible. It’s expensive to build and maintain. It constantly throws our bad assumptions back in our faces. It removes the human flexibility that keeps systems afloat, unless we work hard to prevent that.
But at least it’s consistent.
Whatever it does, it •keeps doing that thing• with a high degree of reliability. It doesn’t forget to write things down, or lose that scrap of paper, or show up to work high. When it fails, 99.9% of the time it’s because humans told it to.
That consistency is the whole appeal of computers. Without that, why would any organization ever want to delegate anything to software?!
And now we have executives falling over themselves to replace it with a “random human-imitating chaos machine”?
Really?
Really?!?
I just…Do you even…What do you think…
[the remainder of this thread is incoherent muttering]
@donw
Making •drivers• liable for accidents they cause, regardless of whether they chose to delegate their driving to a machine, would whip this whole thing into shape real damn fast.
@inthehands I think you’re right on all of this. I do wonder if perhaps it would still represent an improvement. After all, one of the biggest mistakes people make (IMNSHO) is comparing a flawed outcome to a utopian possibility instead of to what will actually happen otherwise.
I’m less worried right now about the flawed self drive solutions than I am flawed legal structures around them. This situation in Cali where nobody currently can be fined for their misbehavior is untenable.
Replies from @stfp and @donw highlight the issue of liability and accountability, which is spot on, a central question here:
https://h4.io/@stfp/113068916131522497
https://mastodon.coffee/@donw/113068874134659387
Re this from @thedansimonson, the phrase “information pollution” has been rattling around in my head a lot lately:
https://lingo.lol/@thedansimonson/113068984297050648
AI-generated nonsense. Google results filling with content-farmed garbage (written by humans and by AI). Steve Bannon’s “flooding the zone with shit.” GIGO.
→ all “information pollution”
Re this from @thatandromeda, I also think that there’s •still• immense promise in automated driver assistance for accident prevention. For example, I’ve driven a couple of cars with radar cruise control that prevents rear-ending people at speed, and found it more helpful than not.
But that sort of thing doesn’t seem to be where the money is flowing.
@fgcallari @thedansimonson
There is indeed tremendous unexplored potential in that space. Classifier systems (ML or not) can outperform humans for some problems, and can give an expedited first step for others. When the model turns to human augmentation instead of human replacement, things get a lot more sensible. Maybe we’ll get there on the other side of this hype cycle.
@thedansimonson @inthehands But the problem there is conflating "generative AI" with all of machine learning, no? It is quite possible to build reliable (safety-critical) software systems that solve hard problems using machine learning AND do not "hallucinate" anything. But there is no known way to do it cheaply.
@inthehands one of the big assumptions behind AI hype -- the unspoken presupposition -- is that the 99.9% reliability of traditional software will be complemented by the apparent capacities of generative systems and all the exponential possibilities entailed therein
in practice, because the generative systems are making stuff up, they're going to pollute traditional software into uselessness with absolute garbage inputs.
they're fundamentally two different things, and they cannot interface
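A hypothetical sketch of that interface problem, in Python. llm_extract_total is a stand-in for any generative component (not a real API), and the strict parser plays the traditional-software half, which can only refuse the output or ingest garbage:

```python
import re

def llm_extract_total(invoice_text: str) -> str:
    # Stand-in for the generative half: usually plausible, not always valid.
    return "total: $1,042.50 (roughly, I think)"

def parse_total_strict(raw: str) -> float:
    # The traditional-software half: accepts only what its rules allow.
    match = re.fullmatch(r"total: \$([\d,]+\.\d{2})", raw.strip())
    if match is None:
        # Reject at the boundary; without this gate, the downstream
        # ledger silently accumulates polluted records.
        raise ValueError(f"unparseable generative output: {raw!r}")
    return float(match.group(1).replace(",", ""))

try:
    parse_total_strict(llm_extract_total("...invoice text..."))
except ValueError as e:
    print(e)
```

Either the gate rejects much of what the generative side emits, or the rules get loosened until the pollution flows through; that tension is the point being made above.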
@belehaa
I mean, yes, for sure, that’s my tune too.
Also cars will be with us for a long time, and I’m all for reducing the harm they cause. If self-driving were a route to reducing the number of pedestrians and cyclists killed by cars (and thus making walking and biking more attractive), I’d be all on board.
@inthehands Public mass transit > self-driving cars any day
Good point from @mkj here:
https://social.mkj.earth/@mkj/113069128757909759
99 Percent Invisible did a good episode about this:
https://99percentinvisible.org/episode/children-of-the-magenta-automation-paradox-pt-1/
(As always, web text is a summary; full story is in the audio)
@belehaa
Yes, per the rest of the thread, it’s very much a counterfactual right now, not a reality that’s just around the corner.
@inthehands Same! It just feels like a mighty big and as-yet-unsupported If
@fgcallari @thedansimonson
I never thought I’d miss the consumer-driven version of capitalism, but the investor-driven version sure makes it look good.
@thedansimonson @inthehands sadly safety is not a "feature" amenable to scaling in a short VC-funded development cycle. Fundamentally, safety must be baked into the entire development culture, in an org willing to experiment (and lose money) until your safety-critical widgets are really ready. Where "readiness" is decided by a customer with the financial clout to shut you down if you get it wrong, or a government that'll jail you if you lie about performance.
@fgcallari @inthehands yes. from a cost perspective, a lot of applications of older techniques are simply ignored. the problem space wasn't exhausted, but few were willing to invest in fully exploring it from a commercial perspective.
@donw
That story is such a classic example!
@inthehands The whole LLM situation makes me flash back on the daily; I remember clearly what it looked and felt like sitting in that software class in the early 90s, talking about expert systems vs neural networks.
The neural networks part shared the story of the tank-spotting system that they trained to perfection till it consistently found the tanks. Till they “tried it in prod” and it turned out the training set tank photos were all taken on a cloudy day.
That data set was at least consistent.
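If it helps to see that failure mode, here’s a toy reconstruction with entirely invented data: the “classifier” fits the training set perfectly by latching onto brightness, the accidental correlate, rather than anything tank-shaped.

```python
# (label, mean_brightness): tanks photographed on cloudy days (dark),
# empty scenes on sunny days (bright). All numbers are made up.
train = [("tank", 0.31), ("tank", 0.28), ("no_tank", 0.74), ("no_tank", 0.81)]

# "Training": pick the brightness threshold that separates the classes.
threshold = (max(b for l, b in train if l == "tank")
             + min(b for l, b in train if l == "no_tank")) / 2

def classify(brightness: float) -> str:
    return "tank" if brightness < threshold else "no_tank"

# Perfect on the training set...
assert all(classify(b) == l for l, b in train)

# ...useless "in prod": a tank on a sunny day.
print(classify(0.78))  # -> "no_tank"
```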
@inthehands @davidzipper I didn't realize this phenomenon had a name, and there must be a million examples now of highway expansions leading to more traffic. He says, as they tirelessly expand all the interstates that swing by here.
The LED example is excellent, though. It's recent, and it's probably relatable. Hands up if you got told off as a kid for leaving lightbulbs on. These days? Not as much. Plus, we've demonstrably made light pollution worse.
@inthehands @thatandromeda oh, absolutely. I have Subaru EyeSight and it's really good. It reduces fatigue on long drives in ways standard cruise control can't, and keeps a reliable, safe distance.
Although, I've regularly driven the system in two cars, 4 years difference in age, and it's interesting how they've tuned it differently. Where I'm used to it doing one thing in one car, it's subtly different in the other. "Same" system. Doesn't bode well for complete automation.
@inthehands with apologies for the quick double reply, I just got around to reading that link. Absolutely fascinating. Couldn't help thinking about accidents I'm vaguely aware of, and ones I happen to know in depth because I've studied them for work, etc.
Will share that with my colleagues.
@tehstu
It’s a classic. I always keep coming back to it. Glad it’s found a new fan!