When Casualties are Inevitable, Who Should Self-Driving Cars Save?

  • Thread starter Eh Team
Even if companies skip Level 3 and go to Level 4, it still doesn't remove the human element. We are still a very long way off from fully autonomous, Level 5 cars.
 
I don't think it's a grand idea to add autonomous cars to the mix if they just bring in their own different blind spots and weaknesses. But you're right, what bugs me more is the thought of the person inside the autonomous car being a better driver than the computer. That isn't right to me, and it does make the road more dangerous on an individual basis, by replacing a better driver with a computer.

If you're going to cherry-pick the situation where a better driver is replaced by this equivalent-to-an-average-driver AI, then there's no discussing things with you. Come on, man.

We live in a world where autonomous cars would replace all sorts of drivers, from good ones to awful ones. In fact, I'd suggest it's likely they will preferentially replace worse drivers first. Bad drivers tend not to really enjoy driving, whereas good drivers often do. I'd suggest that bad drivers would be more eager to get an autonomous car that removes a task they don't like than good drivers, who could take it or leave it. But that's merely a suggestion; I don't claim it would necessarily be the case.

Still, we're talking about autonomous cars in generalities here, which means working with the general population. That means I need you not to cherry-pick situations in which you put an excellent human driver up against a middling autonomous AI and ignore the overall population statistics.

To me it's not that straightforward. I don't consider something like this incident equatable to a human accident, partly because the technology should have been able to "see" the woman even in the dark, as has been said. I find it harder to accept than if the Uber employee had simply been texting behind the wheel of a normal car. Maybe that's just me.

It's a straightforward question that you're avoiding answering, presumably because it means that you'd have to say that you'd prefer more deaths done by humans. I don't know because you didn't answer, so I guess I'll just assume unless you want to clarify.

Accusing me of a fallacy while strawmanning me in the same breath?

I'm sorry, did you or did you not say "what value is there in a computer's millisecond-scale perfect reactions if the computer may fail to act in the first place, or act erroneously?"

Characterising that as "not implementing a technology until it's perfect" is pretty accurate. You're suggesting that computer reactions have no value if they can fail or act in error, or at least that's what I got from that sentence. Perhaps I misunderstood, in which case you're welcome to clarify that too.

Mischaracterizing my views as black and white and then telling me I lack nuance?

See above. If the view is truly that computers are of no value until they can react perfectly, that's a pretty black and white view. A nuanced view would be that there's a certain level of performance that a computer system would need to meet, and that would be acceptable regardless of whether the computer system in question is performing to the limits of its theoretical capabilities.

What's with the attitude?

What's with accepting a less safe driver simply because it happens to be human? I dislike people who espouse views that would result in more people being killed than otherwise. That's why I asked you above whether you'd accept computers if they were to result in fewer deaths, something that you dodged.

If you won't answer, I'll assume that it's because you'd rather have more people die at human hands than trust a machine.

Of course the technology doesn't have to be perfect. But so long as the average consumer believes that autonomous cars will do what it says on the tin, figuratively speaking (which is why wisdom holds that companies should skip Level 3 autonomy and work on Level 4), it should be close enough to do better than the average driver. I think people should be able to depend on the technology being at least as safe as if they were driving themselves, whether they're someone who's always glued to their phone or a defensive driver.

How can you say in the same paragraph that autonomous cars should be better than the average driver and that people should be able to depend on the technology being at least as safe as if they were driving themselves? Do you expect autonomous cars to be better than all human drivers, or do you just not understand statistics?

I do find it amusing that you've backpedaled to pretty much exactly what I was saying though: autonomous cars really only need to be equivalent to or better than the average driver to provide a benefit. I'm glad we could see eye to eye on this.

I'm hoping you come back and post that you were in a poor mood when you wrote this reply.

I was afterwards. Replying to people who are willing to accept extra deaths because they're uncomfortable with technological advances makes me angry.

I'll be honest, I still think you're an [insert word here] for being willing to accept humans driving and a higher road toll rather than have autonomous vehicles that make you uncomfortable but kill fewer people. But I'm human, and I'm free to judge people for their espoused opinions.

Perhaps one day I'll learn more about you and develop a more nuanced view, but at the moment I've only got a few data points and I frankly don't like your acceptance of road deaths just because they're caused by humans.

Autopilot doesn't have to scan the skies for pedestrians or navigate cross traffic in a very close space. It is employed in a relatively controlled environment, and does little more than monitor specified instrument readings and operate the plane's control surfaces to maintain those readings. Computers are good for singular mundane tasks like that.

Driving a car down here on the ground is complex by comparison, requiring a whole new dimension of awareness and cognition (or a digital mimicry of it) to navigate a range of hazards. It's not the same.

It's still a computer controlling your vehicle, and so there's "no one behind the wheel". It still has the same problems of imperfect reactions: it may fail to act, or act in error. But that's less of a problem in a plane because it doesn't (most of the time) result in an accident before the pilot can act. I don't disagree; autopilots have been around for a long time because the environment is far better suited to computer control. Autonomous vehicles are only starting to become common now because sensor and computing technology is just starting to become capable of dealing with the more complex environment.

Now we're starting to get to a nuanced opinion. You can accept that a computer system only needs to be capable to a level appropriate for the situation that it's in. It doesn't need "millisecond-scale perfect reactions"; it simply needs reactions appropriate to deal with the hazards that it would normally face. Realistically for a car, that means reactions appropriate to the time scale of the car's controls: likely tenths of a second, hundredths at best, because below that it doesn't make any meaningful difference to the movement of the car. Once any driver makes the choice to brake, it takes seconds to slow to a stop from speeds that would be fatal, so there are severely diminishing returns on upping the speed of reaction. A tenth of a second after detection of a hazard is more than adequate, and is far faster than any human could hope to achieve without mad reflexes and a foot hovering over the brake.
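To put rough numbers on the diminishing-returns point, here's a quick back-of-the-envelope sketch (the 50 km/h speed and 7 m/s² deceleration are assumed, illustrative figures, not data from the thread or the incident): shaving a computer's reaction time from a tenth of a second to a millisecond changes the total stopping distance by barely a metre, while a realistic human reaction delay more than doubles it.

```python
# Rough back-of-the-envelope sketch (assumed figures, not measured data):
# how total stopping distance changes with reaction time at an urban speed.

SPEED_KMH = 50.0        # assumed travelling speed
DECEL = 7.0             # assumed hard-braking deceleration in m/s^2 (dry road)

def stopping_distance(reaction_time_s: float) -> float:
    """Distance covered during the reaction delay plus the braking phase."""
    v = SPEED_KMH / 3.6                      # km/h -> m/s (about 13.9 m/s)
    reaction_distance = v * reaction_time_s
    braking_distance = v ** 2 / (2 * DECEL)  # from v^2 = 2*a*d
    return reaction_distance + braking_distance

for label, t in [("computer, 1 ms", 0.001),
                 ("computer, 0.1 s", 0.1),
                 ("alert human, ~1.5 s", 1.5)]:
    print(f"{label:>20}: {stopping_distance(t):5.1f} m to stop")

# Output:
#       computer, 1 ms:  13.8 m to stop
#      computer, 0.1 s:  15.2 m to stop
#  alert human, ~1.5 s:  34.6 m to stop
```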

So instead of a computer system that needs millisecond-perfect reactions without failure or error, perhaps we could agree that what an autonomous car actually needs is adequate reactions with a reasonable error rate, performing equal to or better than the majority of humans and tending to fail in a safe way?

This is essentially what a plane has: a system that mostly reacts capably within the confines of its expected environment, has few fatal errors, and when it does fail tends to do so in a way that is either fundamentally safe or returns control to a human who can reasonably be expected to deal with it. It is as safe as or safer than your average pilot (which is saying something, considering all the training they go through and the safety expectations).

Does this sound like a more reasonable expectation for an autonomous car?
 
Now we're starting to get to a nuanced opinion. You can accept that a computer system only needs to be capable to a level appropriate for the situation that it's in. It doesn't need "millisecond-scale perfect reactions"; it simply needs reactions appropriate to deal with the hazards that it would normally face. Realistically for a car, that means reactions appropriate to the time scale of the car's controls: likely tenths of a second, hundredths at best, because below that it doesn't make any meaningful difference to the movement of the car. Once any driver makes the choice to brake, it takes seconds to slow to a stop from speeds that would be fatal, so there are severely diminishing returns on upping the speed of reaction. A tenth of a second after detection of a hazard is more than adequate, and is far faster than any human could hope to achieve without mad reflexes and a foot hovering over the brake.

So instead of a computer system that needs millisecond-perfect reactions without failure or error, perhaps we could agree that what an autonomous car actually needs is adequate reactions with a reasonable error rate, performing equal to or better than the majority of humans and tending to fail in a safe way?
Apparently you misinterpreted that sentence about millisecond-scale perfect reactions.

I was referring to the common argument that computers ought to be naturally superior because they can act near-instantly and perfectly -- as you just touched upon here. I'm saying that's great, computers can do that, no problem. The difficult part for a computer is analyzing the input from the cameras/radar/sensors and acting in the first place, without reacting erroneously to a misinterpreted or false signal. We seem to agree on that.

Regarding the death toll, I don't really have a hard answer, which is why I posed the question in the first place and "dodged" it before. If an autonomous system can execute its task perfectly most of the time but is liable to clock out, as it did in this accident, at a rate that puts it on the level of an average driver, I don't think that's an acceptable standard. But I don't have an answer for how many fewer deaths would be enough.

As agitated as you are or were, I'm just as surprised by your implication that I have no empathy if I don't just accept the cold, hard statistics. The prospect of an autonomous car blanking out on something stupid is inexcusable to me, and I don't want anyone to die over something so stupid. The possibility should be negligible, within reason. Accidents will happen, but I believe almost none of them should be the fault of the computer dropping the ball like it did in this case. We might agree, just coming at it from different ends. Is that clearer?
 
...The difficult part for a computer is analyzing the input from the cameras/radar/sensors and acting in the first place, without reacting erroneously to a misinterpreted or false signal ... The prospect of an autonomous car blanking out on something stupid is inexcusable to me, and I don't want anyone to die over something so stupid. The possibility should be negligible, within reason. Accidents will happen, but I believe almost none of them should be the fault of the computer dropping the ball like it did in this case.

Isn't the flipside of that more accidents caused by humans reacting erroneously, misinterpreting signals, blanking out or becoming distracted? The best solution is surely whichever solution results in those things happening the fewest times?
 
Isn't the flipside of that more accidents caused by humans reacting erroneously, misinterpreting signals, blanking out or becoming distracted? The best solution is surely whichever solution results in those things happening the fewest times?
I said: "I don't have an answer for how many fewer deaths would be enough."

There won't be drunk AVs, texting-and-driving AVs, or AVs falling asleep at the wheel. They're perfectly alert, vigilant, and sober, so to speak. Shouldn't the odds of an AV causing an accident be at least as slim as that of the ideal human driver?
 
I said: "I don't have an answer for how many fewer deaths would be enough."

There won't be drunk AVs, texting-and-driving AVs, or AVs falling asleep at the wheel. They're perfectly alert, vigilant, and sober, so to speak. Shouldn't the odds of an AV causing an accident be at least as slim as that of the ideal human driver?

You're working the wrong end of the problem. The ends do not justify the means. The benefits of an AV do not justify the deaths caused... which means that the number of deaths that would be acceptable is never going to fall out of any of the analysis you're trying to do. There is no cost-benefit analysis of human lives.

The question in each case of an AV-caused crash is what went into and led up to the crash. Was it malicious? An accident? If an accident, how negligent? Was there reckless disregard for human life? The company designing the AV must answer these questions. It is no different from a case where a car kills someone because of a defect from GM, BMW, or VW. If someone enters a turn and their car shuts off because the wheel turns the key and cuts the ignition, and the car crashes, killing several people, that is exactly the same scenario as an AV failing to read a situation and killing several people. In both cases, the manufacturer introduced a fatal defect.

You do not need to compare AVs to humans. You do not need to figure out how many deaths is the right number. You do not need to figure out how proficient your AV is compared to a standard, ideal, or highly competent driver. This is a fatal manufacturing/design flaw, which is something humankind has been dealing with for many decades.
 
@Danoff -- That's one way I considered expressing my response to @Imari. If an AV can't meet a higher standard, that to me makes it defective or underdeveloped.

Depends on the wrongful death lawsuits. Medical devices are often involved in lawsuits over deaths and still aren't considered defective or underdeveloped. But yes, I would imagine that the company in this case considers their software defective and will be fixing it as soon as possible. Every product has the potential for design defects; the manufacturer has to weigh the risks when bringing the product to market. In this case, they found a defect the hard way and will be scrambling to deal with the resulting lawsuit, PR fallout, and R&D fixes for the defect going forward.

My take on it is that people are kind of losing it on this one. Where's the same outrage when Ikea dressers fall on kids (those killed someone, right?)?
 
In this case, they found a defect the hard way and will be scrambling to deal with the resulting lawsuit, PR fallout, and R&D fixes for the defect going forward.
Let's not forget that future testing may well have been made more difficult, seeing as fingers are being pointed in the direction of those who permitted testing that resulted in this incident. I suspect there will be more hoops to be jumped through now.
 
Regarding the death toll, I don't really have a hard answer, which is why I posed the question in the first place and "dodged" it before. If an autonomous system can execute its task perfectly most of the time but is liable to clock out, as it did in this accident, at a rate that puts it on the level of an average driver, I don't think that's an acceptable standard. But I don't have an answer for how many fewer deaths would be enough.

More than one, obviously. You'd watch people die due to human error rather than put a machine in control with an error rate that was known to be lower but non-zero. That's what it boils down to.

That's your choice. But if it's my choice, I won't sit and watch people die when there's technology available that could have made them safer.

As agitated as you are or were, I'm just as surprised by your implication that I have no empathy if I don't just accept the cold, hard statistics. The prospect of an autonomous car blanking out on something stupid is inexcusable to me, and I don't want anyone to die over something so stupid. The possibility should be negligible, within reason. Accidents will happen, but I believe almost none of them should be the fault of the computer dropping the ball like it did in this case. We might agree, just coming at it from different ends. Is that clearer?

It's clear that my assessment of you is correct, that you're willing to accept higher casualties as long as they result from human actions (or the lack thereof). I find that to be a disgusting disregard of human life simply because you're unwilling to accept novel modes of failure, such as AVs "blanking out on something stupid".

I would gladly replace all instances of human drivers "blanking out on something stupid" with half the number of AVs doing the same. But that's just me not requiring the technology to be perfect or near to it before it can be used. I don't think the possibility of error needs to be "negligible, within reason" in order to be usable, simply equal to or better than what is currently available.

We don't agree, and we're not coming at it from different ends. I'm willing to accept a system still in development that has the demonstrated ability to perform equal to or better than an average human (i.e. making the roads overall safer); you want a system that has no major flaws at all before it can be used. You're completely against machine "stupidity", to the point of allowing human stupidity to do far worse.

Shouldn't the odds of an AV causing an accident be at least as slim as that of the ideal human driver?

Where is this "ideal human driver" coming from? What capabilities do they have? AVs can perform actions that no human would be capable of, can see in situations where a human couldn't possibly, and can react faster. But they only have to perform like the best human could?

So situations like the one with Uber, where even the best human driver couldn't reasonably have been expected to see or avoid it, would be fine? Because that doesn't seem like what you're saying in your other posts. You seemed to be saying that the machine should be performing to the limit of its capabilities as a machine, without error or misjudgement.
 
...if it's my choice, I won't sit and watch people die when there's technology available that could have made them safer.
I don't want to watch people relinquish their self-responsibility to technology if it does not make them safer. It's not like a passive safety device or intervention system -- it doesn't just improve your odds of surviving or avoiding an accident. It defines your odds of getting into an accident. It doesn't just supplement good defensive driving practices. It replaces defensive driving and your agency as a driver.

If the best we can get out of it is mitigating the risks of average or below-average drivers, or if it can only excel in a relatively controlled environment, then implement the technology accordingly. Promote AVs principally as vehicles for the disabled, the elderly, and inept drivers. Construct AV-only lanes or routes, like carpool lanes or bicycle paths. Whatever, so long as the technology is not foisted on people who are better off driving themselves (even if they don't think they are), by marketing or by legislative mandate. That's my primary concern.

I would gladly replace all instances of human drivers "blanking out on something stupid" with half the number of AVs doing the same.
Maybe I would too. I never ruled that out.

Where is this "ideal human driver" coming from? What capabilities do they have? AVs can perform actions that no human would be capable of, can see in situations where a human couldn't possibly, and can react faster. But they only have to perform like the best human could?

So situations like the one with Uber, where even the best human driver couldn't reasonably have been expected to see or avoid it, would be fine? Because that doesn't seem like what you're saying in your other posts. You seemed to be saying that the machine should be performing to the limit of its capabilities as a machine, without error or misjudgement.
The ideal human driver is alert, awake, unimpaired, not distracted, eyes on the road, hands on the wheel, etc. An autonomous system is analogous to all those things, plus the superhuman abilities you just listed. So it seems reasonable to me to expect AVs to perform at least as well as a human who is alert, awake, unimpaired, not distracted, etc. at all times.

My point of holding the machine to its superior capabilities is not to insist that it should operate perfectly without error at the full scope of its theoretical capabilities. The point is that there should be a negligible risk of it committing potentially fatal errors that a human who is paying attention would not make, and its superior capabilities leave it with little excuse not to do better.

I'm not convinced that a human would have been unable to avoid this Uber accident. That's the thing. But I don't want to get into whether that's true, because now my anxiety has had about all I can take from the immediately hostile derailment of what I thought was a relatively straightforward point. In short, I think there should be some assurance that AVs won't be unnecessarily prone to lapses like this -- the equivalent of a fully alert, awake, unimpaired, not distracted, eyes on the road, hands on the wheel driver (with superhuman nightvision if you like) looking directly at the woman and driving straight into her anyway. I don't think that's acceptable.

If you still disagree, then we'll agree to disagree.
 
Autonomous cars in their current form do make people safer, though, and will continue making people safer as they advance. Are they 100% foolproof? Not by a long shot, but in their current form they will step in when your mind wanders, when you look at the radio, when something shiny passes you, or even when your passenger decides to talk to you. All of those things distract you from the road, if only for a brief moment. It's something we all do.

Too many people seem to get hung up on the idea that an autonomous car should do everything for you. That's not even in the realm of possibilities yet, at least not without several thousand pounds of sensor equipment and completely ideal conditions. Even the most advanced autonomous cars on the market today can only do so much, and every manufacturer clearly states that you need to be behind the wheel and paying attention. You can just pay less attention than someone in a manually driven car.

===

An interesting aside though. Some of the Volvo engineers in a Facebook group I'm a part of are almost certain Uber disabled the OEM system on the XC90. According to them, in those conditions, it should've at least attempted an emergency stop. Whether it's true or not, I can't say, but I'm inclined to believe them based on how my five-year-old car performs around pedestrians who don't bother to look when crossing the street.
 
Too many people seem to get hung up on the idea that an autonomous car should do everything for you. That's not even in the realm of possibilities yet...
That's my point. :odd: They're nowhere near ready for that, and this accident is proof of it.
 
That's my point. :odd: They're nowhere near ready for that, and this accident is proof of it.

Which is why, in the paragraph before that, I said their purpose currently is to step in when the driver zones out, even momentarily. But what I'm gathering from your posts is that you think an autonomous car should be as good as a good driver. I'm saying that's not the point of the technology. It's not meant as a replacement, at least not yet, so wanting it to do that is a moot point. It's merely a safety feature that works surprisingly well.

Think about it: how many people have died due to an autonomous car? One. How many people die daily due to manually operating a car? Probably thousands. In Utah alone, we average two deaths a day on the roads, and that's a state with a very small population. Go to a state that's more built up with a larger population and chances are that number soars.

And what's one of the biggest contributors to motor vehicle deaths? Besides the influence of drugs or alcohol, it's not paying attention. If more people had autonomous cars, the technology behind them would greatly reduce the number of collisions caused by not paying attention. And even if a collision did occur, chances are the vehicle would be moving slower due to an attempt to brake as fast as it could, thus resulting in a greater chance of survival.
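To put a rough number on that last point, here's a quick kinematics sketch (the 50 km/h speed and 7 m/s² deceleration are assumed, illustrative figures, not data from any real incident): even if the car can only brake over part of the stopping distance, the impact speed drops sharply.

```python
# Rough sketch (all figures assumed, not from the incident): how much a
# partial brake reduces the speed at which the car reaches an obstacle.

import math

V0_KMH = 50.0    # assumed initial speed
DECEL = 7.0      # assumed hard-braking deceleration, m/s^2

def impact_speed_kmh(braking_distance_m: float) -> float:
    """Remaining speed at the obstacle, from v^2 = v0^2 - 2*a*d (floored at 0)."""
    v0 = V0_KMH / 3.6
    v_squared = max(v0 ** 2 - 2 * DECEL * braking_distance_m, 0.0)
    return math.sqrt(v_squared) * 3.6

for d in (0, 5, 10, 13):
    print(f"braking over the last {d:2d} m -> impact at {impact_speed_kmh(d):4.1f} km/h")

# Output:
# braking over the last  0 m -> impact at 50.0 km/h
# braking over the last  5 m -> impact at 39.9 km/h
# braking over the last 10 m -> impact at 26.2 km/h
# braking over the last 13 m -> impact at 11.9 km/h
```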
 
...what I'm gathering from your posts is that you think an autonomous car should be as good as a good driver. I'm saying that's not the point of the technology. It's not meant as a replacement, at least not yet, so wanting it to do that is a moot point.
The cars with driver intervention systems you're talking about are not what I call autonomous cars. My definition of an autonomous car is one that's meant to be able to take full control. As I understand it, that's the goal now, and what Uber is developing.

I agree with you that they're perceived to be way more capable than they are at this point. I think that includes some of the people working on them, like within Uber.
 
I don't want to watch people relinquish their self-responsibility to technology if it does not make them safer.

Not your call to make.

If you think a company is being reckless with human lives, that's your beef with the company. The people who buy their products are buying the promise. If the product is too dangerous, it should not be on the market. That will be sorted out quickly by the extreme cost of killing customers and innocent people.

In a world of autonomous vehicles, avoiding being at fault is not enough though. A big part of marketing is being able to claim that the user is safe, full stop. Being at fault causes liability for the company, and will be expensive and drive off customers who really don't want to be in an accident. But beyond that, an AV needs to keep out of accidents that aren't even its fault because people just outright don't want to be killed, maimed, or inconvenienced by a crash. If your goal is to stay out of a crash, or not be killed by a nearby motorist, you really really want the market to take care of this for you, the way it has been indirectly for quite some time, and is about to quite directly.
 
Just saw this thread and I didn't read all the posts.

To answer the thread question, I'd say that if the car is mine, i.e. I paid for it, I don't want it to make moral decisions on my behalf that could potentially put my life in danger.

I think it's a non starter for autonomous vehicles if their owners will be "sacrificed" for "the greater good".

I just don't like that. I'd prefer to go on a bike or drive a broken conventional car tbh.

For public transportation, I think there's a better case to be made for autonomous vehicles.

Edit: just to add to my last point. Public transportation is already contingent on moral decisions being made for me, so I think an AV wouldn't be such a problem (would probably be the better option).
 
The thing with the Tesla crash is that the car repeatedly told the driver to put his hands on the wheel, both audibly and visually, and he ignored it. I'm not even sure why the NTSB is investigating it in the first place. All they need to do is look at the data from Tesla, confirm that there were several warnings that were ignored, and be done with it.
 
Just saw this thread and I didn't read all the posts.

To answer the thread question, I'd say that if the car is mine, i.e. I paid for it, I don't want it to make moral decisions on my behalf that could potentially put my life in danger.

I think it's a non starter for autonomous vehicles if their owners will be "sacrificed" for "the greater good".

I just don't like that. I'd prefer to go on a bike or drive a broken conventional car tbh.

For public transportation, I think there's a better case to be made for autonomous vehicles.

Edit: just to add to my last point. Public transportation is already contingent on moral decisions being made for me, so I think an AV wouldn't be such a problem (would probably be the better option).

I think the real question is not whether you'd be willing to buy a car that would sacrifice you for the greater good, but whether you'd voluntarily buy a car that required you to agree to an additional 0.01% risk of being in an accident if the increased risk were determined to significantly boost the safety of the surrounding public.
 
The more I look at that video of the car slamming into the woman with the bike the more surreal it feels.
The car doesn't budge a millimetre left or right. For several seconds it heads right at her.

I have to stop myself twitching, several times, between seeing the woman and actually hitting her.
Guess it's all this GT muscle-memory.
 
I don't feel car/bike accident avoidance is an issue of utilitarian ethics; accidents and situations happen at too rapid a rate for some kind of experimental philosophical idea to be applied. Computers, on the other hand, are likely to simply avoid, evade, or cut the engine in cases where the vehicle is being used as a weapon, like in terrorist or angry-attacker situations. Braking, engine cutting, etc.

Choosing to kill one group to save another, that's a purely human construct imo.
 