Bad First Day: Navya Self-Driving Shuttle Ends Up Involved in Accident With a Semi

Driverless vehicles are coming as we know. And somebody pointed out…that they will have to make, from time to time, ethical decisions. You’re heading towards an accident; it’s going to be fatal. The only solution is to swerve onto the pavement. But there are two pedestrians there. What does the vehicle do? Basically you will have bought a vehicle that must be programmed in certain situations to kill you. And you’ll just have to sit there…and there’s nothing you can do. These driverless vehicles, everybody goes ‘oh aren’t they clever, they can stop at red lights’. They are going to have to face all sorts of things like who do I kill now. [Humans] are programmed to look after ourselves and these driverless vehicles are going to be programmed to do the maths, and say, lots of people over there, I’m going to kill you.
 
Driverless vehicles are coming as we know. And somebody pointed out…that they will have to make, from time to time, ethical decisions. You’re heading towards an accident; it’s going to be fatal. The only solution is to swerve onto the pavement. But there are two pedestrians there. What does the vehicle do? Basically you will have bought a vehicle that must be programmed in certain situations to kill you. And you’ll just have to sit there…and there’s nothing you can do. These driverless vehicles, everybody goes ‘oh aren’t they clever, they can stop at red lights’. They are going to have to face all sorts of things like who do I kill now. [Humans] are programmed to look after ourselves and these driverless vehicles are going to be programmed to do the maths, and say, lots of people over there, I’m going to kill you.
No, it doesn't.
 
Driverless vehicles are coming as we know. And somebody pointed out…that they will have to make, from time to time, ethical decisions. You’re heading towards an accident; it’s going to be fatal. The only solution is to swerve onto the pavement. But there are two pedestrians there. What does the vehicle do? Basically you will have bought a vehicle that must be programmed in certain situations to kill you. And you’ll just have to sit there…and there’s nothing you can do. These driverless vehicles, everybody goes ‘oh aren’t they clever, they can stop at red lights’. They are going to have to face all sorts of things like who do I kill now. [Humans] are programmed to look after ourselves and these driverless vehicles are going to be programmed to do the maths, and say, lots of people over there, I’m going to kill you.
People get drunk, which degrades their ability to look after anything, and still drive. They also get in cars and go insane on occasion.

Driverless cars are fine.

Driverless vehicles should always consider saving their occupants as their main priority in the event of a crash.
That potentially makes them dangerous to everyone outside the vehicle. They should just avoid actively putting anyone in danger.
 
And that's if the car even gets into a situation where such an accident is likely.

I like this anti-AV theory that somewhere in the software of an autonomous vehicle will be an algorithm for whether to clatter through a triangle of cheerleaders or whether to plunge you off the edge of the Grand Canyon, when the software is actually designed to stop the car from getting into a situation where such a decision is required in the first place.
 
And that's if the car even gets into a situation where such an accident is likely.

I like this anti-AV theory that somewhere in the software of an autonomous vehicle will be an algorithm for whether to clatter through a triangle of cheerleaders or whether to plunge you off the edge of the Grand Canyon, when the software is actually designed to stop the car from getting into a situation where such a decision is required in the first place.

I agree. I find these "ethics" questions to be kind of a false binary. It's not even an ethical question in my opinion; it's a calibration question. The 'question' is not between 'careen into the concrete pole' and 'murder innocent pedestrians' but more about how not to do either of those things... I have difficulty imagining a real scenario when there isn't a third option with less dismemberment, and a computer with gigs of RAM should be able to find that third option rather more quickly than our lizard brains. I imagine in most cases the car will just be instructed to brake 100%, as swerving is seldom advisable in any case. If that's not enough, then it's likely it was unavoidable.
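
For what it's worth, that "calibration" framing can be made concrete. Here's a minimal sketch, assuming a purely hypothetical planner that scores each candidate manoeuvre by expected harm and penalises options whose outcomes it can't predict reliably; with numbers invented for illustration, flat-out braking tends to win, exactly as described above:

```python
# Toy sketch of a "brake-first" emergency policy, as described above.
# All names, classes, and numbers are hypothetical illustrations, not
# taken from any real autonomous-vehicle stack.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_harm: float  # 0.0 = no predicted harm, 1.0 = certain severe harm
    confidence: float     # 0.0..1.0, how reliably the outcome can be predicted

def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
    # Penalise uncertainty: a low-confidence "hero swerve" is treated as
    # riskier than its raw harm estimate alone would suggest.
    def risk(m: Maneuver) -> float:
        return m.expected_harm + (1.0 - m.confidence)
    return min(candidates, key=risk)

options = [
    Maneuver("full_brake",   expected_harm=0.2, confidence=0.95),
    Maneuver("swerve_left",  expected_harm=0.1, confidence=0.40),
    Maneuver("swerve_right", expected_harm=0.5, confidence=0.60),
]
print(choose_maneuver(options).name)  # -> full_brake
```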

That all being said, I think the real problem (possibly intractable) is making autonomous cars work in situations they were not designed to encounter. Do autonomous cars have any sort of intelligence? Or do they just follow pre-programmed commands based on sensor input? I see this as the biggest obstacle for autonomous cars going faster than 20 mph.
 
I have difficulty imagining a real scenario when there isn't a third option with less dismemberment, and a computer with gigs of RAM should be able to find that third option rather more quickly than our lizard brains. I imagine in most cases the car will just be instructed to brake 100%, as swerving is seldom advisable in any case. If that's not enough, then it's likely it was unavoidable.

Thing is, a human could do some crazy steering stuff and achieve ‘the third option’, but the computer couldn't because the numbers wouldn't make sense. Extreme manoeuvres are not logical to a computer because they can't be computed with any certainty. So where a human swerves and somehow gets lucky, the computer wouldn't even attempt that move because there's no such thing as ‘luck’. So as a result you end up in the pole.
 
Thing is, a human could do some crazy steering stuff and achieve ‘the third option’, but the computer couldn't because the numbers wouldn't make sense. Extreme manoeuvres are not logical to a computer because they can't be computed with any certainty. So where a human swerves and somehow gets lucky, the computer wouldn't even attempt that move because there's no such thing as ‘luck’. So as a result you end up in the pole.
What is this based on?

If you are just talking chance and the computer ignores the unlikely options, that is good. You'd have 1/10 piloted cars that escape a disaster by doing something that doesn't make sense vs 9/10 AI cars that make it out safely because they don't choose the statistically unfavorable option. Some self-driving cars will crash, but that's not even remotely scary when human-driven cars crash a lot.

Practically though, I don't think I can agree with you at all, at least in the long run. AI is already faster at calculating than people and it will continue to get faster. It will also get smarter and possibly begin to learn on its own to account for the most unlikely situations.
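
The 1-in-10 versus 9-in-10 point is just expected-value arithmetic, and it's easy to check. A quick sketch using the post's hypothetical odds (not real crash statistics):

```python
# Expected outcomes under the two strategies described above, using the
# post's hypothetical odds (1/10 vs 9/10), not real-world crash data.
incidents = 100_000

p_human_lucky_escape = 0.1  # "1/10 piloted cars escape by doing something illogical"
p_ai_safe_escape = 0.9      # "9/10 AI cars escape via the statistically favorable option"

print(f"human drivers: {incidents * p_human_lucky_escape:,.0f} of {incidents:,} escape")
print(f"AI drivers:    {incidents * p_ai_safe_escape:,.0f} of {incidents:,} escape")
# Even granting the occasional miracle swerve, the conservative strategy
# comes out ahead 9:1 across the whole population of incidents.
```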
 
What is this based on?

If you are just talking chance and the computer ignores the unlikely options, that is good. You'd have 1/10 piloted cars that escape a disaster by doing something that doesn't make sense vs 9/10 AI cars that make it out safely because they don't choose the statistically unfavorable option. Some self-driving cars will crash, but that's not even remotely scary when human-driven cars crash a lot.

Practically though, I don't think I can agree with you at all, at least in the long run. AI is already faster at calculating than people and it will continue to get faster. It will also get smarter and possibly begin to learn on its own to account for the most unlikely situations.

I'm not doubting the competence of a machine to get you out of most accidents, but Eunos_Cosmo was talking about no-win scenarios for the AI where it's a choice, as he put it, between the pole and killing a pedestrian. In cases like this the only realistic third outcome is achieved by a completely illogical set of manoeuvres which machines do not recognise as a rational option.

You only have to look at those dash cam car crash TV programmes to see how people get out of seemingly unwinnable accidents by pure luck. The AI would likely not have avoided such accidents and there would have been a crash. It may calculate faster, but it doesn't do nonsense, and nonsense has saved a human's skin quite a few times throughout history.
 
In an ideal world AI should be like that but it never will be. Think this first trip is an example of how AI can't react as it should. Not sure how it will get better with so much going on around us.

What is this based on?

If you are just talking chance and the computer ignores the unlikely options, that is good. You'd have 1/10 piloted cars that escape a disaster by doing something that doesn't make sense vs 9/10 AI cars that make it out safely because they don't choose the statistically unfavorable option. Some self-driving cars will crash, but that's not even remotely scary when human-driven cars crash a lot.

Practically though, I don't think I can agree with you at all, at least in the long run. AI is already faster at calculating than people and it will continue to get faster. It will also get smarter and possibly begin to learn on its own to account for the most unlikely situations.
 
Thing is, a human could do some crazy steering stuff and achieve ‘the third option’, but the computer couldn't because the numbers wouldn't make sense. Extreme manoeuvres are not logical to a computer because they can't be computed with any certainty. So where a human swerves and somehow gets lucky, the computer wouldn't even attempt that move because there's no such thing as ‘luck’. So as a result you end up in the pole.
Read back what you've just written, but very slowly, thinking about each point.

You're talking about a device that can calculate thousands of different permutations for things every second and suggesting it's more likely to come a cropper than a human who got themselves into a bad situation and is basically fluking their way out of it on sheer cosmic chance.

You should probably also understand that those dashcam TV programs don't show the situations where blind luck doesn't work and the driver gets decapitated by some roadside furniture or hits a pedestrian at 60 mph... those ones don't make for such good TV.
 
Driverless vehicles should always consider saving their occupants as their main priority in the event of a crash.
Certainly, let's steer the car into a crowd of people just to save one or two passengers.
 
Certainly, let's steer the car into a crowd of people just to save one or two passengers.

That seems a fundamentally bad idea; your logic is flawed.

Think this first trip is an example of how AI can't react as it should.

Not really - it reacted exactly as it should. It saw the danger and stopped. Compare that to a human driver (like the one in the truck) that continued moving and eventually hit the AI vehicle. Hmmm.
 
In an ideal world AI should be like that but it never will be. Think this first trip is an example of how AI can't react as it should. Not sure how it will get better with so much going on around us.

The AI vehicle was stopped, the truck hit it. If you hit a stopped vehicle because you weren't paying attention it's entirely your fault.
 
I don't think this shuttle was at fault based on the account of what happened, but I would like to see video if it exists.

That all being said, I think the real problem (possibly intractable) is making autonomous cars work in situations they were not designed to encounter. Do autonomous cars have any sort of intelligence? Or do they just follow pre-programmed commands based on sensor input? I see this as the biggest obstacle for autonomous cars going faster than 20 mph.
This is why I don't think it matters how quickly an autonomous car can process or react to something, or how much machine-learning you throw at the problem. The computer doesn't actually comprehend anything, and it relies on sensors that are too fallible. It can scan for objects, but could it ever infer what drivers or pedestrians are thinking or where they're looking? It can track painted lines, but how reliable could it ever be on washed-out gravel roads or a blanket of snow?

Driving involves a lot more than just reacting quickly and confidently to something. It draws on millennia of evolved and instinctual awareness and comprehension. Computers are figuratively brain-dead, and nowadays we spend all our lives dealing with their screw-ups. The more sophisticated they get, the more trouble they create. It's baffling to me how it could be any different in giving one control over a car.
 
AJHG1000:
Driverless vehicles should always consider saving their occupants as their main priority in the event of a crash.

Certainly, let's steer the car into a crowd of people just to save one or two passengers.

You're completely misunderstanding my comment.

Seems clear to me - you're suggesting that the car would be programmed to take a greater amount of life to save the passengers no matter what. I think it's you who misunderstood the comment that you were initially responding to.

Of course @AJHG1000 is right: driverless vehicles (or any safety protocol charged with human life) should always consider saving their charges. What sort of system wouldn't consider that? You seem to have prejudged every case in which such a consideration is made as preferring the lives of the passengers; one might say that you've actually removed the consideration aspect and instead presumed a default outcome. That was logically incorrect, as I've already pointed out.

The fallacy around driverless vehicles is that their operation is a zero-death undertaking. Such an outcome is a veeeery long way away. A much closer outcome is a huge reduction in the number of deaths through the elimination of human error.
 
Of course @AJHG1000 is right: driverless vehicles (or any safety protocol charged with human life) should always consider saving their charges. What sort of system wouldn't consider that?
It's about priorities.
You seem to have prejudged every case in which such a consideration is made as preferring the lives of the passengers; one might say that you've actually removed the consideration aspect and instead presumed a default outcome. That was logically incorrect, as I've already pointed out.
He said "as their main priority". If it calculated that the ONLY option would be to steer towards an open place with people on it, then it would do so, because it would follow the highest priority in such a case.
 
He said "as their main priority". If it calculated that the ONLY option would be to steer towards an open place with people on it, then it would do so, because it would follow the highest priority in such a case.

Whichever of the verbs you fasten "consider" to, you still have the word "consider". It's a consideration, being considered.
 
This is why I don't think it matters how quickly an autonomous car can process or react to something, or how much machine-learning you throw at the problem. The computer doesn't actually comprehend anything, and it relies on sensors that are too fallible. It can scan for objects, but could it ever infer what drivers or pedestrians are thinking or where they're looking? It can track painted lines, but how reliable could it ever be on washed-out gravel roads or a blanket of snow?

Is that really all that different from a human driver though? Humans have fallible sensors for sure. I'll give you that we're better at understanding each other than AI for now, but far from perfect at it. Painted lanes don't necessarily help humans anyway. I know that I've seen a fair number of drivers ignore lanes, not to mention lights, stop signs, and turn signals.

There might be a few areas where humans have advantages, but the bottom line is the accident rate. You can't know ahead of time when you're going to end up in a bad situation. If you could know, you would just avoid the problem. The only rational way to lower your chance of harm is to take the statistically favorable option. Eliminating human error and replacing it with less likely machine error could achieve that.

Driving involves a lot more than just reacting quickly and confidently to something. It draws on millennia of evolved and instinctual awareness and comprehension. Computers are figuratively brain-dead, and nowadays we spend all our lives dealing with their screw-ups. The more sophisticated they get, the more trouble they create. It's baffling to me how it could be any different in giving one control over a car.

Technically we didn't evolve to drive. We're going much faster than evolution shaped us for and we're not in direct control of anything (we operate the car through the wheel and pedals, etc). The machine in this article is doing a fair job despite not being around nearly as long as we have.

Computers have also bested us at complex tasks despite being "braindead". Chess would be one classic example. You have to remember that we're machines too. Our brains run calculations and are not too far off from an AI's processor conceptually. If intelligence is just an emergent property then it won't be exclusive to humans forever.
 
Is that really all that different from a human driver though? Humans have fallible sensors for sure. I'll give you that we're better at understanding each other than AI for now, but far from perfect at it. Painted lanes don't necessarily help humans anyway. I know that I've seen a fair number of drivers ignore lanes, not to mention lights, stop signs, and turn signals.

There might be a few areas where humans have advantages, but the bottom line is the accident rate. You can't know ahead of time when you're going to end up in a bad situation. If you could know, you would just avoid the problem. The only rational way to lower your chance of harm is to take the statistically favorable option. Eliminating human error and replacing it with less likely machine error could achieve that.
Personally, I'm not swayed by this. I'd rather be hit by an inattentive driver than a computer that failed to react despite "looking" right at me. Similarly, I'd rather accept responsibility for my own driving over the possibility of being killed or injured in an autonomous car that could not manage a situation I could have managed myself. I can't get over the absurdity and tragedy of such a prospect, however remote the possibility may be. Negligence is human. A machine is oblivious.

Lots of people are terrible drivers, but I would sooner advocate for another solution like investing in mass transportation or an overhaul of drivers' education.

Technically we didn't evolve to drive. We're going much faster than evolution shaped us for and we're not in direct control of anything (we operate the car through the wheel and pedals, etc). The machine in this article is doing a fair job despite not being around nearly as long as we have.

Computers have also bested us at complex tasks despite being "braindead". Chess would be one classic example.
We didn't evolve to drive, but we evolved to perceive. A computer will never perceive anything; it can only "see" what it is programmed to interpret from its sensors.

Computers are good at singular tasks and operating in controlled environments; chess is one example, autopilot in an airliner is another. The way a computer plays chess isn't really what I would call complex, because it just brute-force processes all permutations of a game and chooses the winning series of moves. Similarly, autopilot just manipulates the plane's control surfaces to maintain specified instrument readings. Something relatively taxing for us, but mundane for a computer.

Driving is very chaotic and multifaceted by comparison, unless you restrict it to a limited, controlled environment (like low-speed urban shuttling).
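
The autopilot half of that comparison is worth making concrete, since "maintain specified instrument readings" really is just a closed control loop. A bare-bones sketch of a generic PID altitude hold; the gains and the one-line aircraft model are invented for the sketch and bear no relation to real avionics:

```python
# Bare-bones PID loop illustrating "manipulate the control surfaces to
# maintain specified instrument readings". Gains and the toy aircraft
# response are invented for illustration; real avionics are far more involved.
def pid_step(target: float, measured: float, state: dict,
             kp: float = 0.5, ki: float = 0.02, kd: float = 0.1,
             dt: float = 1.0) -> float:
    error = target - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
altitude = 9_500.0
for _ in range(300):
    climb_command = pid_step(10_000.0, altitude, state)  # commanded climb, ft per step
    altitude += climb_command                            # toy response: climb as commanded
print(round(altitude, 1))  # settles at ~10000.0 ft
```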

You have to remember that we're machines too. Our brains run calculations and are not too far off from an AI's processor conceptually. If intelligence is just an emergent property then it won't be exclusive to humans forever.
We're chemical beings, not mechanical. I don't think we're that relatable and I don't believe intelligence will ever emerge from computers or AI as they're currently designed. But that's getting into a whole other topic, though it may help explain where I'm coming from. :)
 
As the guy who built the engine-brakes-transmission on that thing, I'm really happy that, at least, the emergency brake worked well!
 
I work for the Italian company that designed and built the entire transmission, brakes and e-brake included. Navya is one of our customers; we sell our products literally all over the world.
 
Personally, I'm not swayed by this. I'd rather be hit by an inattentive driver than a computer that failed to react despite "looking" right at me. Similarly, I'd rather accept responsibility for my own driving over the possibility of being killed or injured in an autonomous car that could not manage a situation I could have managed myself. I can't get over the absurdity and tragedy of such a prospect, however remote the possibility may be. Negligence is human. A machine is oblivious.

Lots of people are terrible drivers, but I would sooner advocate for another solution like investing in mass transportation or an overhaul of drivers' education.


We didn't evolve to drive, but we evolved to perceive. A computer will never perceive anything; it can only "see" what it is programmed to interpret from its sensors.

Computers are good at singular tasks and operating in controlled environments; chess is one example, autopilot in an airliner is another. The way a computer plays chess isn't really what I would call complex, because it just brute-force processes all permutations of a game and chooses the winning series of moves. Similarly, autopilot just manipulates the plane's control surfaces to maintain specified instrument readings. Something relatively taxing for us, but mundane for a computer.

Driving is very chaotic and multifaceted by comparison, unless you restrict it to a limited, controlled environment (like low-speed urban shuttling).


We're chemical beings, not mechanical. I don't think we're that relatable and I don't believe intelligence will ever emerge from computers or AI as they're currently designed. But that's getting into a whole other topic, though it may help explain where I'm coming from. :)

I agree.

For instance, if I'm driving down a 2-lane road and I see a car up ahead (say, 10 seconds ahead at the speed I'm traveling) at a T-junction, I can anticipate possible movements of that car even if it hasn't started moving. An autonomous car will likely continue barreling along, only reacting when that vehicle moves, which could potentially be too late. Furthermore, if I see a "student driver" sign on the car, or if it's a lifted F250 on 40" tires, or if it's a mid-'90s Buick with an octogenarian behind the wheel, I may drive more cautiously. Perhaps after years of machine learning (according to PhD candidates I know who study machine learning, this application is not really suitable for it) an AI could start to anticipate in a similar manner, but only after truly staggering amounts of input data. Maybe they should just collect Russian dash cam footage and feed it into the neural net... :lol:
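
Some of that anticipation can at least be faked with crude rules long before anything resembling comprehension exists. A hypothetical sketch, with cue names and caution factors invented for illustration, of backing off for unpredictable road users the way the post describes:

```python
# Hypothetical "defensive caution" heuristic: scale speed down based on cues
# about nearby road users. Cue names and factors are invented for this
# sketch; real prediction stacks model trajectories, not one-word labels.
CAUTION_FACTORS = {
    "vehicle_waiting_at_junction": 0.85,  # might pull out at any moment
    "student_driver_sign": 0.80,
    "erratic_lane_keeping": 0.70,
}

def adjusted_speed(base_speed_mph: float, observed_cues: list[str]) -> float:
    speed = base_speed_mph
    for cue in observed_cues:
        speed *= CAUTION_FACTORS.get(cue, 1.0)  # unknown cues leave speed unchanged
    return speed

cues = ["vehicle_waiting_at_junction", "erratic_lane_keeping"]
print(f"{adjusted_speed(60.0, cues):.1f}")
# -> 35.7 (mph): back off well before the other car actually moves
```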
 