Autonomous Cars General Discussion

Been busy so I haven't posted much in this thread. I've had a bunch of tabs open for months now with things I wanted to post here:


https://www.theverge.com/2020/10/30...driving-car-data-miles-crashes-phoenix-google

This Verge article was published around the time Tesla updated its Autopilot beta program last year. At the time, a lot of people were very unhappy that Tesla was publicly beta testing the software on public roads.



The article details two papers Waymo published at the time. The first discusses safety and the second reports data from their testing in Phoenix: "this is the first time that Waymo has ever publicly disclosed mileage and crash data from its autonomous vehicle testing operation in Phoenix."

First, Waymo's safety approach uses three layers:
  • Hardware, including the vehicle itself, the sensor suite, the steering and braking system, and the computing platform;
  • The automated driving system behavioral layer, such as avoiding collisions with other cars, successfully completing fully autonomous rides, and adhering to the rules of the road;
  • Operations, like fleet operations, risk management, and a field safety program to resolve potential safety issues.

Second, Waymo says that between January and December 2019, their vehicles drove 6.1 million miles, and from January 2019 to September 2020, their fully driverless vehicles drove 65,000 miles. During this period, they had 47 "contact events," 29 of which occurred in simulations. None of them resulted in severe injuries, and most were the fault of a human driver or pedestrian.



In this article, the CEO and co-founder of Voyage describes how their system makes decisions.

https://news.voyage.auto/teaching-a-self-driving-a-i-to-make-human-like-decisions-a9a9597dd156

High-Quality Decision Making is fueled by two models, one optimization-based (i.e., reliable) and one machine-learned (i.e., intelligent), with each serving different responsibilities. The optimization-based model is responsible for ensuring our vehicle always adheres to the rules of the road (e.g., preventing the running of stop-lines, or getting too close to pedestrians), while the machine-learned model—trained on rich, historical driving data—is responsible for tapping into its vast history of experience to select the most human-like decision to make from a refined list of safe options.
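To make that concrete, here's a rough sketch (entirely my own, not Voyage's actual code) of the pattern the article describes: a rule-based layer filters out any maneuver that breaks a hard constraint, and a learned model picks the most human-like option from whatever survives. All the names, thresholds, and the dummy scorer are made up for illustration.

```python
# Hypothetical sketch of the two-model pattern described above.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                   # e.g. "yield", "proceed", "nudge_left"
    crosses_stop_line: bool
    min_pedestrian_gap_m: float

def satisfies_rules(c: Candidate) -> bool:
    """Optimization/rule layer: hard constraints that may never be violated."""
    return (not c.crosses_stop_line) and c.min_pedestrian_gap_m >= 2.0

def learned_score(c: Candidate) -> float:
    """Stand-in for a model trained on historical driving data."""
    # In reality this would be a learned ranking of human-likeness;
    # here it's a dummy lookup just to make the sketch runnable.
    return {"yield": 0.6, "proceed": 0.9, "nudge_left": 0.4}.get(c.name, 0.0)

def decide(candidates: list[Candidate]) -> Candidate:
    safe = [c for c in candidates if satisfies_rules(c)]   # the "reliable" half
    return max(safe, key=learned_score)                    # the "intelligent" half

candidates = [
    Candidate("proceed", crosses_stop_line=True, min_pedestrian_gap_m=5.0),
    Candidate("yield", crosses_stop_line=False, min_pedestrian_gap_m=4.0),
]
print(decide(candidates).name)  # -> "yield": "proceed" was filtered out by the rules
```

The point of the split is that the learned model can only ever rank options the rule layer has already certified as legal and safe.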





Finally, here's a video of CommaAI's OpenPilot driving a truck in American Truck Sim!

 
I find it hard to draw any meaningful conclusion from those clips other than that the driver of the BMW was pretty reckless. I do think that braking is almost always the safer course of action versus swerving, and I wonder why Autopilot's response was to swerve. If I were to speculate, I'd guess it has to do with the degrees of extrapolation required to achieve the desired result. If something is about to hit you, the action with the least logical extrapolation is to get out of the way, i.e. to swerve. Braking isn't that simple because it doesn't immediately respond to the problem: it requires you to extrapolate that reducing your speed while the other vehicle maintains its trajectory will create a gap for that other vehicle to safely change into the lane. It's a degree further removed. In human terms that is intuition and experience. For machine learning... I don't know if it would ever make it into the decision tree, because the swerving response is probably "good enough". The "dumb" response to the BMW lane intrusion is "there's an object about to hit me, I can steer away from it". It's the response that carries the fewest assumptions and the one in which Autopilot has the most control over the situation.

I found this video very helpful for understanding machine learning, particularly with regards to autonomous cars:


"Note that the AI doesn't know it's driving cars, it's not aware of the track or the effect of it's choices"
 
I think it definitely could get there.

In other news:

Autonomy Level 3 has arrived
https://www.thedrive.com/news/39609...production-car-with-level-3-self-driving-tech

Really curious to see how this works out. How can somebody watching a movie possibly be ready to take control of a car in an emergency?

I also think it's amusing that the most advanced driver assistance technology is debuting in a car that hasn't had an interior update in 7 years. :lol:


Though, to be fair, Tesla hasn't substantially updated the Model S interior, ever, and it's even older than the RLX/Legend.
 
How can somebody watching a movie possibly be ready to take control of a car in an emergency?

The article was saying that the car is capable of emergency action, putting itself in a safe place, and annoying the hell out of the driver until the driver takes over.
 

Self-driving cars rely on video cameras, radar sensors, lidar sensors, GPS antennas, and other tools to read street signs and continuously map their surroundings. To perform as well as a human driver, the cars must quickly process and respond to a constant stream of ever-changing information. A lost dog, the sudden onset of a rain shower, or a broken traffic light can all throw them off. To prepare for these and millions of other possibilities, the complex software and algorithms powering self-driving cars need immense amounts of highly accurate data — and an army of humans to feed it to them.

As the race to develop autonomous vehicles heated up, suddenly companies found themselves hungry for workers who could build training datasets, which typically contain hundreds of thousands of images and videos that self-driving cars captured during test drives. The workers are tasked with labeling what is depicted in them, so that a machine-learning algorithm can slowly learn to differentiate a tree from a stop sign. To complete all this tedious work, many companies turned to the existing global crowdsourcing industry, which allows people to make money online doing piecemeal tasks, like evaluating restaurant reviews or answering survey questions.

“I would argue that the influx of the money from the car industry has actually substantially changed the crowdsourcing industry,” said Florian Alexander Schmidt, a professor at the Dresden University of Applied Sciences in Germany, who has studied the microtasking industry and autonomous vehicle training. Previously, companies primarily provided access to large pools of workers, who could answer surveys in bulk or complete lots of work quickly and cheaply. The problem was that the results weren’t necessarily very accurate. “A lot of [the data] was trash,” Schmidt explained. “That is not acceptable in the self-driving vehicle field.”

My friend who's into machine learning told me about this a while ago. All big ML and AI tech is built on training data produced by outsourced labor. It's a big problem, because these people often make mistakes and then you're training a huge algorithm on incorrect data. But it's a job that has to be done, as there's no other way to get training data.
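For what it's worth, one common way of dealing with noisy crowdsourced labels (a generic technique, not something specific to any vendor mentioned in the article) is to have several workers label the same thing and only keep answers where enough of them agree:

```python
# Hedged sketch: keep a crowdsourced label only if a majority of workers agree.
from collections import Counter

def consensus_label(worker_labels: list[str], min_agreement: float = 0.6) -> str | None:
    """Return the majority class if enough workers agree, else flag for re-labeling."""
    if not worker_labels:
        return None
    label, count = Counter(worker_labels).most_common(1)[0]
    return label if count / len(worker_labels) >= min_agreement else None

print(consensus_label(["stop_sign", "stop_sign", "tree"]))   # stop_sign
print(consensus_label(["stop_sign", "tree", "pedestrian"]))  # None -> send back for review
```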

Here's an interesting paragraph from the article:

Over time, Timm Ndirangu Gachanja, a former CloudFactory employee in Nairobi who now works at Remotasks, said he noticed the things he and his colleagues were being asked to identify had changed. “You find that they are introducing other, new labels,” he said. “For example, if it’s drizzling, all the cameras are so strong that they can capture the tiniest water drop in the atmosphere.” In a category called “atmospherics,” workers may be asked to label each individual drop of water so the cars don’t mistake them for obstacles.
 
The NHTSA released two reports on ADAS (Level 2) and ADS (Level 3 and above) accidents last week



For ADAS, the report states that Tesla has way more accidents than any other manufacturer. However, these are absolute numbers and are not normalized for number of vehicles, miles driven, etc.
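Just to illustrate what's missing: a fair comparison would need exposure data to turn those counts into a rate, e.g. crashes per million ADAS miles. The numbers below are pure placeholders to show the arithmetic; the actual mileage figures aren't in the NHTSA release.

```python
# Placeholder-only sketch of the normalization the report doesn't do.
def crashes_per_million_miles(crashes: int, adas_miles: float) -> float:
    return crashes / (adas_miles / 1_000_000)

# Hypothetical example: a fleet with many more crashes can still have a
# lower rate if it drives vastly more ADAS miles.
print(crashes_per_million_miles(crashes=273, adas_miles=3_000_000_000))  # ~0.09
print(crashes_per_million_miles(crashes=10,  adas_miles=50_000_000))     # 0.2
```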


Although, interestingly, in terms of crash locations, Tesla vehicles are way more likely to hit things compared to other manufacturers. So much so that it completely skews the data for ADAS accidents:






When looking at the ADS report, Google's Waymo has had the most accidents.



Additionally, it is noted that ADS accidents are more likely to be rear-end collisions compared to ADAS accidents, although that comparison includes Tesla's outlying data.

 
For ADAS, the report states that Tesla has way more accidents than any other manufacturer. However, these are absolute numbers and are not normalized for number of vehicles, miles driven, etc.

Although, interestingly, in terms of crash locations, Tesla vehicles are way more likely to hit things compared to other manufacturers. So much so that it completely skews the data for ADAS accidents.


Tesla detects vehicles approaching from behind and accelerates away when it's about to be rear-ended, so it's not surprising that almost all of their accidents are front-end collisions.

Note that this does not mean that Tesla is more likely to hit things compared to other manufacturers, because the data is in absolute numbers. All it means is that when a Tesla is involved in an accident, it’s very likely to be a front collision.
 
Ford and VW are shutting down Argo AI, an autonomous vehicle company that they both funded


Argo AI, an autonomous vehicle startup that burst on the scene in 2017 stacked with a $1 billion investment, is shutting down — its parts being absorbed into its two main backers: Ford and VW, according to people familiar with the matter.

During an all-hands meeting Wednesday, Argo AI employees were told that some people would receive offers from the two automakers, according to multiple sources who asked to not be named. It was unclear how many would be hired into Ford or VW and which companies will get Argo’s technology.

“In coordination with our shareholders, the decision has been made that Argo AI will not continue on its mission as a company. Many of the employees will receive an opportunity to continue work on automated driving technology with either Ford or Volkswagen, while employment for others will unfortunately come to an end,” Argo said in a statement.

Ford said in its third-quarter earnings report released Wednesday that it made a strategic decision to shift its resources to developing advanced driver assistance systems, and not autonomous vehicle technology that can be applied to robotaxis. The company said it recorded a $2.7 billion non-cash, pretax impairment on its investment in Argo AI, resulting in an $827 million net loss for the third quarter.

That decision appears to have been fueled by Argo’s inability to attract new investors. Ford CEO Jim Farley acknowledged that the company anticipated being able to bring autonomous vehicle technology broadly to market by 2021.

“But things have changed, and there’s a huge opportunity right now for Ford to give time — the most valuable commodity in modern life — back to millions of customers while they’re in their vehicles,” said Farley. “It’s mission-critical for Ford to develop great and differentiated L2+ and L3 applications that at the same time make transportation even safer.”

Farley also insinuated that Ford would be able to buy AV tech down the line, instead of developing it in house. “We’re optimistic about a future for L4 ADAS, but profitable, fully autonomous vehicles at scale are a long way off and we won’t necessarily have to create that technology ourselves,” he added.

VW, Argo’s other primary backer, has also indicated plans to shift resources and will no longer invest in Argo AI. The company said it will use its software unit Cariad to drive forward development of highly automated and autonomous driving together with Bosch and, in the future, in China with Horizon Robotics.

In VW's press release, the CEO says:

Oliver Blume, CEO Volkswagen AG: "Focus and speed are what count, especially when it comes to developing technologies of the future. Our goal is to offer our customers the most powerful functions at the earliest possible time and to set up our development as cost-effectively as possible."


The CEO's statement sounds a lot like what Tesla has been doing. And we all know how well they're doing...


(Incredible timing as the Reuters news came only an hour or so after Argo's demise)


Ed Niedermeyer, author of Ludicrous and co-host of the podcast Autonocast, has already posted some of his thoughts




 
From the Tesla thread.

As someone who enjoys driving, I see tremendous value in a self-driving car that is effectively door-to-door capable in any reasonable scenario. But what we're talking about for now, in the short to medium term, and likely in the long term, is a system that isn't perfect, nor close enough to perfect that it would be approved for the kind of use that would justify the risk, or perhaps the cost. And of course, at the point the system is better than a dedicated, attentive driver, who's the bigger nuisance on the road?

Personally I think the approach to autonomous driving is wrong. It needs to start with an agreed standard framework of physical and digital infrastructure, and a minimum standard of sensory technology. Car models that don't meet the standards shouldn't be allowed, and autonomy should be geofenced to areas where the physical infrastructure meets the standard. I also think that standardised car-to-car and car-to-road communication would be a tremendous benefit. It needs to be thought of as a new type of national infrastructure first, not a problem that needs evermore complex band-aids applied to an algorithm. Make the roads able to be 'understood' by a standard sensor set, mandate a set of sensors to look for things that might contradict that information, and make cars communicate that information to each other.

If we had some idea of what the infrastructure really needed to look like, it might be worth the enormous cost. The technology is changing fast, so designing something like that has a high chance of just being a waste of money. In the US, we're not just going to outlaw existing road cars either, so it has to work with everything we have, including farm equipment.

Lately, I've been watching a lot of videos by Mentour Pilot where he discusses accidents in detail, and from the handful of videos I've seen, many of them happen when an edge case occurs and the pilot is unable to react properly due to the high mental stress of the sudden situation.

To be able to use these automation systems, I believe that you need proper training to understand the system, the situations where it can fail, and what is needed to safely recover. However, as the system improves, less intervention is needed and the driver/pilot may become less attentive, making it harder to recover safely, especially in a high-stress, time-sensitive situation.

If that's the case then the system just isn't safe and shouldn't be in place. But that doesn't mean it can't work, just that some manufacturer has a system in place which is not yet ready for public use.

I agree with this completely, but I would argue that due to a lack of understanding, lack of training, etc., automation makes it much easier for drivers to be less vigilant.

When airbags, seatbelts, helmets, etc. were introduced, I assume no one advertised these amazing new inventions as meaning you no longer have to pay attention or that you can take more risks. These safety advancements were made so you'd be safer in case of an accident. The person is still in control of the situation at all times. The problem with automation is that you are slowly taking the person out of the equation, which is the dangerous part.

Many of those technologies are slowly taking the person out of the equation, especially something like stability control or ABS. I will agree that asking someone to just pay attention and take over if the situation gets dangerous is asking people to do something they're generally bad at, and probably a bad idea. But that's not the only way to implement the system, and it's something that we should be careful about moving toward rather than rushing there as we seem to be today.

What I'm trying to argue is that if we are going the automation route, there should be proper training to be able to use these systems, similar in rigor to those in aviation. Driving a car is a dangerous activity and if we are able to completely automate it, then I am all for it, it would save many lives. However, at the current level of automation, it is dangerous since drivers are still in control while not having to actively participate.

I agree that automation is good, especially for those who do not want to drive.

Although I have no hard facts to back this up, I would argue that it may be better for these inattentive drivers to be fully in control at all times while the current level of automation has a lot of edge cases where the driver needs to intervene. If they are fully in control, then that limits how inattentive they can be. This is why we have driver assistance systems like automatic braking and lane keeping to catch them if they fail. But, again, these systems are there to assist rather than take over the job.

I agree that this is where development should be focused.

I understand completely this is not feasible for a lot of people, especially those who live in suburban sprawl hell and more rural areas. This is why I think America needs a better public transit system. Then these people won't have to buy an expensive new luxury vehicle just so they can be inattentive (assuming automation technology becomes widespread and affordable enough for this to even be possible).

Honestly it might be worth focusing on automating public transportation, complete with infrastructure to support it.

People in general should be taking public transit more.

People don't like taking public transit. So it will remain an uphill battle.


I think this is a great idea, but the middle ground we're stuck with at the moment is causing more problems than it solves. Besides driver unpredictability, the second biggest obstacle seems to be a lack of standardization in road design and marking. That needs to be resolved promptly. Go browse Maps and see how much variation you find in the design of airport taxiways, markings, etc. You'll find some, but not much, and that level of standardization is what's required to be able to operate safely and efficiently anywhere you go without thinking twice about it. Roads don't seem to work like this and I can't see a good reason for that.

I think it's changing a little too fast for this, but maybe it's the thing to do. Automated vehicle lanes, or automated public transport routes, etc.
 
If we had some idea of what the infrastructure really needed to look like, it might be worth the enormous cost. The technology is changing fast, so designing something like that has a high chance of just being a waste of money. In the US, we're not just going to outlaw existing road cars either, so it has to work with everything we have, including farm equipment.
This is not an insurmountable problem, and I'm not suggesting doing the entire country at once. We accept the idea that the roads have to be 'human readable' to work, and that is somewhat standardised, so why not make them machine readable as well? You can convey a lot of information quickly and visually to a computer. I'm not suggesting QR codes are the answer, but as an example, a QR code can contain a lot of information. You could put one on each sign and convey the information held for human drivers, but also the relevant information of 10 or 20 surrounding signs, plus optically recognisable symbols as lane markings etc... signs that provide context for a machine that doesn't know the context of its surroundings.

Start with a major interstate. Then another, then another, geofence the use of autonomy to these roads only and branch out from there... make it work by design, not coincidence.
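As a purely hypothetical illustration of what a machine-readable sign could carry, here's the kind of payload that would fit comfortably in a QR code: the sign's own meaning plus context about the surrounding road. The schema is invented here; nothing like it is standardised today.

```python
# Invented payload schema for a machine-readable road sign, for illustration only.
import json

sign_payload = {
    "sign_id": "I-80-W-214.3-SPEED",
    "type": "speed_limit",
    "value_mph": 65,
    "valid_lanes": [1, 2, 3],
    "geofence": {"autonomy_permitted": True, "corridor": "I-80-W"},
    "upstream_context": [
        {"distance_m": 800, "type": "lane_drop", "lane": 3},
        {"distance_m": 1500, "type": "exit", "ramp": "214B"},
    ],
}

encoded = json.dumps(sign_payload)             # what might be packed into a QR code
print(len(encoded), "characters")              # well within a QR code's capacity
decoded = json.loads(encoded)
print(decoded["upstream_context"][0]["type"])  # lane_drop
```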
 
This is not an insurmountable problem, and I'm not suggesting doing the entire country at once. We accept the idea that the roads have to be 'human readable' to work, and that is somewhat standardised, so why not make them machine readable as well? You can convey a lot of information quickly and visually to a computer. I'm not suggesting QR codes are the answer, but as an example, a QR code can contain a lot of information. You could put one on each sign and convey the information held for human drivers, but also the relevant information of 10 or 20 surrounding signs, plus optically recognisable symbols as lane markings etc... signs that provide context for a machine that doesn't know the context of its surroundings.

Start with a major interstate. Then another, then another, geofence the use of autonomy to these roads only and branch out from there... make it work by design, not coincidence.

I do think that interstates are probably where this starts. Automation happens maybe in a single lane through a straight stretch of road, and the car takes an exit (unless a human takes over) and parks itself until a human intervenes. It will take some investment, and you and I don't really have much of a chance of spitballing the solution; I'd imagine it involves meetings between city engineers and self-driving experts to develop some kind of standard, preferably one that can be seen by sensors through snow, fog, ice, etc. and is somewhat resistant to spoofing by bad actors.

But it is a big challenge, and it's one thrown directly into a rapidly changing technological space. Which makes it less likely to stick.

A double- and triple-differenced GPS signal from localized towers might be interesting too. Put up a network of GPS towers through a corridor so that you can maintain a super accurate lock on your position, and you'd have a solution good to less than a lane width; you could look up the speed limits from there.
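For anyone curious, the differencing works roughly like this: differencing one satellite's measurement between two receivers cancels the satellite clock error, and differencing that across two satellites cancels the receiver clock error as well. The numbers below are made up, and a real solver would use carrier-phase observations and resolve integer ambiguities.

```python
# Toy sketch of double-differencing GPS observations (placeholder values).
def single_difference(obs_rover: float, obs_base: float) -> float:
    """Between-receiver difference for one satellite (cancels satellite clock error)."""
    return obs_rover - obs_base

def double_difference(sd_sat_a: float, sd_sat_b: float) -> float:
    """Between-satellite difference of two single differences (cancels receiver clock error)."""
    return sd_sat_a - sd_sat_b

# Pseudorange-style observations in metres (made-up numbers):
sd_a = single_difference(20_123_456.78, 20_123_411.20)   # satellite A
sd_b = single_difference(22_987_654.32, 22_987_600.10)   # satellite B
print(double_difference(sd_a, sd_b))  # remaining residual mostly reflects baseline geometry
```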
 
I'm imagining a freeway where the far left lane is open to only autonomous vehicles which have a high level of integration with both each other and the road network. The left lane is literally bumper to bumper for miles, but moving at 60mph like a train. The vehicles in the line accelerate and brake in lock step. Carefully coordinated, super precise movements allow individual cars to slip in and out of the line to exit where needed.
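To sketch what that lock-step following might look like (a toy model of my own, not any real system): each car holds a constant time gap to the one ahead and feeds forward the leader's broadcast acceleration over V2V instead of waiting to observe it.

```python
# Toy platoon-following controller; gains and the model are purely illustrative.
def follower_accel(gap_m: float, own_speed: float, lead_speed: float,
                   lead_accel: float, time_gap_s: float = 0.5) -> float:
    desired_gap = 2.0 + time_gap_s * own_speed          # standstill gap + time gap
    spacing_error = gap_m - desired_gap
    speed_error = lead_speed - own_speed
    # Feedback on spacing and speed, plus feedforward of the leader's acceleration
    return 0.2 * spacing_error + 0.6 * speed_error + 1.0 * lead_accel

# Example: the leader brakes at 1 m/s^2 while we're slightly closer than desired.
print(follower_accel(gap_m=14.0, own_speed=27.0, lead_speed=27.0, lead_accel=-1.0))  # ~-1.3
```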

To be honest, this seems like a challenge that is plausibly solvable, but it would probably require too much effort and cooperation between manufacturers and various government agencies, plus significant public investment, to actually happen. Then you need to make sure all the privately owned equipment is compatible and maintained properly (one car malfunctions and the whole thing could fail). Americans absolutely suck at cooperation, especially Americans outside of major metros, where this kind of thing would actually make more (dubious) sense than dedicated mass transit.

Basically, the amount of cooperation, investment and work needed to achieve any kind of safe, practical, and durable/robust connected system for autonomous vehicles (i.e. one that provides some kind of measurable good) would probably exceed that of traditional transit solutions, and once all of the infrastructure is in place, it would start to resemble traditional transit anyway.
 
If that's the case then the system just isn't safe and shouldn't be in place. But that doesn't mean it can't work, just that some manufacturer has a system in place which is not yet ready for public use.
Right, which is why it's important to call it out, to make people aware of the limitations of the current technology, and not to blindly defend what those manufacturers are doing.

Basically, the amount of cooperation, investment and work needed to achieve any kind of safe, practical, and durable/robust connected system for autonomous vehicles (i.e. one that provides some kind of measurable good) would probably exceed that of traditional transit solutions, and once all of the infrastructure is in place, it would start to resemble traditional transit anyway.
I agree. I think any sort of argument for autonomous vehicles will result in this conclusion unless there's some sort of major breakthrough which I doubt will happen any time soon

When I applied for university, my college essay was about how I wanted to work on autonomous vehicles, but when I took courses on machine learning and computer vision, I quickly realized that all the necessary math bored me way too much lol
 
Right, which is why it's important to call it out, to make people aware of the limitations of the current technology, and not to blindly defend what those manufacturers are doing.

...and yet we should not give up on the potential.

I agree. I think any sort of argument for autonomous vehicles will result in this conclusion unless there's some sort of major breakthrough which I doubt will happen any time soon

Seems overly pessimistic in light of what is out there now and is currently in development.
 
Seems overly pessimistic in light of what is out there now and is currently in development.
I do think people are making really good progress right now and it's exciting to see what is currently possible. Driving a car in the middle of a road with no obstacles or surprises is a solved problem. But it's that last 10% to 20% that hasn't been solved that is the difficult part, and I'm honestly skeptical about how feasible it is with current methods of machine learning. There will inevitably be new scenarios that the algorithm was not trained on so it will have no idea what to do
 
There will inevitably be new scenarios that the algorithm was not trained on so it will have no idea what to do

It should fail as gracefully as possible. Meanwhile, it will have more hours behind the wheel than any human driver ever has, and will have seen more than any human driver ever will.
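To put a toy shape on "fail as gracefully as possible" (all thresholds and states invented here): when confidence drops or sensors disagree, the system should request a takeover and, if nobody responds, execute a minimal-risk maneuver on its own.

```python
# Invented sketch of a graceful-degradation policy; not from any real system.
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    REQUEST_TAKEOVER = auto()
    MINIMAL_RISK_MANEUVER = auto()

def fallback_policy(planner_confidence: float, sensors_agree: bool,
                    driver_responded: bool) -> Action:
    if planner_confidence >= 0.9 and sensors_agree:
        return Action.CONTINUE
    if driver_responded:
        return Action.REQUEST_TAKEOVER      # hand control back to the human
    return Action.MINIMAL_RISK_MANEUVER     # slow down, pull over, and stop on its own

print(fallback_policy(0.95, True, False))   # Action.CONTINUE
print(fallback_policy(0.40, True, False))   # Action.MINIMAL_RISK_MANEUVER
```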
 