Elon's Antics

  • Thread starter Danoff
  • 2,213 comments
  • 177,205 views
grok-control.jpg
 
AAAAAHHHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!!!!!


Truly diabolical
 
There are only 200 employees working for xAI, and I'm sure only a select few would be allowed access to make such a "modification".


Realistically speaking, I'm not going to believe it was any other employee than the one in this position.
boss.jpg
 
And how many of those with access rights would want to be pushing White South African talking points without the approval of the musk-flavored White South African?
 
TL;DR

“It’s Going To Fail For Sure”: Dan O’Dowd Sounds His Latest Alarm on Tesla’s Robotaxi Rollout


In a Forbes interview, software safety pioneer Dan O’Dowd—founder of Green Hills Software and The Dawn Project—warns that Tesla’s upcoming Robotaxi pilot in Austin is dangerously unprepared. O’Dowd, whose firm provides secure operating systems for aerospace and defense applications, has spent years independently testing Tesla’s Full Self-Driving (FSD) software. His verdict after repeated failures in real-world trials: Tesla’s tech is not ready—and could be deadly without a human behind the wheel.

Tesla’s decision to skip lidar, release limited performance data, and rely on remote operators for safety intervention raises serious concerns. O’Dowd highlights disengagements, failures in sunlight, and erratic urban navigation. The staged demo at a Hollywood lot, he says, was more PR stunt than product readiness. With lives at stake, his message is blunt: this is not autonomy—it’s marketing over safety.

Here's the original

 
This guy will say anything to pump the stock.

Musk would have you believe that having multiple redundant sensor systems leads to a less safe outcome.

Of course, in the real world, Waymo uses multiple sensor systems (Cameras, Radar and Lidar and maybe more) yet has an exemplary accident record. Tesla relies on cameras only and is being investigated due to the number of fatal accidents. (Or WAS being investigated. Maybe Trump has given him another "Get Out Of Jail Free" card.)

The following are lightly paraphrased quotes from the video...


The road system is designed for biological intelligence and eyes, it’s not designed for shooting lasers out of your eyes

When you have multiple sensors, they tend to get confused, so do you believe the camera or do you believe the Lidar?

If you get confused, that’s what can lead to accidents

We used to have radar, but didn’t know which to believe, so we turned it off.



 
Well it worked for Boein... oh wait no it didn't.
 
The road system is designed for biological intelligence and eyes, it’s not designed for shooting lasers out of your eyes
Two problems here:
Cameras are not as good as human eyes.
Artificial intelligence is not as good as human intelligence at interpreting the data it gets from the cameras/eyes.

So if an artificial intelligence is going to stand a fair chance of using the roads, it might need the assistance of additional sensors.

And you don't design the roads for lidar - you design the lidar system for the roads. For example, airplanes weren't designed to be detectable by radar, yet radar is incredibly good at detecting airplanes.
When you have multiple sensors, they tend to get confused, so do you believe the camera or do you believe the Lidar?
So basically their AI is not good at handling conflicting sensor inputs. Huh.
If you get confused, that’s what can lead to accidents
So you eliminate accidents caused by sensor conflicts. But now you're unable to catch accidents caused by bad or incomplete sensor data. Because if lidar says one thing and camera says another, then at least one of them has to be wrong. And we know for sure that it's not always the lidar that is wrong, because then the conflict would be very easy to solve - just ignore lidar data when it conflicts with the camera.

Confusion can lead to accidents, sure, but you can't design a system in the hope that it will never get confused; you need to design it so that it handles confusion in a safe manner. That's just how we teach human drivers: roads and traffic can be very confusing at times, so we can't teach drivers to "not be confused", but rather we teach them to behave in a safe manner so that they can be confused without causing accidents.
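A minimal sketch of that idea in code (the function name, thresholds, and numbers here are all invented for illustration, not anyone's actual logic): when two sensors disagree, don't silently pick a winner; assume the worst case and flag low confidence so downstream logic can behave conservatively.

```python
# A hedged sketch of "handle confusion safely" -- all names, numbers, and
# thresholds here are invented for illustration, not any real system's logic.

def obstacle_distance(cam_m: float, lidar_m: float,
                      disagreement_threshold_m: float = 2.0) -> tuple[float, bool]:
    """Return (distance estimate in metres, confidence flag) for an obstacle."""
    if abs(cam_m - lidar_m) <= disagreement_threshold_m:
        # Sensors agree: average them and proceed normally.
        return (cam_m + lidar_m) / 2.0, True
    # Sensors conflict: don't pick a winner. Assume the *nearer* reading
    # (the worst case) and flag low confidence so the planner can slow down.
    return min(cam_m, lidar_m), False

print(obstacle_distance(10.0, 11.0))   # agreement: (10.5, True)
print(obstacle_distance(30.0, 12.0))   # conflict:  (12.0, False)
```

The point isn't the specific numbers; it's that disagreement becomes an input to behavior rather than something to engineer away by deleting a sensor.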
We used to have radar, but didn’t know which to believe, so we turned it off.
They didn't trust the camera to be right, so they switched off other sensors so that nothing could contradict it. Huh.
 
Two problems here:
Cameras are not as good as human eyes.
Well that's not true as a blanket statement. Maybe for certain things.
Artificial intelligence is not as good as human intelligence at interpreting the data it gets from the cameras/eyes.
That's definitely not true as a blanket statement. But maybe in certain instances.
So you eliminate accidents caused by sensor conflicts. But now you're unable to catch accidents caused by bad or incomplete sensor data. Because if lidar says one thing and camera says another, then at least one of them has to be wrong. And we know for sure that it's not always the lidar that is wrong, because then the conflict would be very easy to solve - just ignore lidar data when it conflicts with the camera.
Agreed, this is the dumbest way to fix what is essentially bad software programming. Combining different types of measurements in different ways with different uncertainties into one single measurement with one uncertainty is its own branch of math (estimation theory), and I used to do that math for spacecraft navigation. Spacecraft use several different kinds of measurements to tell where they are. There's no way we'd have thrown out that kind of data when flying a mission.
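As a toy illustration of that branch of math (with made-up sensor names and numbers, not any real vehicle's parameters), here is the simplest estimation-theory result: the inverse-variance-weighted combination of two independent measurements, whose fused variance is always lower than either input's.

```python
# Minimal sketch of inverse-variance-weighted fusion: two noisy measurements
# of the same quantity are combined into one estimate whose variance is lower
# than either input's. Sensor names and numbers are illustrative only.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two independent measurements z1, z2 with variances var1, var2.

    Weighting each measurement by its inverse variance gives the
    minimum-variance unbiased combination for independent Gaussian noise.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Say a camera estimates 10.0 m with variance 4.0 m^2, and a lidar estimates
# 10.8 m with variance 0.25 m^2. The fused estimate leans toward the lidar,
# and the fused variance beats even the lidar alone.
est, var = fuse(10.0, 4.0, 10.8, 0.25)
print(round(est, 3), round(var, 3))
```

Note that the "worse" sensor still helps: adding the camera's noisy measurement shrinks the variance below what the lidar gives by itself, which is exactly why throwing a sensor out is the wrong move.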

The really cool thing about having different kinds of measuring devices is that they can knock down orthogonal uncertainty. Say you have a camera that gives you great angular resolution for a particular object but kinda bad range information. (This is true for spacecraft as well, which take optical navigation images using a camera to get angular information, and we'd use the starfield in the background to pinpoint the orientation.) Then you use a different kind of measurement to get range to the object (which is also something we did). Now you've dropped your angular uncertainty AND your range uncertainty, for an overall better estimate of your position and where the object is relative to you.
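The same idea in a toy numerical sketch (all numbers invented): a precise bearing pins the object down across the line of sight, a precise range pins it down along the line of sight, and together they shrink the error in both directions.

```python
# Toy numbers (all invented) for the orthogonal-uncertainty point: a camera
# gives a precise bearing but a rough range; a ranging sensor gives a precise
# range but no direction. Each alone leaves one axis poorly constrained.

def position_uncertainty(r: float, sigma_r: float, sigma_theta: float) -> tuple[float, float]:
    """Approximate 1-sigma position errors along and across the line of sight.

    For small angles, a bearing error of sigma_theta radians at range r
    smears the position by about r * sigma_theta across the line of sight,
    while the range error sigma_r acts along it.
    """
    along = sigma_r            # constrained by the range measurement
    across = r * sigma_theta   # constrained by the bearing measurement
    return along, across

r = 100.0  # metres to the object
print(position_uncertainty(r, sigma_r=20.0, sigma_theta=0.001))  # camera alone: large along-track error
print(position_uncertainty(r, sigma_r=0.5, sigma_theta=0.001))   # with a range sensor: small in both axes
```

With the camera alone the along-track error dwarfs the cross-track error; add a decent range measurement and the error ellipse collapses in both directions at once.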

It is a math problem. I'm not pretending I know exactly how to do it for a car, but I know the broad brush concepts for how it is done. The answer is definitely not to turn off your instruments. It's to sharpen your pencil.
 