Elon's Antics

  • Thread starter Danoff
  • 2,217 comments
  • 177,422 views
grok-control.jpg
 
AAAAAHHHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!!!!!


Truly diabolical
 
There are only 200 employees working for xAI, and I'm sure only a select few would be allowed access to make such a "modification".


Realistically speaking, I'm not going to believe it was any employee other than the one in this position.
boss.jpg
 
There are only 200 employees working for xAI, and I'm sure only a select few would be allowed access to make such a "modification".


Realistically speaking, I'm not going to believe it was any employee other than the one in this position.
boss.jpg
And how many of those with access rights would want to be pushing White South African talking points without the approval of the musk-flavored White South African?
 
TL;DR

“It’s Going To Fail For Sure”: Dan O’Dowd Sounds His Latest Alarm on Tesla’s Robotaxi Rollout


In a Forbes interview, software safety pioneer Dan O’Dowd—founder of Green Hills Software and The Dawn Project—warns that Tesla’s upcoming Robotaxi pilot in Austin is dangerously unprepared. O’Dowd, whose firm provides secure operating systems for aerospace and defense applications, has spent years independently testing Tesla’s Full Self-Driving (FSD) software. His verdict after repeated failures in real-world trials: Tesla’s tech is not ready—and could be deadly without a human behind the wheel.

Tesla’s decision to skip lidar, release limited performance data, and rely on remote operators for safety intervention raises serious concerns. O’Dowd highlights disengagements, failures in sunlight, and erratic urban navigation. The staged demo at a Hollywood lot, he says, was more PR stunt than product readiness. With lives at stake, his message is blunt: this is not autonomy—it’s marketing over safety.

Here's the original

 
This guy will say anything to pump the stock.

Musk would have you believe that having multiple redundant sensor systems leads to a less safe outcome

Of course, in the real world, Waymo uses multiple sensor systems (Cameras, Radar and Lidar and maybe more) yet has an exemplary accident record. Tesla relies on cameras only and is being investigated due to the number of fatal accidents. (Or WAS being investigated. Maybe Trump has given him another "Get Out Of Jail Free" card.)

The following are lightly paraphrased quotes from the video...


The road system is designed for biological intelligence and eyes, it’s not designed for shooting lasers out of your eyes

When you have multiple sensors, they tend to get confused, so do you believe the camera or do you believe the Lidar?

If you get confused, that’s what can lead to accidents

We used to have radar, but didn’t know which to believe, so we turned it off.



 
The road system is designed for biological intelligence and eyes, it’s not designed for shooting lasers out of your eyes

When you have multiple sensors, they tend to get confused, so do you believe the camera or do you believe the Lidar?

If you get confused, that’s what can lead to accidents

We used to have radar, but didn’t know which to believe, so we turned it off.
Well it worked for Boein... oh wait no it didn't.
 
The road system is designed for biological intelligence and eyes, it’s not designed for shooting lasers out of your eyes
Two problems here:
Cameras are not as good as human eyes.
Artificial intelligence is not as good as human intelligence at interpreting the data it gets from the cameras/eyes.

So if an artificial intelligence is going to stand a fair chance to use the roads, it might need the assistance of additional sensors.

And you don't design the roads for lidar - you design the lidar system for the roads. For example, airplanes weren't designed to be detectable by radar, yet radar is incredibly good at detecting airplanes.
When you have multiple sensors, they tend to get confused, so do you believe the camera or do you believe the Lidar?
So basically their AI is not good at handling conflicting sensor inputs. Huh.
If you get confused, that’s what can lead to accidents
So you eliminate accidents caused by sensor conflicts. But now you're unable to catch accidents caused by bad or incomplete sensor data. Because if lidar says one thing and camera says another, then at least one of them has to be wrong. And we know for sure that it's not always the lidar that is wrong, because then the conflict would be very easy to solve - just ignore lidar data when it conflicts with the camera.

Confusion can lead to accidents, sure, but you can't design a system in the hope that it will never get confused; you need to design it so that it handles confusion in a safe manner. Just like how we teach human drivers: roads and traffic can be very confusing at times, and we can't teach drivers to "not be confused", so instead we teach them to behave safely even when they are confused.
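To make that concrete, here's a toy sketch of "design for confusion" rather than "hope for no confusion". It's purely illustrative (the tolerance and ranges are made-up numbers, not anything Tesla or Waymo actually ships): when two sensors disagree beyond a tolerance, the system doesn't pick a winner, it acts on the more conservative reading.

```python
# Toy illustration of handling sensor conflict safely (hypothetical
# values, not real autopilot logic): when camera and lidar disagree,
# act on the nearer obstacle estimate instead of trusting either one.

def safe_obstacle_range(camera_m, lidar_m, tol_m=2.0):
    """Return (range_to_act_on, status) for one obstacle."""
    if abs(camera_m - lidar_m) <= tol_m:
        # Sensors agree within tolerance: average them.
        return (camera_m + lidar_m) / 2.0, "agree"
    # Sensors conflict: assume the closer reading is real. That is
    # the safe failure mode -- brake for a ghost rather than ignore
    # a real obstacle.
    return min(camera_m, lidar_m), "conflict"

print(safe_obstacle_range(48.0, 47.0))  # agree: act on the average
print(safe_obstacle_range(60.0, 47.0))  # conflict: act on 47 m
```

The point isn't the three lines of logic, it's the failure mode: a confused system that errs toward braking is annoying; one that errs toward ignoring a sensor is dangerous.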
We used to have radar, but didn’t know which to believe, so we turned it off.
They didn't trust the camera to be right, so they switched off other sensors so that nothing could contradict it. Huh.
 
Two problems here:
Cameras are not as good as human eyes.
Well that's not true as a blanket statement. Maybe for certain things.
Artificial intelligence is not as good as human intelligence at interpreting the data it gets from the cameras/eyes.
That's definitely not true as a blanket statement. But maybe in certain instances.
So you eliminate accidents caused by sensor conflicts. But now you're unable to catch accidents caused by bad or incomplete sensor data. Because if lidar says one thing and camera says another, then at least one of them has to be wrong. And we know for sure that it's not always the lidar that is wrong, because then the conflict would be very easy to solve - just ignore lidar data when it conflicts with the camera.
Agreed, this is the dumbest way to fix what is essentially bad software engineering. Combining different types of measurements, each with its own uncertainty, into a single estimate with a single uncertainty is its own branch of math (estimation theory), and I used to do that math for spacecraft navigation. Spacecraft use several different kinds of measurements to tell where they are. There's no way we'd have thrown out that kind of data when flying a mission.

The really cool thing about having different kinds of measuring devices is that they can knock down orthogonal uncertainty. Say you have a camera that gives you great angular resolution for a particular object but fairly poor range information (this is true for spacecraft as well, which take optical navigation images and use the starfield in the background to pinpoint orientation), and a different kind of measurement that gives you range to the object (which is also something we did). Now you've dropped your angular uncertainty AND your range uncertainty, for an overall better estimate of your position and of where the object is relative to you.

It is a math problem. I'm not pretending I know exactly how to do it for a car, but I know the broad brush concepts for how it is done. The answer is definitely not to turn off your instruments. It's to sharpen your pencil.
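The simplest instance of that math is inverse-variance weighting of two independent measurements of the same quantity: the fused estimate always has a lower variance than either input, which is exactly why throwing a sensor away is backwards. A minimal sketch (all numbers are made up for illustration):

```python
# Minimal estimation-theory sketch (illustrative numbers, not any
# real autopilot code): inverse-variance fusion of two independent
# measurements of the same quantity.

def fuse(m1, var1, m2, var2):
    """Combine two independent measurements of the same quantity.
    Each is weighted by its inverse variance; the fused variance is
    always smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Camera says the object is 50 m away but is poor at range
# (variance 25 m^2); lidar says 47 m with variance 1 m^2.
est, var = fuse(50.0, 25.0, 47.0, 1.0)
print(est, var)  # fused estimate ~47.12 m, fused variance ~0.96 m^2
```

Note how the fusion leans heavily on the more certain sensor without discarding the other: the answer to conflicting inputs is weighting, not an off switch.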
 
Musk just makes things up to fit his narrative and hopes, even demands, that he won't get fact-checked, just like all alt-right wingers. Obviously more sensors are better; you just need your algorithms to be able to handle the incoming data.
 
Musk just makes things up to fit his narrative and hopes, even demands, that he won't get fact-checked, just like all alt-right wingers. Obviously more sensors are better; you just need your algorithms to be able to handle the incoming data.
As long as the algorithm isn't manually manipulated to talk about white genocide in South Africa.
 
Comments below from personal FSD experience on the latest HW4 hardware (or as Musk now likes to hype it, "AI4").

Treat this as anecdotal (a sample size of 1), but I've seen many Redditors posting similar experiences.

Cameras are not as good as human eyes.
Agreed. On many occasions, in low light situations, the system has told me that side cameras are "occluded" when I can see detail.

Also, I have binocular vision; the Teslas do not. Mine has two forward-facing cameras, but with widely different focal lengths. This limitation shows up in the UI: when it shows vehicles crossing an intersection in front of me, their images move smoothly left/right, but their distance from me often jumps around.

Also, in the interest of keeping costs down, not only did they remove radar and ultrasonics, they use pretty cheap cameras, which do not perform well in low light situations.

Artificial intelligence is not as good as human intelligence at interpreting the data it gets from the cameras/eyes.
Agreed. To wit, "phantom braking" and blowing through red lights which the cameras have actually detected.
So basically their AI is not good at handling conflicting sensor inputs. Huh.
Exactly
Confusion can lead to accidents, sure, but you can't design a system in hope it would never get confused, you need to design it so that it handles the confusion in a safe manner.
Exactly
They didn't trust the camera to be right, so they switched off other sensors so that nothing could contradict it.
Based on his first White House experience, Trump used this as a model for selecting senior staff the second time around. No wonder Musk and Trump get on so well. They both rely broadly on sycophancy.
 
They didn't trust the camera to be right, so they switched off other sensors so that nothing could contradict it. Huh.
This is probably a factor in why Musk and Trump are BFFs - neither has any patience.

If you want result C, and your only path to the result is by combining variable A and variable B, but when you attempt to you get a completely different result, you should investigate why and identify the issue.

Instead, both Musk and Trump will decide C was never possible, make excuses and/or make it someone else's fault that it's not achievable.

This applies to pretty much everything that they both do. There is no will to actually invest time in anything beyond the superficial attempt that can often be used as a PR stunt to make them (in their opinions) look good to the public.
 
