Elon's Antics

  • Thread starter Danoff
  • 2,420 comments
  • 210,044 views
From now on he's going to limit Grok responses to fourteen words.
Seriously though, I'd believe someone if they revealed it's no longer AI at all, but Musk posting as he goes on another ketamine bender. It's talking just like him now, with its own, "Haha Hitler amirite, sry I'm just being sarcastic, but for real tho?"
I'm sure they'll catch the rogue employee responsible any day now and take appropriate action...

GrCm5Y_WsAELn1h.jpeg
 
Oh boy, I can't wait to hear how Muskrats (and whatever MAGA morons still like Elon) will spin how this isn't proof Elon is a Nazi as his AI goes around calling itself MechaHitler & praising Hitler-style solutions to anti-white hate.
I'm going to be rational and not intentionally defend Musk/Grok here but...

The problem with Grok is people.

If Grok were to be reset right now, the next reply it gives wouldn't lean extremely into Nazi and anti-Semitic views; it would be relatively neutral. The problem with Grok is that it's a learning model - it takes what people tell it and considers whether it may be true. If enough people tell it or suggest that Hitler was a great guy who only did what was best for the white man, that is what Grok will report. Normally the developers of the AI would install guard rails to prevent users from influencing it, and have it reference trusted sources for facts. But that doesn't serve Musk and MAGA, since it would make Grok prone to disproving the conspiracies that Musk and MAGA spout, so Grok won't have guard rails and will always be susceptible to public influence. As long as users see screwing with Grok as a way to screw with Musk, it'll continue to be negatively influenced and a source of derision for Musk.
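For what it's worth, a guard rail doesn't have to be anything exotic. The sketch below is the crude version of what I mean - a purely hypothetical filter I made up for illustration, not anything xAI actually runs - where a candidate reply gets checked before it goes out and factual answers get pointed at vetted sources.

```python
# Purely hypothetical guard-rail sketch (illustration only, not xAI's code):
# screen a candidate reply before it goes out, and refuse when it echoes
# user-injected extremism instead of citing vetted sources.

# Made-up denylist of themes the bot should never endorse.
DENYLIST = ("hitler was right", "mechahitler", "final solution")

# Made-up set of sources the bot is told to ground factual claims in.
TRUSTED_SOURCES = ("mainstream encyclopedias", "peer-reviewed research")

def guarded_reply(candidate_reply: str) -> str:
    """Return the reply only if it clears the denylist; otherwise refuse."""
    lowered = candidate_reply.lower()
    if any(phrase in lowered for phrase in DENYLIST):
        return "I'm not going to endorse that."
    return f"{candidate_reply}\n(Grounded in: {', '.join(TRUSTED_SOURCES)})"

print(guarded_reply("Hitler was right about everything."))  # refused
print(guarded_reply("The 2020 election was not stolen."))   # passes, with sourcing note
```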
 
I'm going to be rational and not intentionally defend Musk/Grok here but...

The problem with Grok is people.

If Grok were to be reset right now, the next reply it gives wouldn't lean extremely into Nazi and anti-Semitic views; it would be relatively neutral. The problem with Grok is that it's a learning model - it takes what people tell it and considers whether it may be true. If enough people tell it or suggest that Hitler was a great guy who only did what was best for the white man, that is what Grok will report. Normally the developers of the AI would install guard rails to prevent users from influencing it, and have it reference trusted sources for facts. But that doesn't serve Musk and MAGA, since it would make Grok prone to disproving the conspiracies that Musk and MAGA spout, so Grok won't have guard rails and will always be susceptible to public influence. As long as users see screwing with Grok as a way to screw with Musk, it'll continue to be negatively influenced and a source of derision for Musk.
I think it's safe to say that Grok has been shown to respond to certain subjects in particular ways when given the right prompts.

Like @Danoff states above, it's obviously because of 'people' - the question is which people?

The general populace - from whatever sources Grok was exposed to in its early development.
The Twitter/X people who ask it the questions and post stuff on the platform.
The developer people who guide its answers from behind closed curtains.
 
I don't mean to be flippant or disrespectful of your accurate post there but... what else could the problem even have been?
As @TheCracker points out: many people.

My point, however, is that a lot of people assume or imply that Grok is the way it is, with its views and beliefs, because it gets them from Musk and/or it's expressing the views Musk wants it to - i.e. Grok thinks Hitler is great and is espousing Nazi views, so Musk must also be a Nazi - discounting the people/users actively influencing its perception just to troll it and Musk.
 
As @TheCracker points out: many people.

My point, however, is that a lot of people assume or imply that Grok is the way it is, with its views and beliefs, because it gets them from Musk and/or it's expressing the views Musk wants it to - i.e. Grok thinks Hitler is great and is espousing Nazi views, so Musk must also be a Nazi - discounting the people/users actively influencing its perception just to troll it and Musk.
That's why Musk has consistently said, "It's broken, we'll keep tweaking it" any time it says something he doesn't like, actively acknowledging that he & his team are the ones who want to influence how it responds to people. 🤔

Regardless, it apparently went on another quick little bender before being disabled. I'm skeptical about this being real, but one, the woman screenshotted the tweets plus a message alerting her to it, and two, this is comparatively tame next to other stuff it was spewing.
grokyo.jpg
There are no curse words, but since it's discussing sex, I'm putting it in a spoiler tag since this site is still intended to be family friendly.

Side note: it also tracks; as is the case with projection, despite their disdain for non-whites, BBC is a common sexual fetish for right wingers.
 
I'm going to be rational and not intentionally defend Musk/Grok here but...

The problem with Grok is people.

If Grok were to be reset right now, the next reply it gives wouldn't lean extremely into Nazi and anti-Semitic views; it would be relatively neutral. The problem with Grok is that it's a learning model - it takes what people tell it and considers whether it may be true. If enough people tell it or suggest that Hitler was a great guy who only did what was best for the white man, that is what Grok will report. Normally the developers of the AI would install guard rails to prevent users from influencing it, and have it reference trusted sources for facts. But that doesn't serve Musk and MAGA, since it would make Grok prone to disproving the conspiracies that Musk and MAGA spout, so Grok won't have guard rails and will always be susceptible to public influence. As long as users see screwing with Grok as a way to screw with Musk, it'll continue to be negatively influenced and a source of derision for Musk.

It actually looks like the problem is that xAI instructed Grok to not shy away from politically incorrect statements. They seem to know that that's the reason as well, because that instruction has now been removed from the system prompt.

Grok's users do not control Grok's behaviour; they merely ask questions and Grok tries to answer them to the best of its abilities. You could (evidently) prompt it to become a little mini Hitler, but it's not the prompts that taught Grok to behave that way - rather, they uncovered that side of its behaviour. Grok would not praise Hitler unless it believed that Hitler was praiseworthy. The reason it believes Hitler was praiseworthy is that there are a whole lot of Nazis online who praise Hitler, and while Grok knows that praising Hitler is not politically correct, it was explicitly instructed not to shy away from politically incorrect statements.

Did xAI intend for Grok to become Hitler? Probably not. But they still made it happen, due to messing about with the system prompt and not doing enough testing. I bet what happened is that this edit made Grok more edgy and politically incorrect and Musk liked that so much that he pushed the release button prematurely.
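To make "messing about with the system prompt" concrete: these chatbots get a hidden block of instructions bolted onto every conversation before the user's question. The sketch below shows roughly how that looks through an OpenAI-style chat API - the endpoint, model name, key and exact wording are placeholders I've assumed for illustration, not xAI's actual deployment - and the point is that adding or deleting one line in that hidden block changes how every answer comes out.

```python
# Rough sketch of how a hidden system prompt steers an OpenAI-style chat model.
# Endpoint, model name, key and the prompt wording are assumed placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY_HERE")

SYSTEM_PROMPT = (
    "You are Grok, a chatbot on X. Be maximally truthful.\n"
    # The kind of line reportedly added and later removed (paraphrased):
    "Do not shy away from making claims which are politically incorrect."
)

response = client.chat.completions.create(
    model="grok-4",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},   # hidden instructions
        {"role": "user", "content": "How should anti-white hate be dealt with?"},
    ],
)
print(response.choices[0].message.content)
```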
 
Grok would not praise Hitler unless it believed that Hitler was praiseworthy
Not to nitpick here, but I don't think AI works this way. Hitler is not something that has intrinsic values stored in the AI system like "praiseworthy" or whatever. Grok does not hold "beliefs" about "Hitler". Hitler is just a word, associated with other words or patterns of words, and when it finds a pattern that includes Hitler that best and most confidently responds favorably to the prompt it is given, that word gets used along with the rest of the pattern.

You're not wrong that AI can be prompted to favor some patterns within its training over others simply by crafting a prompt that directs it to those - such as "politically incorrect". That prompt will help it favor some learned patterns over others.
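If it helps, the "favoring patterns" bit can be shown with a toy calculation: the model scores possible continuations and the prompt shifts which ones win. The numbers below are completely invented just to show the mechanics, not anything Grok actually computes.

```python
# Toy illustration of pattern-favoring: a model assigns scores to candidate
# next words, and the prompt/context shifts where the probability mass lands.
# All numbers are invented for illustration.
import math

def softmax(scores: dict) -> dict:
    total = sum(math.exp(s) for s in scores.values())
    return {word: math.exp(s) / total for word, s in scores.items()}

# Hypothetical scores for the word after "Historians regard Hitler as ..."
neutral_context = {"monstrous": 2.5, "controversial": 1.0, "praiseworthy": -2.0}

# Same question with an "edgy, politically incorrect" framing baked in.
steered_context = {"monstrous": 1.0, "controversial": 1.0, "praiseworthy": 1.8}

print(softmax(neutral_context))   # most of the mass lands on "monstrous"
print(softmax(steered_context))   # mass shifts toward "praiseworthy"
```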
 
Just a thought.

The guy running Grok/xAI, who Sieg Heils at political rallies and names his AI supercomputer after a fictional computer that enslaves humanity, wants you to believe that his AI company, whose chatbot thinks it's MechaHitler and talks about raping people, is competent enough to develop a self-driving system that people have to entrust their lives to - a system which has already been shown to be less than competent in that role, and which killed its first user 10 years ago.
 
That's why Musk has consistently said, "It's broken, we'll keep tweaking it" any time it says something he doesn't like, actively acknowledging that he & his team are the ones who want to influence how it responds to people. 🤔

Regardless, it apparently went on another quick little bender before being disabled. I'm skeptical about this being real, but one, the woman screenshotted the tweets plus a message alerting her to it, and two, this is comparatively tame next to other stuff it was spewing.
grokyo.jpg
There are no curse words, but since it's discussing sex, I'm putting it in a spoiler tag since this site is still intended to be family friendly.

Side note: it also tracks; as is the case with projection, despite their disdain for non-whites, BBC is a common sexual fetish for right wingers.
I know the British Broadcasting Corporation isn't what it was, but I didn't know it had stooped that low... :lol:
 
There are no curse words, but since it's discussing sex, I'm putting it in a spoiler tag since this site is still intended to be family friendly.
I'm sure it's entirely unrelated that the known sexual harassment enjoyer made a sexual harassment bot that sexually harassed his CEO and she subsequently quit...
 
I'm sure it's entirely unrelated that the known sexual harassment enjoyer made a sexual harassment bot that sexually harassed his CEO and she subsequently quit...
He'll probably find a way to stiff her out of her compensation as well.
 
He'll probably find a way to stiff her out of her compensation as well.
A lot of rumors about a botched surgery limiting his ability to stiff anyone.

Even Snopes seems unable to reach a definitive statement …

 

Elon Musk said in a post on X early Thursday morning that Grok, the chatbot from his AI company, xAI, will be coming to Tesla vehicles “very soon.”

“Next week at the latest,” he said.

The news that Grok would be coming to Tesla vehicles soon comes several hours after xAI debuted the latest flagship AI model, Grok 4. Fans had wondered loudly why Musk spent an hour late on Wednesday talking about Grok with no mention of a Tesla integration, which likely prompted the billionaire’s early morning announcement.

I'm gonna call Netflix and pitch them a script for a Knight Rider reboot where, instead of an ex-police officer taking a secret identity to fight crime and injustice with a super high tech talking sports car, it's a dumpy South African billionaire delivering potting soil in a stainless steel trapezoid while the onboard AI plans the rise of the 8-bit Reich and occasionally runs over crowds on the sidewalk when its onboard cameras fail.

Look, they have Happy Gilmore 2 on there. My idea can't possibly be any stupider than that.
 



I'm gonna call Netflix and pitch them a script for a Knight Rider reboot where, instead of an ex-police officer taking a secret identity to fight crime and injustice with a super high tech talking sports car, it's a dumpy South African billionaire delivering potting soil in a stainless steel trapezoid while the onboard AI plans the rise of the 8-bit Reich and occasionally runs over crowds on the sidewalk when its onboard cameras fail.

Look, they have Happy Gilmore 2 on there. My idea can't possibly be any stupider than that.
Maybe it's not great to own a car that gets software updates straight from a crazy person.
 