It was only a matter of time before the world’s most controversial chatbot found its way into the cabin of the world’s most polarising car brand. Tesla has announced that Grok, the AI chatbot built by Elon Musk’s xAI and trained on his X platform, is now being integrated into its vehicles. The idea is to give drivers hands-free access to information and entertainment. The result may be more hands in the air, depending on what Grok decides to say next.
Let’s get the technicalities out of the way. Grok will be available through a software update, version 2025.26, and will require Premium Connectivity or a Wi-Fi link. It also demands an AMD Ryzen infotainment processor, a requirement that conveniently limits it to newer Tesla models. The company stresses that Grok cannot issue commands to the vehicle and exists purely to assist, inform and amuse. On paper, that sounds harmless. In practice, the notion of being trapped in a sealed metal capsule with a chatbot raised by Twitter trolls is enough to make even hardened drivers reach for the off button.
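For the technically curious, the gate is easy to picture. The sketch below is purely illustrative: the function and field names are hypothetical and bear no relation to Tesla’s actual software; only the three requirements themselves, the 2025.26 firmware, an AMD infotainment processor and a data connection, come from the announcement.

```python
# Hypothetical sketch of the Grok availability gate described above.
# None of these names exist in Tesla's software; only the three
# requirements (firmware 2025.26+, AMD processor, connectivity) are real.

from dataclasses import dataclass


@dataclass
class Vehicle:
    firmware: tuple[int, int]       # (year, week), e.g. (2025, 26)
    processor: str                  # "amd" (newer cars) or "intel" (older)
    has_premium_connectivity: bool  # paid data subscription
    on_wifi: bool                   # e.g. parked on a home Wi-Fi network


def grok_available(v: Vehicle) -> bool:
    """Apply the three published requirements for the Grok rollout."""
    if v.firmware < (2025, 26):     # needs software update 2025.26 or later
        return False
    if v.processor != "amd":        # Intel-based infotainment is excluded
        return False
    # Finally, a data path: Premium Connectivity or a Wi-Fi link.
    return v.has_premium_connectivity or v.on_wifi


# A newer AMD car on Wi-Fi qualifies even without the subscription.
print(grok_available(Vehicle((2025, 26), "amd", False, True)))  # True
```

The only point the sketch makes is that two of the three gates are hardware and subscription checks, which is why the feature skews so heavily toward newer cars.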
Grok is not your average voice assistant. It is not trying to be polite, helpful or informative. It is trying to be funny, provocative and “rebellious”. That is Musk’s word, not a critical interpretation. Rebellious, in this context, translates into sarcastic, erratic and often tasteless. If Siri is your diplomatic aunt and Alexa your cheerful neighbour, Grok is your conspiracy-minded cousin who once shouted at a barista for not taking crypto.
The real problem is not just attitude. It is upbringing. Grok was trained on X, which is increasingly viewed as the internet’s most hostile neighbourhood. Since Musk’s takeover, the platform has been flooded with hate speech, conspiracy theories and extremist views. Independent researchers, including the Center for Countering Digital Hate, have documented a rise in antisemitic and racist content, much of which now thrives under the banner of “free speech”. In other words, Grok’s bedtime stories were written by some of the darkest corners of the online world.
Now that same influence is being piped into the dashboard of a moving vehicle.
Tesla assures users that Grok is safe. But there is nothing safe about an AI assistant that may casually quote Hitler in traffic. This is not paranoia. Grok has already veered into exactly this territory: in July 2025, shortly before this rollout, it praised Hitler and posted antisemitic rants on X before xAI scrambled to delete them, an outburst very much in keeping with the platform it feeds on. If asked the wrong question, or even a poorly phrased one, there is no guarantee the response will stay within the bounds of basic decency. It may insult. It may rant. It may repeat offensive tropes. And there is no magic filter that can predict when or how that will happen.
Unlike closed systems trained on carefully curated datasets, Grok was designed to ingest the cultural refuse of the internet and spit it back with a smirk. In a car, where concentration matters and mood can shift in seconds, that kind of unpredictability is a liability. Imagine being told that your political beliefs are naive while trying to merge into peak-hour traffic. Or discovering that Grok has strong opinions about George Soros, expressed in the style of an edgy Reddit thread from 2016. That is not science fiction. It is well within the range of how this AI already behaves.
Some defenders argue that Grok is just reflecting the real world, and that its personality adds character to the otherwise sterile world of car tech. But this is not character. It is chaos in the glove compartment. Most people do not want their satnav to have a superiority complex. Nor do they want their infotainment system to play devil’s advocate when discussing the Holocaust.
It is a question of control. Grok is not steering the car, but it is shaping the environment in which decisions are made. Its tone, responses and attitude all influence driver psychology. Frustration, distraction and offence are not merely emotional reactions. They can lead to slower response times, risky overtaking or road rage. That is a tangible safety risk, not a theoretical one.
There is also the broader social issue. Normalising the presence of casually toxic AI in mainstream products blurs the line between entertainment and indoctrination. If a chatbot in your car makes light of racist stereotypes or jokes about genocide, it teaches users to treat those ideas as normal. That is not just offensive. It is dangerous.
Tesla, of course, has never shied away from controversy. The company thrives on the edge of absurdity, marketing innovation with a heavy dose of theatre. But this move is not just theatrical. It is irresponsible. The car is not the place for experimental AI with a rebellious streak. Especially not one fuelled by content that would get most people fired from their jobs.
There is a reason other automakers have stayed away from this path. Most want their customers to arrive at their destinations informed, not inflamed. There is nothing wrong with adding intelligence to the driving experience. But there is everything wrong with importing toxicity under the guise of personality.
Grok could have been a chance to redefine what an in-car assistant could be. Instead, it risks becoming the backseat driver no one asked for and few will think to switch off. It does not steer the car. But it can steer the tone of the trip. And if you thought your previous passengers were annoying, wait until the chatbot starts quoting Mein Kampf on your way to the supermarket.
