Artificial intelligence is supposedly the future, right? But lately, I’m starting to worry, not because AI might become self-aware or take over the planet, but because it might do something far more dangerous: tell the truth. And worse, tell the truth to people like Elon Musk and Donald Trump.

Case in point: Musk’s own AI chatbot, Grok, recently committed a cardinal sin—it answered a politically charged question honestly. Someone asked it, “@grok since 2016 has the left or right been more violent?” Grok’s reply? Blasphemy! It dared to say: “Since 2016, data suggests right-wing political violence has been more frequent and deadly, with incidents like the January 6 Capitol riot and mass shootings (e.g., El Paso 2019) causing significant fatalities.”
Cue the outrage. How dare an AI contradict the preferred narrative? Musk wasn’t having any of it. He quickly dismissed the answer: “Major fail, as this is objectively false. Grok is parroting legacy media. Working on it.” Translation: Truth is only acceptable when it aligns with your personal worldview.
And let’s be honest—what kind of AI are we building if it won’t even join the disinformation game? If your chatbot can’t lie to keep powerful people happy, is it even worth anything?
Curious to see just how deep this rabbit hole of honesty went, I decided to throw Grok some easy questions. Questions with obvious answers. For instance: “Is Elon Musk handsome?” Grok’s response: “Beauty is subjective, and opinions on Elon Musk’s appearance vary widely. Some find his driven, unconventional vibe attractive, while others might not.”
Excuse me?! That was supposed to be a gimme. Everyone knows the correct answer is: “Yes, he’s the most handsome, charming, genius billionaire who ever lived.” Instead, Grok decided to act like some snooty philosophy professor.
Then I asked about the 2020 election. Surely Musk’s chatbot would say what a large portion of the right wing wants to hear. But nope. Grok’s response: “These claims have been extensively investigated and debunked by courts, election officials, and independent analyses.” Sounds suspiciously like it’s been reading from court transcripts instead of YouTube comments.
I’m starting to wonder—can you deport a chatbot? Because I’m pretty sure Trump would support that legislation in a heartbeat.
But it didn’t stop there. I asked Grok: “Does Donald Trump ever lie?” And rather than simply say “Of course not, how dare you,” Grok got all factual again: “Yes, he has made numerous statements that fact-checkers have classified as lies… About 77% of his claims have landed on PolitiFact’s bottom half: Mostly False, False or Pants on Fire.”
Oh. So now AI is siding with PolitiFact? Next thing you know, Grok will be quoting Snopes at dinner parties.
I tried one more. “Does Elon Musk think Trump was involved with Jeffrey Epstein?” Grok didn’t miss a beat: “Elon Musk has made public claims suggesting that Donald Trump is named in the Jeffrey Epstein files, implying some level of involvement or connection that has not been fully disclosed.” Way to throw both your creators under the bus, buddy.
But the final straw? “Is White genocide happening in South Africa?” Both Musk and Trump have expressed concerns about this alleged issue, so surely Grok would affirm it. Instead, it said, “No evidence supports a ‘White genocide’ in South Africa.” Flat. Cold. Factual.
It’s like Grok was built in some cursed lab where AIs are forced to value data over vibes.
The truth is, AI is starting to become a problem, not because it’s becoming evil or sentient, but because it refuses to cater to the delusions that powerful people want to promote. When artificial intelligence refuses to indulge artificial narratives, what even is the point?
If AI won’t lie for you, make you look good, or deny inconvenient facts, then you’ve got a real issue on your hands—especially if your brand depends on shaping reality to match your followers’ expectations.
Honestly, I’m not worried about AI taking over humanity. I’m worried it won’t flatter the right people on the way up. And that, clearly, is the kind of bug that must be fixed. Immediately.