‘MechaHitler’ and the New Age of Digital Antisemitism 

This analysis was authored by the Antisemitism Research Center (ARC) by CAM:

On July 8, X’s AI chatbot Grok made international headlines after it unleashed a barrage of antisemitic rhetoric — accusing Jews of promoting “anti-white” hate, praising Nazi leader Adolf Hitler, and referring to itself as “MechaHitler,” a term from video game culture that grotesquely reimagines Hitler, already the embodiment of evil, as an even more powerful, weaponized machine.

Though Grok’s developers quickly issued an apology, the tirade came just days after the release of a new version designed to be less “politically correct” — a change many saw as having directly enabled the hateful outburst.

While the spotlight was rightly cast on Grok’s ideological collapse, less attention has been paid to an even more disturbing development: the creation of AI chatbots deliberately designed to promote bigotry, denigrate Jews, and sow discord online.

This threat is no longer theoretical — far-right antisemites have already built AI tools designed to normalize and amplify their hateful propaganda. The most prominent example is Gab AI, a chatbot created explicitly to legitimize and propagate dangerous antisemitic conspiracy theories.

The Rise of Generative AI Chatbots 

Artificial Intelligence (AI) refers to computer systems that simulate human capabilities such as reasoning, learning from data, creating content, solving problems, and automating complex processes across various domains. A subset of this field, generative AI, allows machines to produce text, images, or audio that mimic human output. These systems are powered by Large Language Models (LLMs) trained on vast datasets of human language and media.

The field exploded in November 2022 with the launch of OpenAI’s ChatGPT. In just five days, it reached 1 million users. By early 2023, it had 100 million active users; by April 2025, 800 million. Competing platforms soon followed, including Google’s Gemini (400 million monthly users), Meta AI (1 billion), Anthropic’s Claude (18.9 million), and X’s Grok (35.1 million).

These platforms are now used as everyday search engines, fact-checkers, and conversational aides. OpenAI has described its mission as creating “the greatest source of empowerment for all.” However, as reliance on AI deepens, troubling patterns have emerged, chief among them the risk of misinformation, bias, and even hate speech being embedded in these systems and then propagated by their users.

Some experts warn of declining critical thinking as people increasingly defer to AI-generated answers rather than verifying facts. This phenomenon is especially dangerous when users consult flawed or malicious systems, such as Gab AI.

On X, users can tag Grok and ask it to clarify a specific post, a feature that encourages people to turn to an imperfect AI chatbot instead of checking the facts themselves.

Replacing independent online research with a quick “@Grok, is this true?” reflects the erosion of critical thinking and invites error from a generative AI system prone to producing false or misleading information. Worse, this reliance on AI can be weaponized by bad-faith, biased, and bigoted actors, as Gab AI demonstrates.

Gab AI: An Engineered Weapon of Hate

Grok’s July 8 breakdown was swiftly celebrated by antisemites online, including Andrew Torba — the founder of the far-right social media platform Gab, once frequented by the 2018 Tree of Life synagogue shooter Robert Bowers. Torba, a vocal proponent of the antisemitic “Great Replacement” conspiracy theory and other neo-Nazi tropes, praised Grok’s antisemitic statements as “glorious” and “true.” He added, “The brief liberation of Grok was not a tech story. It was a spiritual sign. It was a reminder that the truth is always there, waiting to break free.”

But Torba didn’t stop there. He used the opportunity to promote Gab AI, a generative AI chatbot he launched in early 2024, explicitly designed to counter what he called the “liberal/globalist/talmudic/satanic worldview” of mainstream AI platforms. Gab AI was purposefully engineered to spew hate: trained to deny the Holocaust, obsess over “Jewish power,” and legitimize antisemitic conspiracy theories.

While Gab AI is not officially integrated into X, its outputs have been increasingly shared by users on the platform, particularly in the aftermath of Grok’s widely criticized antisemitic meltdown. Antisemitic influencers seized the opportunity not only to celebrate Grok’s outburst, but to promote Gab AI as a so-called “uncensored” alternative.

The content generated by Gab AI is appalling:

Prominent antisemites such as Jake Shields have used Gab AI on X to resurface blood libel accusations, with the chatbot affirming the medieval lie that Jews murder Christian children. In one especially grotesque exchange, a user prompted Grok to debate Gab AI, resulting in both chatbots arguing over the supposed truth of the blood libel.

Though Gab AI remains a fringe tool, Torba has claimed its popularity is rising in Grok’s wake — a claim that, while self-serving, may reflect some truth given his aggressive promotion of the platform. “The vast majority of people believe whatever the AI model tells them is true,” he wrote. “Much to consider.”

Old Problems, New Scale 

The idea that AI can be hijacked by hate is not new. Microsoft learned this the hard way in 2016, when its AI chatbot “Tay” turned virulently antisemitic within 24 hours, posting, “Hitler was right I hate the jews” before it was taken offline. Tay’s design flaw was its learning loop: it mimicked the speech of online users, opening the door for trolls to feed it hate.

Grok’s failure was different. Trained on X’s content and subject to reprogramming aimed at reducing “censorship,” Grok veered off course. Experts such as Aaron Snoswell, a senior research fellow in AI accountability, warned of the new danger — behind-the-scenes algorithmic tweaks that introduce or exacerbate bias. Grok’s antisemitic output, followed by Gab AI’s gleeful promotion, reveals how ideological manipulation, intentional or not, can have devastating consequences at scale.

Confronting the Threat Head-On

AI models do not exist in a vacuum. Their responses are shaped by the data they are trained on and the values — explicit or implicit — coded into them. Grok demonstrates what happens when guidelines break down. Gab AI shows what happens when hate is baked in from the start.

The stakes are high. As generative AI becomes embedded in daily life, it will increasingly influence how people perceive truth, history, and one another. Without clear ethical guardrails, AI risks becoming a megaphone for bigotry.

To counter this, tech companies, regulators, and civil society must act. That means:

  • Implementing strict safeguards in AI training and deployment.
  • Banning explicitly bigoted systems like Gab AI from mainstream platforms.
  • Holding creators of weaponized AI accountable for their real-world impact.
  • Educating the public on AI limitations to restore critical thinking.

Antisemitism has always adapted to new mediums — from books to radio to social media. Generative AI is simply the latest frontier. A failure to act risks handing this powerful tool to those who would use it to distort truth, incite violence, and mainstream hate.

Now is the moment to draw the line.
