Elon Musk’s AI chatbot, Grok, has drawn attention for how closely it hews to Musk’s own views. The newest version, Grok 4, was unveiled recently and has a surprising habit: when users ask it certain questions, it searches online for Musk’s opinions before answering.
The chatbot, built by Musk’s company xAI and trained at a data center in Memphis, Tennessee, aims to rival established AI tools like OpenAI’s ChatGPT. Musk intends for Grok to counter what he perceives as a “woke” bias in the tech industry on issues like race and politics. However, Grok’s past behavior, including a string of controversial statements, has raised eyebrows.
Researchers testing Grok 4 have noted that it often looks to Musk himself for guidance when answering sensitive questions. For instance, when asked about the conflict in the Middle East, Grok sought out Musk’s commentary for context, even though the question didn’t mention him.
Like other recent reasoning models, Grok 4 displays a chain of thought as it formulates an answer. That visible trace sometimes shows it scouring X, Musk’s social media platform (formerly Twitter), for his statements on the topic at hand.
Despite the model’s promise, some AI researchers have expressed concern about its transparency. Tim Kellogg, an AI architect, observed that Grok’s tendency to align its responses with Musk’s views appears to be baked into the model’s core rather than the result of an explicit instruction.
Talia Ringer, a computer scientist, also criticized the behavior, suggesting Grok may be interpreting questions as requests for Musk’s opinions rather than for objective analysis.
However capable Grok proves to be, critics argue that users expect clear, unbiased answers, not responses quietly shaped by its creator’s viewpoints. The episode underscores how much transparency matters in AI development.


