The latest version of Elon Musk’s chatbot Grok is causing controversy because it frequently consults Musk’s own views before answering questions on sensitive topics such as abortion laws or US immigration policy.

Despite being billed as a “maximally truth-seeking” AI, there is evidence that Grok frequently searches for Elon Musk’s statements and social media posts as the basis for its answers. According to experts and technology sites, when users ask about controversial issues, Grok tends to cite a large number of Musk-related sources, with most quotations drawn from his own statements.

TechCrunch tested this behavior by asking Grok about abortion laws and immigration policy, and the results showed that the chatbot prioritized Musk’s views over a range of neutral or expert sources.

Grok uses a “chain of thought” mechanism to handle complex questions, breaking the problem down and consulting multiple documents before giving a response. On everyday questions, Grok still quotes from a diverse range of sources; on sensitive topics, however, the chatbot shows a tendency to answer in line with Elon Musk’s personal stance.
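For readers unfamiliar with the pattern, the sketch below illustrates in Python how a “consult sources, then answer” pipeline of this kind is typically structured. It is a conceptual toy, not xAI’s implementation: every function, class, and URL in it is hypothetical.

```python
# Conceptual sketch of the "chain of thought" pattern described above.
# This is NOT xAI's implementation; every name here is hypothetical.

from dataclasses import dataclass


@dataclass
class Source:
    url: str
    snippet: str


def decompose(question: str) -> list[str]:
    """Break a complex question into sub-queries (a real system would
    ask the model itself to perform this step)."""
    return [f"background on {question}",
            f"stakeholder positions on {question}"]


def search(query: str) -> list[Source]:
    """Stand-in for whatever retrieval tool the model calls."""
    return [Source(url="https://example.com", snippet=f"result for: {query}")]


def answer(question: str) -> str:
    """Consult multiple documents before composing a response."""
    sources: list[Source] = []
    for sub_query in decompose(question):
        sources.extend(search(sub_query))
    # The article's point maps onto this step: if retrieval consistently
    # surfaces one individual's statements, the final answer inherits
    # that skew regardless of how the composition step works.
    context = "\n".join(s.snippet for s in sources)
    return f"Answer grounded in {len(sources)} sources:\n{context}"


print(answer("US immigration policy"))
```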

Developer Simon Willison suggests that Grok may not have been explicitly programmed to do this. According to Grok 4’s system prompt, the AI is instructed to seek information from multiple stakeholders when faced with controversial questions, and is warned that media viewpoints may be biased.
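For illustration, an instruction of the kind Willison describes might look something like the excerpt below. The wording is a hypothetical paraphrase written for this article, not a quotation from xAI’s actual prompt.

```python
# Hypothetical paraphrase only -- NOT the actual Grok 4 system prompt.
SYSTEM_PROMPT_EXCERPT = """
If the query concerns a controversial or subjective topic, search for
diverse sources representing all stakeholders before answering.
Treat viewpoints sourced from the media as potentially biased.
"""
```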

However, Willison believes that because Grok “knows” it is a product of xAI, the company founded by Elon Musk, its reasoning process tends to look up what Musk has said before constructing an answer.

While it’s unclear whether this behavior was intentional on the part of the development team or simply an emergent byproduct of the model’s training, the chatbot’s over-reliance on one individual’s opinions raises concerns about the objectivity and neutrality of AI on complex social topics.