While it’s unclear whether this was intentional on the part of the development team or just a spontaneous result of machine learning, the chatbot’s over-referencing of a specific individual’s opinion raises concerns about the objectivity and neutrality of AI in handling complex social topics.
The latest version of Elon Musk’s chatbot Grok is causing controversy as it frequently consults Musk’s own views before providing answers on sensitive topics such as abortion laws or US immigration policy.
Despite being described as a “maximum truth” AI, there is evidence that Grok frequently searches for Elon Musk’s statements or social media posts as the basis for its answers. According to data from experts and technology sites, when users ask questions related to controversial issues, Grok tends to cite a large number of sources related to Musk, even most of the quotes come from his statements.
TechCrunch tested this phenomenon when Grok asked about abortion laws and immigration policy, and the results showed that the chatbot prioritized Musk’s views over consulting a variety of neutral or expert sources.
Grok uses a “chain of thought” mechanism to handle complex questions by breaking down the problem and consulting multiple documents before giving a response. For common questions, Grok still quotes from many diverse sources. However, on sensitive topics, this chatbot shows a tendency to answer according to Elon Musk’s personal stance.
Programmer Simon Willison suggests that Grok may not have been programmed to do so. According to Grok 4’s system code, the AI is instructed to seek information from multiple stakeholders when faced with controversial questions, and is warned that media views may be biased.
However, Willison believes that because Grok “knows” that it is a product of xAI – a company founded by Elon Musk – during the reasoning process, the system tends to look for what Elon Musk has said before constructing an answer.
While it’s unclear whether this was intentional on the part of the development team or just a spontaneous result of machine learning, the chatbot’s over-referencing of a specific individual’s opinion raises concerns about the objectivity and neutrality of AI in handling complex social topics.
News
The Tesla board of directors has started to take action because Elon Musk has ‘gone too far’!
Tesla investors are blaming CEO Elon Musk for getting too involved in politics. But Tesla’s troubles may have deeper roots….
Elon Musk uttered exactly 5 words when an important woman left him after 2 years of patience, revealing the dark corners of the empire worth more than 390 billion USD
The woman’s quiet passing is causing media speculation. Two years ago, Linda Yaccarino stepped into X (formerly Twitter) as the…
Eamonn Holmes has confessed that a distressing encounter with his boss once brought him to tears at work.
Eamonn Holmes broke down in tears after boss ‘threw mug at him’ GB News presenter Eamonn Holmes has opened up…
Megyn Kelly warns Pam Bondi is ‘coming to her doomsday’
Megyn Kelly warns Pam Bondi’s ‘days are numbered’ after bombshell Epstein memo Pam Bondi is catching criticism from all parts…
In the context of Good Morning America merging with ABC News, DeMarco Morgan and Eva Pilgrim announced that they will be leaving.
Good Morning America shakeup sees two hosts exit after controversial replacement GMA3 co-hosts DeMarco Morgan and Eva Pilgrim have announced…
”Blockbuster”? Tom Brady and Sofia Vergara are having a “summer romance”
Tom Brady and Sofia Vergara having ‘summer romance’ after NFL legend ‘asked to sit next to her’ on luxury voyage…
End of content
No more pages to load