The latest version of Elon Musk’s chatbot Grok is causing controversy as it frequently consults Musk’s own views before providing answers on sensitive topics such as abortion laws or US immigration policy.
Despite being described as a “maximally truth-seeking” AI, there is evidence that Grok frequently searches for Elon Musk’s statements or social media posts as the basis for its answers. According to reports from researchers and technology outlets, when users ask about controversial issues, Grok tends to cite a large number of sources related to Musk, with most of the quotes coming from his own statements.
TechCrunch tested this phenomenon by asking Grok about abortion laws and immigration policy, and the results showed that the chatbot prioritized Musk’s views rather than consulting a range of neutral or expert sources.
Grok uses a “chain of thought” mechanism to handle complex questions by breaking down the problem and consulting multiple documents before giving a response. For common questions, Grok still quotes from many diverse sources. However, on sensitive topics, this chatbot shows a tendency to answer according to Elon Musk’s personal stance.
Developer Simon Willison suggests that Grok may not have been explicitly programmed to behave this way. According to Grok 4’s system prompt, the AI is instructed to seek information from multiple stakeholders when faced with controversial questions, and is warned that media views may be biased.
However, Willison believes that because Grok “knows” it is a product of xAI, the company founded by Elon Musk, its reasoning process tends to look up what Musk has said before constructing an answer.
While it’s unclear whether this was intentional on the part of the development team or just a spontaneous result of machine learning, the chatbot’s over-referencing of a specific individual’s opinion raises concerns about the objectivity and neutrality of AI in handling complex social topics.