OpenAI marketed its newest model, GPT-5.2, as a powerhouse for high-level professional work. However, recent tests from the Guardian suggest the AI might have a credibility problem. The report found that the model relies on “Grokipedia,” an online encyclopedia created by Elon Musk’s AI company, xAI, to answer questions about highly sensitive and controversial topics.
The Guardian found that when users asked about the Iranian government’s ties to a specific telecommunications company, or about a historian involved in a famous Holocaust denial trial, GPT-5.2 drew on Grokipedia as a primary source. That is concerning because Grokipedia itself has a shaky reputation: since its launch, researchers have caught the AI-generated encyclopedia citing “problematic” sources, including neo-Nazi forums. Notably, the tests showed that ChatGPT did not lean on Grokipedia for other heated topics, such as media bias regarding Donald Trump.
OpenAI built GPT-5.2 for demanding “office” work, such as managing complex spreadsheets and conducting deep research. Because it is pitched as a tool for experts, its reliance on an unreliable source is raising eyebrows. Grokipedia has already drawn heavy criticism from U.S. researchers, who labeled its citations “questionable.”
When asked about these findings, OpenAI defended its model. The company explained that GPT-5.2 scans a wide variety of public websites to present a broad range of viewpoints, and said it applies safety filters to prevent the AI from sharing links associated with “high-severity harms.” Still, the incident shows that even the most advanced AI tools struggle to distinguish a high-quality source from one with a clear agenda.