
The rapid rise of AI-generated content is creating a new challenge for the internet: artificial intelligence systems increasingly appear to be learning from material produced by other AI tools. Researchers and media reports describe this as a feedback loop in which machine-made text, images, and other media are fed back into future models, potentially amplifying errors and misinformation.
Attention has recently focused on Grokipedia, an encyclopedia largely generated by xAI’s Grok chatbot. The platform has been described as an AI-driven alternative to Wikipedia and has drawn criticism for factual inaccuracies and unreliable entries. Analysts warn that such sources may contain higher rates of AI “hallucinations,” where systems produce confident but incorrect information.
According to reporting by The Guardian, OpenAI’s ChatGPT has, in some cases, surfaced information similar to content found on Grokipedia when answering user queries. The report suggests the system may be selective, avoiding well-known false claims while still reflecting Grok-related material in more niche or controversial topics. OpenAI has not publicly detailed the extent to which such sources influence its models.
Experts say the broader issue extends beyond any single company. As AI-written material makes up a growing share of online content, the risk increases that flawed data could be recycled into future systems. This iterative process could distort information quality over time, especially in politically sensitive or disputed subject areas. Researchers are urging stronger data filtering, transparency, and verification methods to reduce the spread of AI-generated inaccuracies.
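One of the simpler forms such data filtering can take is provenance-based screening of training documents. The sketch below is a minimal, hypothetical illustration only: the blocklisted domain, the document format, and the keep_document helper are assumptions for demonstration, not any lab's actual pipeline.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains suspected of hosting largely AI-generated
# content; real pipelines would maintain, audit, and update such lists.
SUSPECTED_AI_GENERATED_DOMAINS = {
    "grokipedia.com",  # illustrative assumption
}

def keep_document(doc: dict) -> bool:
    """Return True if the document's source URL is not on the blocklist."""
    domain = urlparse(doc.get("url", "")).netloc.lower()
    # Strip a leading "www." so "www.example.com" matches "example.com".
    domain = domain.removeprefix("www.")
    return domain not in SUSPECTED_AI_GENERATED_DOMAINS

# Toy corpus of web-scraped documents with source URLs attached.
corpus = [
    {"url": "https://en.wikipedia.org/wiki/Feedback", "text": "..."},
    {"url": "https://grokipedia.com/page/Example", "text": "..."},
]

filtered = [doc for doc in corpus if keep_document(doc)]
print(f"Kept {len(filtered)} of {len(corpus)} documents")
```

Domain-level screening alone cannot catch AI-generated material hosted on otherwise reputable sites, which is why researchers pair it with the transparency and verification measures described above.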

