
Google’s portrayal of its Gemini AI as an environmentally efficient tool is coming under fire after experts challenged the company’s claim that each query consumes only “five drops of water.” The Verge reports that Shaolei Ren, a University of California, Riverside researcher and co-author of the paper Making AI Less “Thirsty”, disputes Google’s narrow definition of water usage. Ren’s research previously estimated that training OpenAI’s GPT-3 model consumed 700,000 liters of water and that a typical ChatGPT session might use nearly a pint, figures far higher than Google’s estimate. Critics argue that Google’s figure excludes the water consumed by the power plants that supply electricity to its data centers, a major portion of AI’s hidden environmental cost. “Google’s five drops per query is misleading—it doesn’t reflect the full picture,” said sustainability analyst Alex de Vries-Gao.
Experts are also questioning Google’s approach to reporting AI-related carbon emissions. By using market-based figures, which subtract emissions covered by purchased carbon credits and renewable energy certificates, Google can report totals far lower than the emissions its data centers actually draw from local grids, leaving the true environmental impact obscured. This controversy highlights growing skepticism about how tech giants report the sustainability of AI technologies. As AI becomes more integral to global infrastructure, pressure is mounting for transparent, standardized reporting of its energy and resource consumption so the industry confronts its long-term environmental consequences.

