Google’s Gemini transparency cut leaves enterprise developers ‘debugging blind’

Google's recent decision to hide the raw reasoning tokens of its flagship model, Gemini 2.5 Pro, has sparked a fierce backlash from developers who have been relying on that transparency to build and debug applications.

The change, which echoes a similar move by OpenAI, replaces the model's step-by-step reasoning with a simplified summary. The decision highlights a critical tension between creating a polished user experience and providing the observable, trustworthy tools that enterprises need.

As businesses integrate large language models (LLMs) into more complex and mission-critical systems, the debate over how much of the model’s internal workings should be exposed is becoming a defining issue for the industry.

A ‘fundamental downgrade’ in AI transparency

To solve complex problems, advanced AI models generate an internal monologue, also referred to as the "Chain of Thought" (CoT). This is a series of intermediate steps (e.g., a plan, a draft of code, a self-correction) that the model produces before arriving at its final answer. For example, it might reveal how the model is processing data, which pieces of information it is drawing on, and how it is evaluating its own code.

For developers, this reasoning trail often serves as an essential diagnostic and debugging tool. When a model provides an incorrect or unexpected output, the thought process reveals where its logic went astray. This visibility was also one of the key advantages of Gemini 2.5 Pro over OpenAI's o1 and o3.

In Google’s AI developer forum, users called the removal of this feature a “massive regression.” Without it, developers are left in the dark. As one user on the Google forum said, “I can’t accurately diagnose any issues if I can’t see the raw chain of thought like we used to.” Another described being forced to “guess” why the model failed, leading to “incredibly frustrating, repetitive loops trying to fix things.”

Beyond debugging, this transparency is crucial for building sophisticated AI systems. Developers rely on the CoT to fine-tune prompts and system instructions, which are the primary ways to steer a model’s behavior. The feature is especially important for creating agentic workflows, where the AI must execute a series of tasks. One developer noted, “The CoTs helped enormously in tuning agentic workflows correctly.” 

For enterprises, this move toward opacity can be problematic. Black-box AI models that hide their reasoning introduce significant risk, making it difficult to trust their outputs in high-stakes scenarios. This trend, started by OpenAI’s o-series reasoning models and now adopted by Google, creates a clear opening for open-source alternatives such as DeepSeek-R1 and QwQ-32B. 

Models that provide full access to their reasoning chains give enterprises more control and transparency over the model’s behavior. The decision for a CTO or AI lead is no longer just about which model has the highest benchmark scores. It is now a strategic choice between a top-performing but opaque model and a more transparent one that can be integrated with greater confidence.

Google’s response 

In response to the outcry, members of the Google team explained their rationale. Logan Kilpatrick, a senior product manager at Google DeepMind, clarified that the change was “purely cosmetic” and does not impact the model’s internal performance. He noted that for the consumer-facing Gemini app, hiding the lengthy thought process creates a cleaner user experience. “The % of people who will or do read thoughts in the Gemini app is very small,” he said.

For developers, the new summaries were intended as a first step toward programmatically accessing reasoning traces through the API, which wasn’t previously possible. 
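The summaries are surfaced through the API's thinking configuration rather than as raw tokens. Below is a minimal sketch, assuming the google-genai Python SDK and its `include_thoughts` option as documented at the time of writing; exact parameter and field names may differ across SDK versions.

```python
# Minimal sketch: request thought summaries alongside the answer via the Gemini API.
# Assumes the google-genai Python SDK; "YOUR_API_KEY" is a placeholder.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Plan a three-step migration from REST to gRPC.",
    config=types.GenerateContentConfig(
        # Returns summarized reasoning, not the raw chain of thought.
        thinking_config=types.ThinkingConfig(include_thoughts=True),
    ),
)

# Parts flagged as "thought" hold the summary; the rest hold the final answer.
for part in response.candidates[0].content.parts:
    if getattr(part, "thought", False):
        print("[thought summary]", part.text)
    else:
        print("[answer]", part.text)
```

Note that what comes back here is the condensed summary, not the verbatim reasoning trace developers say they need for debugging.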

The Google team acknowledged the value of raw thoughts for developers. “I hear that you all want raw thoughts, the value is clear, there are use cases that require them,” Kilpatrick wrote, adding that bringing the feature back to the developer-focused AI Studio is “something we can explore.” 

Google’s reaction to the developer backlash suggests a middle ground is possible, perhaps through a “developer mode” that re-enables raw thought access. The need for observability will only grow as AI models evolve into more autonomous agents that use tools and execute complex, multi-step plans. 

As Kilpatrick concluded in his remarks, “…I can easily imagine that raw thoughts becomes a critical requirement of all AI systems given the increasing complexity and need for observability + tracing.” 

Are reasoning tokens overrated?

However, experts suggest there are deeper dynamics at play than just user experience. Subbarao Kambhampati, an AI professor at Arizona State University, questions whether the “intermediate tokens” a reasoning model produces before the final answer can be used as a reliable guide for understanding how the model solves problems. A paper he recently co-authored argues that anthropomorphizing “intermediate tokens” as “reasoning traces” or “thoughts” can have dangerous implications. 

Models often wander into endless and unintelligible digressions in their reasoning process. Several experiments show that models trained on false reasoning traces and correct results can learn to solve problems just as well as models trained on well-curated reasoning traces. Moreover, the latest generation of reasoning models is trained through reinforcement learning algorithms that only verify the final result and don't evaluate the model's "reasoning trace."

“The fact that intermediate token sequences often reasonably look like better-formatted and spelled human scratch work… doesn’t tell us much about whether they are used for anywhere near the same purposes that humans use them for, let alone about whether they can be used as an interpretable window into what the LLM is ‘thinking,’ or as a reliable justification of the final answer,” the researchers write.

“Most users can’t make out anything from the volumes of the raw intermediate tokens that these models spew out,” Kambhampati told VentureBeat. “As we mention, DeepSeek R1 produces 30 pages of pseudo-English in solving a simple planning problem! A cynical explanation of why o1/o3 decided not to show the raw tokens originally was perhaps because they realized people will notice how incoherent they are!”

That said, Kambhampati suggests that summaries or post-facto explanations are likely to be more comprehensible to the end users. “The issue becomes to what extent they are actually indicative of the internal operations that LLMs went through,” he said. “For example, as a teacher, I might solve a new problem with many false starts and backtracks, but explain the solution in the way I think facilitates student comprehension.”

The decision to hide CoT also serves as a competitive moat. Raw reasoning traces are incredibly valuable training data. As Kambhampati notes, a competitor can use these traces to perform “distillation,” the process of training a smaller, cheaper model to mimic the capabilities of a more powerful one. Hiding the raw thoughts makes it much harder for rivals to copy a model’s secret sauce, a crucial advantage in a resource-intensive industry.
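To see why raw traces are such valuable training data, consider how distillation typically works in the open-source world: a smaller model is simply fine-tuned on text generated by a stronger one. The sketch below is a generic illustration using the Hugging Face transformers library and the small distilgpt2 checkpoint as the "student"; the sample trace is invented, and this is not any vendor's actual pipeline.

```python
# Generic sequence-level distillation sketch: fine-tune a small "student" model
# on traces harvested from a stronger "teacher". Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# In practice: thousands of prompt + raw reasoning trace + answer examples from the teacher.
teacher_traces = [
    "Q: 17 * 24? Thought: 17*24 = 17*20 + 17*4 = 340 + 68 = 408. Answer: 408.",
]

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
student = AutoModelForCausalLM.from_pretrained("distilgpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

student.train()
for text in teacher_traces:
    batch = tokenizer(text, return_tensors="pt")
    # Next-token cross-entropy on the teacher's trace teaches the student to imitate
    # the teacher's reasoning style and final answers.
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Without access to the raw traces, this kind of imitation has far less to imitate, which is precisely the competitive point.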

The debate over Chain of Thought is a preview of a much larger conversation about the future of AI. There is still a lot to learn about the internal workings of reasoning models, how we can leverage them, and how far model providers are willing to go to enable developers to access them.


