American chip investment expert: Google TPU currently leads, but NVIDIA GPU has a longer-term advantage
Chip investment expert Gavin Baker analyzes in depth the differences between NVIDIA's GPUs (Hopper, Blackwell) and Google's TPUs, covering technology, performance, cost, and supply-chain collaboration. He points out that Google's TPU holds a temporary short-term advantage, but that in the long run NVIDIA's GPU ecosystem retains the stronger dominant position.
GPUs are a full-stack platform; TPUs are single-purpose ASICs
Baker argues that the divergence between the two AI accelerators starts with fundamental design philosophy. NVIDIA's GPUs, from Hopper and Blackwell to the future Rubin, are built as a full-stack platform: the GPU itself, NVLink interconnect technology, network cards, switches, and software layers such as CUDA and TensorRT are all handled by NVIDIA. Once an enterprise purchases GPUs, it gains a complete environment ready for training and inference, with no need to assemble its own networking or rewrite software.
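To make the "complete environment" point concrete, here is a minimal Python sketch (an editorial illustration, not from Baker's analysis): on any machine with NVIDIA's driver and CUDA stack installed, a stock framework such as PyTorch detects the GPU and runs a training step with no extra assembly.

```python
# Minimal sketch: with NVIDIA's driver + CUDA stack installed, an
# off-the-shelf framework trains with no extra assembly. Illustrative only.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"training on: {device}")  # "cuda" on any CUDA-capable NVIDIA GPU

model = nn.Linear(1024, 1024).to(device)  # toy model
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 1024, device=device)  # dummy batch
loss = model(x).pow(2).mean()             # dummy objective
loss.backward()                           # cuBLAS/CUDA kernels underneath
opt.step()
print(f"one training step done, loss={loss.item():.4f}")
```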
In contrast, Google's TPUs (v4, v5e, v6, v7) are essentially application-specific integrated circuits (ASICs) designed for particular AI computations. Google handles the front-end logic design, Broadcom handles the back-end, and TSMC manufactures the chips. The other essential pieces around the TPU, such as switches, network cards, and the software ecosystem, are integrated by Google itself, making the supply-chain collaboration far more complex than for GPUs.
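For contrast, the Google side of the stack looks like this in a minimal Python sketch (again an editorial illustration; it assumes a Cloud TPU VM with the jax[tpu] package installed): the same kind of workload reaches the ASIC through Google's own JAX/XLA toolchain rather than through CUDA.

```python
# Minimal sketch of Google's TPU software path (JAX + XLA). Illustrative
# only; assumes a Cloud TPU VM with jax[tpu] installed. On other hosts,
# jax.devices() returns CPU or GPU devices instead.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. [TpuDevice(id=0, ...), ...] on a TPU VM

@jax.jit  # XLA compiles this for the TPU's matrix units
def step(w, x):
    return jnp.tanh(x @ w).mean()

w = jnp.ones((1024, 1024))
x = jnp.ones((32, 1024))
print(step(w, x))  # runs on the first available accelerator
```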
Overall, the advantage of GPUs lies not in the performance of any single chip but in the completeness of the platform and ecosystem. This is also where the competitive gap between the two begins to widen.
Blackwell’s Performance Leap and the Greater Pressure on TPU v6/v7
Baker points out that between 2024 and 2025 the performance gap between GPUs and TPUs becomes even more pronounced. The step from Blackwell's GB200 to GB300 represents a major architectural leap: the shift to a liquid-cooled design, with a single cabinet drawing up to 130 kW, makes overall system complexity unprecedented. Large-scale deployments have been underway for only three or four months and are still at a very early stage.
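A quick back-of-the-envelope calculation puts the 130 kW figure in perspective (an editorial illustration; the 72-GPU rack size assumes the commonly cited GB200 NVL72 configuration, which the article does not specify):

```python
# Back-of-the-envelope rack power math, illustrative only.
# 130 kW per cabinet is the article's figure; 72 GPUs per rack is an
# assumption based on the commonly cited GB200 NVL72 configuration.
RACK_POWER_KW = 130
GPUS_PER_RACK = 72

print(f"~{RACK_POWER_KW / GPUS_PER_RACK:.2f} kW per GPU slot")  # ~1.81 kW

# Conventional air-cooled racks are often budgeted at roughly 10-20 kW,
# an order of magnitude below this density, hence the move to liquid cooling.
mwh_per_year = RACK_POWER_KW * 24 * 365 / 1000
print(f"~{mwh_per_year:,.0f} MWh per rack per year")  # ~1,139 MWh
```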
The next-generation GB300 can slot directly into existing GB200 cabinets, letting companies expand faster. xAI, thanks to its rapid data-center construction, is regarded as among the first customers able to fully exploit Blackwell's performance. Baker puts it with an analogy:
“If Hopper is like the most advanced aircraft of WWII, then TPU v6/v7 is like the F-4 Phantom, a couple of aircraft generations later. Blackwell is like the F-35, in a completely different performance class.”
The analogy underscores the generational gap between TPU v6/v7 and Blackwell hardware, and it highlights that Google's Gemini 3 still runs on TPU v6/v7 rather than Blackwell-class equipment. Google can clearly train frontier models like Gemini 3 on TPU v6/v7, but as the Blackwell series ships at scale, the performance difference between the two architectures will become increasingly obvious.
TPU Was Once the Low-Cost Leader, but GB300 Will Rewrite the Landscape
Baker says the TPU's key advantage in the past was the lowest training cost in the world, and Google did leverage that advantage to squeeze competitors' room for fundraising and operations.
However, Baker points out that once GB300 is deployed at scale, companies adopting it as their training platform, especially vertically integrated players like xAI that build their own data centers, will turn GB300's economics into a new competitive edge. If OpenAI can work through its compute bottlenecks and eventually field its own hardware, it may join the GB300 camp as well.
This means that once Google loses cost leadership, its earlier low-price strategy will be hard to sustain; over the long term, dominance over training costs shifts from the TPU to the GB300.
GPU Clusters Scale Out Faster; TPU Integration Is More Burdensome
As large models advance rapidly, demand for large-scale GPU collaboration grows, and this has been a key factor in GPUs overtaking TPUs in recent years. Baker explains that NVLink-connected GPU clusters can scale to 200,000–300,000 GPUs, allowing larger models to absorb bigger training budgets. The huge data centers xAI built at speed have also pushed NVIDIA to release optimized solutions earlier, accelerating the evolution of the entire GPU ecosystem.
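Why cluster size translates directly into training budget can be seen with some rough arithmetic (an editorial sketch; the per-GPU throughput and utilization figures below are hypothetical placeholders, not numbers from the article):

```python
# Rough compute-budget arithmetic, illustrative only. Throughput and
# utilization are hypothetical placeholders, not article figures.
gpus = 300_000               # upper end of the cluster size Baker cites
flops_per_gpu = 1e15         # hypothetical ~1 PFLOP/s sustained per GPU
utilization = 0.4            # hypothetical end-to-end training efficiency
seconds = 90 * 24 * 3600     # hypothetical 90-day training run

total_flops = gpus * flops_per_gpu * utilization * seconds
print(f"~{total_flops:.2e} FLOPs of training budget")  # ~9.3e26 FLOPs

# The point: budget scales linearly with cluster size, so a platform that
# can network 10x more accelerators supports ~10x larger training runs.
```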
In contrast, TPU’s overall engineering complexity is higher because Google has to manage the integration of switches and networks, and coordinate supply chains with Broadcom and TSMC.
GPUs Move Toward One Generation per Year; TPU Iteration Is Constrained by the Supply Chain
Baker notes that, to counter competition from ASICs, NVIDIA and AMD are accelerating their update cycles, with GPUs moving toward “one generation per year.” In the era of large models this is a highly advantageous pace, since the scaling-up of models shows no sign of pausing.
TPU iteration, by contrast, is more constrained. From v1 to v4 and then to v6, each generation took several years to mature. Future versions such as v8 and v9 will involve a supply chain spanning Google, Broadcom, TSMC, and other vendors, making development and iteration slower than for GPUs. Over the next three years, the advantage of the GPUs' faster iteration will therefore become increasingly evident.
Three of the giants clearly lean toward NVIDIA, leaving Google to hold up the TPU camp alone
Currently, the four leading model providers worldwide are OpenAI, Google (Gemini), Anthropic, and xAI, and the overall alignment increasingly favors NVIDIA.
Baker states that Anthropic has signed a $5 billion long-term procurement contract with NVIDIA, tying itself to the GPU camp. xAI is the largest early customer of Blackwell and has invested heavily in building GPU data centers. Meanwhile, OpenAI faces higher costs because it rents computing power externally, leading it to hope that the Stargate project can help solve its long-term compute bottleneck.
Among the four, Google is the only one relying extensively on TPUs, and it faces declining cost competitiveness and a slower iteration pace. This creates a “3-to-1” dynamic: OpenAI, Anthropic, and xAI are clustered around GPUs, while Google remains relatively isolated in the TPU camp.
This article, “Chip Investment Expert: Google TPU Currently Leading, but NVIDIA GPU Holds the Long-Term Advantage,” first appeared on Chain News ABMedia.