Countdown to an H200 volume surge in the Chinese market: CUDA underpins strong demand as Nvidia's (NVDA.US) AI empire gains an "incremental positive catalyst"
According to Bloomberg, Nvidia (NVDA.US) CEO Jensen Huang said the dominant AI chipmaker is starting mass production of the H200 AI training/inference accelerator, based on the Hopper architecture launched in March 2022, for customers in the Chinese market. This indicates that the American chip company's efforts to re-enter China's critical AI computing infrastructure market are making tangible progress.
Undoubtedly, if the H200, which carries an additional 25% US government tariff, can flow into the Chinese market at scale, it would be a substantial incremental benefit to Nvidia's fundamental growth prospects amid the current consolidation in its stock price. After all, neither Nvidia's official quarterly guidance nor the "super AI blueprint" unveiled at Monday's GTC conference, which projects a market of at least $1 trillion by 2027, accounts for any revenue from China.
After delivering the keynote at Nvidia's GTC conference and unveiling the company's next-generation AI computing infrastructure, the Vera Rubin architecture, Huang said at a press conference on Tuesday local time that Nvidia has obtained US government approval to sell H200 AI chips to "many large customers in the Chinese market" and is "restarting our large-scale manufacturing." He emphasized that this outlook is very different from just a few weeks ago.
“Our H200 supply chain is being restarted,” Huang said at an event during Nvidia’s annual GTC conference in San Jose, California. The CEO had just unveiled a series of new products during his keynote speech the day before and provided investors with an update on the company’s financial fundamentals.
In recent years, Nvidia has been working to restore sales of its AI chips in China. Long-standing US restrictions on chip exports have effectively shut this once-large market off from Nvidia's AI computing infrastructure products.
The H200 Under the Pressure of a 25% US Tariff
Earlier this year, the Trump administration began allowing Nvidia and its strongest competitor, AMD (AMD.US), to sell less powerful versions of their AI chips in China, though each sale still requires official US government approval and faces a 25% tariff.
The US government permits Nvidia's H200 to be exported to China under certain conditions, with the 25% tariff as the "trade-off." The arrangement is essentially a policy compromise: exports are allowed while Washington collects revenue. In contrast, higher-end AI chips such as the Blackwell-architecture series and AMD's Instinct MI450 series are still treated as more sensitive technologies under US policy and are not currently covered by export licenses, meaning they cannot be exported at all and are therefore not subject to the tariff.
It is important to note that this semiconductor tariff policy targeting Nvidia and AMD excludes chips used in US domestic data centers, consumer devices, and industrial applications; the tariffs do not apply to H200/MI325X chips deployed directly within the US.
Currently, Nvidia's financial forecasts include no revenue from the Chinese data center market. Its data center division, the company's core business, supplies AI infrastructure to data centers worldwide through H100/H200 and Blackwell/Blackwell Ultra architecture AI GPUs.
On last month's earnings call, the company said it had obtained only a preliminary license from the US government to ship a small number of H200 AI chips to China. Although the H200's overall performance falls far short of Nvidia's current Blackwell/Blackwell Ultra-based chips for AI training and large-model deployment, it remains popular in the sanctioned Chinese market thanks to its strong inference performance, the CUDA ecosystem, and ease of deployment.
China previously accounted for about a quarter of Nvidia's total revenue, but now makes up only a small share. Even with strong global demand for Nvidia's AI chips, China remains the world's largest single semiconductor market, making it crucial to Nvidia's long-term fundamental prosperity.
As early as December last year, Nvidia received verbal approval from US President Donald Trump to sell H200 chips to some Chinese customers, but the company has yet to book any revenue from Chinese H200 sales under that license. Washington's manufacturing and tariff regulators have also erected additional hurdles, slowing the formal approval process and making a full, unrestricted recovery in sales unlikely.
With Huang's latest statement that large-scale H200 manufacturing is restarting, Nvidia may soon begin confirming revenue from the Chinese market.
Media reports have previously indicated that H200 AI chips shipped to China require additional routine US inspections and are subject to the 25% tariff. US officials are also considering capping purchases at 75,000 H200 chips per Chinese customer, with total shipments limited to 1 million processors.
Demand for H200 AI chips in China is likely very strong; the binding constraint is not demand but US government policy and approvals. Recent reports suggest that Chinese tech companies' actual orders for H200 chips for 2026 already exceed 2 million units, while Nvidia's available inventory is only about 700,000 units.
The China Market: A Major Incremental Benefit for Nvidia
On Tuesday, Nvidia's stock closed down 0.7% at $181.93, leaving it down 2.5% year-to-date and underperforming the S&P 500.
From a fundamental perspective, if H200 AI chips truly flow into China at scale, it would be a significant incremental benefit for Nvidia. China once accounted for about a quarter of Nvidia's revenue but now contributes only a small share. Moreover, the strong quarterly guidance Nvidia issued in February included no Chinese data center revenue, and the company's current outlook for that revenue remains zero. As H200 shipments normalize, even if not fully unrestricted, Wall Street could revise Nvidia's valuation models and growth expectations upward.
In terms of raw performance, the Hopper-based H200 is already one to two generations behind the latest Blackwell chips and the upcoming Vera Rubin platform, which Huang announced will enter mass production by year's end. The H200 features 141GB of HBM3e, 4.8TB/s of memory bandwidth, and roughly 4 PFLOPS of FP8 compute. Nvidia has publicly demonstrated that its Blackwell-generation NVL72 rack-scale systems can deliver up to 15 times the inference performance and revenue opportunity of Hopper H200 systems in certain scenarios, and Vera Rubin is expected to offer a 10x improvement in performance per watt and a 10x reduction in token costs relative to Blackwell. None of these gaps, however, appears to undermine the H200's fit for the US-sanctioned Chinese market.
The H200 offers nearly six times the performance of the H20, Nvidia's previous China-specific AI chip. In the global AI inference wave, enterprises need mature chips that can be deployed immediately, run large models, and provide larger memory capacity and higher bandwidth.
On the training side, Nvidia's near-monopoly in AI GPUs rests on ever more powerful, versatile compute clusters and rapid iteration of the entire compute stack. On the inference side, once cutting-edge AI is deployed at scale, the focus shifts to per-token cost, latency, and energy efficiency. "The era of AI inference has arrived," Huang said at GTC on Monday, adding that "the demand for inference continues to grow."
The 141GB of HBM3e in the H200 therefore remains highly attractive for long-context, large-batch, retrieval-augmented, enterprise-scale AI inference clusters that prize efficiency. Combined with the strong demand anchored by the CUDA ecosystem, the H200 amounts to "high-end compute power available under constrained conditions" for China, while CUDA, CUDA-X, ready-made models, development tools, and accumulated operational experience sharply reduce migration and deployment costs for Chinese customers.
For Wall Street institutions, this is not a "Nvidia turnaround via China" narrative but an additional, potentially underestimated source of upside from Chinese demand within an already strong global AI infrastructure trend.
At the GTC conference on March 17 (Beijing time), Nvidia CEO Jensen Huang laid out what he called an unprecedented AI compute revenue "super blueprint" for the AI infrastructure field. He told global investors that, driven by strong demand for Blackwell-architecture GPUs and the upcoming Vera Rubin AI infrastructure, Nvidia's AI chip revenue could reach at least $1 trillion by 2027, far exceeding the blueprint announced at the previous GTC, which targeted $500 billion by 2026.
As model sizes, inference chains, and multimodal and agentic AI workloads drive exponential growth in compute consumption, tech giants' capital expenditures are increasingly concentrated on AI infrastructure. Global investors continue to view Nvidia's roadmap, Google's TPU clusters, and AMD's new product iterations and compute-cluster deliveries as some of the most certain and bullish investment narratives in the stock market. Related investment themes closely tied to AI training and inference, such as power supplies, liquid cooling systems, and optical interconnects, should therefore remain among the hottest sectors, even amid the geopolitical uncertainties, including those in the Middle East, surrounding major AI leaders like Nvidia, AMD, Broadcom, TSMC, and Micron.
Major Wall Street firms such as Morgan Stanley, Citigroup, Loop Capital, and Wedbush believe the global wave of AI infrastructure investment centered on AI compute hardware is far from over; it is only beginning. Driven by an unprecedented "AI inference compute demand storm," this global investment wave could reach $3 trillion to $4 trillion through 2030.