NVIDIA: Chinese competitors are making progress, and the H200 has yet to generate revenue in China.
NVIDIA’s latest earnings are out!
After announcing its fiscal year 2026 Q4 (November 2025 – January 2026) earnings, NVIDIA’s stock initially rose and then fell in after-hours trading on Thursday. Despite the impressive earnings report, many institutional investors still have questions about the company’s future prospects.
The financial data shows that NVIDIA’s total revenue for Q4 reached $68.1 billion, up 73% year-over-year and a new single-quarter record; core data center revenue was $62.3 billion, up 75% year-over-year and 22% quarter-over-quarter. For the full year, data center revenue totaled $193.7 billion, up 68% and nearly 13 times its level at the time of ChatGPT’s launch.
For fiscal year 2027 Q1, NVIDIA provided an optimistic outlook, expecting total revenue of $78 billion (plus or minus 2%), mainly driven by data center growth.
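As a quick sanity check on these figures, the guidance range and the implied year-ago revenue base can be worked out directly. The following is a minimal Python sketch using only the numbers quoted in the article; the helper name `implied_year_ago` is illustrative, not NVIDIA’s terminology.

```python
def implied_year_ago(current: float, yoy_growth: float) -> float:
    """Back out the year-ago figure (in $B) from a year-over-year growth rate."""
    return current / (1 + yoy_growth)

# Q4 FY2026 total revenue: $68.1B, up 73% YoY
q4_total = 68.1
print(round(implied_year_ago(q4_total, 0.73), 1))  # year-ago quarter ≈ 39.4 ($B)

# Q1 FY2027 guidance: $78B, plus or minus 2%
guidance_mid, tolerance = 78.0, 0.02
low = guidance_mid * (1 - tolerance)
high = guidance_mid * (1 + tolerance)
print(round(low, 2), round(high, 2))  # guidance range: 76.44 79.56 ($B)
```

So the "plus or minus 2%" guidance corresponds to a range of roughly $76.4 billion to $79.6 billion.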
Following the earnings release, CEO Jensen Huang, COO and CFO Colette Kress, and other executives participated in a conference call and answered analyst questions.
During the call, Huang emphasized the core logic of “computing power equals revenue,” noting that the industry has reached a turning point with AI agents. Products like Claude Code and OpenAI Codex have made intelligent agents practically applicable, with token generation directly translating into customer productivity and revenue, fueling exponential growth in computing demand. He stressed that AI-driven new computing paradigms “will not regress,” and the industry is undergoing its third major platform shift—from traditional CPUs to GPUs, from traditional machine learning to generative AI, and from generative AI to AI agents. Each shift has supported large-scale investments. Meanwhile, AI applications have already achieved commercial deployment, contributing over $6 billion in revenue in FY2026, with fields like autonomous taxis and industrial robots driving massive computing needs.
In product and technology development, NVIDIA’s Blackwell architecture is ramping capacity strongly, with Blackwell-based infrastructure already deployed by customers and consuming nearly 9 gigawatts of power; even six-year-old Ampere-based products remain sold out in the cloud. The new Vera Rubin platform has made substantial progress: initial samples have shipped to customers, with mass production expected in the second half of 2026. The platform comprises six new chips and can train MoE (mixture-of-experts) models with a quarter of the GPU count Blackwell requires, cut inference token costs by up to 10 times, and offer a more flexible, maintainable design; NVIDIA expects all cloud model builders to adopt it. Additionally, continuous CUDA software optimization has increased GB200 NVL72 performance fivefold in four months, while GB300 NVL72 has achieved up to 50 times higher performance per watt and 35 times lower cost per token in inference, solidifying NVIDIA’s position as the “king of inference.”
In ecosystem partnerships and strategic investments, NVIDIA is close to finalizing a deep collaboration with OpenAI. Huang called OpenAI “a once-in-a-generation company,” and the planned multi-billion-dollar AI infrastructure project is progressing steadily. The company also invested $10 billion in Anthropic, which will run training and inference on NVIDIA’s Grace Blackwell and Vera Rubin systems. NVIDIA has also formed a deep partnership with Meta, deploying millions of Blackwell and Rubin GPUs, and has signed a non-exclusive license agreement with Groq to access its low-latency inference technology. Huang stated that NVIDIA’s investments aim to ensure that everything from large language models to robotics is built on NVIDIA platforms, deepening the company’s industry moat through ecosystem investments and full-stack AI infrastructure.
Regarding customer mix, NVIDIA has achieved significant revenue diversification. The top five cloud providers and hyperscalers still account for over 50% of data center revenue, but non-hyperscale customers are becoming the main growth drivers, including AI model developers, enterprises, supercomputing centers, and sovereign AI customers. In FY2026, NVIDIA’s sovereign AI business more than doubled, exceeding $30 billion in revenue, with Canada, France, Singapore, and other countries as key drivers. Huang pointed out that the versatility of the CUDA ecosystem makes NVIDIA the only accelerated computing platform present in every cloud, supporting this customer diversity. Over 1.5 million AI models on Hugging Face run on CUDA, further strengthening the platform’s advantage.
In supply chain and capacity, despite tight supply of advanced components like high-bandwidth memory affecting the latest chip architectures and spilling over into gaming, NVIDIA has strategically locked in inventory and capacity, extending procurement commitments into 2027 to meet long-term demand. Kress noted that product iterations of Blackwell and Vera Rubin are seamlessly connected, and with scale and supply chain advantages, the company is confident in capturing future growth opportunities.
Addressing market concerns, Huang responded that space data centers are not yet economically viable but will improve gradually; GPUs have unique applications in space imaging, with Hopper becoming the world’s first GPU in space. On chip architecture, NVIDIA believes in minimizing die-to-die interconnects, enhancing performance through architectural co-design, and maintaining compatibility across generations so that software optimizations benefit all products. On gross margin sustainability, Huang emphasized that the key is achieving generational leaps in performance per watt through extreme co-design, creating value for customers. The company plans to launch new AI infrastructure solutions annually to maintain its technological lead.
Looking ahead, NVIDIA expects sequential revenue growth in every quarter of FY2027, exceeding its earlier forecast of $500 billion in cumulative Blackwell and Rubin revenue. Huang reiterated that by 2030, global data center capital expenditure could reach $3 trillion to $4 trillion. The core growth driver remains agentic AI, and the integration of physical AI with agent systems will be the next inflection point, opening new computing demand in manufacturing, robotics, and other fields. He emphasized that in the AI era, every company will become an “AI factory,” with computing capacity directly driving revenue growth. NVIDIA will continue to lead AI computing infrastructure through its full-stack technology, ecosystem, and supply chain advantages.
Regarding the Chinese market, NVIDIA’s earnings report shows the company sold approximately $60 million worth of H20 chips in China, while H200 has yet to generate revenue there. Kress also mentioned that competitors in China are “making progress,” including companies that have strengthened their capabilities through recent IPOs. She pointed out that these companies have the potential to disrupt the current global AI landscape and could impact NVIDIA’s competitive position worldwide.