Give Nokia $1 billion; Jen-Hsun Huang wants to earn $200 billion.
At GTC 2025, Jen-Hsun Huang dropped a bombshell: Nvidia will invest $1 billion in Nokia. Yes, that Nokia, the company whose Symbian phones were all the rage 20 years ago.
In his speech, Jen-Hsun Huang said that telecommunications networks are undergoing a major transformation from traditional architectures to AI-native systems, and that NVIDIA's investment will accelerate that shift. Through the investment, NVIDIA and Nokia will jointly build an AI platform for 6G networks, bringing AI into traditional RAN networks.
Under the deal, Nvidia subscribes to approximately 166 million new Nokia shares at $6.01 per share, giving it a stake of about 2.9% in Nokia.
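A quick back-of-the-envelope check shows the reported terms are internally consistent. The figures below come straight from the deal terms above; the implied post-issuance share count is my own derived estimate.

```python
# Sanity-checking the reported deal terms: ~166M new shares at $6.01 each.
shares = 166_000_000
price_per_share = 6.01  # USD

investment = shares * price_per_share
print(f"Total investment: ${investment / 1e9:.2f}B")  # ≈ $1.00B

# If 166M new shares amount to a 2.9% stake, the implied total share
# count after issuance is roughly:
post_issue_shares = shares / 0.029
print(f"Implied post-issue share count: {post_issue_shares / 1e9:.1f}B")
```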
When the partnership was announced, Nokia's stock surged 21%, its biggest jump since 2013.
01 What is AI-RAN?
RAN refers to Radio Access Network, while AI-RAN is a new type of network architecture that integrates AI computing capabilities directly into wireless base stations. Traditional RAN systems are primarily responsible for transmitting data between base stations and mobile devices, whereas AI-RAN adds edge computing and intelligent processing capabilities on top of that.
This lets base stations apply AI algorithms to optimize spectrum utilization and energy efficiency, improving overall network performance, while also leveraging idle RAN assets to host edge AI services and create new revenue streams for operators.
Operators can run AI applications directly at the base station site without having to send all data back to the central data center for processing, greatly reducing the burden on the network.
Jen-Hsun Huang gave an example: nearly 50% of ChatGPT users access it through mobile devices, and ChatGPT's mobile downloads exceed 40 million per month. In an era of explosive growth in AI applications, traditional RAN systems cannot cope with mobile networks dominated by generative AI and agents.
AI-RAN provides distributed AI inference capability at the edge, letting upcoming AI applications such as intelligent agents and chatbots respond more quickly. At the same time, AI-RAN lays the groundwork for integrated sensing and communication applications in the 6G era.
Jen-Hsun Huang cited a forecast from the analyst firm Omdia, which expects the RAN market to exceed 200 billion dollars by 2030, with the AI-RAN segment becoming the fastest-growing subfield.
Nokia President and CEO Justin Hotard stated in a joint announcement that this partnership will put AI data centers in everyone's pocket, achieving a fundamental redesign from 5G to 6G.
He specifically mentioned that Nokia is collaborating with three different types of companies: Nvidia, Dell, and T-Mobile. T-Mobile, as one of the first partners, will begin field testing of AI-RAN technology in 2026, focusing on validating performance and efficiency improvements. Hotard said the tests will provide valuable data for 6G innovation, helping operators build intelligent networks that adapt to AI demands.
Based on AI-RAN, NVIDIA's new product is called the Aerial RAN Computer Pro (ARC-Pro), an accelerated computing platform built for 6G. Its core hardware includes two NVIDIA processors: the Grace CPU and the Blackwell GPU.
The platform runs on NVIDIA CUDA, and the RAN software can be embedded directly into the CUDA technology stack. It can therefore handle traditional radio access network functions while simultaneously running mainstream AI applications. This is the core of how NVIDIA puts the "AI" in AI-RAN.
Given CUDA's long history, the platform's biggest advantage is actually its programmability. Beyond that, Jen-Hsun Huang announced that the Aerial software framework will be open-sourced, expected on GitHub under the Apache 2.0 license starting in December 2025.
The main difference between ARC-Pro and its predecessor ARC lies in the deployment location and application scenarios. The previous ARC was primarily used for centralized cloud RAN implementation, while ARC-Pro can be directly deployed on-site at the base station, which allows edge computing capabilities to be truly realized.
Ronnie Vasishta, head of NVIDIA's telecom business, said that in the past, RAN and AI required two separate sets of hardware to run. ARC-Pro, by contrast, can dynamically allocate computing resources based on network demand, prioritizing wireless access functions while running AI inference tasks during idle periods.
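The dynamic split described above amounts to strict-priority scheduling: RAN traffic is served first and AI inference soaks up whatever capacity is left. Here is a minimal sketch of that policy; the capacity units and function names are invented for illustration and do not reflect ARC-Pro's actual scheduler.

```python
# One ARC-Pro-style node with a fixed compute budget, in arbitrary units.
TOTAL_CAPACITY = 100.0

def allocate(ran_demand: float) -> dict:
    """Give RAN strict priority; AI inference gets the idle remainder."""
    ran = min(ran_demand, TOTAL_CAPACITY)  # RAN can never be starved
    return {"ran": ran, "ai_inference": TOTAL_CAPACITY - ran}

print(allocate(90.0))  # busy hour: almost everything goes to the radio
print(allocate(20.0))  # off-peak: most capacity is free for inference
```

The key property is that AI workloads are opportunistic: they monetize idle cycles without ever displacing the radio functions the operator is contractually bound to deliver.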
ARC-Pro also integrates NVIDIA's AI Aerial platform, which is a complete software stack that includes CUDA-accelerated RAN software, Aerial Omniverse digital twin tools, and the new Aerial Framework. The Aerial Framework can convert Python code into high-performance CUDA code to run on the ARC-Pro platform. In addition, the platform supports AI-driven neural network models for advanced channel estimation.
Jen-Hsun Huang said that telecommunications is the digital nervous system of the economy and security. Collaboration with Nokia and the telecommunications ecosystem will ignite this revolution, helping operators build intelligent, adaptive networks that define the next generation of global connectivity.
02 Looking back at 2025, Nvidia has indeed invested quite a bit.
On September 22, Nvidia and OpenAI reached a partnership, with Nvidia planning to gradually invest $100 billion in OpenAI, which will accelerate its infrastructure development.
Jen-Hsun Huang said that OpenAI sought investment from NVIDIA long ago, but the company's funds were limited at the time. He joked that they were too poor back then and should have given OpenAI all their money.
Jen-Hsun Huang believes that AI inference growth is not 100 times or 1000 times, but 1 billion times. Moreover, this collaboration is not limited to hardware but also includes software optimization to ensure OpenAI can efficiently utilize Nvidia's systems.
This may be because, after learning of OpenAI's collaboration with AMD, he worried that OpenAI would abandon CUDA. If the world's largest AI foundation-model maker stopped using CUDA, other large-model vendors could logically follow its lead.
Jen-Hsun Huang predicted on the BG2 podcast that OpenAI is likely to become the next trillion-dollar company, with growth rates that will set industry records. He countered the AI bubble theory, pointing out that global capital spending on AI infrastructure will reach $5 trillion annually.
This investment is also part of why OpenAI announced the completion of its corporate restructuring on October 29, splitting into two parts: a non-profit foundation and a for-profit company.
The non-profit foundation will legally control the for-profit arm and must take the public interest into account, while still being free to raise funds or acquire companies. The foundation will own 26% of the for-profit company's shares and hold a warrant; if the company keeps growing, the foundation can acquire additional shares.
In addition to OpenAI, NVIDIA also invested in Musk's xAI in 2025. The current round of financing for this company has increased to $20 billion. Approximately $7.5 billion was raised through equity, and up to $12.5 billion was raised through debt from special purpose vehicles (SPVs).
The way this special purpose vehicle works is that it uses the raised funds to purchase high-performance processors from Nvidia and then leases those processors to xAI.
These processors will go to xAI's Colossus 2 project. The first-generation Colossus, xAI's supercomputing data center in Memphis, Tennessee, deployed 100,000 NVIDIA H100 GPUs, making it one of the largest AI training clusters in the world. Colossus 2 plans to expand the GPU count to hundreds of thousands or more.
On September 18, Nvidia also announced a $5 billion investment in Intel and a deep strategic partnership, subscribing to newly issued Intel common stock at $23.28 per share. Once the transaction closes, Nvidia will hold approximately 4% of Intel's shares, making it an important strategic investor.
03 Of course, Jen-Hsun Huang also said a lot at this GTC.
For example, NVIDIA has launched multiple families of open-source AI models, including Nemotron for digital AI, Cosmos for physical AI, Isaac GR00T for robotics, and Clara for biomedical AI.
At the same time, Jen-Hsun Huang launched the DRIVE AGX Hyperion 10 autonomous driving development platform. This is a Level 4 autonomous driving platform that integrates NVIDIA computing chips and a complete sensor suite, including lidar, cameras, and radar.
NVIDIA also launched the Halos certification program, which is the industry's first system for evaluating and certifying the physical safety of AI, specifically aimed at autonomous vehicles and robotics technology.
The core of the Halos certification program is the Halos AI system, the industry's first such laboratory accredited by the ANSI National Accreditation Board. ANSI is the American National Standards Institute, and its accreditation carries high authority and credibility.
The system's task is to use NVIDIA's physical AI to verify whether autonomous driving systems meet the standards. AUMOVIO, Bosch, Nuro, and Wayve are the first members of the Halos AI system inspection laboratory.
To promote Level 4 autonomous driving, Nvidia has released a multimodal autonomous driving dataset sourced from 25 countries, which contains 1,700 hours of camera, radar, and lidar data.
Jen-Hsun Huang said that the value of this dataset lies in its diversity and scale, as it encompasses different road conditions, traffic rules, and driving cultures, providing a foundation for training more general autonomous driving systems.
However, Jen-Hsun Huang's blueprint is far from just this.
He announced a series of collaborations with U.S. government laboratories and leading companies on GTC, with the goal of building America's AI infrastructure. Jen-Hsun Huang said we are at the dawn of the AI industrial revolution, which will define the future of every industry and country.
The highlight of this collaboration is with the U.S. Department of Energy. NVIDIA is helping the Department build two supercomputing centers, one at Argonne National Laboratory and the other at Los Alamos National Laboratory.
Argonne National Laboratory will receive a supercomputer named Solstice, equipped with 100,000 NVIDIA Blackwell GPUs. What do 100,000 GPUs mean? This will be the largest AI supercomputer the Department of Energy has ever built. A second system, Equinox, equipped with 10,000 Blackwell GPUs, is expected to be operational by 2026. Together, the two systems will deliver 2,200 exaflops of AI computing performance.
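The figures above can be cross-checked with simple arithmetic: 110,000 GPUs delivering 2,200 exaflops combined implies about 20 petaflops of AI compute per GPU, which is in the right ballpark for Blackwell-class low-precision throughput. The per-GPU number is my derivation, not a figure from the announcement.

```python
# Derive the implied per-GPU AI throughput from the reported totals.
solstice_gpus = 100_000
equinox_gpus = 10_000
combined_exaflops = 2_200

per_gpu_petaflops = combined_exaflops * 1_000 / (solstice_gpus + equinox_gpus)
print(per_gpu_petaflops)  # 20.0 petaflops per GPU
```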
Paul Kearns, director of Argonne National Laboratory, said these systems will redefine performance, scalability, and scientific potential. What will all this computing power be used for? From materials science to climate modeling, from quantum computing to nuclear weapons simulation, all of it demands computing at this level.
In addition to government laboratories, Nvidia has also built an AI factory research center in Virginia. The uniqueness of this center lies in the fact that it is not just a data center, but an experimental site. Nvidia aims to test something called Omniverse DSX here, which is a blueprint for constructing gigawatt-level AI factories.
A typical data center may need only tens of megawatts of power; a gigawatt is roughly the output of a medium-sized nuclear power plant.
The core idea of this Omniverse DSX blueprint is to turn the AI factory into a self-learning system. The AI agents will continuously monitor power, cooling, and workload, automatically adjusting parameters to improve efficiency. For example, when the grid load is high, the system can automatically reduce power consumption or switch to energy storage battery power.
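The monitor-and-adjust loop described above can be sketched as a simple control policy. The thresholds, power caps, and function names below are invented assumptions for illustration; a real DSX-style controller would act on live grid, cooling, and workload telemetry.

```python
def control_step(grid_load: float, on_battery: bool) -> tuple[float, bool]:
    """Return (power cap as a fraction of max, whether to draw from batteries).

    grid_load is the grid's current utilization in [0, 1]; the thresholds
    here are arbitrary illustrative values.
    """
    if grid_load > 0.9:        # grid is stressed: shed load, use storage
        return 0.6, True
    if grid_load > 0.7:        # moderately loaded: trim power draw
        return 0.8, on_battery
    return 1.0, False          # grid is fine: full power, recharge batteries

cap, use_battery = control_step(grid_load=0.95, on_battery=False)
print(cap, use_battery)  # 0.6 True
```

At gigawatt scale, even the 20% trim in the middle branch corresponds to hundreds of megawatts, which is why this kind of automated demand response matters.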
This intelligent management is crucial for gigawatt-level facilities, as electricity and cooling costs can be astronomical.
The vision is grand, and Jen-Hsun Huang said it will take three years to realize. AI-RAN testing does not begin until 2026; self-driving cars based on DRIVE AGX Hyperion 10 will not be on the road until 2027; and the Department of Energy's supercomputers also come online in 2027.
NVIDIA holds the trump card of CUDA, the de facto standard for AI computing. From training to inference, from data centers to edge devices, from autonomous driving to biomedicine, NVIDIA's GPUs are everywhere. The investments and partnerships announced at this GTC further cement that position.