"Autonomous Driving Technology" Era Here? Nvidia Pushing Full Force, Uber and Lyft Both Surge

NVIDIA is accelerating its deployment in the autonomous driving sector by expanding collaborations with ride-hailing giants like Uber and Lyft, promoting the commercialization of Robotaxi services.

At the recent GTC conference, NVIDIA announced plans to expand partnerships with Uber, Lyft, and other companies. Boosted by the news, Uber’s stock surged more than 5% in recent trading, while Lyft’s rose about 3%. NVIDIA’s shares, after gaining 1.6% on Monday, slipped roughly 0.4%.

NVIDIA and Uber plan to launch autonomous vehicles equipped with NVIDIA software in Los Angeles and San Francisco in the first half of 2027, and to expand services to dozens of cities by the end of 2028.

Additionally, NVIDIA announced new or expanded collaborations with automakers such as Hyundai, BYD, Geely, Isuzu, and Nissan, whose stock prices also rose on their respective local exchanges.

NVIDIA’s Physical AI “ChatGPT Moment”

NVIDIA’s core weapon in autonomous driving is its latest open-source reasoning VLA (Vision-Language-Action) model, Alpamayo 1. The model aims to enable vehicles to “think” through unexpected situations and find solutions.

CEO Jensen Huang stated:

“The ChatGPT moment for physical AI has arrived — machines are beginning to understand, reason, and act in the real world. Robotaxis are among the first beneficiaries. Alpamayo brings reasoning capabilities to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments, and explain their driving decisions — laying the foundation for safe, scalable autonomous driving.”

The first vehicles equipped with NVIDIA technology are expected to hit the roads in the U.S. in the first quarter of this year, in Europe in the second quarter, and in Asia in the second half of the year.

Solving the “Long Tail” Challenge in Autonomous Driving

Unlike traditional models that directly map visual input to actions, the reasoning VLA model can decompose complex driving tasks into manageable sub-problems and display its reasoning process in an interpretable manner.

For example, when approaching an intersection, the system will think like a human: “I see a stop sign, vehicles are coming from the left, and pedestrians are crossing. I should slow down, come to a complete stop, wait for pedestrians to cross, and then proceed safely.” This capability is crucial for addressing rare and unpredictable “long tail” scenarios in autonomous driving.

Sarfraz Maredia, Head of Global Autonomous Mobility and Delivery at Uber, said: “Handling long tail and unpredictable driving scenarios is one of the decisive challenges in autonomous driving. Alpamayo creates exciting new opportunities for the industry.”

Beyond autonomous driving, NVIDIA is also expanding its footprint in broader physical AI. The company has released open-source models and tools such as the Nemotron family for agentic AI, the Cosmos platform for physical AI, Isaac GR00T for robotics, and Clara for biomedical applications.

Risk Disclaimer and Legal Notice

Market risks are present; please invest cautiously. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein are suitable for their particular circumstances. Investment is at your own risk.
