NVIDIA (NVDA) becomes the 'operating system' of the AI factory: taking the lead in data center network innovation.
Source: TokenPost
Original Title: NVIDIA (NVDA) becomes the 'operating system' of the AI factory… Leading innovation in data center networks
Original Link:

NVIDIA (NVDA) is moving beyond high-performance GPUs, focusing its innovation on data center network architecture and setting a new standard for AI factories. In this distributed computing structure, the network is treated as the operating system, raising both performance and energy efficiency.
Gilad Shainer, Senior Vice President of Marketing at NVIDIA, emphasized in a recent interview: "AI workloads are inherently distributed, and therefore require precise network coordination so that thousands of accelerators can operate as a single computing engine." To optimize overall computing speed, the fabric must deliver the same data to every GPU at the same rate, with no stragglers introduced by network latency.
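To make the "single computing engine" idea concrete, here is a minimal sketch (not from the article) of the synchronous gradient all-reduce at the heart of distributed training, assuming PyTorch with the NCCL backend; the tensor size and launch command are illustrative.

```python
import os
import torch
import torch.distributed as dist

def sync_gradients(local_grad: torch.Tensor) -> torch.Tensor:
    # Every rank contributes its gradient shard; all_reduce makes the
    # whole cluster behave like one computing engine. The step completes
    # only when the slowest rank and slowest link have delivered their
    # data, which is why uniform network behavior matters so much.
    dist.all_reduce(local_grad, op=dist.ReduceOp.SUM)
    local_grad /= dist.get_world_size()
    return local_grad

if __name__ == "__main__":
    # Launched with, e.g.: torchrun --nproc_per_node=8 allreduce_sketch.py
    dist.init_process_group(backend="nccl")  # NCCL rides on NVLink/InfiniBand/Ethernet
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    grad = torch.randn(1024, device="cuda")  # stand-in for a real gradient shard
    grad = sync_gradients(grad)
    dist.destroy_process_group()
```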
In this distributed processing structure, the network is no longer a mere means of connection; it acts in effect as the operating system (OS). Shainer noted that it is not the individual GPU ASICs (application-specific integrated circuits) alone, but the organic integration of these accelerators through network design, that has become the most important factor determining the performance of AI factories.
NVIDIA weighs energy efficiency alongside raw performance, adopting a co-design approach that spans hardware, software, and frameworks so that the network is engineered as a whole. Only when every computing element, from the model framework down to the physical connections, is designed together can token processing speed, execution efficiency, and predictability be maximized, Shainer emphasized.
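Why predictability matters can be shown with a back-of-envelope model. The numbers below are assumed for illustration, not NVIDIA figures: in a synchronous step, throughput is gated by the slowest of thousands of ranks, so even a small latency tail on a few links drags down the whole factory.

```python
import random

WORLD_SIZE = 1024            # assumed number of accelerators
COMPUTE_MS = 100.0           # assumed per-rank compute time per step
COMM_MS = 20.0               # assumed ideal collective time
TOKENS_PER_STEP = 4_000_000  # assumed global batch, in tokens

def step_time(jitter_ms: float) -> float:
    # Each rank finishes at compute + comm + its own network jitter;
    # the synchronous step ends only when the last rank finishes.
    return max(COMPUTE_MS + COMM_MS + random.uniform(0.0, jitter_ms)
               for _ in range(WORLD_SIZE))

for jitter in (0.0, 5.0, 50.0):
    t = step_time(jitter)
    print(f"jitter <= {jitter:4.1f} ms -> step {t:6.1f} ms, "
          f"{TOKENS_PER_STEP / t * 1000 / 1e6:6.2f}M tokens/s")
```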
High-density design in particular is NVIDIA's differentiating advantage. While traditional data centers tend to avoid aggressive densification, NVIDIA takes the opposite approach: packing high-performance GPU ASICs tightly into racks and achieving both scalability and energy efficiency through low-power, copper-based connections. When scaling out further, technologies such as 'Spectrum-X Ethernet Photonics' and 'Quantum-X InfiniBand' with co-packaged optics further reduce the energy consumed by data movement.
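The copper-versus-optics tradeoff comes down to energy per bit. The figures below are assumed order-of-magnitude values for illustration, not vendor specifications, but they show why link technology dominates the power budget of data movement at AI-factory scale.

```python
PJ = 1e-12  # joules per picojoule

# Assumed, order-of-magnitude energy costs per bit moved (not vendor specs).
links = {
    "in-rack copper (short reach)":  1.0,   # assumed pJ/bit
    "pluggable optical transceiver": 20.0,  # assumed pJ/bit
    "co-packaged optics":            5.0,   # assumed pJ/bit
}

BITS_PER_SEC = 800e9   # one 800 Gb/s port, fully utilized
PORTS = 10_000         # assumed port count for a large fabric

for name, pj_per_bit in links.items():
    watts = pj_per_bit * PJ * BITS_PER_SEC * PORTS
    print(f"{name:32s} ~{watts / 1e3:8.1f} kW for {PORTS} x 800G ports")
```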
This strategy goes beyond a mere hardware upgrade; it makes plain NVIDIA's ambition to establish a new paradigm for the era of AI-centric computing: the large-scale data center as a supercomputer. Dominance in AI factory infrastructure is shifting from 'the ability to manufacture GPUs' to 'the ability to turn an entire data center into one organic computing unit.' The next phase of the AI boom looks set to begin with this network-centered computing architecture.