【AI+Computing Power】Altman: OpenAI Selling Tokens Like Utilities for Water and Electricity


OpenAI CEO Sam Altman recently attended BlackRock’s 2026 U.S. Infrastructure Summit and stated that AI has shifted from an “experimental” phase to a “significant economic utility” stage. In the future, computing power will become a utility like electricity or water, with metered payments.

Two Thresholds for AGI

Altman said that opinions vary on how close AGI (Artificial General Intelligence) is, with some saying it has already arrived and others saying it is still a year away, but the exact timing no longer matters much. He stressed that defining AGI does matter, and proposed two concrete thresholds. The first is the point at which the "cognitive capability about the world" inside data centers surpasses that of the humans outside them. He believes this could happen by the end of 2028, though he concedes, "Maybe I could be completely wrong."

The second threshold is reached when the CEOs of large companies, national leaders, and Nobel-winning scientists can no longer do their jobs without relying heavily on AI. He explained that no CEO can personally talk to every employee and customer, attend every meeting, or be an expert in every field. Instead, they will increasingly oversee a group of AIs, "providing supervision, deciding how much to trust these outputs, and guiding them." This threshold, running a large organization largely through AI, may take somewhat longer to reach, though perhaps not much longer.

Regarding AI agents, Altman believes their rapid development is one of the next big things. “When they can access nearly all internal information—internal documents, communications, code, customer data—the quality of answers and ideas, regardless of what you call them, will become increasingly better.”

On funding, Altman noted that one of the most challenging aspects is that infrastructure is very expensive, requiring large investments and long-term commitments. However, given the steep growth trajectory of the industry and rapid demand increase, extraordinary measures are necessary; otherwise, the industry remains capacity-constrained.

In-House Chips for Inference, Deployment by Year-End

He pointed out that the company's business revolves around selling tokens (the smallest units of text that generative models process), produced by larger or smaller AI models performing inference. "You may want to pay less, perhaps running it only when needed; it can also work extremely hard, spending tens of millions, hundreds of millions, or even billions of dollars on a single problem that is truly valuable," he said. He envisions that in the future this kind of "intelligence" will be a utility like electricity or water, purchased on a metered basis and applied to whatever people want to do, and he believes this demand will keep growing rapidly.

"If we don't have enough resources, we either can't sell it or prices will become very high, making it a privilege of the wealthy." He therefore thinks it is best to keep the market well supplied.
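Altman's "metered like water or electricity" analogy can be sketched as a simple usage-based bill, with token counts playing the role of a utility meter reading. The function and the per-million-token rates below are hypothetical and purely illustrative; they are not OpenAI's actual pricing.

```python
# Sketch of utility-style metered billing for AI tokens.
# Rates are hypothetical placeholders, analogous to a per-kWh tariff.

def metered_cost(input_tokens: int, output_tokens: int,
                 input_rate_per_million: float,
                 output_rate_per_million: float) -> float:
    """Return the bill in dollars for a metered amount of token usage,
    much like reading kilowatt-hours off an electricity meter."""
    return (input_tokens / 1_000_000 * input_rate_per_million
            + output_tokens / 1_000_000 * output_rate_per_million)

# Example: 2M input tokens and 0.5M output tokens at made-up rates.
bill = metered_cost(2_000_000, 500_000,
                    input_rate_per_million=1.00,
                    output_rate_per_million=4.00)
print(f"${bill:.2f}")  # 2.0 * $1.00 + 0.5 * $4.00 = $4.00
```

As with water or electricity, the consumer pays only for what flows through the meter; the provider's problem, as Altman notes, is keeping enough capacity online that the per-unit price stays low.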

Beyond using chips from NVIDIA and AMD, OpenAI is also developing chips in-house, and Altman said these are designed solely for inference. "They may not be the fastest inference chips, but they will be the cheapest. Under our constraints, the chips that deliver the most compute per watt will matter for all future inference needs." He openly admits this is a subjective bet, because "these are limited-function chips, but in an energy-constrained world, I believe what they do will be very important."

Altman expects to receive the first batch of chips within a few months and to deploy them at scale before the end of the year.
