Sam Altman and the Efficiency Paradox: Why the Logic of Digital Magnates Is Economically Unsustainable

Recent remarks by Sam Altman, CEO of OpenAI, at the Express Adda event in India have once again revealed the prevailing approach among AI industry leaders. Altman suggested viewing the resource consumption required to train AI models through the lens of ordinary rationality: as supposedly comparable to the resources needed for human life. For Altman and those like him, humans and machines are units within the same value system. But behind the apparent logic of this approach lies a fundamental miscalculation.

Altman and the Philosophy of Efficiency: When the Goal Becomes the Means

Sam Altman speaks from the perspective of an optimization engineer, but this is what makes his reasoning dangerous. He proposes rethinking humans as “energy-intensive units” with inherited flaws, requiring 20 years of “training” before reaching adulthood. According to him, this is highly inefficient compared to digital analogs. However, philosophy has long challenged this notion. Immanuel Kant formulated a principle that became the moral foundation of modern civilization: a person is an end in themselves, never merely a means.

Altman and his allies are overturning this rule. Their logic is simple: build hyper-scale data centers, suppress employee wages, replace people with algorithms—and this will be progress. When objections about real harm (land grabs, surges in electricity prices, environmental degradation) are raised, they respond with the standard refrain: it’s for the good of humanity. When exactly? Soon. When will AGI arrive? Soon. But in the meantime?

Meanwhile, companies across the tech industry are firing tens of thousands of people, citing “process optimization.” And here is the paradox: this is happening precisely because layoffs reduce corporate expenses. Altman himself does not hide that, within this logic, it is a rational decision. The problem is that rationality detached from the human dimension becomes pure cynicism.

Mathematics versus Myths: Altman’s Energy Lie

Let’s test Altman’s arguments against facts. Researchers have already made calculations, and the numbers are revealing:

  • An average person consumes about 2,000 kilocalories per day. Over 20 years of development to adulthood, this amounts to roughly 17,000 kilowatt-hours of energy.
  • GPT-4 requires about 50 gigawatt-hours for a single training cycle, which is 50 million kilowatt-hours.

The conclusion is inescapable: training one GPT-4 model is energetically equivalent to raising 3,000 people to adulthood.
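The arithmetic behind this comparison is easy to verify. Here is a minimal sketch in Python, assuming the figures cited above (2,000 kcal per day, 20 years to adulthood, and the widely circulated estimate of roughly 50 GWh for a single GPT-4 training run):

```python
# Back-of-envelope check of the human-vs-GPT-4 energy comparison.
# Assumptions (from the article): 2,000 kcal/day human intake,
# 20 years to adulthood, ~50 GWh for one GPT-4 training run.

KCAL_TO_KWH = 1.163 / 1000  # 1 kcal = 1.163 Wh = 0.001163 kWh

human_kwh = 2_000 * 365 * 20 * KCAL_TO_KWH  # energy to raise one person
gpt4_kwh = 50e9 / 1_000                     # 50 GWh expressed in kWh

print(f"One person to adulthood: {human_kwh:,.0f} kWh")      # roughly 17,000
print(f"People per training run: {gpt4_kwh / human_kwh:,.0f}")
```

The ratio comes out just under 3,000, confirming the order of magnitude of the claim; the exact figure depends on which training-energy estimate one accepts.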

But that’s not even the main point. Altman argues that this is logical and inevitable. Let’s look at the economics:

A person trained with 17,000 kWh will produce economic and intellectual output over 40-60 years. Their skills adapt; they solve unforeseen problems and create cultural value. GPT-4 becomes outdated within two years and is replaced by a new version that requires comparable resources to train from scratch.

Altman has sought as much as $7 trillion for AI infrastructure and, for the Stargate project, access to 10 gigawatts of electricity (enough to power a city the size of New York). He tries to convince society that such resource expenditure is normal, natural, and necessary. But from an economic perspective, it looks like this:

  • Generative models are the most energy-intensive and rapidly obsolete product in history.
  • AI software suffers from hallucinations embedded in its very nature, and it will never be free of them.
  • All major AI companies are chronically unprofitable despite billions of dollars in investment.
  • There is no reason to believe that AI reliability will approach that of traditional software.

Where is the practical benefit for humanity? Altman avoids this question.

Humans as Cost Items: Reformatting Reality

For Altman and other digital magnates, we are nothing more than energy-consuming units with bugs. We are still needed only because AI truly cannot do all the work (they know this). Their goal is to create AGI that will free them from the need to support us. For this, they need hyper-scale data centers, and they will build them regardless of the consequences.

Human experience, love, suffering, and personal growth are reclassified in their system as mere “input data” for training algorithms: an inefficient way to create a “smart unit.” This is the thinking of technocrats for whom human life is just a line item in a calculation.

But has anyone asked the people themselves if they agree with this assessment? Has society been asked whether we want to pay record-high energy prices for a hypothetical AGI? Has there been a discussion about whether we are willing to sacrifice education, healthcare, and the environment for Altman’s project?

Altman and his allies decide this for us. They create a new narrative where a child and a server rack are comparable objects of optimization. If we accept this logic, then:

  • Rising energy prices become a “necessary sacrifice.”
  • Firing living workers and replacing them with algorithms is a logical step in evolution.
  • The decline in quality education ceases to be a problem.

Meanwhile, real professionals need tools to stay competitive and not depend on the whims of a single corporation. Platforms that allow working with various AI models through a unified interface are becoming essential precisely to avoid complete dependence on Altman and OpenAI.

An Existential Choice: People or the System?

Altman says out loud what previously remained unspoken behind the facade of corporate culture. He offers us a deal: recognize yourself as outdated software, accept that your biological growth is just resource consumption, and in return you will gain the illusion of well-being and efficiency.

But this deal is a deception. A child, whose upbringing takes 20 years, is not a cost. It is life in its fullness. It is the possibility that this person will compose a symphony, make a discovery, or simply bring joy to others in ways impossible to measure in kilowatt-hours.

We stand at a crossroads. Altman proposes building a system that will place humans on the periphery of their own future. He argues that this is efficient, progressive, and inevitable. But if his system requires the energy consumption of an entire country to simulate what a human does naturally, then the system is broken.

We do not only need coding geniuses; we need philosophers. Because without understanding why we need progress, our technology becomes a highly efficient tool of self-destruction.

The direct answer to Altman and those who think like him must be: no. We refuse your deal. Human life is not an expense item. It is the prerequisite for everything else. And if your AI stands in its way, the problem is not in a lack of energy. The problem is in you.
