The Real Price of Progress: Why Sam Altman's Arguments About "Humanity's Benefit" Don't Hold Up to Scrutiny
In recent years, OpenAI CEO Sam Altman has actively shaped the public narrative around the need for massive investment in artificial intelligence. His claim that the energy spent training AI models can be weighed directly against the energy it takes to feed and raise a human has become a vivid illustration of how tech industry leaders are rethinking the very concept of human value. The question is not whether his calculations are correct. The question is what philosophical shift is hidden behind this rhetoric.
From Philosophical Tradition to Industrial Cynicism
Immanuel Kant formulated one of the fundamental principles of modern morality: a person is an end in themselves, not a means to other ends. This principle underpins the constitutions of democratic countries and international humanitarian law. Altman and his supporters propose a completely different view.
For Sam Altman and similar leaders in the tech sector, humanity has become a variable in an optimization equation. People are evaluated by utility coefficients, measured in terms of energy efficiency, reclassified as temporarily necessary resources. This is not just a business strategy — it’s a redefinition of the very axioms of our society.
When Sam Altman talks about the need to create thousands of hyper-scale data centers, he speaks in the language of inevitability. “This is necessary for the good of humanity,” he repeats, as do his like-minded colleagues such as Elon Musk and other representatives of the tech corporate elite. But this phrase conceals a logical paradox: those who profit from the infrastructure declare their consumers as indirect beneficiaries.
The Energy Paradox: Training versus Upbringing
Let’s set aside morality and turn to the simple mathematics Altman is so fond of. Researchers have already run the numbers:
Raising a person to adulthood (20 years of education) requires about 17,000 kWh of energy at an average consumption of 2,000 kcal per day.
Training a GPT-4 model consumed about 50,000,000 kWh of electricity.
Result: one training run of a single model consumed roughly as much energy as raising 3,000 people to adulthood (50,000,000 ÷ 17,000 ≈ 2,940). But this is just the beginning of the paradox.
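The arithmetic above is easy to verify. A minimal sketch in Python, using the article's own input figures (2,000 kcal per day, 20 years, 50,000,000 kWh for a GPT-4 training run), which are rough estimates rather than measured values:

```python
# Back-of-the-envelope check of the article's energy comparison.
# All input figures are the article's own estimates.

KCAL_TO_KWH = 4184 / 3.6e6  # 1 kcal = 4184 J; 1 kWh = 3.6 MJ

# Raising a person: 2,000 kcal/day over 20 years
human_kwh = 2000 * KCAL_TO_KWH * 365 * 20
print(f"Raising one person: ~{human_kwh:,.0f} kWh")  # about 17,000 kWh

# GPT-4 training run (the article's estimate)
gpt4_kwh = 50_000_000

ratio = gpt4_kwh / human_kwh
print(f"One training run ~= raising {ratio:,.0f} people")  # roughly 2,900-3,000
```

Note that this counts only food energy on the human side; adding housing, transport, and schooling infrastructure would raise the human figure and shrink the ratio, which only reinforces how rough the comparison is.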
A person who receives 20 years of education generates intellectual and economic returns over the next 40-60 years of life. Their knowledge accumulates, their experience is passed on, their creativity creates new values. GPT-4 becomes outdated in less than two years. A new model, new training, new resources are needed. The cycle repeats, energy is burned, the planet warms.
Sam Altman demands $7 trillion and 10 GW of electrical power (a sustained draw on the order of the electricity demand of a city the size of New York) for the Stargate project. He demands that we perceive mass resource burning as “efficiency.” But from an economic perspective, we are looking at the most energy-intensive and rapidly obsolescing industry in the history of civilization. This is not an investment in the future. It is burning the future in the furnace of corporate ambitions.
The Mechanism of Redefining Human Value
Why does Sam Altman need this rhetoric of reclassifying people as productive assets? The answer is practical: if society accepts the premise that a data center is logically equivalent to an infant, the rest of the argument follows on its own.
This is a classic rhetorical move: reframing the problem as a solution, and rewriting victims as beneficiaries. Meanwhile, real specialists seek ways to remain independent and competitive outside the monopoly of a single corporation. They look for tools that provide control, not the illusion of progress.
Cracks in the Facade: Internal Contradictions of the AI Industry
The arguments coming from Altman contain several critical weaknesses:
First, claims about AI efficiency have been repeatedly debunked by practice. Generative models suffer from a fundamental problem — hallucinations, confidently generating false information. This is not a technical issue to be solved; it is embedded in the architecture of transformers themselves. Professionals using AI know: its outputs require human verification. Efficiency here is a myth.
Second, AI companies are chronically unprofitable. OpenAI requires continuous new capital injections; Microsoft bets on integration, but profitability remains a mystery. If the technology were truly so efficient, why is the business model not viable without a constant inflow of investor money?
Third, there is no reason to believe that AI-based systems will ever approach the reliability of traditional software. This means critical systems (medicine, transportation, infrastructure) will not fully transition to AI. And therefore, humans will remain necessary not because they are inherently valuable, but because the system will collapse without them.
The Existential Choice Facing Civilization
History shows that corporate logic reclassifies people according to current economic interests. Slave systems declared humans property. Imperial powers treated colonial peoples as inferior races. Industrial corporations viewed workers as replaceable units. Each time, this was accompanied by a philosophical redefinition: rational arguments masking greed as progress.
Sam Altman offers a modern version of this scheme. He claims that humans are outdated software, inefficient units, intermediate nodes in the creation of true intelligence. He proposes a deal: accept your own inadequacy, and we promise you a future paradise.
But the reality is simple: if a technological system requires the energy consumption of an entire metropolis to imitate a thinking human, then the system is broken. And if those who create this system convince us that we must disappear for it to emerge — that is not progress. It’s an invitation to our own displacement.
One critic of Altman rightly noted: we don’t need programmers if we have no philosophers left. Because without a deep understanding of why progress is needed, our technologies become not salvation but tools of self-destruction for our species.
The Final Conclusion
Sam Altman and his allies are not selling technology. They are selling a redefinition of human dignity. They ask us to believe that twenty years of human development are just costs that can be minimized, that motherhood and education are inefficient business processes, that life only has value if it produces quantifiable results.
The only response to this proposal can be: no.
A child whom society spends twenty years raising is not an expense item. That child embodies the very essence of human existence: the transmission of culture, knowledge, and wisdom. If building an artificial intelligence system requires destroying this, then the problem is not energy consumption or economic efficiency. The problem lies in the system itself and in those who insist on creating it at any cost.