“Someday, we will build the descendants of humanity and launch them off to colonize the universe.” That’s the long-term vision of Sam Altman, cofounder and current CEO of OpenAI and former president of the startup accelerator Y Combinator.
While being replaced by machines is usually the go-to horrific scenario when it comes to AI, Altman’s short-term vision is arguably more terrifying.
OpenAI, the company behind the much-discussed ChatGPT, was founded by Altman and others. Initially established as a nonprofit to develop artificial intelligence technology “for the benefit of humanity,” it has since added a for-profit wing.
That new wing is financed largely by Microsoft: the software juggernaut invested $1 billion and plans to invest another $10 billion in OpenAI.
Where Google and others have been secretive about their AI work, Altman’s outfit has sought to popularize its applications. ChatGPT, DALL-E, and others are inferior, public-facing versions of the more complete commercial machine-learning applications sold to large corporate customers — the ones that fuel the intuitive (or addictive) user experience of many apps.
Altman told the Wall Street Journal that his current aim is to “build one intelligence that is smarter and more capable than humans in every way.”
Whether he will be successful is far from clear, but the logic that drives billions of dollars of investment in AI is unmistakable.
Altman’s short-term vision is to unleash “trillions of dollars of new businesses” and create “capitalism for everyone.” He insists that the technology’s development is inevitable while promising that AI will fuel leisure time for all.
The OpenAI CEO has gone so far as to sketch a modified form of universal basic income in which every U.S. citizen would, in effect, receive dividends as a minority shareholder, paid out as basic income.
But following Altman’s capitalist logic, this new system would be the result of a magnanimous corporate elite — or at worst, an enlightened maneuver by that same owner class to stave off a social and political collapse.
That’s because the load-bearing pillar of this vision of AI — the one fueling billions in investment — is that the technology will consolidate corporate control in unprecedented ways and at awe-inspiring scale.
The short-term economic logic of AI is the same as that of any automation that replaces labor with technology, leveraging accumulated capital.
For example, if you have to pay a worker $50,000 a year to do a job, but a $500,000 robot can do the same job, a company with that capital will often buy the robot: the machine pays for itself in ten years, so from year 11 onward profits run higher. That’s the driving motivation behind AI: an exchange of capital for greater control, higher profits and fewer workers.
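The break-even arithmetic in the example above can be sketched as a simple payback calculation. The $50,000 wage and $500,000 robot price are the article’s illustrative figures; maintenance, financing costs, and discounting are ignored for simplicity.

```python
ANNUAL_WAGE = 50_000    # yearly cost of the worker (article's figure)
ROBOT_PRICE = 500_000   # one-time capital outlay (article's figure)

def payback_year(annual_saving: int, capital_cost: int) -> int:
    """Return the first full year in which cumulative wage savings
    exceed the up-front capital cost."""
    year = 0
    cumulative = 0
    while cumulative <= capital_cost:
        year += 1
        cumulative += annual_saving
    return year

# Savings equal the outlay at the end of year 10, so profits
# pull ahead starting in year 11.
print(payback_year(ANNUAL_WAGE, ROBOT_PRICE))  # → 11
```

Under these simplified assumptions the robot is pure savings after the break-even point, which is exactly the trade of capital for labor the article describes.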
That logic goes beyond rational economic calculation when a critical mass of corporate administrators comes to believe that AI is inevitable, or worries that competitors will use it to leave them behind. That fear drives spirals of overinvestment in tech, which inflates investment bubbles.
The collateral damage, of course, is not just livelihoods disrupted or destroyed altogether in anticipation of a far-from-certain AI future.
There are proven solutions to increase leisure time and address profound ecological crises: low-cost ecological housing, localized food systems, renewable energy, and the like. Not implementing those — and instead pursuing technologies that are either overhyped or deeply dystopian — carries a tremendous opportunity cost.
The underlying point is that AI as it’s currently conceived creates a situation where big, capital-intensive institutions trend toward increasing concentrations of power. This centralization is driven by the role of data and machine learning: AI is premised on the idea that more data, more trial and error, and more smart people tweaking the mechanisms mean more powerful predictive functions and more effective replacement of human roles.
Either this story about AI is a false promise, and vast financing and human potential are being wasted at a crucial civilizational turning point, or we are on a collision course with an unprecedented concentration of corporate power — which, given existing concentrations of power, is saying something!
In the aggregate, close to $200 billion has been invested in AI enterprises — and that does not count substantial government funding and institutional research. This, at a time when there is a profound need to move away from fossil fuels and redistribute wealth.
There is, of course, a way to imagine AI playing a role in developing new technology that helps humanity thread the needle of ecological catastrophe. But the investment pouring into AI has precisely the opposite intention. To the extent that AI’s boosters are correct about its profit potential, this technology is, like capitalism, a failure of the imagination.
If we believe that AI investors will get some of what they expect to receive for their money, there are three basic pathways to respond.
One: do nothing, and either AI falters and the bubble pops, or we run the unknown risk of living in a world of billionaire control that makes Blade Runner feel like the Teletubbies.
Two: attempt to move AI into institutions — probably at the level of states — with some public oversight, to be managed as a commons.
Three: deploy the tanks to the data centers, or find some less dramatic way to severely curtail the scope of AI.
Options two and three are not mutually exclusive. A sufficiently well-funded and effective nonprofit, commons-based AI development system backed by the state could draw investment away from corporate AI, which could open up avenues to curtail how AI is used.
Right now, that is a one-in-a-million scenario, but there are things that can be done to get its probability into the single-digit percentages.
Whether it’s smash-the-machines or lobbying for sound public policy, neo-Luddite activity is probably at a historically low ebb. But for how long? Abuses of corporate-owned AI can only grow from here, and the sentiments of those who bear the burden of the future imagined by Altman and company will range from resentment to apocalyptic desperation.
Maybe AI will never live up to the hype. But as long as hundreds of billions of dollars are being invested in it, it will never get worse at doing what it aims to do; it will only “improve.”
AI advocates are counting on hitting an inflection point of increased data, faster and cheaper computing power, and self-correcting machine learning. I’m not betting on that happening, but I’m also not ready to put all my chips on it never happening.
It’s a decision point that, before too long, we may look back on wistfully.