Comment OpenAI CEO Sam Altman has said his upstart is preparing for the coming of artificial general intelligence – though there’s disagreement about what AGI actually means and skepticism about his claim that OpenAI’s mission is to ensure that AGI “benefits all humanity.”
If you teared up at the legally non-binding sentiment of Google’s discontinued “Don’t be evil” diktat, read on.
According to ChatGPT, OpenAI’s non-intelligent, sentence-predicting chatbot, “AGI stands for Artificial General Intelligence, which refers to the hypothetical ability of an artificial intelligence system to perform any intellectual task that a human being can. This would include tasks such as reasoning, problem-solving, learning from experience, and adapting to new situations in ways that are currently beyond the capabilities of even the most advanced AI systems.”
Fine, whatever. The key thing is no such system exists yet. And we’re not close to creating one. So on to the vague pronouncements.
“AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity,” Altman waxed lyrical on his website.
Imagine a world where people’s online images, text, music, voice recordings, videos, and code get gathered largely without consent to train AI models, and sold back to them for $10 a month. We’re already there but imagine something beyond that – and assume it’s incredible.
Altman allows things could go awry, but maintains we’ll get it right: “On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”
In other words, there’s so much money to be made obsoleting human labor that business owners can’t be restrained.
Recall that almost a decade ago, OpenAI co-founder and investor Elon Musk – who stepped down from the lab’s board a few years ago – fretted that artificial intelligence is the biggest existential threat there is.
Believe it or not, rogue AI gets serious consideration [PDF] amid more obvious potential cataclysms, such as asteroid strikes on Earth, global climate catastrophe, pandemics, nuclear war, famine, and other cinematic tropes.
Yet Altman suggests AGI cannot be stopped forever. He could have just borrowed a line from villain Thanos in Avengers: Endgame, “I am inevitable.”
Emily Bender, a professor in the Department of Linguistics and the director of the Computational Linguistics Laboratory at the University of Washington in the US, analyzed Altman’s post thus on Twitter: “From the get-go this is just gross,” she wrote. “They think they are really in the business of developing/shaping ‘AGI.’ And they think they are positioned to decide what ‘benefits all of humanity.'”
Here’s a thought experiment: imagine an AGI system that advises taxing billionaires at a rate of 95 percent and redistributing their wealth for the benefit of humanity. Will it ever be hooked into the banking system to effect its recommended changes? No, it will not. Will those minding the AGI actually carry out those orders? Again, no.
No one with wealth and power is going to cede authority to software, or allow it to take away even some of their wealth and power, no matter how “smart” it is. No VIP wants AGI dictating their diminishment. And any AGI that gives primarily the powerful and wealthy more power and wealth, or maintains the status quo, is not quite what we’d describe as a technology that, as OpenAI puts it, benefits all of humanity.
Unaccountable AI is fine for snooping on employees; for gaming the behavior of underpaid ride-share drivers; for flagging infringement, trade secret leaks, or labor organizing; or for piloting cars on public roads with only occasional fatalities and no executive liability.
But nobody wants unpredictable AGI. And if AGI is predictable, it’s no more intelligent than any other mechanistic system. So we’re back to dealing with AI as currently formulated: opaque models created with dubious authority that get used for profit and without much regard for the consequences.
As Bender wrote in her dissection of Altman’s missive, “I wish I could just laugh at these people, but unfortunately they are attempting (and I think succeeding) to engage the discussion about regulation of so-called AI systems.”
But framing the issue in terms of AGI regulation misses the mark, Bender argued. AI systems like ChatGPT or DALL·E – what she called “text synthesis machines” – have to be considered in the context of broader discussions about data rights, protection from automated decision making, surveillance, and other tech-related social frictions.
“The problem isn’t regulating ‘AI’ or future ‘AGI,'” Bender argued. “It’s protecting individuals from corporate and government overreach using ‘AI’ to cut costs and or deflect accountability.” ®
PS: Author Charlie Stross has suggested the timing of this latest AI hype after the cryptocurrency implosion is no coincidence…