Depending on what you are paying attention to, emerging AI tech will either lead to a post-scarcity utopia à la “WALL-E” or to a dystopian nightmare in which rogue sentient robots have crushed humanity and achieved dominance over the planet. Some would have us believe that such science fiction may become fact.
Either way, the supposed existential threat of AI has been in the news lately – in case you hadn’t noticed. Add AI anxiety to the litany of our other modern complexes, like climate anxiety or smartphone addiction.
While we should always remain both skeptical and optimistic, there may be some genuine cause for concern, considering the number of prominent figures directly involved in the development of AI who are sounding the alarm.
Surely, at this point, we have all heard about Geoffrey Hinton, the so-called “Godfather of AI,” who has been on the media circuit, warning us that the current trajectory of AI development without any real guardrails will lead to artificial general intelligence (AGI) inevitably gaining control.
The reality is that technology grows exponentially on a J-curve. Computing power roughly doubles every 18 months to two years, according to the popular reading of Moore’s Law.
Hinton’s warnings echo this: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
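To make that extrapolation concrete, here is a minimal sketch of the arithmetic, taking the popularized doubling figures above at face value (they are illustrative rules of thumb, not forecasts):

```python
# Growth factor for a quantity that doubles every `period` years.
def growth_factor(years: float, period: float) -> float:
    return 2 ** (years / period)

# Doubling every 18 months compounds to roughly a hundredfold
# increase over a single decade:
print(growth_factor(10, 1.5))  # ~101.6
# A five-year "difference propagated forwards," in Hinton's phrase:
print(growth_factor(5, 1.5))   # ~10.1
```

The unsettling feature of any J-curve is exactly this: each individual step looks modest, but the compounding does not.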
As Tamlyn Hunt writes in her Scientific American article “Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not,” “this rapid acceleration promises to soon result in ‘artificial general intelligence,’ and when that happens, AI will be able to improve itself with no human intervention.”
One of the key voices of reason, Mo Gawdat, the former chief business officer of Google X, outlined core principles in his book “Scary Smart” that could prevent this loss of control – principles that have all been ignored.
First, he says that we should not have put powerful AI systems on the open internet until the control problem was solved. Oops, too late. ChatGPT, Bard, and the like are already there, thanks to our fearless corporate overlords.
Second, he and others warned against teaching AI to write code. In just a matter of a few short years, AI systems will be the best software developers on the planet. Gawdat also believes that the power of AI will double every year.
By learning to write their own code, AI systems might escape control in the not-too-distant future, according to Hinton and others.
As Hunt observes, once AI can self-improve, which may happen in just a matter of years, it is hard to predict what AI will do or how we can control it.
Perhaps the biggest AI doomer of them all is Eliezer Yudkowsky, one of the pioneers of the field of “aligning,” or controlling, artificial general intelligence. He believes that the recent call for a six-month moratorium on AI development does not go far enough and that the current lack of regulation will inevitably lead to the “Terminator” scenario.
Again, this goes back to the exponential growth of the technology. Yudkowsky writes, “Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems.”
Compounding the seriousness of the issue, Yudkowsky and others point out that properly controlling AI for current and future generations is a tricky prospect that requires time – years, if not decades – and that we must get it right on the first try, or else.
“Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan,” he warns.
Alarmingly, Yudkowsky is not alone in thinking that superintelligent AI is a potential existential risk. At a recent invitation-only Yale CEO summit in June, 42% of the CEOs surveyed said they believe AI has the potential to destroy humanity within the next five to 10 years, according to Chloe Taylor in a Fortune article.
While aligning AI is a necessary and serious matter regardless of how realistic such a risk is, not everyone is buying into the dualistic utopian-versus-doomer hype. Rather, many critics believe such hype is either deliberate or at least serves a purpose from which the major corporate players all benefit. Further, the doomer hype also obfuscates the many very real problems that AI is both creating and exacerbating.
In a brilliant op-ed for The Guardian, Samantha Floreani argues that the doomsday scenarios are being peddled to distract us from the more immediate harms of AI (of which there are many).
For Floreani and many others, this is the same age-old corporate song and dance to maximize profit and power. There is a glaring contradiction between the actions and the words of the corporate elites trying to ride the wave of AI into greater market share and influence. As Floreani writes, “The problem with pushing people to be afraid of AGI while calling for intervention is that it enables firms like OpenAI to position themselves as the responsible tech shepherds – the benevolent experts here to save us from hypothetical harms, as long as they retain the power, money and market dominance to do so.”
Far from being our collective savior, widely used technologies that fall under the AI umbrella – such as recommendation engines, surveillance tech, and automated decision-making systems – are already causing widespread harm that tracks existing inequalities.
A recent Stanford study concluded that automated decision-making often “replicates” and “magnifies” the very biases in society that we are still trying to overcome. Not only can biases be reinforced, but they can actually worsen through the feedback loops of algorithms.
This is because the historical data used to train AI systems is often biased and outdated. UMass Boston professor of philosophy Nir Eisikovits writes in “AI Is an Existential Threat–Just Not the Way You Think,” “AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.” The bias and discrimination in these systems are also negatively impacting access to services, housing, and justice.
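To see how such a feedback loop can compound a small initial skew, consider a deliberately stylized toy model (the 55/45 starting point and the amplification parameter are invented for illustration and are not drawn from the Stanford study):

```python
# Toy "rich get richer" feedback loop: a system directs scrutiny toward
# whichever group has more past records, and extra scrutiny generates
# extra records. alpha > 1 models a mild amplification each round.
def step(share: float, alpha: float = 1.2) -> float:
    a = share ** alpha
    b = (1 - share) ** alpha
    return a / (a + b)

share = 0.55  # group A starts with a small, historically driven skew
for _ in range(10):
    share = step(share)

print(round(share, 2))  # ~0.78: the 55/45 skew has widened to roughly 78/22
```

The sketch proves nothing about any particular system; its point is only that once records drive allocation and allocation drives records, a historical imbalance feeds on itself.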
Generative AI, such as ChatGPT, could also lead us to dystopian times, albeit with a political twist. The more sophisticated and convincing generative AI writing becomes, the more our already fragile democracy will potentially be undermined and threatened.
As Cornell professors Sarah Kreps and Doug Kriner show, generative AI can now be paired with microtargeting, which means AI-generated propaganda can be tailored to individuals en masse. They cite research showing that such propaganda is just as effective as propaganda written by people.
Thus, disinformation campaigns will be supercharged, making the 2016 election interference look like child’s play.
Such a constant stream of misinformation will not only determine how we perceive politicians and undermine the “genuine mechanism of accountability” that elections are meant to provide, but it will also make cynics of us all. If we can no longer trust any information because the entire information ecosystem has been poisoned, then our trust in the media and the government will be further eroded. You know who will benefit from further political apathy and nihilism.
In a constant disinformation flood, the ones who do not drown are the ones who do not participate. Democracy, though, is ideally predicated on participation.
Circle back to the image of the people depicted in “WALL-E.” They are trivialized and pacified. AI tech threatens not only our jobs, our democracy, and our privacy, but also our humanity.
Once AI far outstrips human intelligence – which really is just a matter of time – we will become more and more dependent on it for our every whim and action, even more so than we already are.
To be human is to make decisions, more often than not without all the information – which renders our choices all the more meaningful. Eisikovits sees AI eventually co-opting most – if not all – of our decision-making: “More and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves.”
Living by algorithm will enable us to be more efficient and productive, true, but human life is not just rigid planning and prediction. Eisikovits believes such algorithmic living will increasingly encroach on chance encounters, spontaneity, and meaningful accidents.
Setting aside the dire predictions and the range of more immediate problems, Eisikovits advises us that an “uncritical embrace” of AI tech will lead to a “gradual erosion of some of humans’ most important skills.” There is always a cost to technology. For Eisikovits, the doomsday rhetoric overshadows the fact that these subtle costs are already playing out.
Likewise, Emily Bender, a linguistics professor at the University of Washington, sees the rhetoric as a smokescreen for tech giants’ pathological pursuit of profit. These companies that have so much to gain from the widespread use of AI tech are using the dire warnings as a way to distract us from the bias in their data sets and how their systems are trained, according to Bender. She believes that with our attention squarely focused on the existential threat of AI, these companies can continue to “get away with the data theft and exploitative practices for longer.”
Unfortunately, though, it is not just tech executives who appear to be worried about the existential threat that unregulated superintelligent AI poses.
While critics like Floreani and Bender are right that such corporations may be benefitting from the distraction, it is not a case of either/or. Current AI tech, including generative AI, is already causing serious problems, and the unregulated development of artificial general intelligence can also pose an existential risk to humanity.
Bender asks a thought-provoking question: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”
While that seems logical at first glance, one need not look far to realize that corporations will pursue profit blindly. Just look at the state of the environment. Given the projections of climate change, corporations’ pursuit of profit is not just ecocidal, it is also suicidal. Corporations and tech executives will not “just stop” because they are locked in a technological arms race; no single player can stop, because the others will march us all into oblivion anyway.
It is true, as Daron Acemoglu, MIT professor of economics, says, that “the hype of AI makes us shift from extreme optimism to extreme pessimism, without discussing how to regulate and integrate AI into our daily lives,” but we also need to take the doomsday risk seriously and properly align AI – before it is too late.
The range of immediate problems – misinformation, job loss, the threat to democracy – also needs to be addressed and regulated.
AI is being rolled out in an “uncontrolled” and “unregulated” manner, as Acemoglu recognizes, but that is, unfortunately, true not just of the immediate problems but of the urgent issue of inevitable superintelligent AI as well.
People such as Geoffrey Hinton have been criticized for being hyper-focused on the possibility of an existential threat instead of the current and growing problems already here. But if he and others are right – or even possibly right – then we should take what they have to say deadly seriously, and we should all be calling for the immediate, universal alignment of AI systems.
Hinton and his colleagues are terrified because their expertise in computer science tells them just how quickly AI tech is accelerating – and that, given the technology’s exponential growth, we are running out of time to control it properly.
When a doctor warns that you need immediate tests because there is a very real possibility of cancer in the near future, you don’t get angry and insist you are more concerned about your cholesterol. You address both.
The problems of AI that are here and now are real and require an informed and vocal citizenry to demand change. The future is always uncertain, but even the remote possibility of a robot apocalypse or complete redundancy of human life requires serious action as well. The time is now!