The Elon Musk-funded Future of Life Institute this week issued an open letter calling on artificial intelligence research labs to suspend work on their most advanced projects.
Musk, who has described advances in AI as “summoning the demon,” has tended to overestimate the pace of the field’s development towards the artificial general intelligence (AGI) that could pose genuinely scary problems for humanity.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the institute’s letter reads.
But while the ChatGPT phenomenon has been something to behold, GPT (Generative Pre-trained Transformer) technology, based on the large language models OpenAI has developed, is a long way from a superintelligent AI that can reason and think for itself, and tell the difference between fact and fiction.
Unhinged AI hype?
It could even be argued that Musk’s alarmism is a cynical ploy to divert attention from the real problems with AI that exist today, and to focus governments on sci-fi scenarios that don’t yet exist. A large number of credible scientists have endorsed his call to hit pause on cutting-edge AI development, but many others take a different view. One prominent computer scientist described most of the claims in the letter as “unhinged AI hype”.
The real problem isn’t the rapid advancement of AI. It’s the same problem we have with Big Tech – the concentration of power in the hands of a small number of players. The leaders in AI are the usual suspects – Google, Microsoft, IBM and Amazon. OpenAI is a relative newcomer, but has had large injections of capital from the likes of Microsoft. Antitrust action to prevent AI monopolies from forming is crucial.
What would be more beneficial to humanity than shelving AI efforts for six months would be for the world’s governments, collaborating via a trusted international body like UNESCO, to fast-track the development of effective regulatory frameworks to deal with AI as it advances at an accelerating rate.
At the moment, the Food and Drug Administration in the US has to approve each stage of development of a new drug, and the Environmental Protection Agency must okay the use of new chemicals. But there’s no equivalent in the digital world when it comes to emerging technologies. Companies are encouraged to innovate, test the water and see what happens. The lesson from the last 15 years of algorithm development and data-driven product releases is that we need earlier regulatory intervention when it comes to AI.
Still no regulation
There is no dedicated federal AI regulation in the US, but there are moves towards it. The Biden administration has drafted a blueprint for an AI Bill of Rights, which has been well-received. As usual, the European Union is setting the pace with its AI Act, and that could have global consequences. The AI Act will create a process for, as the Brookings Institution puts it, “self-certification and government oversight of many categories of high-risk AI systems, transparency requirements for AI systems that interact with people, and attempt to ban a few ‘unacceptable’ qualities of AI systems”.
The EU’s General Data Protection Regulation (GDPR) by default became a sort of global standard for data protection. The AI Act could have a similar influence, which wouldn’t be a bad thing.
But we need to get going on that here. RNZ reported this week that an AI strategy for Aotearoa won’t be in place until at least 2025 – regulation will take even longer. We are already well behind. We’ve had years to prepare for this and we’ve effectively done nothing.
So six months would focus the mind. But not six months of downing tools in AI labs. What we need is six months’ work to build the guardrails that will guide the future use of AI and that can prevent the nightmare scenarios that keep Musk doom-scrolling his Twitter feed.