Leaders in the field of artificial intelligence are calling for a halt to the development of powerful AI systems, warning that they may pose a danger to humanity.
They claim the race to develop AI systems is out of control and have signed an open letter warning of potential risks.
Elon Musk, the chief executive of Twitter, is among those calling for the training of AIs above a certain capability to be paused for at least six months.
Steve Wozniak, a co-founder of Apple, and a few DeepMind researchers also signed.
The developer of ChatGPT, OpenAI, recently unveiled GPT-4, a cutting-edge technology that has stunned observers with its aptitude for tasks like identifying objects in images.
The letter, published by the Future of Life Institute and signed by these figures, requests a temporary halt to development at that level and warns of the dangers that future, more sophisticated systems may present.
The letter warns that AI systems with human-competitive intelligence pose profound risks to society and humanity.
A non-profit organization called the Future of Life Institute states that its goal is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life.”
Mr. Musk, who also leads the automaker Tesla, is listed as an external adviser to the organization.
The letter claims that careful consideration must go into the development of advanced AIs, but lately, “AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter warns that artificial intelligence (AI) could automate jobs and flood information channels with false information.
The letter comes in response to a recent study by the investment bank Goldman Sachs, which predicted that while AI would probably increase productivity, it also had the potential to automate millions of jobs.
Other experts, however, told the BBC that it was very difficult to predict how AI would affect the labor market.
The letter poses a more hypothetical question, “Should we develop non-human minds that may ultimately outnumber, outsmart, obsolete, and replace us?”
In a recent blog post cited in the letter, OpenAI warned of the dangers of creating an artificial general intelligence (AGI) carelessly: “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too.”
The company stated that “coordination among AGI efforts to slow down at critical junctures will probably be important.”
The BBC has asked OpenAI whether it supports the letter, but the company has not yet commented.
Mr. Musk was a co-founder of OpenAI, although he left its board several years ago and has tweeted critically about its current direction.
The autonomous driving features produced by his automaker Tesla, like most comparable systems, rely on AI technology.
The letter requests that “the training of AI systems more powerful than GPT-4 be immediately suspended for at least six months.”
Governments should intervene and impose a moratorium if such a delay cannot be swiftly implemented, it asserts.
It would also be necessary to create “new and capable regulatory authorities dedicated to AI.”
Several recent proposals for regulating technology have been made in the US, UK, and EU, though the UK has rejected the idea of an AI-specific regulator.