UNITED STATES: The unchecked and rapid development of artificial intelligence (AI) is highly irresponsible and could result in a superhumanly intelligent AI wiping out all sentient life on Earth.
This is the warning issued by Machine Intelligence Research Institute decision theorist Eliezer Yudkowsky, who recently penned an alarming article for Time Magazine about the potentially catastrophic consequences of the current AI race among major tech players.
Yudkowsky is a prominent figure in the field of AI and is known for popularising the concept of “friendly AI”. However, his current outlook on the future of artificial intelligence is dystopian, echoing the darker worlds of science fiction films.
In the article, Yudkowsky argued for curbing the development of artificial intelligence so that it does not exceed human intelligence. He also emphasised the importance of ensuring that AI systems “care” for biological life and do not pose a threat to it.
The Centre for Artificial Intelligence and Digital Policy (CAIDP) also recently filed a complaint asking the United States Federal Trade Commission (FTC) to investigate whether the commercial release of GPT-4, the latest generation of OpenAI’s GPT language models, violates US and international regulations, and to halt further commercial deployment of new versions.
Separately, an open letter signed by more than 1,000 technology experts and prominent figures, including Elon Musk, called for a six-month pause on training AI systems more powerful than GPT-4.
Yudkowsky applauded the open letter’s request for a moratorium and expressed respect for its signatories, but he argued that it understates the gravity of the problem.
He emphasised that the key issue is not “human-competitive” intelligence but what happens after AI surpasses human intelligence.
Yudkowsky pointed out that humanity is not prepared for AI’s capabilities and is not on course to be prepared for them within any reasonable time window.
Progress in AI capabilities, he noted, is running far ahead of progress in AI alignment, or even in understanding what is going on inside these systems.
He cautioned that if development continues along this path, the most likely result of creating a superhumanly intelligent AI under conditions even somewhat similar to the ones we currently face is that “virtually everyone” on Earth will perish.
Surviving the arrival of superhuman AI, he argued, would require precision, preparation and fresh scientific understanding, and would likely be impossible with AI systems made up of “huge, incomprehensible arrays of fractional numbers”.
According to Yudkowsky, AI could potentially be built to “care” for humans or sentient life in general, but it is currently not understood how this could be achieved.
Without this caring factor, an AI would neither love nor hate humans; it would simply see them as collections of atoms that could be used for something else.
The likely result of humanity facing down a hostile superhuman intelligence, he said, would be a “total loss”.
The concerns raised by Yudkowsky and the Centre for Artificial Intelligence and Digital Policy are significant and should be taken seriously.
While AI has the potential to bring about many benefits, it is essential to ensure that its development is carefully monitored to avoid catastrophic consequences.