AI extinction threat warning backed by OpenAI, DeepMind chiefs

The potential threat of artificial intelligence (AI) causing human extinction has been highlighted by numerous experts, including the leaders of OpenAI and Google DeepMind. They have endorsed a statement on the Centre for AI Safety’s website, which calls for mitigating the risk of extinction from AI to be treated as a global priority alongside other societal-scale risks such as pandemics and nuclear war. However, some experts argue that these concerns are exaggerated.
Sam Altman, CEO of ChatGPT developer OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei of Anthropic are among those who have backed the statement. The Centre for AI Safety’s website outlines various potential disaster scenarios, and Dr Geoffrey Hinton, who previously warned about the risks of super-intelligent AI, has also supported the call. Yoshua Bengio, a computer science professor at the University of Montreal, has signed as well. Dr Hinton, Prof Bengio, and NYU Professor Yann LeCun are often described as the “godfathers of AI” for their pioneering work in the field, which earned them the 2018 Turing Award for outstanding contributions to computer science.
However, Prof LeCun, who also works at Meta, considers these apocalyptic warnings overblown. He wrote on Twitter that “the most common reaction by AI researchers to these prophecies of doom is face palming”. Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and that they distract from issues, such as bias in existing systems, that are already causing harm.
Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that science fiction-like disaster scenarios are not realistic: “Current AI is nowhere near capable enough for these risks to materialise. As a result, it’s distracted attention away from the near-term harms of AI”.
Elizabeth Renieris, senior research associate at Oxford’s Institute for Ethics in AI, told BBC News she was more concerned about nearer-term risks. She said that advances in AI could “magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable”. This could drive an increase in misinformation, erode public trust, and exacerbate inequality, particularly for those on the wrong side of the digital divide.
However, Dan Hendrycks, director of the Centre for AI Safety, argued that future risks and present concerns “shouldn’t be viewed antagonistically”. He explained that addressing today’s issues can also help manage many of the risks further ahead.
Media coverage of AI’s alleged “existential” threat has escalated since March 2023, when experts including Tesla CEO Elon Musk signed an open letter calling for a halt to the development of the next generation of AI technology. The new campaign features a brief statement intended to spark discussion and compares the risk to that posed by nuclear war. OpenAI recently suggested in a blog post that superintelligence could be regulated in a similar way to nuclear energy: “We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts”, the company wrote.