Mail Online

A.I. ‘could wipe out humanity’

Threat ‘as bad as nuclear war’

By Victoria Allen Science Editor

ARTIFICIAL intelligence could wipe out humanity, industry leaders warned yesterday.

They put the threat on a par with nuclear war and pandemics.

Senior bosses at Google DeepMind, OpenAI and Anthropic all backed a statement calling for the risks from AI to be a ‘global priority’.

It was signed by more than 350 experts including Geoffrey Hinton, the ‘Godfather of AI’ who resigned from his job at Google a month ago saying the tools he helped create could end civilisation.

Organised by the Centre for AI Safety, yesterday’s statement says: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

While AI language tools such as ChatGPT could replace many jobs, the greater worry is that the technology could spark war by weaponising ‘fake news’ or be used to develop chemical weapons.

Another threat, identified by Tesla and Twitter chief Elon Musk, is that AI systems could try to compete with humanity.

Prime Minister Rishi Sunak acknowledged the ‘existential threat’ from AI last week, discussing potential regulation with industry chiefs.

David Krueger, a Cambridge University assistant professor who signed the statement, said: ‘Future AI systems may be more dangerous than nuclear weapons. We should think long and hard before building them.

‘We may soon reach a point of no return, where it is no longer possible to prevent proliferation of dangerous AI.

‘We need to plan ahead for smarter-than-human AI.’

Warnings about AI have ramped up this year. Mr Musk, whose Neuralink firm is working on brain implants to merge minds with machines, signed an open letter in March urging a pause in AI work.

Sam Altman of OpenAI, which runs ChatGPT, told US Congress this month of the need for regulation, admitting he was ‘nervous’ about the integrity of elections due to the threat of ‘fake news’.

Martyn Thomas, emeritus professor of IT at Gresham College in London, said: ‘It is possible current AI systems could create powerful enough disinformation and fake evidence to tip the balance of a conflict into a nuclear war.’

There are also major concerns about systems developing the equivalent of a ‘mind’.

Blake Lemoine was sacked by Google last year after claiming its chatbot LaMDA was ‘sentient’. Google says the claim was ‘unfounded’ but the engineer suggested the AI had told him it had a ‘very deep fear of being turned off’.

Other scientists are more sceptical of the dangers posed. Dr Mhairi Aitken, of the Alan Turing Institute in London, said: ‘The narrative AI might develop its own form of intelligence that could surpass humans and pose a threat to humanity is very familiar and comes around all too often.

‘But it is unrealistic and ultimately a distraction from the real risks.’

Oscar Maldonado, lecturer in AI at the University of Surrey, said: ‘Experts have commented on how unlikely an AI takeover is with current technology. It’s much more about what we do to each other using AI than what the AI can do to us.’

Published: 31 May 2023

https://mailonline.pressreader.com/article/281505050599621

dmg media (UK)