AI will be ‘impossible’ to control and use tactics ‘incomprehensible’ to humans

Artificial intelligence will one day be almost impossible to control even by scientists, experts have warned.

A group of researchers studying AI's capabilities back in 2021 looked into how we'd be able to keep the supersmart robots responsive to human whims – and found it would be an almost insurmountable task.

"A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," the team wrote at the time.


"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

The research took inspiration from Alan Turing's halting problem from 1936. The problem asks whether we can determine, for any given computer program, whether it will eventually reach an answer and halt, or loop forever trying to find one.

Turing showed that while humans can work this out for some programs, there is no general method that can determine the outcome for every possible program that could ever be written.
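Turing's argument can be sketched in a few lines of Python. This is an illustration, not working code: the names are hypothetical, and the whole point is that `halts` cannot actually be implemented.

```python
def halts(prog):
    """Hypothetical universal halting checker for zero-argument
    functions. Turing proved no implementation can be correct for
    every possible program, so this deliberately raises."""
    raise NotImplementedError("no such checker can exist")


def contrary():
    """Does the opposite of whatever halts() predicts about it."""
    if halts(contrary):  # if the checker says 'contrary halts'...
        while True:      # ...loop forever instead,
            pass
    # ...otherwise halt immediately.
```

Whatever answer `halts(contrary)` returned would be wrong: if it predicted halting, `contrary` loops forever, and if it predicted looping, `contrary` halts. So no correct `halts` can exist.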

AI, on the other hand, could in theory hold every possible computer program in its memory at once given how intelligent it is.

Controlling superhuman intelligence requires us to understand all the possible scenarios AI could face given a particular command, which would involve developing a simulation of that intelligence we can analyse.


But because of our own human limitations, it's impossible for even the brainiest boffins to create a simulation that maps out all possible outcomes.
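The argument above boils down to a reduction: if a perfect "containment check" existed, it could be used to solve the halting problem, which Turing proved impossible. A minimal sketch of that reduction, with all names (`is_safe`, `do_harm`, `halts`) hypothetical:

```python
def do_harm():
    """Stands in for any harmful action a program might take."""
    pass


def is_safe(prog):
    """Hypothetical perfect containment check: returns True only if
    running prog can never harm humans. The argument is that no such
    always-correct checker can exist, so this deliberately raises."""
    raise NotImplementedError("no such checker can exist")


def halts(prog):
    """If is_safe existed, we could decide halting -- a contradiction.
    Wrap prog so that harm occurs exactly when prog finishes."""
    def wrapped():
        prog()      # run the program under test to completion...
        do_harm()   # ...then perform the harmful action
    # prog halts  <=>  wrapped is unsafe
    return not is_safe(wrapped)
```

Because `halts` cannot exist, neither can a universally correct `is_safe`: a containment algorithm that works on every program is mathematically off the table.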

David Nield, writing for the publication ScienceAlert, explained: "Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with.

"Once a computer system is working on a level above the scope of our programmers, we can no longer set limits."

Computer scientist Iyad Rahwan, from the Max Planck Institute for Human Development in Germany, added: "In effect, this makes the containment algorithm unusable."

Add to this the fact that the research was conducted two years ago, when AI was more primitive than it is today, and we could have a problem on our hands, Nield explained.

So far, the only way to make AI behave ethically by human standards, and not bring civilisation crashing down, seems to be imposing limits on how far humans can push it.

Multiple experts have called for tighter regulations on the use of AI while we figure out exactly what it is capable of.

Earlier this year tech heavyweights like Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking mankind to pause our experiments with artificial intelligence for at least six months.

The letter, entitled Pause Giant AI Experiments, read: "AI systems with human-competitive intelligence can pose profound risks to society and humanity.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

