It is all but inevitable that humans will create an Artificial Intelligence capable of outdoing the human brain, and Stephen Hawking says we should be better prepared.

Artificial intelligence has been progressing rapidly; the world is becoming populated by self-driving cars, game-show-winning computers, and digital personal assistants such as Apple’s Siri, Google Now and Cortana.

The AI arms race has been fuelled by massive investment from bold benefactors, and is now delivering on its theoretical foundations.

In a recent article for an online newspaper in the US, Dr Stephen Hawking, Director of Research at the Centre for Theoretical Cosmology at Cambridge, argued that AI will only get smarter and more powerful, hopefully to the continued benefit of biological beings.

“The potential benefits are huge,” Hawking wrote.

“Everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.

“The eradication of war, disease, and poverty would be high on anyone's list.

“Success in creating AI would be the biggest event in human history.”

But that success may also be our last, Hawking warns, if we do not learn to avoid the risks of powerful digital consciousness.

With world militaries designing autonomous weapon systems that choose and eliminate their own targets, the UN and Human Rights Watch say such technology should not be allowed.

For many who peer into the technological future, there is a concern that AI may transform economies, bringing both great wealth and great dislocation.

But Dr Hawking says replacing our own abilities could be only the start, as humanity faces the unknowable shift after the ‘technological singularity’.

“There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains,” he writes.

“Machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity’.

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

“If a superior alien civilization sent us a text message saying, ‘We'll arrive in a few decades’, would we just reply, ‘OK, call us when you get here - we'll leave the lights on’? Probably not - but this is more or less what is happening with AI.”

He said that just a few small non-profit institutes, such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute, are taking a serious look at our potential redundancy as a species.

Dr Hawking believes the world should ask itself what it can do to improve its chances of reaping the benefits while avoiding the risks.