There is a bit of a squabble going on between computer scientists regarding Artificial Intelligence and how it will affect us humans once it surpasses human intelligence.
There is some agreement that 'human level AI' will be reached around 2029, but then it all breaks down into head shaking and loud tutting over what happens after it does.
Some say that the AI will evolve into a supercomputer which learns so quickly that it surpasses human intelligence and solves all our problems, such as finding cures for diseases, developing renewable energy resources and benefiting society.
Others argue that if a machine exceeds our own intelligence, we could be ignored, sidelined or conceivably destroyed by it.
Billionaire entrepreneur and professional idiot Elon Musk is in the latter camp, describing the development of AI as 'summoning the demon' and calling it 'our biggest existential threat',
while Stephen Hawking warned that 'The development of full artificial intelligence could spell the end of the human race', as humans, limited by slow biological evolution, couldn't compete and would be superseded.
Hawking was also concerned about the use of AI in the military, where autonomous weapons are being developed, and right up until his death he championed a ban on weapons beyond human control.
In most sci-fi movies the threat comes from AI taking off on its own and re-designing itself, but there does seem to be a consensus that sometime in the next 30 years or so a supercomputer will replicate the human brain and evolve into artificial super-intelligence, or ASI.
Whether, once it does, it benefits humans or drives us to extinction is yet to be discovered, and if the boffins can't agree which it will be, then it doesn't fill me with confidence.