Anyone who has seen enough science fiction shows know that in the near future Artificial Intelligence and super intelligent machines will surpass human-level intelligence and either enslave or just flat out exterminate us all.
It all began with AI winning at chess and then has steadily progressed until we are reaching a point where AI could take our place on the evolutionary ladder and dominate us the way we now dominate all living things on Earth apes according to the Singularity Institute for Artificial Intelligence.
Concerns about super intelligence is becoming a common theme with PayPal co-founder Peter Thiel’s donating $1.6m and Tesla founder Elon Musk donated $10m to organisations concerned with existential threats of AI breaking out and becoming less than friendly to us, its creators.
With an interconnected World, the intelligence wouldn't need a physical body to do its work, a human hacker with an internet connection can cause havoc so imagine what a super intelligent computer can do if it wanted to do maximum damage.
The simplest way to keep AI from taking over is to introduce an 'off button' but that was tried in 2013 when programmers designed an AI that could teach itself to play Nintendo games and turn itself off when it lost but when it was losing the AI would just pause the game and keep it frozen so it would never lose.
Our saving grace at the moment is that humans write the code that run the Artificial intelligence but as the code gets evermore complex, are leaving the AI to write its own code, something called an AI Box which far from being in the realms of a sci-fi movie, are being used today.
The Centre for the Study of Existential Risk includes Artificial Intelligence in their list of concerns although there worries are that we will develop sophisticated cyber-weapons and arming autonomous robots but even we wouldn't be stupid enough to weaponise autonomous robots. Would we?
No comments:
Post a Comment