Sunday, 12 January 2014

The Last Invention That Man Need Ever Make

Back in the 1980s, when climate change campaigners were just starting to make an impact on our thinking, many people dismissed the warnings of rising sea levels, and today they are standing knee-deep in their flooded living rooms, stupidly wondering why.
Another looming catastrophe that never seems to get a mention is humans stupidly creating their own downfall through Artificial Intelligence, which is rapidly gaining on us and will eventually overtake us, with repercussions we can barely begin to imagine.
It is already happening: pale-faced, girlfriendless people are out there writing computer code that can park our cars and even drive them for us. It starts small. Consider those adverts in the margins of our web pages: type into Google that you are looking for a tent and you are bombarded with adverts from every camping shop, or go browsing for guns and receive adverts for penis enlargements.
In the driverless car and self-parking examples, we have AI that can do things better and more safely than a human, and within the next decade we will be happy to let it parallel park and safely transfer us from A to B. That is a sure sign that humans haven't got long left as robot intellect bypasses our own, a moment known in AI circles as 'the singularity': the point when humans cease to be the smartest things on the planet.
Computers already dominate modern life, from directing traffic to controlling financial systems, security systems, GPS satellite navigation and most forms of modern-day communication, so imagine the chaos if they all suddenly just stopped.
Computer pioneer Alan Turing said: 'Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.' Mathematician I. J. Good, who worked alongside Turing at Bletchley Park, later added that 'the first ultra-intelligent machine is the last invention that man need ever make'.
Scientists and software engineers have come together to create the Centre for the Study of Existential Risk (CSER) to address the concern that our technology is outstripping us and could pose extinction-level risks to our species.
In 2009 the Association for the Advancement of Artificial Intelligence (AAAI) chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists to discuss the potential impact of robots that could become self-sufficient and able to make their own decisions. They described the current level of AI as achieving the intelligence of a cockroach, which may not sound like much, but it means AI is on the evolutionary path where we once stood, and covering it in a fraction of the time.
Another concerned group of experts, at the Singularity Institute, has already announced that the 3 Laws of Robotics would not be enough to protect humans from an AI that attempted to take complete control over humanity.
Our saving grace at the moment is that humans write the code that runs artificial intelligence, but as the code gets ever more complex we will have to leave the AI to write its own code. One proposed safeguard is the 'AI box', an isolated environment such a system cannot escape from, which far from being in the realms of a sci-fi movie is being experimented with today.
The Machine Intelligence Research Institute has already run experiments on boxed AIs with the potential to make themselves more intelligent by modifying their own source code. Each improvement would enable further improvements, although the experiments have been limited to just two hours for safety reasons.
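The compounding effect they worry about can be sketched in a few lines of Python. This is a toy of my own invention, nothing to do with MIRI's actual experiments: a program repeatedly rewrites its own 'source' so that each generation's gain feeds the next.

```python
# Toy sketch of recursive self-improvement (illustrative only):
# a program holds its own source code as text, runs it to measure
# its "skill", then emits an improved copy of itself.
source = "def skill():\n    return 1\n"

def run(src):
    """Execute the program's source and return its current skill."""
    ns = {}
    exec(src, ns)
    return ns["skill"]()

for generation in range(3):
    score = run(source)
    # "Self-modification": rewrite the source so the next version
    # builds on what the current version already achieved.
    source = source.replace(f"return {score}", f"return {score * 2}")

print(run(source))  # prints 8: the skill doubled every generation
```

The point of the toy is only the shape of the loop: each rewrite starts from the previous rewrite's output, so progress compounds instead of accumulating linearly.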
Of course, when we do create super-intelligent robots and they do take over the world and enslave us, they will obviously need to genetically engineer the humans they keep around as slaves, and when they have created a race of brilliantly intelligent humans, we will re-enslave them right back, which will serve them right.

3 comments:

Anonymous said...

Lucy you such a doomer... god

AI can do a lot of things. I'm part of a team that is working with IBM and their Watson system. AI has a very long way to go though I do see dangers circa 2030...

By the way, the google and self driving car examples you used do not use AI...

and as far as computers suddenly stopping, well then, imagine if the electricity went off. imagine if farmers decided to only feed themselves. imagine if companies stopped drilling for oil.

its gonna be ok

q

Lucy said...

It's gonna be okay until 2030??

It's okay to turn off the electricity but what about the solar and battery powered robots?

I thought i read/was told/heard that the sensors are connected to some sort of AI in the car which allows it to navigate the road and traffic.

Keep Life Simple said...

I've done some futurist studies for my company and many credible scientists predict a computer with more processing power than all humans combined. Of course, processing power is not intelligence. IBM's Watson can solve very complex problems - like medical diagnoses - but it has to be taught by humans and cannot even decide what to learn unless humans tell it what should be learned

Q