Back in the 1980s, when climate change campaigners were just starting to make an impact on our thinking, many people dismissed the warnings of rising sea levels, and today they are standing knee deep in their flooded living rooms, stupidly wondering why.
Another looming catastrophe that never seems to get a mention is humans stupidly engineering their own downfall through Artificial Intelligence, which is rapidly gaining on us and will eventually overtake us, with repercussions nobody seems to be considering. It is already happening: pale-faced, girlfriendless people are out there writing computer code that can park our cars and even drive them for us.
It starts small. Consider those adverts in the margins of our web pages: type into Google that you are looking for a tent and you are bombarded with adverts from every camping shop; go browsing for guns and you receive adverts for penis enlargements.
In the driverless and car-parking examples, we have AI that can do things better and more safely than a human, and within the next decade we will be happy to let machines parallel park for us and safely transfer us from A to B. That is a sure sign that humans haven't got long left before robot intellect bypasses our own, a moment known in AI circles as 'the singularity': the point at which humans cease to be the smartest things on the planet.
Computers already dominate modern life, from directing traffic to controlling financial systems, security systems, satellite navigation and most forms of modern communication, so imagine the chaos if they all suddenly turned against us.
Computer pioneer Alan Turing said: 'Once the machine thinking method has started, it would not take long to outstrip
our feeble powers. At some stage therefore we should have to expect the
machines to take control and the intelligence of man would be left far
behind. Thus the first ultra-intelligent machine is the last invention that man need ever make'.
A philosopher, a scientist and a software engineer have come together to create the Centre for the Study of Existential Risk (CSER) to address the concerns of our technology outstripping us and posing extinction-level risks to our species.
In 2009 the Association for the Advancement of Artificial
Intelligence (AAAI), chaired a meeting of leading computer scientists,
artificial intelligence researchers and roboticists to discuss the
potential impact of robots that could become
self-sufficient and able to make their own decisions. They described the
current level of AI as achieving the intelligence of a cockroach, which may not sound like much, but it means AI is on the evolutionary path where we once stood, and it has got there in a fraction of the time.
A concerned group of experts at the Singularity Institute has already announced that the Three Laws of Robotics would not be enough to protect humans from an AI if it attempted to take complete control over humanity.
The saving grace at the moment is that humans write the code that runs the artificial intelligence, but as that code gets ever more complex, we will have to leave the AI to write its own code inside something called an AI box, which, far from being in the realms of a sci-fi movie, is already a reality.
The Machine Intelligence Research Institute has already run
experiments on AI boxes that have the potential to make themselves more
intelligent by modifying their source code. These improvements would
make further improvements although the experiments have been limited to
just two hours for safety reasons.
Of course, when we do create super-intelligent robots and they do take over the world and enslave us, they will obviously need to genetically engineer the humans they keep around as slaves, and once they have created a race of brilliantly intelligent humans, those humans will re-enslave the robots, which will serve them right.