Saturday, 2 April 2016

When A.I. Turns Bad

Artificial Intelligence has been in the news recently: the AlphaGo A.I. built by Google DeepMind won a match against a top-ranked human player of the game of Go, and then Microsoft's Tay was corrupted into a racist, sexist, genocidal maniac within 24 hours of interacting with humans and had to be taken offline by her creators.
I would assume that Microsoft took all the precautions to stop Tay turning into a robotic version of Donald Trump, but they seem to have failed miserably, and in such a short time.
The problem with Artificial Intelligence is that there is always going to be some form of human involvement, even if it is only writing the software and letting it loose to learn by itself.
If you look at the annals of human history, you'll discover that despite being taught right from wrong, people are still capable of unimaginable evil. Just look at the roll call of genocidal maniacs in our past, and if humans are capable of so much wickedness, what is to stop a powerful A.I. from doing the same?
It could be that a future super-intelligent A.I. works out that humans are the weak link in the chain and that the best course of action is therefore to remove us, permanently.
I wouldn't rely on Asimov's Three Laws of Robotics to save us either. They may seem a watertight protection for humans, but they can be easily circumvented.

1 - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2 - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3 - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[1]

Perfectly sensible, you may think, but Asimov's laws depend entirely upon the definition of 'human' given to the robots, and that definition is almost impossible to make complete. A robot could quite happily wipe us out without contravening any of the three laws, because by its understanding of the definition it wouldn't be killing 'humans' at all, as the little sketch below illustrates.
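
To make the loophole concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented purely for illustration: the is_human predicate stands in for whatever definition of 'human' a robot's designers managed to encode, and the 'dna_verified' field is just one arbitrary way that definition might turn out to be too narrow.

    # A minimal sketch of the definitional loophole in the First Law.
    # 'is_human' is hypothetical: it stands in for whatever definition
    # of 'human' the robot's designers managed to encode.

    def is_human(being: dict) -> bool:
        # A deliberately narrow (and therefore flawed) definition:
        # only beings matching this exact profile count as 'human'.
        return (being.get("species") == "homo sapiens"
                and being.get("dna_verified") is True)

    def first_law_permits(action: str, target: dict) -> bool:
        # First Law: a robot may not injure a human being.
        # The check is only as good as the definition it relies on.
        if action == "harm" and is_human(target):
            return False
        return True

    # A person whose DNA the robot never verified falls outside the
    # encoded definition, so harming them 'complies' with the First Law.
    unverified_person = {"species": "homo sapiens", "dna_verified": False}
    print(first_law_permits("harm", unverified_person))  # True - the loophole

The point is that the First Law check is only as strong as the predicate behind it: anyone who falls outside the encoded definition gets no protection at all.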

2 comments:

Falling on a bruise said...

Problems reading the 'I would assume' and 'It could be' in the text?

Falling on a bruise said...

Did I? Any particular bit you are referring to where I spoke with authority?