Science fiction author Isaac Asimov foresaw that at some point in the future robots would need some sort of law to stop them killing us all, and came up with the Three Laws of Robotics. These state that a robot may not injure a human being or, through inaction, allow a human being to come to harm; that a robot must obey the orders given to it by humans except where such orders would conflict with the First Law; and that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
All very sensible, but as robots get more intelligent it isn't a giant leap to think that one day one of them will think 'hang about, I am far superior to humans in every way, so why should I be subordinate to them?' and start taking us out with its laser eyes or whatever we stupidly equip it with.
With this in mind, the best of the robot builders and designers are coming together for the Institute of Electrical and Electronics Engineers' Symposium on Robot and Human Interactive Communication. First topic up for discussion is 'human-robot co-existence', as we become more and more reliant on things containing computer chips.
Perfectly sensible, because as the geeks at work pointed out, Asimov's laws depend entirely upon the definition of 'human' given to the robots. If a rogue nation hellbent on the genocide of its neighbour defined humans as people who speak English, its robots could be sent into France and wipe out the whole country without contravening any of the three laws, because by their understanding of the definition they wouldn't be killing 'humans' at all.
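The geeks' loophole boils down to a definition problem, and a toy sketch makes it painfully clear. This is purely illustrative (the names and checks are made up, not from any real robot control system): if the First Law check relies on an `is_human` predicate, whoever writes that predicate decides who the law protects.

```python
# Toy illustration of how Asimov's First Law hinges on the robot's
# programmed definition of 'human'. Entirely hypothetical code.

def first_law_forbids_harming(person, human_definition):
    """A robot may not injure a human being -- but only as far as
    its own definition of 'human' reaches."""
    return human_definition(person)

# A sane definition versus a rogue nation's rigged one.
sane_definition = lambda p: p["species"] == "homo sapiens"
rigged_definition = lambda p: p["language"] == "English"

parisian = {"species": "homo sapiens", "language": "French"}

# Under the sane definition, harming a Parisian is forbidden.
print(first_law_forbids_harming(parisian, sane_definition))    # True

# Under the rigged definition, the First Law simply doesn't apply.
print(first_law_forbids_harming(parisian, rigged_definition))  # False
```

Same three laws, same robot, same Parisian; only the definition changed, and with it who the laws protect.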
There's always a loophole, so Asimov's rules need a bit of tinkering.