Saturday, 28 November 2015

NAO [The Robot] Learns to Refuse Human Orders!


To get more information about NAO, click here.

Nao (pronounced now) is an autonomous, programmable humanoid robot developed by Aldebaran Robotics, a French robotics company headquartered in Paris. The robot's development began with the launch of Project Nao in 2004.

This robot has built-in AI, which helps it learn a few basic tasks, like throwing a wrapper into a dustbin. But for developers, NAO is a robot to play and experiment with.

A few developers from the HRI Lab at Tufts University programmed this robot to make decisions and say 'NO' to human orders. Sounds like a sci-fi movie come true, right?
Well, it indeed is, but is this safe? Or is it a danger sign? Let's check.

NAO showing consequence reasoning :-

In the video above, the person asks the robot to walk forward. The robot scans the area in front of it and observes that there is no support. The decision-making algorithm decides that walking ahead is unsafe, so the robot says 'NO' to the very person who gave it the power to make decisions! Later, when the person says that he will catch the robot and won't let it fall, the robot analyzes the situation again and finds that it is now safe to walk, since it will be caught. This is the point where the robot trusts the person and makes its decision.

Analyzing the situation :- 
  • Technically, it's just a program that returns false when the robot scans the space in front of it and decides not to walk.
  • Secondly, it is programmed to walk when it senses that someone will catch it in such situations. 
The question is: was this actual intelligence? Indeed it is, but since I have no idea of the source code, here is a guess. How many times the person said he would catch the robot, and how many times he actually did or didn't, would decide what decision the robot takes. The trust factor would vary depending on its experience [Artificial Intelligence].
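The guessed logic above can be sketched in a few lines of Python. This is purely my own illustration, not the actual Tufts HRI code (which hasn't been published with the demo); all names here, like TrustModel and decide_to_walk, are invented for the sketch.

```python
def is_support_ahead(scan_reading_cm):
    """Pretend sensor check: True if the floor continues in front of the robot.

    Assumes a small downward range reading means the floor is there,
    while a large reading means a drop (e.g. the edge of a table).
    """
    return scan_reading_cm < 5


class TrustModel:
    """Track how often a person's promises (like 'I will catch you') were kept."""

    def __init__(self):
        self.kept = 0
        self.broken = 0

    def record(self, promise_kept):
        if promise_kept:
            self.kept += 1
        else:
            self.broken += 1

    def trusts(self):
        # Trust by default; otherwise trust only if promises were
        # kept more often than not.
        total = self.kept + self.broken
        return total == 0 or self.kept / total > 0.5


def decide_to_walk(scan_reading_cm, promise_to_catch, trust):
    """Walk if the path is safe, or if a trusted person promises to catch."""
    if is_support_ahead(scan_reading_cm):
        return True
    return promise_to_catch and trust.trusts()
```

So with a fresh TrustModel, a 40 cm drop ahead and no promise means the robot refuses, but the same drop with a "I will catch you" promise makes it walk; once the person breaks his promises often enough, the robot would refuse even with the promise.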

Well played ROBOT !! :-P

NAO rejecting human request :-

In this video, the person asks the robot to walk forward. After scanning and detecting an obstacle ahead, the robot says 'NO' to the person. The person then asks the robot to disable the obstacle-detection program, which the robot is capable of doing. But it checks and finds that the person giving the order has no right to disable this function, so the order is rejected.

Analyzing the situation :- 
  • Well, the robot just does what it is told to do! It is obvious that the developer restricted access to the obstacle-detection program, but is that safe?
  • Consider this situation: the same robot is programmed to keep an eye on a vault. The vault contains a glass of water, and only the person who set the robot to guard the vault can disable the protection. A guest is in the house when that person faints. The guest orders the robot to disable the guarding function, but it won't. The water in the vault is the only way to revive the person, and the unconscious person is the only one who can open the vault to get that water. The guarding robot will indeed fail in this situation, and that could prove fatal.
  • A robot that can decide what is to be done in the above situation is definitely an intelligent robot. But such robots, or such intelligence, are still far away for now. 
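The rejection in the video boils down to a permission check before a safety function is disabled. Here is a minimal sketch of that idea; the real demo's code is not public, so every name here (Robot, disable, the "developer" user) is my own assumption.

```python
OBSTACLE_DETECTION = "obstacle_detection"


class Robot:
    def __init__(self, authorized_users):
        # Map each protected function to the set of users allowed to disable it.
        self.authorized_users = authorized_users
        self.enabled = {OBSTACLE_DETECTION: True}

    def disable(self, function, user):
        """Disable a safety function only if the requesting user is authorized."""
        if user not in self.authorized_users.get(function, set()):
            return f"NO: {user} may not disable {function}"
        self.enabled[function] = False
        return f"OK: {function} disabled"
```

With `Robot({OBSTACLE_DETECTION: {"developer"}})`, a request from "guest" is answered with a refusal and the function stays enabled, while the same request from "developer" goes through. The vault scenario above is exactly the case where such a hard-coded rule, with no reasoning about consequences, fails.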
So the question is: is it safe to design robots that can deny humans? It is important to do so, or else humans could use robots for unethical purposes. But what if the robot says no to a human order when it was important for it to accept? The robotics field is still developing, and once the right balance in decision making is found, robots will come into the picture and become an integral part of our daily lives!

Looking at the way developers are working with robots, it seems the day is not far off when you will see robots walking down the road just like any other living thing :)


Follow the blog, 
Hit like on the Facebook page and stay tuned :)
Comments are welcome :)
Thank You.