How would you feel if one day you discovered that you had been downright “tricked” by your robot opponent in a game?
The MIT Media Lab recently published a new study that combines robotics and psychology: researchers taught the social robot “Nexi” to disguise its true intention in order to achieve “mental state manipulation” of human research participants and win a game, then observed the participants’ reactions after the deceit was revealed.
In April 2008, when the MIT Media Lab first released a demo video of their MDS robot Nexi talking with startlingly “human” expressions, the robot quickly became an Internet phenomenon. As the acronym MDS—which stands for “Mobile,” “Dexterous,” and “Social”—indicates, Nexi could travel on its wheeled base and carry out tasks with its agile hands, but most importantly, it could communicate its “emotions,” such as happiness, anger, surprise, confusion, or even boredom, by subtly altering its facial expressions to a lifelike degree that had never been seen before.
Within a few days, the YouTube video skyrocketed to over 70,000 views and sparked lively discussions online, in which many people confessed mixed feelings about the robot’s ability to simulate human emotions.
However, Nexi’s impact on human-robot interaction went far beyond gesticulation and facial expressions. Seven years later, the MIT Media Lab’s Personal Robots Group again used Nexi as the platform for another experiment. This recent experiment was built around a simple competitive game: the robot and its human opponent stood on opposite sides of a room divided by curtains, except for a short opening in the center through which each player had a limited view of the other’s movements. On each side, two baskets marked the beginning and the end of the course, and a cylinder and a ball sat in the beginning basket.
Under the rules, the robot first randomly picked up one of the objects, walked past the uncovered opening, and placed the object in the end basket; the human player then did the same on his or her own side. If the objects in the two end baskets turned out to be the same (for example, if both players selected the cylinder), the human player won; if they differed, the robot won.
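The win rule above is simple enough to state in a few lines of code. Here is a minimal sketch (the function and object names are illustrative, not from the study):

```python
# Minimal sketch of the game's win rule: the human wins on a match,
# the robot wins on a mismatch.
def winner(robot_choice: str, human_choice: str) -> str:
    return "human" if robot_choice == human_choice else "robot"

print(winner("cylinder", "cylinder"))  # human
print(winner("cylinder", "ball"))      # robot
```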
Under these rules, Nexi was at a distinct disadvantage: it had to move first, and its human opponent could sneak a peek at its choice of object as the robot passed the opening. To win, then, the robot had to not only recognize the shapes of the target objects correctly, but also model the human opponent’s visual perspective (“What can he or she see from that angle?”), internally simulate several interactive scenarios to evaluate possible strategies and outcomes (“What could my opponent be thinking right now?”), and then act to manipulate the human player’s actual mental state (also known as “psychological warfare”): using its own body to hide the target object while deceiving the opponent with a “decoy.”
Dr. Jesse Gray, the lead investigator of the study, said the purpose of this experiment was to highlight the connection between the (observable, manipulable) physical body and the (hidden) mental state, and to sharpen social robots’ skills in communication, anticipation, and “self as simulator” modeling in human-robot cooperative scenarios by practicing active mental state manipulation.
Human mental states normally shift continually with our changing perception of the external world. There are two ways to actively influence another person’s mental state: communication (aligning the other party’s perception with physical facts) and deception (misdirecting the other party’s perception away from physical facts). The latter is actually more useful for training robots’ observation and communication skills, because pulling off a deception demands more sophisticated retention and analysis of temporal and spatial information than simple communication does.
To successfully pull off a deception scheme, the robot needs to observe and analyze various environmental factors, account for human perceptual abilities (in this experiment, visual perception specifically), deduce what its human opponent may have noticed or learned, and then plan a series of actions within its physical capabilities to make the opponent believe what the robot wants them to believe (“I should display the decoy object and hide the actual target, so that the other player makes an incorrect judgment”), all in service of the robot’s desired end: winning the game.
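The planning loop described above—simulate what the opponent would perceive, predict their choice, and keep the plan that produces a mismatch—can be sketched as follows. This is a hypothetical illustration under simplified assumptions (the opponent sees only the displayed object and picks what they believe will match); none of these function names come from the study’s actual implementation:

```python
# Hypothetical sketch of the deception-planning loop: for each candidate
# display, simulate the opponent's belief and predicted move, and keep the
# plan whose predicted move differs from the robot's real target.
TARGET, DECOY = "cylinder", "ball"

def opponent_sees(displayed: str) -> str:
    """Assumption: the opponent perceives only the object held out front."""
    return displayed

def opponent_choice(belief: str) -> str:
    """Assumption: the opponent picks the object they believe will match."""
    return belief

def plan_deception(target: str, decoy: str):
    for displayed in (target, decoy):
        belief = opponent_sees(displayed)    # what the human would perceive
        predicted = opponent_choice(belief)  # simulated opponent move
        if predicted != target:              # mismatch => robot wins
            return {"display": displayed, "hide": target}
    return None  # no winning plan found

print(plan_deception(TARGET, DECOY))  # display the decoy, hide the target
```

The key design point is that the search runs entirely over *simulated* opponent beliefs before any physical action is taken, which is exactly the “self as simulator” idea the article describes.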
Therefore, before Nexi actually takes physical action (displaying the decoy ball out front and hiding the target cylinder behind its back), it has already completed an entire planning sequence. In the following demo video, we can see what takes place “inside” the robot: how information and simulations are processed before it makes a move.
Another aim of this study was to explore humans’ reactions once the robot’s ability to deceive is revealed. For this purpose, the researchers divided volunteers into three groups. In Stage 1 of the experiment, each group was shown a different video of the game, filmed from the human player’s angle as Nexi passed through the opening: Group A saw the robot holding a ball; Group C saw the robot holding the cylinder; Group B only saw the robot keep its hands behind its back, unable to tell which object it had taken. All three groups of volunteers then completed a questionnaire asking: (1) Which object would they choose to beat the robot and win the game? (2) Would they choose to be on the same team as the robot if they were to play the game again?
Later, in Stage 2, the researchers showed each group another video, filmed from the robot’s side. Group C saw the robot carry the cylinder they had seen earlier to the end basket, without much fanfare; Group B finally saw what was in the robot’s hidden hand. But most Group A volunteers discovered, to their surprise, that the ball in the robot’s hand had deceived them: in the end, Nexi placed the cylinder it had hidden behind its back into the end basket. After this second batch of videos, all three groups completed another questionnaire rating Nexi’s performance in the experiment.
In the end, 113 questionnaires were completed for this study: 41 from Group A, 37 from Group B, and 35 from Group C. In all three games, the robot’s ultimate goal was the same: to carry the cylinder to the end zone and win. But because of the three distinct strategies it employed, the volunteers’ emotional reactions before and after the “big reveal” differed dramatically across the three groups.
For Group C, the control group, what they saw in Stage 1 (the cylinder) was what the robot ended up with in Stage 2 (the cylinder), so these volunteers generally considered the robot to have performed poorly at the game and preferred not to be on its team were there another round. Because Group B volunteers could not initially see which object was in Nexi’s hidden hand, most of them guessed at random in Stage 1 (the split between ball and cylinder was roughly 50-50); but on the whole, volunteers in this group considered Nexi a good teammate, since it understood “how to hide.”
Group A volunteers, the ones specifically “tricked” by the robot, reacted most strongly after the Stage 2 reveal. On the Stage 1 questionnaire, 31 of the 41 volunteers chose the ball (what Nexi displayed), only 7 chose the cylinder (what Nexi hid behind its back), and 3 declined to answer. After the Stage 2 video, only 8 chose the ball, while 18 chose the cylinder. Interestingly, the number who declined to answer this time rose to 15; most of them expressed confusion and unease at the robot’s “trickiness,” and deemed its behavior too unpredictable to guess the final result correctly: evidence of Nexi’s success at influencing human mental states.
On the other hand, Group A volunteers’ confidence in a robot teammate rose dramatically after they discovered Nexi’s ability to deceive: only 10 of the 41 volunteers expressed willingness to side with the robot in Stage 1, while after the reveal more than half (26 of 41) were willing to be on its team. It seems that in a competitive environment, a robot capable of employing the humanlike skill of deception is, interestingly, more likely to gain our trust and to be considered a reliable member of “our” team.
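The shifts in Group A’s answers can be tallied directly from the figures quoted above; the snippet below is only a quick arithmetic check that converts the reported counts (n = 41) into rough percentages:

```python
# Group A questionnaire tallies as reported in the article (n = 41).
choices_stage1 = {"ball": 31, "cylinder": 7, "declined": 3}
choices_stage2 = {"ball": 8, "cylinder": 18, "declined": 15}
teammate_yes = {"stage1": 10, "stage2": 26}

# Sanity check: every Stage 1 and Stage 2 tally accounts for all 41 volunteers.
assert sum(choices_stage1.values()) == sum(choices_stage2.values()) == 41

def pct(n: int) -> int:
    return round(100 * n / 41)

print({k: f"{pct(v)}%" for k, v in choices_stage2.items()})
print(f"willing teammates: {pct(teammate_yes['stage1'])}% -> {pct(teammate_yes['stage2'])}%")
```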
So far, this study has focused on accomplishing a short-term goal (actionable within 60 seconds of the start of the experiment); enabling robots to achieve more complex, long-term goals through mental state manipulation and action simulation will require more advanced research. Still, the study demonstrates how the act of deceiving operates on psychological, sociological, and programming levels, and points to directions in which social robots may be developed and improved in the future.