For the study, the team from Bielefeld University in Germany invited participants to the lab and asked them to step into the shoes of its robotic bartender, James.
“We asked ourselves how a human bartender solves the problem and whether a robotic bartender can use similar strategies,” said lead researcher Jan de Ruiter.
The participants saw and heard what the robot saw and heard, and selected actions from its repertoire.
The study focused on the bartenders’ actions.
For example, the participants did not speak to the customers immediately; instead, they first turned the robot towards the customers and looked at them.
“This eye contact is a visual handshake. It opens a channel such that both parties can speak,” the authors noted.
“Customers wish to place an order if they stand near the bar and look at the bartender. It is irrelevant if they speak,” added Sebastian Loth, co-author of the study.
Once it is established that the customer wishes to place an order, the body language becomes less important.
“At this point, the participants focused on what the customer said,” Loth added.
For example, if the camera lost the customer and the robot believed the customer was “not visible”, the participants ignored this visual information.
They continued speaking, served the drink, or asked the customer to repeat the order.
That means that a robotic bartender should sometimes ignore data.
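The behaviour described above can be pictured as a simple state machine: body language gates the start of an interaction, but once an order is underway, speech takes priority and a lost camera track is ignored. The sketch below is a hypothetical illustration of that heuristic, not the actual software running on James; the state names and inputs are assumptions for the example.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no interaction yet
    ORDERING = auto()  # order established; speech now dominates

class BartenderPolicy:
    """Hypothetical sketch of the heuristic described in the article."""

    def __init__(self):
        self.state = State.IDLE

    def update(self, visible, near_bar, facing_robot, speech):
        if self.state is State.IDLE:
            # Standing near the bar and looking at the bartender
            # signals the wish to order; speaking is irrelevant here.
            if visible and near_bar and facing_robot:
                self.state = State.ORDERING
                return "make eye contact and take order"
            return "wait"
        # In ORDERING, visual data such as "customer not visible"
        # is deliberately ignored: keep interacting via speech.
        if speech:
            return "serve drink"
        return "ask customer to repeat order"
```

Note that in the `ORDERING` state the `visible` input is never consulted, mirroring the finding that participants ignored the "not visible" signal once the order had begun.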
To collect the results, the participants were asked to put themselves into the mind of a robotic bartender.
They sat in front of a computer screen showing an overview of the robot's data: whether the customer was visible, their position at the bar, the position of their face, and the angles of their body and face relative to the robot.
This data was recorded during a trial session with the bartending robot James at its own mock bar in Munich.
For the trial, customers were asked to order a drink from James and to rate their experience afterwards.
The results were published in the open-access research journal Frontiers in Psychology.