In 2021, a team of researchers from Kyushu University, Ritsumeikan University, and Kansai University in Japan evaluated how humans respond to images of people with the same face.
The study showed that faces provide vital information for identifying individuals because, ordinarily, each human face corresponds to exactly one identity. Clone faces violate that one-to-one correspondence, which the researchers postulated may cause people to misjudge the identity of individuals who share the same face.
As robotics, machine learning, and artificial intelligence (AI) mature and converge, researchers and robot creators are approaching a tipping point for robots in our daily lives.
Jeff Cardenas, CEO of Apptronik, says that after decades of research, innovation, and trial and error, the technology needed for the robotic revolution to take off is finally here.
“We’re at an exciting inflection point where, in the next five years [..], we’ll see an explosion of robotic systems integrated into everyday life. The key is to build capable systems that can not only safely interact with humans, but are also human-centered at their core [..],” said Cardenas.
In June, Smithsonian Magazine ran an article delving into how human-looking we really want robots to be. In the study “Living skin on a robot,” Japanese researchers from the University of Tokyo encased a robotic finger in living human skin cells coated with collagen to see whether they could create a more human-like robotic finger.
But humans’ reactions to robots still vary. A study from the Georgia Institute of Technology showed that age mattered for what types of robots people preferred – most college-aged adults preferred robots to look like robots, while older adults preferred robots with more human faces.
A recent study from the Italian Institute of Technology (IIT) in Genoa, Italy, used a non-verbal Turing test to show that people who interacted with the humanoid open-source robot iCub couldn’t tell whether it was being controlled by a human or by a computer.
In the non-verbal Turing test, the iCub performed a series of simple button presses. Human participants judged whether they were interacting with a machine or a person by considering only the timing of the button presses during a joint action task.
Agnieszka Wykowska, a senior study author and head of the unit “Social Cognition in Human-Robot Interaction” at IIT, said the simple button presses were programmed to be in the human-like range of reaction times, so they resembled human characteristics of reacting to stimuli.
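The setup Wykowska describes can be sketched in a few lines: if a robot’s press timings are drawn from the same human-like range as real reaction times, an observer watching only the timing has very little to go on. This is a minimal illustration with assumed numbers – the 200–600 ms range and the Gaussian shape are our own placeholders, not the distributions used in the IIT study:

```python
import random
import statistics

random.seed(0)

def human_like_press_times(n):
    # Sample n reaction times (in seconds) from a Gaussian centered
    # at 350 ms, clipped to a rough human range of 200-600 ms.
    # These numbers are illustrative assumptions, not study data.
    return [min(0.6, max(0.2, random.gauss(0.35, 0.08))) for _ in range(n)]

# Presses from a human controller, and presses pre-programmed
# to fall in the same human-like range (as the study describes).
human = human_like_press_times(1000)
robot = human_like_press_times(1000)

# A naive observer comparing only average timing sees almost no difference.
gap = abs(statistics.mean(human) - statistics.mean(robot))
print(f"mean timing gap: {gap * 1000:.1f} ms")
```

With both sets of presses drawn from the same range, the average timing gap comes out at a few milliseconds – which is why, in the study, the brain’s ability to pick up on the residual subtleties is the interesting result.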
The results showed that people who interacted with the iCub could not tell whether its button presses were human-controlled or pre-programmed. Wykowska says this suggests the robot passed this version of the non-verbal Turing test for this specific task.
“The most exciting result of our study is that the human brain has a sensitivity to extremely subtle behavior which manifests humanness,” said Wykowska. “We chose simple behaviors of the robot [..] to understand if reduced information is sufficient for the brain to detect if a human controller is operating the robot.”
“The brains of our participants detected the human controller condition with sufficient accuracy to say that the brain is sensitive to this hint of humanness,” added Wykowska.
Cardenas says that ultimately, humanness in robots comes down to their design.
“Robots that are sleek and modern – and ones that even have faces or hands – are less intimidating than heavy, bulky, unappealing machines,” said Cardenas. “Aspects of a robot that is well-designed for humanness will increase their utility to help us, and make us want to interact with them.”
“The ultimate goal of humanness should be to build trust between humans and robots so they can become a tool for us and help us unlock new potential as human beings,” said Cardenas.