Researchers at Franklin & Marshall College are working to improve the computational tools that underpin the performance of socially assistive robots. As robots are introduced into real-world settings, they need to recognise when their human users require assistance. Such tools would let a robot autonomously judge whether its help is needed, and help it read the social cues humans give and respond accordingly.
The researchers hope the technique will help people with everyday tasks in settings where there are not enough human helpers available.
The researchers also believe that a robot that understands the person it is helping can perform the task with ‘dignity’.
The scientists have also focused on preserving the user’s autonomy. To that end, they have designed an algorithm that lets the robot strike a balance between helping too much and helping too little.
When users need help, they can signal the robot either explicitly or implicitly: through facial expressions, through body language, or by using their eye gaze to communicate.
One such signal is the ‘confirmatory gaze’: when we are unsure about something, we look at another person as a request for them to look at what we are looking at. Users can direct the same kind of gaze at a robot.
The scientists are working to enable robots to recognise such gaze-related cues automatically. The robot analyses several types of cues at once, such as the user’s speech and eye-gaze patterns.
The technique requires no prior information about the task the user is completing, which enables the robot to cooperate with its users in real-world contexts.
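To make the idea concrete, here is a minimal, hypothetical sketch of multimodal cue detection of this kind: it flags a likely help request when a gaze toward the robot occurs close in time to a pause in the user’s speech. The event names, the time window, and the rule itself are illustrative assumptions, not the researchers’ actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class CueEvent:
    t: float    # timestamp in seconds
    kind: str   # e.g. "gaze_at_robot", "gaze_at_task", "speech_pause" (hypothetical labels)

def needs_help(events: list[CueEvent], window: float = 2.0) -> bool:
    """Flag a likely help request when a gaze toward the robot
    occurs within `window` seconds of a pause in the user's speech.
    A simple stand-in rule for illustration only."""
    gazes = [e.t for e in events if e.kind == "gaze_at_robot"]
    pauses = [e.t for e in events if e.kind == "speech_pause"]
    return any(abs(g - p) <= window for g in gazes for p in pauses)
```

For example, a user who stops speaking at 3.2 s and looks at the robot at 3.9 s would be flagged, whereas a user who keeps looking at the task would not. Note that this rule needs no model of the task itself, only the timing of the cues.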
The method has proved highly effective, and the scientists hope it will soon be built into social robots to improve their performance.