Use this identifier to reference this record: https://hdl.handle.net/1822/48691

Title: Toward robots as socially aware partners in human-robot interactive tasks
Author(s): Silva, Rui Manuel Gomes da
Advisor(s): Bicho, Estela; Branco, Pedro
Date: 18-Apr-2017
Abstract: A major challenge in modern robotics is the design of socially intelligent robots that can cooperate with people in their daily tasks in a human-like way. Needless to say, nonverbal communication is an essential component of everyday social interaction. We humans continuously monitor the actions and facial expressions of our partners, effortlessly interpret them in terms of intentions and emotional states, and use these predictions to select adequate complementary behaviour. Natural human-robot interaction, or joint activity, therefore requires that assistant robots be endowed with these two (high-level) social cognitive skills. The goal of this work was the design of a cognitive control architecture for socially intelligent robots, heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action understanding and emotion understanding in humans. Designing cognitive control architectures on this basis will lead to more natural and efficient human-robot interaction/collaboration, since the teammates become more predictable for each other. Central to this approach, neuro-dynamics is used as a theoretical language to model cognition, emotional states, decision making and action. The robot control architecture is formalized by a coupled system of Dynamic Neural Fields (DNFs), representing a distributed network of local but connected neural populations with specific functionalities. Different pools of neurons encode relevant information about hand actions, facial actions, action goals, emotional states, task goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions.
Ultimately, the DNF architecture implements a dynamic, context-dependent mapping from the observed hand and facial actions of the human onto adequate complementary behaviours of the robot, taking into account the inferred goal and inferred emotional state of the co-actor. The dynamic control architecture has been validated in multiple scenarios of a joint assembly task in which an anthropomorphic robot - ARoS - and a human partner assemble a toy object from its components. The scenarios focus on the robot's capacity to understand the human's actions and emotional states, detect errors, and adapt its behaviour accordingly by adjusting its decisions and movements during the execution of the task. It is possible to observe how, under the same conditions, a different emotional state can trigger a different overt behaviour in the robot, which may include different complementary actions and/or different movement kinematics.
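The building block of such an architecture is the Amari-type dynamic neural field, in which localized input drives a self-sustained activation peak shaped by lateral excitation and inhibition. The following is a minimal, illustrative sketch of a single field with a local-excitation/global-inhibition kernel; all parameter values and the simple Euler integration are assumptions for the example, not values taken from the thesis.

```python
import numpy as np

# One Amari-type Dynamic Neural Field (DNF) over a feature dimension x:
#   tau * du/dt = -u + integral( w(x - x') * f(u(x')) dx' ) + S(x) + h
# Parameter values below are illustrative only.

N = 101                            # number of field sites
x = np.linspace(-10.0, 10.0, N)    # feature dimension (e.g. an action parameter)
dx = x[1] - x[0]

tau, h = 10.0, -2.0                # time constant and resting level

def kernel(d, A_exc=2.0, s_exc=1.25, g_inh=0.5):
    """Lateral interaction: local excitation minus global inhibition."""
    return A_exc * np.exp(-d**2 / (2.0 * s_exc**2)) - g_inh

W = kernel(x[:, None] - x[None, :])   # N x N interaction matrix

def f(u, beta=4.0):
    """Sigmoid output nonlinearity: firing rate of the population."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def step(u, S, dt=0.1):
    """One Euler step of the field equation."""
    return u + (dt / tau) * (-u + (W @ f(u)) * dx + S + h)

# A localized input (e.g. evidence for one observed action) creates an
# activation peak that is stabilized by the recurrent interactions.
u = h * np.ones(N)                       # field starts at resting level
S = 4.0 * np.exp(-(x - 0.0)**2 / 2.0)    # Gaussian input centred at x = 0
for _ in range(2000):
    u = step(u, S)

peak_site = x[np.argmax(u)]              # peak forms at the input location
```

In a full architecture, several such fields (for hand actions, facial actions, goals, emotional states) would be coupled by feeding each field's output `f(u)` as part of the input `S` to its connected populations.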
Type: Doctoral thesis
Description: Doctoral thesis in Electronic and Computer Engineering (specialization in Control, Automation and Robotics)
URI: https://hdl.handle.net/1822/48691
Access: Open access
Appears in collections: BUM - Teses de Doutoramento
CAlg - Teses de doutoramento/PhD theses
DEI - Teses de doutoramento

Files in this record:
File: Rui Manuel Gomes da Silva.pdf | Description: Doctoral thesis | Size: 3,22 MB | Format: Adobe PDF
