Passage: Demystifying how social and human-like robots work

Qualification Question Bank, 2022-08-02

Question

Passage: Demystifying how social and human-like robots work is vital so that we can understand and shape how they will affect our future, Dr Hatice Gunes will tell the Hay Festival next week. (1)

Fear mongering and myth-making about human-like and social robots is stopping us from engaging with the technology behind them and having an input into how they—and we—evolve, says Hatice Gunes, Associate Professor at University of Cambridge's Computer Laboratory. (2)

Dr Gunes will be speaking about her research at the Hay Festival on 1st June and says we need to move beyond sensationalist portrayals of human-like robots. Her Hay talk will centre on human-robot interaction (HRI) and how it can be used for our benefit, for instance, for helping children with autism learn how to read expressions and to stimulate the senses of elderly people in care. (3)

Dr Gunes will outline how HRI works. She says it has to be believable in order to be effective. That means robots’ appearance is very important. This is what has driven the development of humanoid robots with arms and aspects of a human face which can behave in a human-like way, for instance, moving their arms, legs and eyes. However, more important than appearance is their behaviour and emotional expressivity. Dr Gunes refers to the way we relate to Disney’s animated characters. “People believe in them because they can portray emotion,” she says. (4)

To achieve expressivity requires an understanding of how human emotions are portrayed and triggered. Scientists have been working on artificial emotional intelligence which enables new technology such as embodied agents and robots to both express and detect emotions, understanding non-verbal cues. Dr Gunes cites the work of Charles Darwin on the visual nature of emotions and how they can be mapped to various changes in facial expressions. (5)

Her research investigates how humanoids can be programmed not only to extract and respond to facial clues to emotions, but also to understand the context in which those emotions are expressed. That means they will be able to offer a response that is sensitive to specific contexts. (6)

Will robots ever be able to have emotions themselves though? Dr Gunes says there is no reason why not and questions what emotions are. The process of working with robots on artificial emotional intelligence unpicks the nature of our emotions, showing them to be a layering of different goals, experiences and stimuli. (7)

Another area which scientists are looking at in their quest to improve humanoids’ believability is personality. Dr Gunes has done a lot of work on personality in telepresence robotics, robots controlled remotely by a human—a kind of 3D avatar. These can be used in many ways, for instance, by medical staff to offer remote home care. The medical person can be based anywhere and operate the robot through a virtual headset. Dr Gunes is interested in how people react to the teleoperator (the human controlling the robot remotely) who is present in robot form. Once again, both the robot’s physical appearance and behaviour are important and research shows that their personality needs to be task dependent. (8)

Dr Gunes says there remain some big challenges for scientists working on HRI, including how to process and combine all the different data they are gathering, how to modify their appearance and behaviour dynamically, and how to keep their power going 24/7.
The major challenges, however, are to do with breaking down some of the myths and fears people have about humanoids. (9)

Part of this is because they don’t understand the benefits humanoid robots can bring and why, for instance, they need to take on a human form and understand emotions. She says humanoids can be positive in terms of increasing trust and engagement among certain groups, such as the elderly; that humans tend to anthropomorphise technology in any event; and that robots can be programmed to be limited to positive emotions that promote altruism. (10)

“People tend to love or hate robots, but they don’t really know a lot about…”

According to Dr Gunes, which of the following is true about robots and human emotions?

Options:
A. It is important for robots to learn about the context so as to understand human emotions.
B. Whether humanoids will have human emotions themselves still remains unclear.
C. It is a stigma for robots to have different layers of human emotions.
D. The nature of human emotions will hinder the development of humanoids.

Answer: A

Explanation: This question tests comprehension of details.
[Keywords] Dr Gunes; true; robots and human emotions
[Key sentences] Paragraph 6: Her research investigates how humanoids can be programmed not only to extract and respond to facial clues to emotions, but also to understand the context in which those emotions are expressed.
Paragraph 7: Will robots ever be able to have emotions themselves though? Dr Gunes says there is no reason why not and questions what emotions are. The process of working with robots on artificial emotional intelligence unpicks the nature of our emotions, showing them to be a layering of different goals, experiences and stimuli.
[Analysis] The question asks which statement about robots and human emotions is true according to Dr Gunes. Option A says it is important for robots to learn about the context so as to understand human emotions, which matches the key sentence in paragraph 6, so A is correct. Option B says whether humanoids will have human emotions themselves remains unclear; in paragraph 7 Dr Gunes says there is no reason why robots could not have emotions, so the question is not left open and B is wrong. Option C ("It is a stigma for robots to have different layers of human emotions") and option D ("The nature of human emotions will hinder the development of humanoids") are not mentioned in the passage.
