Question
(1) Someday soon, you will ask a robot to fetch a slice of pizza from your refrigerator. On that day, you’ll trust that the robot won’t tear through your walls and rip the fridge door off its hinges to get at your leftovers.
(2) Getting robots to do the things humans do in the ways that humans do them (or better) without human intervention is an immensely wicked problem of autonomy. With as many as half of American jobs at risk of automation according to one study, and with an expected 10 million self-driving cars on the road, robots are going to be everywhere, and they won’t go away.
(3) The enormous scope and scale of how autonomous robots will begin changing our lives requires the public and technologists alike to consider the challenges of autonomy. Where will we allow robots to intervene in our lives? How do we make ethical judgments about the behavior of robots? What kind of partnerships will we develop with them? These are big questions. And one key challenge at the core of many of them is, in roboticist-talk, what it means to establish "meaningful human control," or sufficient oversight over an autonomous agent. To get a grip on our autonomous future, we’ll need to figure out what constitutes "enough" oversight of a machine imbued with incredible intelligence.
(4) Today, most robots are made to accomplish a very specific set of tasks within a very specific set of parameters, such as geographic or time limitations, that are tied to the circuits of the machine itself. "We’re not at the stage where robots can do everything that humans can do," says Dr. Spring Berman, assistant professor of mechanical and aerospace engineering at Arizona State University. "They could be multi-functional but they’re limited by their hardware."
(5) Thus, they need a human hand to help direct them toward a specific goal, in a futuristic version of ancient dog and human partnerships, says Dr. Nancy Cooke, a professor of human systems engineering at ASU, who studies human-machine teaming. Before dogs can lead search and rescue teams to buried skiers or sniff out bombs, they require an immense amount of training and "on-leash" time, Cooke says, and the same level of training is necessary for robots, though that training is usually programmed and based on multiple tests as opposed to the robot actually "learning."
(6) Even after rigorous "training" and vetting against a variety of distractions and difficulties, sometimes robots still do things they aren’t supposed to do because of quirks buried in their programming. In those cases, someone needs to be held accountable if the robot goes outside of its boundaries.
(7) "It can’t be
some patsy sitting in a cubicle somewhere pushing a button
, " says Dr. Heather Roff, a research scientist at ASU’s Global Security Initiative and senior research fellow at Oxford University. "That’s not meaningful." Based on her work with autonomous weapons systems, Dr. Roff says she is also wary of the sentiment that there will always be a human around. "A machine is not a morally responsible agent, " she says, "a human has to have a pretty good idea of what he’s asking the system to do, and the human has to be accountable."
(8) The allure of technology resolving problems difficult for humans, like identifying enemy combatants, is immense. Yet technological solutions require us to reflect deeply on the system being deployed: How is the combatant being identified? By skin tone, gender, age, or the presence or absence of certain clothing? What happens when a domestic police force deploys a robot equipped with this software? Ultimately, whose finger is on the trigger?
(9) Many of the ethics questions in robotics boil down to how the technology could be used by someone else in the future, and how much decision-making power you give to a robot, says Berman. "I think it’s really important that a moral agent is the solely responsible person [for a robot]," says Roff. "Humans justify bad actions all the time even without robots. We can’t create a situation where someone can shirk their moral responsibilities." And we can’t allow robots to make decisions without asking why we want robots to make those decisions in the first place. Answering those questions allows us to understand and implement meaningful human control. (This passage is taken from csmonitor.com)

The scene in Para. 1 is described to ________.
Options
A、compare two generations of robots
B、show the development of autonomy
C、reveal the enormous power of robots
D、warn us about the potential threats
Answer
B
Explanation
This is an inference question. The first paragraph describes a scene in which a robot fetches pizza from the refrigerator without tearing through the walls or ripping the fridge door off its hinges. The first sentence of the second paragraph then points out that getting robots to do what humans do, in the way humans do it (or better) and without human intervention, is precisely the problem autonomy faces; the last sentence of that paragraph adds that robots will be everywhere. The scene is therefore described to reflect the development of robot autonomy, so B is the answer. The passage never mentions different generations of robots, which rules out A; and, judging from the rest of the article, C ("reveal the enormous power of robots") and D ("warn us about the potential threats") are unrelated to the article’s theme, so both can be eliminated.