职称英语 (Professional Title English)
Guest · 2023-10-21
Question
(1) Someday soon, you will ask a robot to fetch a slice of pizza from your refrigerator. On that day, you’ll trust that the robot won’t tear through your walls and rip the fridge door off its hinges to get at your leftovers.
(2) Getting robots to do the things humans do in the ways that humans do them (or better) without human intervention is an immensely wicked problem of autonomy. With as many as half of American jobs at risk of automation according to one study, and with an expected 10 million self-driving cars on the road, robots are going to be everywhere, and they aren't going away.
(3) The enormous scope and scale of how autonomous robots will begin changing our lives requires the public and technologists alike to consider the challenges of autonomy. Where will we allow robots to intervene in our lives? How do we make ethical judgments about the behavior of robots? What kind of partnerships will we develop with them? These are big questions. And one key challenge at the core of many of them is, in roboticist-talk, what it means to establish "meaningful human control," or sufficient oversight over an autonomous agent. To get a grip on our autonomous future, we'll need to figure out what constitutes "enough" oversight of a machine imbued with incredible intelligence.
(4) Today, most robots are made to accomplish a very specific set of tasks within a very specific set of parameters, such as geographic or time limitations, that are tied to the circuits of the machine itself. "We're not at the stage where robots can do everything that humans can do," says Dr. Spring Berman, assistant professor of mechanical and aerospace engineering at Arizona State University. "They could be multi-functional, but they're limited by their hardware."
(5) Thus, they need a human hand to help direct them toward a specific goal, in a futuristic version of ancient dog and human partnerships, says Dr. Nancy Cooke, a professor of human systems engineering at ASU, who studies human-machine teaming. Before dogs can lead search and rescue teams to buried skiers or sniff out bombs, they require an immense amount of training and "on-leash" time, Cooke says, and the same level of training is necessary for robots, though that training is usually programmed and based on multiple tests as opposed to the robot actually "learning."
(6) Even after rigorous "training" and vetting against a variety of distractions and difficulties, sometimes robots still do things they aren’t supposed to do because of quirks buried in their programming. In those cases, someone needs to be held accountable if the robot goes outside of its boundaries.
(7) "It can't be some patsy sitting in a cubicle somewhere pushing a button," says Dr. Heather Roff, a research scientist at ASU's Global Security Initiative and senior research fellow at Oxford University. "That's not meaningful." Based on her work with autonomous weapons systems, Dr. Roff says she is also wary of the sentiment that there will always be a human around. "A machine is not a morally responsible agent," she says. "A human has to have a pretty good idea of what he's asking the system to do, and the human has to be accountable."
(8) The allure of technology resolving problems that are difficult for humans, like identifying enemy combatants, is immense. Yet technological solutions require us to reflect deeply on the system being deployed: How is the combatant being identified? By skin tone, gender, age, or the presence or absence of certain clothing? What happens when a domestic police force deploys a robot equipped with this software? Ultimately, whose finger is on the trigger?
(9) Many of the ethics questions in robotics boil down to how the technology could be used by someone else in the future, and how much decision-making power you give to a robot, says Berman. "I think it's really important that a moral agent is the solely responsible person [for a robot]," says Roff. "Humans justify bad actions all the time even without robots. We can't create a situation where someone can shirk their moral responsibilities." And we can't allow robots to make decisions without asking why we want robots to make those decisions in the first place. Answering those questions allows us to understand and implement meaningful human control. (This passage is from csmonitor.com)

In this passage, the author expresses ________ over the prospect of autonomous robots.
Options
A、support
B、pessimism
C、hesitation
D、concern
Answer
D
Explanation
This is an attitude question: it asks you to judge the author's stance on the topic from the main idea of the whole passage. After describing the phenomenon and introducing the topic in the first two paragraphs, the author raises a series of questions in the first half of Paragraph 3. Paragraph 8 again stresses that technological development requires us to reflect deeply on the systems being deployed, and the final paragraph, in summing up, points out the importance of humans resolving certain key questions about robots. The author thus expresses concern and careful reflection over the future development of autonomous robots, so the answer is D. The passage does not stage a contest between opposing viewpoints, nor does the author explicitly predict the future of automation, which rules out A and B. The passage focuses throughout on the control and moral questions raised by autonomous robots and shows no hesitancy, which rules out C.