Guest
2023-10-21
Question
(1) Someday soon, you will ask a robot to fetch a slice of pizza from your refrigerator. On that day, you’ll trust that the robot won’t tear through your walls and rip the fridge door off its hinges to get at your leftovers.
(2) Getting robots to do the things humans do in the ways that humans do them (or better) without human intervention is an immensely wicked problem of autonomy. With as many as half of American jobs at risk of automation according to one study, and with an expected 10 million self-driving cars on the road, robots are going to be everywhere, forever, and they won’t go away.
(3) The enormous scope and scale of how autonomous robots will begin changing our lives requires the public and technologists alike to consider the challenges of autonomy. Where will we allow robots to intervene in our lives? How do we make ethical judgments about the behavior of robots? What kind of partnerships will we develop with them? These are big questions. And one key challenge at the core of many of them is, in roboticist-talk, what it means to establish "meaningful human control," or sufficient oversight over an autonomous agent. To get a grip on our autonomous future, we’ll need to figure out what constitutes "enough" oversight of a machine imbued with incredible intelligence.
(4) Today, most robots are made to accomplish a very specific set of tasks within a very specific set of parameters, such as geographic or time limitations, that are tied to the circuits of the machine itself. "We’re not at the stage where robots can do everything that humans can do," says Dr. Spring Berman, assistant professor of mechanical and aerospace engineering at Arizona State University. "They could be multi-functional but they’re limited by their hardware."
(5) Thus, they need a human hand to help direct them toward a specific goal, in a futuristic version of ancient dog and human partnerships, says Dr. Nancy Cooke, a professor of human systems engineering at ASU, who studies human-machine teaming. Before dogs can lead search and rescue teams to buried skiers or sniff out bombs, they require an immense amount of training and "on-leash" time, Cooke says, and the same level of training is necessary for robots, though that training is usually programmed and based on multiple tests as opposed to the robot actually "learning."
(6) Even after rigorous "training" and vetting against a variety of distractions and difficulties, sometimes robots still do things they aren’t supposed to do because of quirks buried in their programming. In those cases, someone needs to be held accountable if the robot goes outside of its boundaries.
(7) "It can’t be some patsy sitting in a cubicle somewhere pushing a button," says Dr. Heather Roff, a research scientist at ASU’s Global Security Initiative and senior research fellow at Oxford University. "That’s not meaningful." Based on her work with autonomous weapons systems, Dr. Roff says she is also wary of the sentiment that there will always be a human around. "A machine is not a morally responsible agent," she says. "A human has to have a pretty good idea of what he’s asking the system to do, and the human has to be accountable."
(8) The allure of technology resolving problems difficult for humans, like identifying enemy combatants, is immense. Yet technological solutions require us to reflect deeply on the system being deployed: How is the combatant being identified? By skin tone, or gender or age or the presence or absence of certain clothing? What happens when a domestic police force deploys a robot equipped with this software? Ultimately, whose finger is on the trigger?
(9) Many of the ethics questions in robotics boil down to how the technology could be used by someone else in the future, and how much decision-making power you give to a robot, says Berman. "I think it’s really important that a moral agent is the solely responsible person [for a robot]," says Roff. "Humans justify bad actions all the time even without robots. We can’t create a situation where someone can shirk their moral responsibilities." And we can’t allow robots to make decisions without asking why we want robots to make those decisions in the first place. Answering those questions allows us to understand and implement meaningful human control. (Source: csmonitor.com)

Which of the following themes is repeated by both Berman and Roff?
Options
A、Hardware.
B、Ethics.
C、Training.
D、Technology.
Answer
B
Explanation
This is a detail question. In the final paragraph, Berman points out that many of the ethics questions in robotics come down to how the technology could be used by someone else in the future, so Berman clearly raises the theme of ethics. Roff addresses it repeatedly: in paragraph 7 she argues that a machine is not a morally responsible agent and that a human must be accountable, and in the final paragraph she again stresses that a moral agent should be the solely responsible person for a robot. Since both Berman and Roff touch on questions of ethics, B is the answer. A ("Hardware") is mentioned only by Berman in the last sentence of paragraph 4 and never by Roff, so it is eliminated. C ("Training") is a theme raised by Dr. Cooke, not by Berman or Roff, so it is eliminated. "Technology" is mentioned only by Berman at the start of paragraph 9, so D is also eliminated.