Question
Trust Me, I’m a Robot
With robots now emerging from their industrial cages and moving into homes and workplaces, roboticists are concerned about the safety implications beyond the factory floor. To address these concerns, leading robot experts have come together to try to find ways to prevent robots from harming people. Inspired by the Pugwash Conferences—an international group of scientists, academics and activists founded in 1957 to campaign for the non-proliferation of nuclear weapons—the new group of robo-ethicists met earlier this year in Genoa, Italy, and announced their initial findings in March at the European Robotics Symposium in Palermo, Sicily.
"Security and safety are the big concerns," says Henrik Christensen, chairman of the European Robotics Network at the Swedish Royal Institute of Technology in Stockholm. Should robots that are strong enough or heavy enough to crush people be allowed into homes? Is "system malfunction" a justifiable defence for a robotic fighter plane that contravenes(违反) the Geneva Convention and mistakenly fires on innocent civilians?
These questions may seem hard to understand, but in the next few years they will become increasingly relevant, says Dr. Christensen. According to the United Nations Economic Commission for Europe's World Robotics Survey, in 2002 the number of domestic and service robots more than tripled, nearly surpassing their industrial counterparts. By the end of 2003 there were more than 600,000 robot vacuum cleaners and lawn mowers—a figure predicted to rise to more than 4m by the end of next year. Japanese industrial firms are racing to build humanoid robots to act as domestic helpers for the elderly, and South Korea has set a goal that 100% of households should have domestic robots by 2020. In light of all this, it is crucial that we start to think about safety guidelines now, says Dr. Christensen.
Stop right there
So what exactly is being done to protect us from these mechanical menaces? "Not enough," says Blay Whitby. This is hardly surprising given that the field of "safety-critical computing" is barely a decade old, he says. But things are changing, and researchers are increasingly taking an interest in trying to make robots safer. One approach, which sounds simple enough, is to try to program them to avoid contact with people altogether. But this is much harder than it sounds. Getting a robot to navigate across a cluttered room is difficult enough without having to take into account what its various limbs or appendages might bump into along the way.
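To make the "avoid contact altogether" idea concrete, here is a minimal sketch of the kind of reactive rule involved. It assumes a hypothetical robot that exposes a single forward range reading; the threshold values and the `safe_forward_speed` helper are illustrative, not any particular vendor's API.

```python
# Minimal sketch of the "avoid contact altogether" approach described above.
# All names and thresholds are hypothetical; a real robot would go through
# its vendor's SDK or a framework such as ROS.

STOP_DISTANCE_M = 0.5      # halt if anything is closer than this
SLOW_DISTANCE_M = 1.5      # start slowing down inside this range


def safe_forward_speed(front_range_m: float, max_speed: float = 0.4) -> float:
    """Scale forward speed down as the nearest obstacle gets closer."""
    if front_range_m <= STOP_DISTANCE_M:
        return 0.0                     # too close: stop
    if front_range_m >= SLOW_DISTANCE_M:
        return max_speed               # clear ahead: full speed
    # Linear ramp between the stop and slow thresholds.
    fraction = (front_range_m - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)
    return max_speed * fraction


if __name__ == "__main__":
    for reading in (3.0, 1.0, 0.4):
        print(f"range {reading:.1f} m -> speed {safe_forward_speed(reading):.2f} m/s")
```

Even this toy rule shows why the problem is harder than it sounds: it only considers one direction and one distance, whereas a real robot must reason about every limb and appendage at once.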
Regulating the behaviour of robots is going to become more difficult in the future, since they will increasingly have self-learning mechanisms built into them, says Gianmarco Veruggio. As a result, their behaviour will become impossible to predict fully, he says, since they will not be behaving in predefined ways but will learn new behaviour as they go.
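One common mitigation for this unpredictability is to wrap the learning component in a fixed, hand-written safety layer that bounds whatever it proposes. The sketch below is illustrative only; `learned_policy` is a hypothetical stand-in for a robot's trained controller, and the limits are made-up numbers.

```python
# Illustrative safety "shield": a fixed rule layer that bounds whatever a
# self-learning controller proposes. learned_policy() is a hypothetical
# placeholder for the robot's trained model.

MAX_SPEED = 0.5          # hard limit the learner can never exceed (m/s)
MIN_CLEARANCE = 0.3      # never command forward motion with less clearance (m)


def learned_policy(observation: dict) -> float:
    """Placeholder for a learned controller; it may propose anything."""
    return observation.get("suggested_speed", 1.2)


def shielded_speed(observation: dict) -> float:
    proposed = learned_policy(observation)
    # Rule 1: clamp to the engineered speed limit.
    proposed = max(-MAX_SPEED, min(MAX_SPEED, proposed))
    # Rule 2: refuse forward motion when clearance is too small.
    if observation.get("front_range_m", float("inf")) < MIN_CLEARANCE and proposed > 0:
        proposed = 0.0
    return proposed


if __name__ == "__main__":
    print(shielded_speed({"suggested_speed": 1.2, "front_range_m": 2.0}))  # clamped to 0.5
    print(shielded_speed({"suggested_speed": 1.2, "front_range_m": 0.2}))  # blocked to 0.0
```

Note that a layer like this only bounds the commands; as Veruggio's point implies, it does not make the learned behaviour itself any more predictable.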
Then there is the question of unpredictable failures. What happens if a robot’s motors stop working, or it suffers a system failure just as it is performing heart surgery or handing you a cup of hot coffee? You can, of course, build in redundancy by adding backup systems, says Hirochika Inoue. But this guarantees nothing, he says. "One hundred per cent safety is impossible through technology," says Dr. Inoue. This is because ultimately no matter how thorough you are, you cannot anticipate the unpredictable nature of human behaviour, he says. Or to put it another way, no matter how sophisticated your robot is at avoiding people, people might not always manage to avoid it, and could end up tripping over it and falling down the stairs.
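Redundancy of the kind Dr. Inoue describes is often built as duplicated sensors plus a vote, with any disagreement or failure forcing a safe stop rather than continued motion. The following is a rough illustration under those assumptions; the sensor bounds and helper names are invented for the example.

```python
# Rough illustration of redundancy: three duplicate range sensors are read,
# their median is used, and any failure (too few plausible readings) triggers
# a safe stop instead of continued motion.
import statistics


class SensorFault(Exception):
    pass


def fused_range(readings_m: list[float]) -> float:
    """Median-vote over redundant sensors; reject obviously bad values."""
    valid = [r for r in readings_m if 0.0 < r < 50.0]
    if len(valid) < 2:                      # not enough healthy sensors left
        raise SensorFault("redundant sensors disagree or have failed")
    return statistics.median(valid)


def control_step(readings_m: list[float]) -> float:
    try:
        distance = fused_range(readings_m)
    except SensorFault:
        return 0.0                          # fail safe: command zero speed
    return 0.4 if distance > 0.5 else 0.0


if __name__ == "__main__":
    print(control_step([1.2, 1.3, 1.25]))   # healthy sensors: move
    print(control_step([1.2, -1.0, 99.0]))  # two implausible readings: stop
```

The design choice that matters here is failing to zero speed rather than to the last good reading; even so, as Dr. Inoue says, this guarantees nothing.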
Legal problems
In any case, says Dr. Inoue, the laws really just summarize commonsense principles that are already applied to the design of most modern appliances, both domestic and industrial. Every toaster, lawn mower and mobile phone is designed to minimize the risk of causing injury—yet people still manage to electrocute (kill by electric shock) themselves, lose fingers or fall out of windows in an effort to get a better signal. At the very least, robots must meet the rigorous safety standards that cover existing products. The question is whether new, robot-specific rules are needed—and, if so, what they should say.
"Making sure robots are safe will be critical," says Colin Angle of iRobot, which has sold over 2m "Roomba" household-vacuuming robots. But he argues that his firm’s robots are, in fact, much safer than some popular toys. "A radio-controlled car controlled by a six-year old is far more dangerous than a Roomba," he says. If you tread on a Roomba, it will not cause you to slip over; instead, a rubber pad on its base grips the floor and prevents it from moving. "Existing regulations will address much of the challenge," says Mr. Angle. "I’m not yet convinced that robots are sufficiently different that they deserve special treatment."
Robot safety is likely to surface in the civil courts as a matter of product liability. "When the first robot carpet-sweeper sucks up a baby, who will be to blame?" asks John Hallam, a professor at the University of Southern Denmark in Odense. If a robot is autonomous and capable of learning, can its designer be held responsible for all its actions? Today the answer to these questions is generally "yes". But as robots grow in complexity it will become a lot less clear cut, he says.
"Right now, no insurance company is prepared to insure robots," says Dr. Inoue. But that will have to change, he says. Last month, Japan’s Ministry of Trade and Industry announced a set of safety guidelines for home and office robots. They will be required to have sensors to help them avoid collisions with humans; to be made from soft and light materials to minimize harm if a collision does occur; and to have an emergency shut-off button. This was largely prompted by a big robot exhibition held last summer, which made the authorities realize that there are safety implications when thousands of people are not just looking at robots, but mingling with them, says Dr. Inoue.
However, the idea that general-purpose robots, capable of learning, will become widespread is wrong, suggests Mr. Angle. It is more likely, he believes, that robots will be relatively dumb machines designed for particular tasks. Rather than a humanoid robot maid, "it's going to be a heterogeneous (of different kinds) swarm of robots that will take care of the house," he says.
Statement: This passage is mainly about the development and safety of robots.

Options
A. Y
B. N
C. NG
Answer
B
Explanation
Look at the first paragraph and the topic sentence of each paragraph. This is clearly a main-idea question. The passage states its point at the outset: with robots moving out of their industrial cages, roboticists' concern about the safety implications now extends beyond the factory floor. The entire passage revolves around the issue of safety; the author does not primarily discuss the development of robots. The statement therefore goes beyond the passage's actual topic, so the answer is N.