Frankenstein's monster haunts discussions


Question: Frankenstein's monster haunts discussions of the ethics of artificial intelligence: the fear is that scientists will create something that has purposes and even desires of its own and which will carry them out at the expense of human beings. This is a misleading picture because it suggests that there will be a moment at which the monster comes alive: the switch is thrown, the program run, and after that its human creators can do nothing more. In real life there will be no such singularity. Construction of AI and its deployment will be continuous processes, with humans involved and to some extent responsible at every step. This is what makes Google's declarations of ethical principles for its use of AI so significant, because it seems to be the result of a revolt among the company's programmers. The senior management at Google saw the supply of AI to the Pentagon as a goldmine, if only it could be kept from public knowledge. "Avoid at all costs any mention or implication of AI," wrote Google Cloud's chief scientist for AI in a memo. "I don't know what would happen if the media starts picking up a theme that Google is building AI weapons or AI technologies to enable weapons for the Defense industry." That, of course, is exactly what the company had been doing. Google had been subcontracting for the Pentagon on Project Maven, which was meant to bring the benefits of AI to war-fighting. Then the media found out and more than 3,000 of its own employees protested. Only two things frighten the tech giants: one is the stock market; the other is an organised workforce. The employees' agitation led to Google announcing six principles of ethical AI, among them that it will not make weapons systems, or technologies whose purpose, or use in surveillance, violates international principles of human rights. This still leaves a huge intentional exception: profiting from "non-lethal" defence technology. Obviously we cannot expect all companies, still less all programmers, to show this kind of ethical fine-tuning. Other companies will bid for Pentagon business: Google had to beat IBM, Amazon and Microsoft to gain the Maven contract. But in all these cases, the companies involved - which means the people who work for them - will be actively involved in maintaining, tweaking and improving the work. This opens an opportunity for consistent ethical pressure and for the attribution of responsibility to human beings and not to inanimate objects. Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster, but the good Dr Frankenstein we need to worry about most.

Which of the following is true according to Paragraph 3?

Options: A. Google had been developing war-related AI secretly.
B. Google prioritizes employees' opinions over profits.
C. Google promises not to profit from AI-related defence technology.
D. Google's six principles violate international principles of human rights.

Answer: A

Explanation: The first sentence of Paragraph 3 states that "That" (referring back to the fact described in the previous paragraph, namely secretly building AI technologies to enable weapons) "is exactly what the company had been doing". The paragraph goes on to say that Google had been helping the Pentagon apply AI to war-fighting until the media found out and protests followed, so A is correct. [Technique] B over-extends the fact that the employees' protest led Google to announce its ethical AI principles: Google first concealed its involvement in Project Maven from its employees, then announced the principles only out of fear of an organised workforce, and still deliberately left an exception allowing it to profit from "non-lethal" defence technology, so Google does not truly value employees' opinions over profits. C contradicts the information implied by the last sentence of Paragraph 3, namely that Google still intends to profit from "non-lethal" defence technology. D swaps the subject: what must not violate international principles of human rights is the purpose or surveillance use of certain weapons systems or technologies ("weapons systems, or technologies..."), not Google's six principles themselves; by inference, the six principles are consistent with international principles of human rights.
