
Question
This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.

Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans humans?”

What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans”.

Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”

But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI “vision” today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.

Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.

On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm”, or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.

While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.

To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.

Which of the following would be the best title for the text?

Options
A. AI’s Future: In the Hands of Tech Giants
B. Frankenstein, the Novel Predicting the Age of AI
C. The Conscience of AI: Complex But Inevitable
D. AI Shall Be Killers Once Out of Control

Answer: C

Explanation: This is a main-idea question. The passage opens with Mary Shelley’s novel to introduce the topic of ethical questions raised by new technologies, then turns to the fundamental questions posed by artificial intelligence and the current state of its development, and uses examples to detail the ethical problems AI raises and the responses to them. The final paragraph points out that only when the thinking of intelligent machines reflects humanity’s highest values can AI truly serve humans. The text therefore centers on the ethical issues raised by AI and stresses that these issues are complex yet unavoidable as the technology develops. Option C is a reasonable summary of the passage and is correct.