Question     From the dystopian writings of Aldous Huxley and H. G. Wells to films like The Matrix, the rise of the machine has long terrified mankind. There are also thinkers who argue that artificial intelligence (AI) poses a real danger. The following article introduces an Oxford academic's warning that humanity runs the risk of creating superintelligent computers that eventually destroy humanity.
    Write an article of NO LESS THAN 300 words, in which you should:
    1. briefly summarize the article;
    2. give your comment.
    Dr Stuart Armstrong, of the Future of Humanity Institute at Oxford University, has predicted a future where machines run by artificial intelligence become so indispensable in human lives they eventually make us redundant and take over. And he says his alarming vision could happen as soon as the next few decades. Dr Armstrong said: "Humans steer the future not because we’re the strongest or the fastest, but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel."
    Dr Armstrong envisages machines capable of harnessing such large amounts of computing power, and at speeds inconceivable to the human brain, that they will eventually create global networks with each other—communicating without human interference.
    It is at that point that what is called Artificial General Intelligence (AGI)—in contrast to computers that carry out specific, limited, tasks, such as driverless cars—will be able to take over entire transport systems, national economies, financial markets, healthcare systems and product distribution. "Anything you can imagine the human race doing over the next 100 years there’s the possibility AGI will do very, very fast," he said.
    But while handing over mundane tasks to machines may initially appear attractive, it contains within it the seeds of our own destruction. In attempting to limit the powers of such super AGIs, mankind could unwittingly be signing its own death warrant. He warns that it will be difficult to tell whether a machine is developing in a benign or deadly direction.
    He says an AI would always appear to act in a way that was beneficial to humanity, making itself useful and indispensable—much like the iPhone’s Siri, which answers questions and performs simple organisational tasks—until the moment it could logically take over all functions. "As AIs get more powerful anything that is solvable by cognitive processes, such as ill health, cancer, depression, boredom, becomes solvable," he says. "And we are almost at the point of generating an AI that is as intelligent as humans."
    Dr Armstrong says mankind is now involved in a race to create "safe AI" before it is too late.
"Plans for safe AI must be developed before the first dangerous AI is created," he writes in his book Smarter Than Us: The Rise of Machine Intelligence. "The software industry is worth many billions of dollars, and much effort is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic. So we have to race toward the distant destination of safe AI and get there fast, outrunning the progress of the computer industry."
    One solution to the dangers of untrammelled AI suggested by industry experts and researchers is to teach super computers a moral code.
    Unfortunately, Dr Armstrong points out, mankind has spent thousands of years debating morality and ethical behaviour without coming up with a simple set of instructions applicable in all circumstances which it can follow. Imagine then, the difficulty in teaching a machine to make subtle distinctions between right and wrong. "Humans are very hard to learn moral behaviour from," he says. "They would make very bad role models for AIs."


Answer             Artificial Intelligence Won’t Destroy Humanity
    There are exciting advancements in the field of artificial intelligence, and they could make our world a better place. However, not everyone agrees. Dr. Stuart Armstrong says in the above article that the rise of superintelligent machines, which are likely to become smarter than humans, could spell the end of humanity. As it is not clear whether artificial intelligence is moving in a benign or vicious direction, he says, mankind must create "safe AI" before it is too late. In my view, although machines are getting better and faster, that doesn’t mean our future will be akin to some sci-fi scene where machines destroy us.
    Firstly, we might have overestimated the likelihood that machines will become as intelligent as human beings. In reality, the development of artificial intelligence is a slow and time-consuming process, and we are still many fundamental discoveries away from endowing machines with the abilities to learn, imagine and reason. It is very unlikely that an AI would be smart enough to devise a way to flatten us without being smart enough to understand why we would be a threat. Secondly, machines with superhuman intelligence, if they ever exist, will need us as much as we need them. Just consider what an AI would have to rely on to ensure its survival. If it is computer based, it will still depend on humans for maintenance. You might imagine more robots being created to perform these upkeep functions, but we are nowhere close to having such general-purpose robots. Unless there are major breakthroughs in robotics, machines are going to depend on humans for supplies and repairs. So rather than an adversarial relationship, a cooperative one is more likely to ensue.
    I think our trepidation about whether artificial intelligence will take over and eventually wipe the human race out stems from an overestimate of the impact of the technology as well as a lack of vision about how mankind itself will continue to prosper.

Analysis     The source article argues that artificial intelligence could destroy humanity, and it can be divided into three parts.
    The first paragraph states the problem humanity is about to face, namely Dr Armstrong of Oxford University's prediction that artificial intelligence will eventually make us redundant and take over.
    Paragraphs two to five analyse why this might happen: AI is developing extremely fast, operating at speeds inconceivable to the human brain; its reach is expanding to entire transport systems, national economies, financial markets and more; its direction of development is uncertain, and it is hard to tell whether it is moving in a benign or deadly direction; and it is becoming ever more useful and indispensable, so that an AI may eventually be able to logically take over all functions.
    The final four paragraphs turn to how AI's development might be steered and propose a solution: teach super computers a moral code. But since mankind has never produced a simple set of instructions applicable in all circumstances, this measure would be very hard to implement.
    Opening: first summarise the gist of the article, namely that Dr Armstrong warns that machines could one day bring about humanity's destruction and therefore urges mankind to race to create safe AI; then state the essay's position clearly: artificial intelligence will not destroy humanity.
    Body: give the reasons from two angles:
    1. The development of artificial intelligence is a long process.
    2. Machines still depend heavily on humans; the relationship between humans and machines is cooperative rather than adversarial.
    Conclusion: sum up the essay: we should neither exaggerate the impact of the technology nor underestimate humanity's own capacity to prosper.