

Question    After years in the wilderness, the term "artificial intelligence" (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of A.I., a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research.

    The field was launched, and the term "artificial intelligence" coined, at a conference in 1956 by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Allen Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways: AI unified the field in name only. But it was a term that captured the public imagination.
    Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating "artificial intelligence" would be substantially solved. Prototypes of medical-diagnosis programs and speech-recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. "There was undue optimism in the early 1980s," says David Leake, a researcher at Indiana University. "Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on."
    Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. "If it works, it can’t be AI," as Dr. Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to "blue-sky" research that was still years away from commercialisation. Researchers joked that AI stood for "almost implemented". Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. Yet all three once fell well within the umbrella of AI research.
    But the tide may now be turning, according to Dr. Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background—tasks humans can do well, but computers cannot. "Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development," says Dr. Leake.
    Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information—classic AI problems. That may mean that more artificial-intelligence companies will start to emerge to meet this challenge.

    The 1968 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lip-read. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. "People are beginning to realise that there are impressive things that these systems can do," says Dr. Leake hopefully.
Questions 66 to 70
Answer the following questions with the information given in the passage.
What happened to thinking computers and household robots?


Answer    They failed to materialise.

Explanation    (According to the fifth through seventh sentences of the third paragraph: prototypes of medical-diagnosis programs and speech-recognition software appeared to be making progress, but this proved to be a false dawn. Thinking computers and household robots failed to materialise, and public backlash grew.)