The AlphaGo program’s victory is an example of how smart computers have become

Guest  2023-06-22

Question    The AlphaGo program’s victory is an example of how smart computers have become.
   But can artificial intelligence (AI) machines act ethically, meaning can they be honest and fair?
   One example of AI is driverless cars. They are already on California roads, so it is not too soon to ask whether we can program a machine to act ethically. As driverless cars improve, they will save lives. They will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should the cars be programmed to avoid hitting a child running across the road, even if that will put their passengers at risk? What about making a sudden turn to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?
   Perhaps there will be lessons to learn from driverless cars, but they are not super-intelligent beings. Teaching ethics to a machine even more intelligent than we are will be the bigger challenge.
   About the same time as AlphaGo’s triumph, Microsoft’s "chatbot" took a bad turn. The software, named Taylor, was designed to answer messages from people aged 18-24. Taylor was supposed to be able to learn from the messages she received. She was designed to slowly improve her ability to handle conversations, but some people were teaching Taylor racist ideas. When she started saying nice things about Hitler, Microsoft turned her off and deleted her ugliest messages.
   AlphaGo’s victory and Taylor’s defeat happened at about the same time. This should be a warning to us. It is one thing to use AI within a game with clear rules and clear goals. It is something very different to use AI in the real world. The unpredictability of the real world may bring to the surface a troubling software problem.
   Eric Schmidt is one of the bosses of Google, which owns AlphaGo. He thinks AI will be positive for humans. He said people will be the winner, whatever the outcome. Advances in AI will make human beings smarter, more able and "just better human beings".

What do we learn about Microsoft’s "chatbot" Taylor?

Options  A. She could not distinguish good from bad.
B. She could turn herself off when necessary.
C. She was not made to handle novel situations.
D. She was good at performing routine tasks.

Answer  A

Explanation  This is an inference question. According to paragraph five, the "chatbot" Taylor was designed to answer people's messages, and her programming allowed her to learn from the messages she received and gradually improve her ability to handle conversations. But when some people fed her racist ideas, she absorbed and learned those bad ideas as well. This shows that Taylor could not judge whether the information she received was right or wrong, so the answer is A. Option B contradicts the passage, which states that Microsoft shut her down because of her offensive remarks; she did not turn herself off. Options C and D are not mentioned in the passage, so both are ruled out.