Question: Human memory is notoriously unreliable. Even people with the sharpest facial-recognition skills can only remember so much.
   It’s tough to quantify how good a person is at remembering. No one really knows how many different faces someone can recall, for example, but various estimates tend to hover in the thousands, based on the number of acquaintances a person might have.
   Machines aren’t limited this way. Give the right computer a massive database of faces, and it can process what it sees, then recognize a face it’s told to find, with remarkable speed and precision. This skill is what supports the enormous promise of facial-recognition software in the 21st century. It’s also what makes contemporary surveillance systems so scary.
   The thing is, machines still have limitations when it comes to facial recognition. And scientists are only just beginning to understand what those constraints are. To begin to figure out how computers are struggling, researchers at the University of Washington created a massive database of faces—they call it MegaFace—and tested a variety of facial-recognition algorithms (算法) as they scaled up in complexity. The idea was to test the machines on a database that included up to 1 million different images of nearly 700,000 different people—and not just a large database featuring a relatively small number of different faces, more consistent with what’s been used in other research.
   As the databases grew, machine accuracy dipped across the board. Algorithms that were right 95% of the time when they were dealing with a 13,000-image database, for example, were accurate about 70% of the time when confronted with 1 million images. That’s still pretty good, says one of the researchers, Ira Kemelmacher-Shlizerman. "Much better than we expected," she said.
   Machines also had difficulty adjusting for people who look a lot alike—either doppelgangers (长相极相似的人), whom the machine would have trouble identifying as two separate people, or the same person who appeared in different photos at different ages or in different lighting, whom the machine would incorrectly view as separate people.
   "Once we scale up, algorithms must be sensitive to tiny changes in identities and at the same time invariant to lighting, pose, age," Kemelmacher-Shlizerman said.
   The trouble is, for many of the researchers who’d like to design systems to address these challenges, massive datasets for experimentation just don’t exist—at least, not in formats that are accessible to academic researchers. Training sets like the ones Google and Facebook have are private. There are no public databases that contain millions of faces. MegaFace’s creators say it’s the largest publicly available facial-recognition dataset out there.
   "An ultimate face recognition algorithm should perform with billions of people in a dataset," the researchers wrote.

What is the difficulty confronting researchers of facial-recognition machines?

Options:
A. No computer is yet able to handle huge datasets of human faces.
B. There do not exist public databases with sufficient face samples.
C. There are no appropriate algorithms to process the face samples.
D. They have trouble converting face datasets into the right format.

Answer: B

Explanation: This is a detail question. The first two sentences of the second-to-last paragraph state that for researchers who would like to design systems to address these challenges, massive datasets for experimentation simply do not exist—and even where such datasets do exist, they are not accessible to academic researchers. Training sets like those held by Google and Facebook are private, and there is no public database containing millions of face images. The final paragraph adds that, as the researchers wrote, an ultimate face recognition algorithm should perform with billions of people in a dataset. It follows that the difficulty facing researchers is the lack of a sufficiently large, publicly available database of face images, so the answer is B. Options A, C, and D are not mentioned in the passage and can be ruled out.