Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch.

Guest  2024-03-07

Question     Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch. Cornell University researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.
    The new ShadowSense technology originated as part of an effort to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to be able to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.
    Rather than installing a large number of contact sensors—which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin—the team took a counterintuitive approach. In order to gauge touch, they looked to sight. "By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images," the researcher, Yuhan Hu, said.
    The prototype robot consists of a soft inflatable bladder of nylon skin, under which is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures—touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all—with an accuracy of 87.5% to 96%, depending on the lighting. The robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen. By collecting enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot’s task, Hu said.
    The robot doesn’t even have to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices. In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy. "If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high-fidelity images of your appearance," Hu said. "That gives you a physical filter and protection, and provides psychological comfort."

How did the robot show that it could sense different gestures?

Options A. By using a neural-network-based algorithm.
B. By distinguishing the recorded training data.
C. By reacting with specific motions or sounds.
D. By giving feedback on an interactive screen.

Answer: C

Analysis: The phrase "the robot show that it could sense" in the question stem points to the fourth paragraph. This is a detail-identification question. The third sentence of the fourth paragraph says the robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. Option C, "By reacting with specific motions or sounds," restates this sentence, so C is the correct answer.
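For readers curious how the gesture recognition described in the passage might work in practice, here is a minimal sketch. The passage says the researchers trained a neural network on recorded shadow images; as a simpler stand-in, this sketch uses a nearest-centroid classifier on tiny synthetic "shadow" frames. All names, data shapes, and the classifier choice are invented for illustration and are not the Cornell team's actual code.

```python
# Hypothetical sketch of ShadowSense-style gesture classification.
# A nearest-centroid classifier stands in for the neural network
# described in the passage; the "shadow images" are synthetic.
import numpy as np

GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

rng = np.random.default_rng(0)

def synthetic_frame(gesture_id, size=8):
    """Fake 8x8 shadow image: each gesture darkens a different row band."""
    frame = rng.uniform(0.8, 1.0, (size, size))        # bright nylon skin
    frame[gesture_id, :] -= 0.5                        # cast a "shadow"
    return frame + rng.normal(0, 0.02, (size, size))   # camera noise

def train_centroids(n_per_class=20):
    """Average recorded training frames of each gesture into one centroid."""
    return {
        g: np.mean([synthetic_frame(i) for _ in range(n_per_class)], axis=0)
        for i, g in enumerate(GESTURES)
    }

def classify(frame, centroids):
    """Label a frame with the gesture whose centroid is nearest in pixel space."""
    return min(centroids, key=lambda g: np.linalg.norm(frame - centroids[g]))

centroids = train_centroids()
test_frame = synthetic_frame(GESTURES.index("hug"))
print(classify(test_frame, centroids))  # prints "hug" for this synthetic frame
```

On real data, the pixel-space distance used here would be far too crude, which is presumably why the researchers trained a neural network instead; the sketch only illustrates the overall train-then-classify pipeline.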
Please credit the original source when reposting: https://tihaiku.com/zcyy/3512233.html