Giving robots superhuman vision using radio signals

In the race to develop robust perception systems for robots, one persistent challenge has been operating in bad weather and harsh conditions. Traditional light-based vision sensors such as cameras and LiDAR (Light Detection and Ranging), for example, fail in heavy smoke and fog.

However, nature has shown that vision doesn’t have to be constrained by light’s limitations — many organisms have evolved ways to perceive their environment without relying on light. Bats navigate using the echoes of sound waves, while sharks hunt by sensing electrical fields from their prey’s movements.

Radio waves, whose wavelengths are orders of magnitude longer than those of light, can better penetrate smoke and fog, and can even pass through certain materials, all capabilities beyond human vision. Yet robots have traditionally relied on a limited toolbox: they either use cameras and LiDAR, which provide detailed images but fail in challenging conditions, or traditional radar, which can see through walls and other occlusions but produces crude, low-resolution images.

Now, researchers from the University of Pennsylvania School of Engineering and Applied Science (Penn Engineering) have developed PanoRadar, a new tool to give robots superhuman vision by transforming simple radio waves into detailed, 3D views of the environment.

“Our initial question was whether we could combine the best of both sensing modalities,” says Mingmin Zhao, Assistant Professor in Computer and Information Science. “The robustness of radio signals, which are resilient to fog and other challenging conditions, and the high resolution of visual sensors.”

In a paper to be presented at the 2024 International Conference on Mobile Computing and Networking (MobiCom), Zhao and his team from the Wireless, Audio, Vision, and Electronics for Sensing (WAVES) Lab and the Penn Research In Embedded Computing and Integrated Systems Engineering (PRECISE) Center describe how PanoRadar leverages radio waves and artificial intelligence (AI) to let robots navigate even the most challenging environments, like smoke-filled buildings or foggy roads. The team includes doctoral student Haowen Lai, recent master’s graduate Gaoxiang Luo and undergraduate research assistant Yifei (Freddy) Liu.

PanoRadar is a sensor that operates like a lighthouse, sweeping its beam in a circle to scan the entire horizon. The system consists of a rotating vertical array of antennas. As they rotate, these antennas send out radio waves and listen for their reflections from the environment, much like how a lighthouse’s beam reveals the presence of ships and coastal features.

Thanks to the power of AI, PanoRadar goes beyond this simple scanning strategy. Unlike a lighthouse that simply illuminates different areas as it rotates, PanoRadar cleverly combines measurements from all rotation angles to enhance its imaging resolution. The rotation creates a dense array of virtual measurement points, which allows PanoRadar to achieve imaging resolution comparable to LiDAR while the sensor itself costs only a fraction as much as typically expensive LiDAR systems. “The key innovation is in how we process these radio wave measurements,” explains Zhao. “Our signal processing and machine learning algorithms are able to extract rich 3D information from the environment.”
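
To make the virtual-array idea concrete, here is a minimal sketch of classic delay-and-sum (synthetic-aperture) imaging, in which echoes recorded at many rotation angles are combined coherently. It is an illustration under simplifying assumptions, not the team’s actual pipeline: the ~77 GHz carrier, the single narrowband sample per antenna, and the delay-and-sum formulation are all assumptions made for brevity.

```python
import numpy as np

C = 3e8          # speed of light, m/s
FC = 77e9        # assumed mmWave carrier frequency (not stated in the article)
LAMBDA = C / FC  # wavelength, roughly 3.9 mm

def synthesize_image(echoes, antenna_pos, angles, voxels):
    """Delay-and-sum imaging over the virtual array formed by rotation.

    echoes:      complex array, shape (n_angles, n_antennas); one narrowband
                 sample per antenna per rotation angle (a simplification)
    antenna_pos: (n_antennas, 3) element positions at rotation angle zero
    angles:      (n_angles,) rotation angles in radians
    voxels:      (n_voxels, 3) 3D points at which to focus the image
    """
    image = np.zeros(len(voxels), dtype=complex)
    for a, theta in enumerate(angles):
        # Rotate the physical array about the vertical axis to get the
        # virtual element positions at this rotation angle.
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
        pos = antenna_pos @ rot.T                               # (n_antennas, 3)
        # One-way distance from each virtual element to each voxel.
        dists = np.linalg.norm(voxels[:, None, :] - pos[None, :, :], axis=-1)
        # Undo the round-trip phase (4*pi*d/lambda) and sum coherently, so
        # echoes from a real reflector add constructively at its voxel.
        steering = np.exp(1j * 4 * np.pi * dists / LAMBDA)      # (n_voxels, n_antennas)
        image += steering @ echoes[a]
    return np.abs(image)  # reflectivity estimate at each voxel
```

The denser the set of rotation angles, the larger the effective aperture becomes, which is what pushes the resolution toward LiDAR-like levels.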

One of the biggest challenges Zhao’s team faced was developing algorithms to maintain high-resolution imaging while the robot moves. “To achieve LiDAR-comparable resolution with radio signals, we needed to combine measurements from many different positions with sub-millimeter accuracy,” explains Lai, the lead author of the paper. “This becomes particularly challenging when the robot is moving, as even small motion errors can significantly impact the imaging quality.”
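
A back-of-the-envelope sketch shows why sub-millimeter accuracy matters. Assuming a mmWave radar near 77 GHz (the article does not specify the band), the wavelength is about 4 mm, and a radial position error delta shifts an echo’s round-trip phase by 4*pi*delta/lambda; the correction step below is a hypothetical illustration of compensating such errors before coherent combination, not the team’s algorithm.

```python
import numpy as np

LAMBDA = 3.9e-3  # assumed wavelength for a ~77 GHz radar, in meters

def phase_error_deg(position_error_m):
    """Round-trip phase error caused by a radial position error.

    A reflector at range r contributes phase proportional to 4*pi*r/lambda,
    so an error delta in the estimated antenna position along the line of
    sight shifts the echo phase by 4*pi*delta/lambda.
    """
    return np.degrees(4 * np.pi * position_error_m / LAMBDA)

def motion_compensate(echoes, range_errors_m):
    """Remove the phase drift caused by estimated per-measurement motion
    errors before the echoes are combined coherently.

    echoes:         complex array, shape (n_measurements, n_antennas)
    range_errors_m: (n_measurements,) estimated radial errors in meters
    """
    correction = np.exp(1j * 4 * np.pi * np.asarray(range_errors_m) / LAMBDA)
    return echoes * correction[:, None]

# Even a 0.5 mm position error rotates the echo phase by about 92 degrees,
# enough to turn constructive combination into partial cancellation.
print(phase_error_deg(0.5e-3))
```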

Another challenge the team tackled was teaching their system to understand what it sees. “Indoor environments have consistent patterns and geometries,” says Luo. “We leveraged these patterns to help our AI system interpret the radar signals, similar to how humans learn to make sense of what they see.” During training, the machine learning model checked its understanding against LiDAR data, which served as ground truth, and continued to improve itself.
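
Here is a minimal sketch of that supervision loop, with a toy convolutional network and an L1 loss standing in for whatever architecture and objective the team actually used (both are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Toy stand-in for a radar-to-depth network; the paper's actual
# architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),  # per-pixel depth prediction
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(radar_heatmap, lidar_depth):
    """One supervised step: LiDAR depth serves as the ground truth that
    the radar-based prediction is checked against.

    radar_heatmap: (B, 1, H, W) processed radar reflectivity image
    lidar_depth:   (B, 1, H, W) co-registered LiDAR depth map
    """
    pred = model(radar_heatmap)
    valid = lidar_depth > 0  # mask pixels where LiDAR returned nothing
    loss = nn.functional.l1_loss(pred[valid], lidar_depth[valid])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup the LiDAR is needed only as a training-time reference; at run time the system predicts from radar alone.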

“Our field tests across different buildings showed how radio sensing can excel where traditional sensors struggle,” says Liu. “The system maintains precise tracking through smoke and can even map spaces with glass walls.” This is because radio waves aren’t easily blocked by airborne particles, and the system can even “capture” things that LiDAR can’t, like glass surfaces. PanoRadar’s high resolution also means it can accurately detect people, a critical feature for applications like autonomous vehicles and rescue missions in hazardous environments.

Looking ahead, the team plans to explore how PanoRadar could work alongside other sensing technologies like cameras and LiDAR, creating more robust, multi-modal perception systems for robots. The team is also expanding its tests to include various robotic platforms and autonomous vehicles. “For high-stakes tasks, having multiple ways of sensing the environment is crucial,” says Zhao. “Each sensor has its strengths and weaknesses, and by combining them intelligently, we can create robots that are better equipped to handle real-world challenges.”

This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported by a faculty startup fund.

