A sci-fi promise is being realized by MIT's CSAIL lab: using radio waves rather than x-rays, this technology can monitor movement even through walls. Many of the applications sound exciting, from video games to health care, though the technology also raises privacy concerns.
Via MIT News:
Their latest project, “RF-Pose,” uses artificial intelligence (AI) to teach wireless devices to sense people’s postures and movement, even from the other side of a wall.
The researchers use a neural network to analyze radio signals that bounce off people’s bodies, and can then create a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions.
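The excerpt doesn't detail RF-Pose's architecture, but pose-estimation networks commonly emit one confidence heatmap per body keypoint and read each joint's location off as the heatmap's peak; connecting those joints yields the stick figure. As a minimal, hypothetical sketch of that decoding step (not the researchers' code):

```python
import numpy as np

def decode_keypoints(heatmaps):
    """Return the (row, col) peak of each per-keypoint confidence heatmap."""
    joints = []
    for hm in heatmaps:
        r, c = np.unravel_index(np.argmax(hm), hm.shape)
        joints.append((int(r), int(c)))
    return joints

# Two toy 8x8 heatmaps with peaks planted at known positions,
# standing in for a network's output for a "head" and a "foot" keypoint.
head = np.zeros((8, 8)); head[1, 4] = 1.0  # peak near the top of the frame
foot = np.zeros((8, 8)); foot[7, 3] = 1.0  # peak near the bottom

print(decode_keypoints([head, foot]))  # [(1, 4), (7, 3)]
```

Run per radio-signal frame, the decoded joints trace out the walking, sitting, limb-moving figure the researchers describe.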
The team says that RF-Pose could be used to monitor diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy, providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns. The team is currently working with doctors to explore RF-Pose’s applications in health care.
All data the team collected was gathered with subjects' consent, and is anonymized and encrypted to protect user privacy. For future real-world applications, the team plans to implement a "consent mechanism" in which the person who installs the device is cued to perform a specific set of movements before it begins monitoring the environment.
Besides health care, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.
MIT Professor Dina Katabi co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT Professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li, and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.
One challenge the researchers had to address is that most neural networks are trained using data labeled by hand. A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, meanwhile, can’t be easily labeled by humans.
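The hand-labeling described above is the standard supervised-learning setup: each training example pairs an input with a human-assigned label, and the model learns from those pairs. A minimal, hypothetical sketch (not the researchers' code, and using a trivial nearest-centroid rule in place of a real neural network) makes the dependency on labels concrete:

```python
# Toy supervised learning from hand-labeled examples. Without the human
# labels there is nothing to fit -- which is exactly the difficulty with
# raw radio signals, since people can't look at them and say "cat"/"not cat".

def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for x, label in examples:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest (squared distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], x)))

# Hand-labeled toy data: "images" reduced to made-up two-number features.
labeled = [([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
           ([0.1, 0.9], "not cat"), ([0.2, 0.8], "not cat")]
model = train(labeled)
print(predict(model, [0.85, 0.15]))  # cat
```

The toy classifier is only as good as its labeled dataset, which is why unlabeled radio signals posed a real obstacle for the team.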