Air-Guardian blends human intuition with AI precision for safer flying
Introducing the Air-Guardian, a system created by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In today’s aviation landscape, where pilots are inundated with information from multiple screens, especially during critical moments, the Air-Guardian serves as a proactive co-pilot, establishing a symbiotic partnership between humans and machines centered around the comprehension of attention.
But how exactly does it gauge attention? For the human pilot, it relies on eye-tracking technology; for the neural system, it relies on a concept known as “saliency maps,” which pinpoint where the network’s attention is directed. These maps act as visual aids, highlighting the crucial regions of an image and making the behavior of a complex algorithm easier to understand and interpret. Unlike traditional autopilot systems, which intervene only once safety is already compromised, Air-Guardian uses these attention markers to identify early signs of potential risk.
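To make the idea concrete, here is a minimal sketch, in Python, of how a network saliency map and a human gaze heatmap might be compared to flag diverging attention. The function names, divergence measure, and threshold are assumptions for illustration, not the system’s actual code:

```python
# Minimal sketch (not the authors' implementation): comparing a network
# saliency map with a human gaze heatmap to flag diverging attention.
# All names and thresholds here are illustrative assumptions.
import numpy as np

def normalize(heatmap: np.ndarray) -> np.ndarray:
    """Scale a non-negative attention map so it sums to 1."""
    total = heatmap.sum()
    return heatmap / total if total > 0 else heatmap

def attention_divergence(machine_saliency: np.ndarray,
                         human_gaze: np.ndarray) -> float:
    """Symmetric KL-style divergence between two attention maps."""
    p = normalize(machine_saliency).ravel() + 1e-8
    q = normalize(human_gaze).ravel() + 1e-8
    return float(0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    machine = rng.random((32, 32))   # stand-in for a saliency map
    human = rng.random((32, 32))     # stand-in for a gaze heatmap
    RISK_THRESHOLD = 1.0             # illustrative threshold only
    div = attention_divergence(machine, human)
    print("divergence:", div, "early-risk flag:", div > RISK_THRESHOLD)
```

In this toy setup, a large divergence between the two maps would be the kind of early attention marker the system could act on before safety is compromised.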
The implications of this system extend beyond aviation. Similar collaborative control mechanisms could find applications in automobiles, drones, and a broader range of robotic systems.
MIT CSAIL postdoc Lianhao Yin, a lead author of a new paper on Air-Guardian, notes, “An intriguing feature of our approach is its adaptability. We have designed our cooperative layer and the entire end-to-end process to be trainable. We specifically selected the causal continuous-depth neural network model due to its dynamic capacity in tracking attention. Another unique aspect is its flexibility. The Air-Guardian system is not rigid; it can adapt to the demands of the situation, ensuring a balanced synergy between humans and machines.”
In real-world tests, both the pilot and the system made decisions based on the same raw images when navigating to the designated waypoint. The effectiveness of Air-Guardian was assessed based on cumulative rewards earned during flights and the efficiency of reaching the target waypoint. The system effectively reduced the risk associated with flights and improved the success rate of reaching the desired waypoints.
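As a rough illustration of the bookkeeping such metrics imply, the snippet below tallies cumulative reward and a waypoint success rate over a set of logged flights. The reward definition and distance tolerance are assumptions made for the example, not the paper’s actual evaluation code:

```python
# Illustrative sketch of the metrics described above: cumulative reward per
# flight and the fraction of flights that reach the target waypoint.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FlightLog:
    rewards: List[float]                 # per-step rewards logged during the flight
    final_position: Tuple[float, float]  # where the aircraft ended up
    waypoint: Tuple[float, float]        # the designated target waypoint

def cumulative_reward(flight: FlightLog) -> float:
    """Total reward earned over the flight."""
    return sum(flight.rewards)

def reached_waypoint(flight: FlightLog, tolerance: float = 50.0) -> bool:
    """True if the flight ended within `tolerance` (illustrative units) of the waypoint."""
    dx = flight.final_position[0] - flight.waypoint[0]
    dy = flight.final_position[1] - flight.waypoint[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance

def success_rate(flights: List[FlightLog]) -> float:
    """Fraction of flights that reached their waypoint."""
    return sum(reached_waypoint(f) for f in flights) / len(flights)
```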
Ramin Hasani, an MIT CSAIL research affiliate and the inventor of liquid neural networks, emphasizes, “This system exemplifies the innovative concept of human-centric AI-enabled aviation. Our utilization of liquid neural networks offers a dynamic, adaptable approach, ensuring that AI doesn’t merely replace human judgment but complements it, ultimately enhancing safety and collaboration in the aviation domain.”
The core strength of Air-Guardian lies in its foundational technology. It employs an optimization-based cooperative layer that utilizes visual attention from both humans and machines, along with liquid closed-form continuous-time neural networks (CfC), known for their ability to decode cause-and-effect relationships, to analyze incoming images for critical information. Complementing this is the VisualBackProp algorithm, which identifies the system’s focal points within an image, ensuring a clear understanding of its attention maps.
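The sketch below illustrates the general shape of such an attention-driven cooperative layer: the guardian’s command receives more weight when the machine’s saliency and the pilot’s gaze diverge. It is a simplified stand-in for the optimization-based layer described above, with the function names and weighting scheme assumed for illustration:

```python
# Minimal sketch, under stated assumptions, of an attention-based cooperative
# layer: the guardian's control command is blended with the pilot's command,
# with the blending weight driven by how much the network's saliency overlaps
# with the pilot's gaze. Not the optimization-based layer from the paper.
import numpy as np

def attention_overlap(machine_saliency: np.ndarray, human_gaze: np.ndarray) -> float:
    """Cosine similarity between flattened, non-negative attention maps."""
    p, q = machine_saliency.ravel(), human_gaze.ravel()
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    return float(p @ q / denom) if denom > 0 else 0.0

def cooperative_control(human_cmd: np.ndarray,
                        machine_cmd: np.ndarray,
                        machine_saliency: np.ndarray,
                        human_gaze: np.ndarray) -> np.ndarray:
    """Blend commands: the less the attentions agree, the more the guardian weighs in."""
    alpha = 1.0 - attention_overlap(machine_saliency, human_gaze)  # guardian weight
    return (1.0 - alpha) * human_cmd + alpha * machine_cmd
```

The design choice sketched here is the key intuition: control authority shifts smoothly rather than abruptly, so the guardian steps in only to the degree that human and machine attention disagree.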
For widespread adoption, there is a need to refine the human-machine interface. Feedback suggests that using an indicator, such as a bar, might be more intuitive to signify when the guardian system assumes control.
Air-Guardian marks a new era of safer skies, offering a dependable safety net for those moments when human attention falters.
Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author on the paper, underscores the synergy between human expertise and machine learning, highlighting the goal of using machine learning to augment pilots in challenging situations and reduce operational errors.
Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work, notes, “One of the most interesting outcomes of using a visual attention metric in this work is the potential for allowing earlier interventions and greater interpretability by human pilots. This showcases a great example of how AI can be used to work with a human, lowering the barrier for achieving trust by using natural communication mechanisms between the human and the AI system.”
This research received partial funding from the U.S. Air Force (USAF) Research Laboratory, the USAF Artificial Intelligence Accelerator, the Boeing Co., and the Office of Naval Research. The findings do not necessarily reflect the views of the U.S. government or the USAF.