Stephanie Gil wins DARPA Young Faculty Award

Award will support research into improving resilience in multi-robot teams

[Photo: Stephanie Gil, Assistant Professor of Computer Science]

Stephanie Gil, Assistant Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences, has won a Defense Advanced Research Projects Agency (DARPA) Young Faculty Award. The award, worth up to $1 million, will support Gil’s research into improving the resilience of reinforcement learning (RL) systems used in multi-robot teams, making them safer and more reliable for a range of real-world applications.

Multi-robot teams could be deployed to locate survivors after an earthquake, deliver humanitarian supplies in a war zone, or streamline transportation as fleets of autonomous rideshare vehicles in cities. In each of those situations, the robots’ algorithms can be confronted with so-called malicious agents, whether an actual hacker attempting to interfere with robots on a battlefield or an ill-informed model trained on erroneous data or misinformation.

Gil’s research focuses on building resilient reinforcement learning algorithms that help robotic teams coordinate to achieve their tasks when faced with these malicious agents. With the DARPA Young Faculty Award, Gil and her team will explore how this type of adversarial influence impacts the robots’ long-term planning and how to mitigate its effect on the team’s decision-making. Gil’s research will also address an important question in multiagent reinforcement learning systems: when to trust, and perhaps more importantly when not to trust, agents or data in the system.
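To make the trust question concrete, a common pattern in the resilient multi-agent literature is to weight each teammate’s contribution to a shared estimate by a learned trust score, so that a low-trust (potentially malicious) agent has little influence on the team’s decision. The following Python sketch is purely illustrative; the function and its inputs are hypothetical and do not represent Gil’s actual algorithm:

    import numpy as np

    def trust_weighted_estimate(values, trust):
        """Fuse teammates' reported estimates, weighting each by its trust score.

        values: shape (n_agents,), each agent's reported estimate
        trust:  shape (n_agents,), scores in (0, 1] learned over time
        """
        weights = trust / trust.sum()   # normalize trust scores into weights
        return weights @ values         # low-trust agents contribute very little

    # Agent 3 reports an adversarial outlier but has already lost the team's trust.
    values = np.array([1.0, 1.1, 0.9, 10.0])
    trust = np.array([0.9, 0.8, 0.9, 0.01])
    print(trust_weighted_estimate(values, trust))  # ~1.03, the outlier is suppressed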

“The understanding of trust can be key to identifying and eliminating the influence of malicious agents and allowing the robotic agents to understand when the strategy they are deploying to achieve a goal may or may not be the right one,” said Gil. “Our goal is to characterize a class of problems for which trust between agents can ensure a stable strategy, and understand how reinforcement learning can provide the ability to adapt and learn a better strategy over time with adversaries in the loop.”
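The “adapt and learn over time” part of that goal can be illustrated, again as a toy sketch rather than the team’s method, with a multiplicative trust update: agents whose reports repeatedly disagree with the team’s consensus lose trust exponentially, so a persistent adversary is gradually driven out of the decision-making loop:

    import numpy as np

    rng = np.random.default_rng(0)
    trust = np.ones(4)  # start out trusting every agent equally

    for step in range(20):
        honest = rng.normal(1.0, 0.05, size=3)  # honest agents report near the truth
        values = np.append(honest, 10.0)        # agent 3 lies in every round
        consensus = (trust / trust.sum()) @ values
        error = np.abs(values - consensus)
        trust = np.clip(trust * np.exp(-error), 1e-6, 1.0)  # disagreement erodes trust

    print(trust)  # the adversary's score has collapsed toward the 1e-6 floor

Under these toy dynamics, the honest agents’ scores stabilize while the adversary’s decays toward zero, the kind of stable strategy with adversaries in the loop that the quote describes.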

This work could help researchers understand when reinforcement learning can and can’t be trusted to improve the performance of multi-robot teams, making it safer to deploy in real-world environments.

Topics: AI / Machine Learning, Computational Science & Engineering, Robotics

Press Contact

Leah Burrows | 617-496-1351 | lburrows@seas.harvard.edu