As AI agents increasingly enter human society, it is paramount that their decision making and behaviors are transparent (i.e. understandable and predictable) to humans. Thus, I researched how AI agents can use key demonstrations to summarize their decision-making and help humans predict agent behaviors across a domain.
I previously completed a Master's in robotics at Carnegie Mellon with Red Whittaker and Nathan Michael, researching autonomous radiation source localization. Before that, I graduated from Princeton University ('16) with a degree in Mechanical & Aerospace Engineering and a certificate in Robotics & Intelligent Systems. My undergraduate thesis in computer vision, on modeling uncertainty in feature-based visual odometry, was advised by Nathan Michael and Jianxiong Xiao.
I am currently looking for opportunities related to my research interests (including but not limited to explainable AI, AI alignment/safety, human-AI interaction, and robotics).
Impact: Throughout my PhD, I published at top AI, machine learning, and robotics conferences (e.g. AAAI, IROS, HRI), workshops (e.g. ICML), and journals (Frontiers in Robotics and AI, THRI), and was selected for two doctoral consortia (AAAI, HRI).
Throughout my PhD, I have been fortunate to collaborate with researchers at multiple institutions (Google DeepMind, UMass Lowell, Tufts, BYU), which led to two joint publications.
Toward greater transparency (i.e. understandability and predictability) of AI behaviors to humans, my research explored how an AI agent may teach its underlying reward function to a human learner using informative demonstrations that exhibit key tradeoffs in decision making.
Our first key insight was that a demonstration's informativeness to a human is not intrinsic, but is inherently tied to that human's prior beliefs and their current expectations of agent behavior. We thus relied on inverse reinforcement learning and counterfactual reasoning (i.e. the extent to which the agent's demonstration deviates from the human's current expectations) to evaluate how informative a candidate demonstration would be in revealing the agent's reward function to that human.
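As a rough illustration of this idea (a minimal sketch rather than the published algorithm), the snippet below scores a candidate demonstration against a human's counterfactual expectations, assuming a linear reward over features and a finite set of candidate reward-weight hypotheses the human might hold; all names, shapes, and the helper in the usage comment are illustrative.

```python
import numpy as np

def informativeness(demo_features, counterfactual_features, hypotheses):
    """Fraction of the human's candidate reward hypotheses that a demonstration rules out.

    demo_features           : feature counts of the agent's demonstrated trajectory, shape (d,)
    counterfactual_features : feature counts of the trajectory the human would expect
                              under each candidate hypothesis, shape (n, d)
    hypotheses              : candidate reward-weight vectors the human may hold, shape (n, d)

    Under hypothesis w_i, the demonstration is surprising (hence informative) when the
    human's expected trajectory would earn more reward than the demonstration,
    i.e. w_i . (mu_demo - mu_cf_i) < 0."""
    diffs = demo_features - counterfactual_features        # (n, d)
    margins = np.einsum('nd,nd->n', hypotheses, diffs)     # (n,)
    return float(np.mean(margins < 0))

# e.g. pick the demonstration that rules out the most currently held hypotheses:
# best = max(candidates, key=lambda demo: informativeness(demo.features,
#                                                         expected_features(demo.env, hypotheses),
#                                                         hypotheses))
```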
Our second key insight was that informativeness and difficulty of comprehension are often correlated, and we leveraged ideas from the education literature (e.g. zone of proximal development / "Goldilocks" principle) to ensure that the selected demonstrations presented the right level of challenge. If the difference between a demonstration and the human's expectation was too small, reconciliation would be trivial; if too large, the gap would be irreconcilable in one shot. We thus used scaffolding to provide demonstrations that incrementally increased in information gain and simultaneously eased humans into learning.
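A hedged sketch of this scaffolding idea follows, reusing an informativeness score like the one above and ignoring the belief update that would occur between demonstrations in the full approach; the band thresholds are illustrative.

```python
def scaffolded_demos(candidates, score_fn, bands=((0.2, 0.4), (0.4, 0.7), (0.7, 1.0))):
    """Select demonstrations whose informativeness (per score_fn, e.g. the counterfactual
    score above) increases gradually.  Each round draws from a 'Goldilocks' band:
    informative enough to be worth reconciling, but not so far from the human's
    expectations that the gap is irreconcilable in one shot.  Band thresholds are
    illustrative, and the full approach would also update the model of the human's
    beliefs between demonstrations, which shifts these scores."""
    sequence, remaining = [], list(candidates)
    for low, high in bands:
        scored = [(score_fn(d), d) for d in remaining]
        in_band = [sd for sd in scored if low <= sd[0] <= high]
        pool = in_band or scored
        if not pool:
            break
        pick = max(pool, key=lambda sd: sd[0])[1]
        sequence.append(pick)
        remaining.remove(pick)
    return sequence
```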
Finally, we explored how to select a suite of informative tests that assessed the human's ability to predict agent behavior in unseen scenarios and revealed remaining gaps in the human learner's understanding, which were then bridged through subsequent targeted demonstrations in a closed-loop fashion.
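One simple way to realize test selection (again a sketch under stated assumptions, not the published method) is to pick the test on which the human's remaining reward hypotheses disagree most about the agent's behavior; predict_fn and the entropy criterion below are assumptions for illustration.

```python
import math

def select_test(test_envs, hypotheses, predict_fn):
    """Choose the test environment on which the remaining reward hypotheses disagree
    most about what the agent will do, so the human's prediction is maximally
    diagnostic of any gaps in understanding.  predict_fn(env, w) is a placeholder that
    returns a hashable description of the behavior expected in env under reward
    weights w."""
    def disagreement(env):
        predictions = [predict_fn(env, w) for w in hypotheses]
        n = len(predictions)
        counts = {}
        for p in predictions:
            counts[p] = counts.get(p, 0) + 1
        # entropy of the predicted-behavior distribution: zero when all hypotheses agree
        return -sum((c / n) * math.log(c / n) for c in counts.values())
    return max(test_envs, key=disagreement)
```

Hypotheses consistent with an incorrect answer then indicate where the human's understanding still lags, and the next targeted demonstration can address exactly that gap.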
Developing algorithms for teaching AI policies to humans using informative demonstrations of AI behavior. Conducted four user studies involving 750+ participants. Our teaching model reduces the suboptimality of human predictions of AI behavior by 64% over the baseline of directly providing the AI’s reward function.
Designing a closed-loop teaching framework in which an AI's policy is made more transparent to a human via demonstrations of AI behavior, tests, and feedback. A novel particle filter model of human beliefs is maintained to provide demonstrations that are targeted to the human's current understanding in real time (sketched below).
Reasoning over a human's counterfactual expectations of the AI's policy in real time to provide informative demonstrations of AI behavior that differ meaningfully from those expectations.
Leveraging counterfactual reasoning over possible outcomes and recourses to identify optimal belief strengths (i.e. parameters) for algorithmic decision making.
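For the closed-loop teaching framework above, the particle filter over human beliefs can be sketched roughly as follows; each particle is a candidate reward-weight vector the human might hold, and predict_fn, the mismatch likelihood, and the resampling scheme are illustrative stand-ins rather than the implemented model.

```python
import numpy as np

def update_human_belief(particles, weights, test_env, human_answer, predict_fn, mismatch_lik=0.1):
    """Particle-filter sketch of the model of the human's belief over the agent's
    reward function.  Particles whose predicted agent behavior in test_env matches
    the human's answer are upweighted; the rest are downweighted by an illustrative
    mismatch likelihood.  predict_fn(env, w) is a placeholder that returns the
    behavior expected in env under reward weights w."""
    likelihoods = np.array([
        1.0 if predict_fn(test_env, w) == human_answer else mismatch_lik
        for w in particles
    ])
    weights = weights * likelihoods
    weights = weights / weights.sum()

    # systematic resampling keeps the particle set focused on plausible beliefs
    n = len(particles)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return [particles[i] for i in idx], np.full(n, 1.0 / n)
```

A targeted demonstration can then be selected to be informative with respect to the current particle set, e.g. via a counterfactual score like the one sketched earlier.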
An informative demonstration is one that differs meaningfully from the learner’s expectations of what the robot will do given their current understanding of the robot’s decision making.
A behavior tree hierarchically composed of goals, subgoals, steps, and actions supports explanation generation algorithms that convey causal information about robot behavior.
Augmenting an inverse reinforcement learning model of how humans learn from demonstrations with teaching strategies (e.g. scaffolding, simplicity, pattern discovery, and testing) better accommodates human learners.
A proposal for how a robot can actively assess both its proficiency at a task and its confidence in that assessment through appropriate measures of uncertainty that can be efficiently and effectively communicated to a human.
I previously developed a novel gamma radiation map representation, a source localization algorithm, and a frontier-based exploration method for efficient radiological characterization of nuclear facilities using a robot equipped with a gamma-ray camera.
The proposed frontier-based exploration method biases frontier selection with the observed radiation field gradient to quickly search an environment until a proximal source is detected.
The proposed active source localization algorithm greedily selects new waypoints that maximize the Fisher Information provided by the gamma-ray camera's range and bearing observations (sketched below).
A ground robot equipped with a Compton gamma camera localizes multiple gamma radiation sources to within an average of 0.26 m, or 0.6% of the environment dimensions, in two laboratory environments measuring 5×4 m² and 14×6 m².
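For concreteness, here is a minimal sketch of the two ideas above (gradient-biased frontier selection and greedy Fisher-information waypoint selection), assuming a 2D source estimate and Gaussian range/bearing noise; the sensor model, noise values, and weighting parameter are illustrative rather than those of the actual system.

```python
import numpy as np

def select_frontier(frontiers, robot_pos, grad_at_robot, alpha=2.0):
    """Bias frontier selection with the observed radiation field gradient:
    prefer nearby frontiers that lie in the direction of increasing intensity.
    The weighting alpha is an illustrative trade-off parameter."""
    grad_dir = grad_at_robot / (np.linalg.norm(grad_at_robot) + 1e-9)
    def score(frontier):
        offset = np.asarray(frontier, float) - np.asarray(robot_pos, float)
        dist = np.linalg.norm(offset)
        return alpha * np.dot(offset / (dist + 1e-9), grad_dir) - dist
    return max(frontiers, key=score)

def fisher_information(waypoint, source, sigma_r=0.3, sigma_b=0.1):
    """2x2 Fisher information that one range-and-bearing observation of the source
    contributes from a given waypoint (standard range-bearing sensor model;
    the noise standard deviations are placeholders)."""
    d = np.asarray(source, float) - np.asarray(waypoint, float)
    r = np.linalg.norm(d)
    u = d / r                          # unit vector from waypoint toward source
    u_perp = np.array([-u[1], u[0]])   # direction perpendicular to the line of sight
    return np.outer(u, u) / sigma_r**2 + np.outer(u_perp, u_perp) / (r * sigma_b)**2

def next_waypoint(candidates, source_estimate, info_so_far):
    """Greedily choose the candidate waypoint that most increases the
    log-determinant (D-optimality) of the accumulated information."""
    def gain(wp):
        return np.linalg.slogdet(info_so_far + fisher_information(wp, source_estimate))[1]
    return max(candidates, key=gain)
```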