As AI and robots increasingly enter human society, it is paramount that their reward functions and subsequent behaviors are transparent (i.e. understandable and predictable) to humans. Thus, I research how robots may intuitively summarize and convey their underlying reward functions (and subsequent policies) to humans using informative demonstrations.
I previously completed a Master's at Carnegie Mellon with Red Whittaker and Nathan Michael studying autonomous radiation source localization. And before that, I graduated from Princeton University ('16) with degrees in Mechanical & Aerospace Engineering and a certificate in Robotics & Intelligent Systems. My undergraduate thesis on modeling uncertainty in stereo visual odometry was advised by Nathan Michael and Jianxiong Xiao.
I plan to transition into industry in 2024, and am currently looking for opportunities related to my research interests (including but not limited to transparency, value/AI alignment, explainable AI, and human-robot interaction).
Toward greater transparency (i.e. understandability and predictability) of robot behaviors to humans, my current research explores how a robot may teach its underlying reward function to a human learner using informative demonstrations (i.e. ones that exhibit key tradeoffs in decision making).
Our first key insight is that a demonstration's informativeness to a human is not intrinsic, but is inherently tied to that human's prior beliefs and current expectations of robot behavior. We thus rely on inverse reinforcement learning and counterfactual reasoning (i.e. the extent to which the robot's demonstration differs from the human's current expectations) to evaluate how informative a candidate demonstration will be to a human in revealing the robot's reward function.
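As a minimal sketch of this evaluation (assuming a finite set of reward hypotheses, feature-count representations of demonstrations, and a soft-optimal Boltzmann demonstrator model; the names `posterior_after` and `info_gain` are illustrative, not from our papers):

```python
import numpy as np

def posterior_after(demo_features, W, belief):
    """Human's updated belief over reward hypotheses after seeing a demo,
    under a soft-optimal (Boltzmann) demonstrator model."""
    scores = W @ demo_features                  # each hypothesis rates the demo
    likelihood = np.exp(scores - scores.max())  # numerically stable exponentiation
    post = belief * likelihood
    return post / post.sum()

def info_gain(demo_features, W, belief):
    """Counterfactual informativeness: KL divergence between the human's
    belief after the demo and their current belief (their expectations)."""
    post = posterior_after(demo_features, W, belief)
    eps = 1e-12
    return float(np.sum(post * np.log((post + eps) / (belief + eps))))
```

A demonstration that merely confirms what the human already expects yields near-zero gain under this measure, while one that overturns a strongly held hypothesis scores highly.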
Our second key insight is that informativeness and difficulty of comprehension are often two sides of the same coin for humans, and we leverage ideas from cognitive science (e.g. the zone of proximal development / "Goldilocks" principle) to ensure that the selected demonstrations present the right level of challenge. Too small a difference and the reconciliation in the human's mind is trivial; too large a difference and the gap is irreconcilable in one shot. We therefore use scaffolding to incrementally increase information gain and ease humans into learning.
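A rough sketch of how such a "Goldilocks" band could drive demonstration selection, reusing `posterior_after` and `info_gain` from the sketch above (the band thresholds are purely illustrative):

```python
def scaffolded_curriculum(demos, W, prior, band=(0.2, 1.0)):
    """Order demonstrations so each one's information gain stays inside a
    band: non-trivial, yet reconcilable in a single step."""
    lo, hi = band
    belief = prior.copy()
    curriculum, remaining = [], list(demos)
    while remaining:
        gains = [info_gain(d, W, belief) for d in remaining]
        in_band = [(g, i) for i, g in enumerate(gains) if lo <= g <= hi]
        if not in_band:
            break                       # no remaining demo fits the band
        _, idx = max(in_band)           # most informative demo still in band
        best = remaining.pop(idx)
        curriculum.append(best)
        belief = posterior_after(best, W, belief)  # difficulty ramps up as belief sharpens
    return curriculum
```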
Finally, we are currently exploring how to select a suite of informative tests (which query the human's ability to correctly predict robot behavior in unseen scenarios) that reveal remaining gaps in the human learner's understanding, which can then be bridged through subsequent targeted demonstrations in a closed-loop fashion.
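As a sketch of one possible test-selection criterion (`policy(w, s)` is an assumed helper mapping a hypothesized reward `w` and scenario `s` to a predicted action, and actions are assumed hashable):

```python
def select_test(test_scenarios, W, belief, policy, thresh=1e-3):
    """Pick the test scenario on which still-plausible reward hypotheses
    disagree most about the robot's action, so the human's prediction is
    maximally revealing about which hypotheses they still entertain."""
    def disagreement(scenario):
        plausible = [w for w, p in zip(W, belief) if p > thresh]
        return len({policy(w, scenario) for w in plausible})
    return max(test_scenarios, key=disagreement)
```

The human's answer on such a test then indicates which hypotheses to target with the next round of demonstrations.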
Design a closed-loop teaching scheme where an AI's policy is made more transparent to a human via demonstrations, tests, and feedback. Demonstrations are selected to be informative and understandable given a human's counterfactual expectations of the AI's policy.
Leverage counterfactual reasoning over possible outcomes and recourses to identify optimal belief strengths (i.e. parameters) for algorithmic decision making.
An informative demonstration is one that differs strongly from the learner’s expectations of what the robot will do given their current understanding of the robot’s decision making.
A behavior tree hierarchically composed of goals, subgoals, steps, and actions supports explanation generation algorithms that convey causal information about robot behavior.
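As a toy sketch of that structure (the node fields and explanation format are illustrative, not the implementation from the work):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node in the goal -> subgoal -> step -> action hierarchy."""
    label: str                        # e.g. "fetch mug" or "open cabinet"
    why: str = ""                     # causal link to the parent's purpose
    children: List["Node"] = field(default_factory=list)

def explain(node: Node, depth: int = 0) -> None:
    """Generate nested 'X because Y' explanations by walking the tree."""
    reason = f" because {node.why}" if node.why else ""
    print("  " * depth + node.label + reason)
    for child in node.children:
        explain(child, depth + 1)
```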
Augmenting an inverse reinforcement learning model of how humans learn from demonstrations with teaching strategies (e.g. scaffolding, simplicity, pattern discovery, and testing) better accommodates human learners.
This proposal thus investigates how a robot can actively assess both its proficiency at a task and its confidence in that assessment through appropriate measures of uncertainty that can be efficiently and effectively communicated to a human.
I previously developed a novel gamma radiation map representation, a source localization algorithm, and a frontier-based exploration method for efficient radiological characterization of nuclear facilities using a robot equipped with a gamma-ray camera.
The proposed frontier-based exploration method biases frontier selection with the observed radiation field gradient to quickly search an environment until a proximal source is detected.
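A minimal sketch of this biasing, assuming 2D frontier centroids and a locally estimated radiation gradient (the trade-off weight `alpha` is a placeholder, not the value used in the work):

```python
import numpy as np

def select_frontier(frontiers, robot_pos, grad, alpha=0.5):
    """Score frontier centroids by travel cost, biased toward the observed
    radiation field gradient.

    frontiers: (N, 2) candidate frontier centroids
    robot_pos: (2,)   current robot position
    grad:      (2,)   locally observed radiation gradient
    """
    to_frontier = frontiers - robot_pos
    dist = np.linalg.norm(to_frontier, axis=1)
    headings = to_frontier / np.maximum(dist[:, None], 1e-9)
    g = grad / max(np.linalg.norm(grad), 1e-9)
    # Prefer nearby frontiers that also lie "up" the radiation gradient.
    utility = alpha * (headings @ g) - (1 - alpha) * dist / dist.max()
    return frontiers[np.argmax(utility)]
```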
The proposed active source localization algorithm greedily selects new waypoints that maximize the Fisher Information provided by the gamma-ray camera’s range and bearing observations.
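As a sketch of this criterion for a 2D source estimate and a single range-bearing measurement under Gaussian noise (the noise parameters are placeholders, and the log-det scalarization is one common choice, not necessarily the paper's):

```python
import numpy as np

def fisher_info(waypoint, source, sigma_r=0.1, sigma_b=0.05):
    """Fisher information about the source position from one range-bearing
    measurement taken at 'waypoint'."""
    d = source - waypoint
    r2 = float(d @ d)
    r = np.sqrt(r2)
    # Jacobian of [range, bearing] with respect to the source position.
    H = np.array([[ d[0] / r,  d[1] / r],
                  [-d[1] / r2, d[0] / r2]])
    R_inv = np.diag([1.0 / sigma_r**2, 1.0 / sigma_b**2])
    return H.T @ R_inv @ H

def greedy_waypoint(candidates, source_est, J_prior):
    """Greedily pick the candidate waypoint whose measurement adds the most
    information (log-det of the accumulated FIM) about the source estimate."""
    gains = [np.linalg.slogdet(J_prior + fisher_info(w, source_est))[1]
             for w in candidates]
    return candidates[int(np.argmax(gains))]
```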
A ground robot equipped with a Compton gamma camera localizes multiple gamma radiation sources to within an average of 0.26 m, or 0.6% of the environment dimensions, in two laboratory environments (5×4 m² and 14×6 m²).