Explainable AI

Beyond building capable robots and AI-enabled systems, it is critical that their human users maintain an accurate understanding of these systems' capabilities as well as their inevitable limitations. Too little trust can limit the adoption of the technology, while over-confidence in its capabilities can lead to undesirable side effects. To address this challenge, we are developing user-centered techniques that improve the transparency of AI-enabled systems and guide humans to use them effectively.

Publications

  1. HRI Companion
    Interactively Explaining Robot Policies to Humans in Integrated Virtual and Physical Training Environments
    In Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2024
  2. AAAI
    I-CEE: Tailoring Explanations of Image Classification Models to User Expertise
    Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci
    In AAAI Conference on Artificial Intelligence (AAAI), 2024
  3. TPAMI
    Towards Human-centered Explainable AI: User Studies for Model Explanations
    Yao Rong, Tobias Leemann, Thai-Trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
  4. AAMAS
    Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents
    In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022
  5. Towards Interactively Improving Human Users’ Understanding of Robot Behavior
    Extended Abstract at the Workshop on Robotics for People at Robotics: Science and Systems (RSS), 2021
  6. HRI
    Evaluating Effects of User Experience and System Transparency on Trust in Automation
    X. Jessie Yang, Vaibhav Unhelkar, Kevin Li, Julie Shah
    In ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2017