For an up-to-date list, please check our Google Scholar page. Asterisks denote equal contribution.
2024
-
Measuring Variations in Workload during Human-Robot Collaboration through Automated After-Action Reviews
In Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2024
A human collaborator’s workload plays a central role in human-robot collaboration. Algorithms designed to minimize cognitive workload enhance fluent human-robot teamwork. Time series data of workload is vital for both designing and assessing these algorithms. However, accurately quantifying and measuring cognitive workload, particularly at high temporal resolution, poses a substantial challenge. Towards addressing this challenge, we explore the potential of after-action reviews (AARs) as a tool for gauging workload during human-robot collaboration. First, through a case study, we present and demonstrate AutoAAR for measuring human workload post-task at a high temporal resolution. Second, through a user study, we quantify the validity and utility of measurements derived using AutoAAR for human-robot teamwork. The paper concludes with guidelines and future directions to extend this method to measure other internal states, such as trust and intent.
-
Interactively Explaining Robot Policies to Humans in Integrated Virtual and Physical Training Environments
In Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2024
Policy summarization is a computational paradigm for explaining the behavior and decision-making processes of autonomous robots to humans. It summarizes robot policies via exemplary demonstrations, aiming to improve human understanding of robotic behaviors. This understanding is crucial, especially since users often make critical decisions about robot deployment in the real world. Previous research in policy summarization has predominantly focused on simulated robots and environments, overlooking its application to physically embodied robots. Our work fills this gap by combining current policy summarization methods with a novel, interactive user interface that involves physical interaction with robots. We conduct human-subject experiments to assess our explanation system, focusing on the impact of different explanation modalities in policy summarization. Our findings underscore the unique advantages of combining virtual and physical training environments to effectively communicate robot behavior to human users.
-
IDIL: Imitation Learning of Intent-Driven Expert Behavior
In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2024
When faced with accomplishing a task, human experts exhibit intentional behavior. Their unique intents shape their plans and decisions, resulting in experts demonstrating diverse behaviors to accomplish the same task. Due to the uncertainties encountered in the real world and their bounded rationality, experts sometimes adjust their intents, which in turn influences their behaviors during task execution. This paper introduces IDIL, a novel imitation learning algorithm to mimic these diverse intent-driven behaviors of experts. Iteratively, our approach estimates expert intent from heterogeneous demonstrations and then uses it to learn an intent-aware model of their behavior. Unlike contemporary approaches, IDIL is capable of addressing sequential tasks with high-dimensional state representations, while sidestepping the complexities and drawbacks associated with adversarial training (a mainstay of related techniques). Our empirical results suggest that the models generated by IDIL either match or surpass those produced by recent imitation learning benchmarks in metrics of task performance. Moreover, as it creates a generative model, IDIL demonstrates superior performance in intent inference metrics, crucial for human-agent interactions, and aptly captures a broad spectrum of expert behaviors.
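For readers less familiar with this family of methods, below is a minimal, self-contained sketch of the iterate-estimate-then-learn loop the abstract describes, written as an EM-style mixture of tabular policies. Everything here (tabular states, intents held fixed within a trajectory, the function names) is an illustrative assumption; IDIL itself handles high-dimensional states and intent switching within a task.

```python
import numpy as np

# Illustrative EM-style loop: infer each demonstration's latent intent,
# then refit an intent-conditioned policy. Tabular and fixed-intent-per-
# trajectory by assumption; IDIL itself is neither.
def idil_sketch(trajs, n_intents, n_states, n_actions, iters=20, seed=0):
    """trajs: list of demonstrations, each a list of (state, action)."""
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(n_actions), size=(n_intents, n_states))
    rho = np.full(n_intents, 1.0 / n_intents)  # prior over intents
    for _ in range(iters):
        counts = np.full((n_intents, n_states, n_actions), 1e-3)
        z_counts = np.zeros(n_intents)
        for traj in trajs:
            # E-step: posterior over this trajectory's intent.
            log_w = np.log(rho).copy()
            for s, a in traj:
                log_w += np.log(pi[:, s, a])
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            z_counts += w
            for s, a in traj:
                counts[:, s, a] += w
        # M-step: refit intent-conditioned policy and intent prior.
        pi = counts / counts.sum(axis=2, keepdims=True)
        rho = z_counts / z_counts.sum()
    return pi, rho
```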
-
RW4T Dataset: Data of Human-Robot Behavior and Cognitive States in Simulated Disaster Response Tasks
In ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2024
To forge effective collaborations with humans, robots require the capacity to understand and predict the behaviors of their human counterparts. There is a growing body of computational research on human modeling for human-robot interaction (HRI). However, a key bottleneck in conducting this research is the relative lack of data on human internal states – like intent, workload, and trust – which undeniably affect human behavior. Despite their significance, these states are elusive to measure, making the assembly of datasets a challenge and hindering the progression of human modeling techniques. To help address this, we first introduce Rescue World for Teams (RW4T): a configurable testbed to simulate disaster response scenarios requiring human-robot collaboration. Next, using RW4T, we curate a multimodal dataset of human-robot behavior and internal states in dyadic human-robot collaboration. This RW4T dataset includes state, action and reward sequences, and all the necessary data to replay a visual task execution. It further contains psychophysiological metrics like heart rate and pupillometry, complemented by self-reported cognitive state measures. With data from 20 participants, each undertaking five human-robot collaborative tasks, this dataset, accompanied by the simulator, can serve as a valuable benchmark for human behavior modeling.
-
GO-DICE: Goal-conditioned Option-aware Offline Imitation Learning
In AAAI Conference on Artificial Intelligence (AAAI), 2024
Offline imitation learning (IL) refers to learning expert behavior solely from demonstrations, without any additional interaction with the environment. Despite significant advances in offline IL, existing techniques find it challenging to learn policies for long-horizon tasks and require significant re-training when task specifications change. Towards addressing these limitations, we present GO-DICE, an offline IL technique for goal-conditioned long-horizon sequential tasks. GO-DICE discerns a hierarchy of sub-tasks from demonstrations and uses these to learn separate policies for sub-task transitions and action execution, respectively; this hierarchical policy learning facilitates long-horizon reasoning. Inspired by the expansive DICE-family of techniques, policy learning at both levels transpires within the space of stationary distributions. Further, both policies are learned with goal conditioning to minimize the need for retraining when task goals change. Experimental results substantiate that GO-DICE outperforms recent baselines, as evidenced by a marked improvement in the completion rate of increasingly challenging pick-and-place MuJoCo robotic tasks. GO-DICE is also capable of leveraging imperfect demonstrations and partial task segmentation when available, both of which boost task performance relative to learning from expert demonstrations alone.
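As a concrete (and heavily simplified) picture of the hierarchical policy this abstract describes, the sketch below factors action selection as pi(a | s, g) = sum over sub-tasks x of pi_hi(x | s, g, x_prev) * pi_lo(a | s, g, x). The class name and callables are hypothetical; GO-DICE additionally learns both levels offline in the space of stationary distributions, which this omits entirely.

```python
import numpy as np

# Hypothetical two-level, goal-conditioned policy: the high level picks
# (or keeps) a sub-task, the low level picks an action given it.
class HierarchicalGoalPolicy:
    def __init__(self, pi_hi, pi_lo, n_subtasks, n_actions, seed=0):
        self.pi_hi, self.pi_lo = pi_hi, pi_lo  # callables -> prob. vectors
        self.n_subtasks, self.n_actions = n_subtasks, n_actions
        self.rng = np.random.default_rng(seed)
        self.subtask = None  # persists across steps for long horizons

    def act(self, state, goal):
        p_x = self.pi_hi(state, goal, self.subtask)  # shape (n_subtasks,)
        self.subtask = int(self.rng.choice(self.n_subtasks, p=p_x))
        p_a = self.pi_lo(state, goal, self.subtask)  # shape (n_actions,)
        return int(self.rng.choice(self.n_actions, p=p_a))
```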
-
I-CEE: Tailoring Explanations of Image Classification Models to User Expertise
In AAAI Conference on Artificial Intelligence (AAAI), 2024
Effectively explaining decisions of black-box machine learning models is critical to responsible deployment of AI systems that rely on them. Recognizing their importance, the field of explainable AI (XAI) provides several techniques to generate these explanations. Yet, there is relatively little emphasis on the user (the explainee) in this growing body of work and most XAI techniques generate "one-size-fits-all" explanations. To bridge this gap and achieve a step closer towards human-centered XAI, we present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise. Informed by existing work, I-CEE explains the decisions of image classification models by providing the user with an informative subset of training data (i.e., example images), corresponding local explanations, and model decisions. However, unlike prior work, I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users. We posit that by tailoring the example set to user expertise, I-CEE can better facilitate users’ understanding and simulatability of the model. To evaluate our approach, we conduct detailed experiments in both simulation and with human participants (N = 100) on multiple datasets. Experiments with simulated users show that I-CEE improves users’ ability to accurately predict the model’s decisions (simulatability) compared to baselines, providing promising preliminary results. Experiments with human participants demonstrate that our method significantly improves user simulatability accuracy, highlighting the importance of human-centered XAI.
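To make the "different examples for different users" idea concrete, here is a hedged greedy-selection sketch: pick a training image the simulated user currently mispredicts, let the user model update on it, and repeat. The user/model interfaces are assumptions for illustration, not the I-CEE formulation.

```python
# Greedy, expertise-dependent example selection (illustrative only).
# `user` is a simulated user model with predict/update; `model` is the
# classifier being explained. Both interfaces are assumed.
def select_examples(candidates, user, model, k=5):
    chosen, pool = [], list(candidates)
    for _ in range(k):
        # Most informative next example: one the user still gets wrong.
        wrong = [x for x in pool if user.predict(x) != model.predict(x)]
        if not wrong:
            break  # user already simulates the model on the whole pool
        x = wrong[0]
        chosen.append(x)
        pool.remove(x)
        user.update(x, model.predict(x))  # simulated learning step
    return chosen
```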
-
AI-Assisted Human Teamwork
In AAAI Conference on Artificial Intelligence (AAAI) Doctoral Consortium, 2024
Effective teamwork translates to fewer preventable errors and higher task performance in collaborative tasks. However, in time-critical tasks, successful teamwork becomes highly challenging to attain. In such settings, often, team members have partial observability of their surroundings, incur high cost of communication, and have trouble estimating the state and intent of their teammates. To assist a team in improving teamwork at task time, my doctoral research proposes an automated task-time team intervention system. Grounded in the notion of shared mental models, the system first detects whether the team is on the same page or not. It then generates effective interventions to improve teamwork. Additionally, by leveraging past demonstrations to learn a model of team behavior, this system minimizes the need for domain experts to specify teamwork models and rules.
-
A Novel Multimodal Perspective on Objective Assessment of Non-Technical Skills in Cardiac Surgery
Mahdi Ebnali, Lauren Kennedy-Metz, Giovanna Varni,
Vaibhav Unhelkar, Eduardo Salas, Roger Dias, Marco Zenati
Extended Abstract at the Academic Surgical Congress (ASC), 2024
2023
-
Towards Human-centered Explainable AI: User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler,
Peizhu Qian,
Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models are both a necessity and a challenge. In this paper, we explore how human-computer interaction (HCI) and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
-
Towards a Web-Based Digital Twin for the Cardiac Operating Room
Poster at the Ken Kennedy Institute AI in Health Conference (AIHC), 2023
-
Using Deep Learning to Assess Teamwork during Cardiac Surgery
Extended Abstract at the Clinical Translation of Medical Image Computing and Computer Assisted Interventions (CLINICCAI), 2023
-
Opportunities and Challenges of Real-Time Measurement of Team Performance in the Cardiac Operating Room
Extended Abstract at the 67th International Annual Meeting of the Human Factors and Ergonomics Society (HFES), 2023
-
Robotic Tutors for Nurse Training: Opportunities for HRI Researchers
In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2023
An ongoing nurse labor shortage has the potential to impact patient care and well-being across the entire healthcare system. Moreover, more complex and sophisticated nursing care is required today for patients in hospitals, forcing hospital-based nurses to carry out frequent training and assessment procedures, both to onboard new nurses and to validate the skills of existing staff in a way that guarantees best practices and safety. In this paper, we recognize an opportunity for the development and integration of intelligent robot tutoring technology into nursing education to tackle the growing challenges of the nursing deficit. To this end, we identify specific research problems in the area of human-robot interaction that will need to be addressed to enable robot tutors for nurse training.
-
Rescue World for Teams (RW4T): A Testbed for Measuring Human Behavior and Mental States during HRI
Extended Abstract at the Workshop on Human-Robot Teaming at ICRA, 2023
-
Automated Task-Time Interventions to Improve Teamwork using Imitation Learning
In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2023
Effective human-human and human-autonomy teamwork is critical but often challenging to perfect. The challenge is particularly relevant in time-critical domains, such as healthcare and disaster response, where the time pressures can make coordination increasingly difficult to achieve and the consequences of imperfect coordination can be severe. To improve teamwork in these and other domains, we present TIC: an automated intervention approach for improving coordination between team members. Using BTIL, a multi-agent imitation learning algorithm, our approach first learns a generative model of team behavior from past task execution data. Next, it utilizes the learned generative model and the team’s task objective (shared reward) to algorithmically generate execution-time interventions. We evaluate our approach in synthetic multi-agent teaming scenarios, where team members make decentralized decisions without full observability of the environment. The experiments demonstrate that the automated interventions can successfully improve team performance and shed light on the design of autonomous agents for improving teamwork.
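A minimal sketch of the intervention rule implied by the abstract, under the assumption that the learned generative model yields a belief over team mental states and a value function for the shared reward: intervene when the expected value lost to possible misalignment exceeds the intervention’s cost. Names are placeholders, not the TIC API.

```python
# Intervene iff the expected shortfall versus an aligned team exceeds
# the cost of interrupting. All inputs are assumed interfaces.
def should_intervene(belief, value_fn, aligned_state, task_state, cost):
    """belief: dict mental_state -> probability (from the learned model)."""
    v_current = sum(p * value_fn(task_state, m) for m, p in belief.items())
    v_aligned = value_fn(task_state, aligned_state)
    return (v_aligned - v_current) > cost
```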
2022
-
Semi-Supervised Imitation Learning of Team Policies from Suboptimal Demonstrations
In International Joint Conference on Artificial Intelligence (IJCAI), 2022
We present Bayesian Team Imitation Learner (BTIL), an imitation learning algorithm to model the behavior of teams performing sequential tasks in Markovian domains. In contrast to existing multi-agent imitation learning techniques, BTIL explicitly models and infers the time-varying mental states of team members, thereby enabling learning of decentralized team policies from demonstrations of suboptimal teamwork. Further, to allow for sample- and label-efficient policy learning from small datasets, BTIL employs a Bayesian perspective and is capable of learning from semi-supervised demonstrations. We demonstrate and benchmark the performance of BTIL on synthetic multi-agent tasks as well as a novel dataset of human-agent teamwork. Our experiments show that BTIL can successfully learn team policies from demonstrations despite the influence of team members’ (time-varying and potentially misaligned) mental states on their behavior.
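The semi-supervised ingredient can be illustrated with a schematic Dirichlet-multinomial update: labeled demonstrations contribute hard counts to a policy’s posterior, while unlabeled ones contribute soft counts weighted by inferred responsibilities. This is a stand-in for intuition, not BTIL’s full Bayesian inference over time-varying mental states.

```python
import numpy as np

# Posterior-mean policy from mixed hard (labeled) and soft (inferred)
# counts under a symmetric Dirichlet prior. Schematic, not BTIL itself.
def posterior_policy(n_states, n_actions, labeled, soft, alpha=1.0):
    """labeled: [(s, a)] pairs; soft: [(s, a, weight)] from inference."""
    counts = np.full((n_states, n_actions), alpha)  # Dirichlet prior
    for s, a in labeled:
        counts[s, a] += 1.0
    for s, a, w in soft:
        counts[s, a] += w
    return counts / counts.sum(axis=1, keepdims=True)
```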
-
Factorial Agent Markov Model: Modeling Other Agents’ Behavior in presence of Dynamic Latent Decision Factors
In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022
Autonomous agents operating in the real world often need to interact with other agents to accomplish their tasks. For such agents, the ability to model behavior of other agents – both human and artificial – without complete knowledge of their decision factors is essential. Towards realizing this ability, we present Factorial Agent Markov Model (FAMM), a model to represent behavior of other agents performing sequential tasks. In contrast with most existing models, FAMM allows for behavior of other agents to depend on multiple, time-varying latent decision factors and does not assume rationality. To enable learning of FAMM parameters by observing behavior of other agents, we provide a set of variational inference algorithms for the unsupervised, semi-supervised, and supervised settings. These Bayesian learning algorithms for the FAMM enable agents to model other agents using execution traces and domain-specific priors. We demonstrate the utility of FAMM and corresponding learning algorithms using three synthetic domains and benchmark them against existing algorithms for modeling agent behavior. Our numerical experiments demonstrate that, despite the presence of multiple and time-varying latent states, our approach is capable of learning predictive models of other agents with semi-supervision.
-
Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents
In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022
Autonomous agents are increasingly being deployed amongst human end-users. Yet, human users often have little knowledge of how these agents work or what they will do next. This lack of transparency has already resulted in unintended consequences during AI use: a concerning trend which is projected to increase with the proliferation of autonomous agents. To curb this trend and ensure safe use of AI, assisting users in establishing an accurate understanding of agents that they work with is essential. In this work, we present AI teacher, a user-centered Explainable AI framework to address this need for autonomous agents that follow a Markovian policy. Our framework first computes salient instructions of agent behavior by estimating a user’s mental model and utilizing algorithms for sequential decision-making. Next, in contrast to existing solutions, these instructions are presented interactively to the end-users, thereby enabling a personalized approach to improving AI transparency. We evaluate our framework, with emphasis on its interactive features, through experiments with human participants. The experiment results suggest that, relative to non-interactive approaches, interactive teaching both reduces the time it takes for humans to create accurate mental models of these agents and is subjectively preferred by human users.
-
Human-Guided Motion Planning in Partially Observable Environments
In International Conference on Robotics and Automation (ICRA), 2022
Motion planning is a core problem in robotics, with a range of existing methods aimed to address its diverse set of challenges. However, most existing methods rely on complete knowledge of the robot environment; an assumption that seldom holds true due to inherent limitations of robot perception. To enable tractable motion planning for high-DOF robots under partial observability, we introduce BLIND, an algorithm that leverages human guidance. BLIND utilizes inverse reinforcement learning to derive motion-level guidance from human critiques. The algorithm overcomes the computational challenge of reward learning for high-DOF robots by projecting the robot’s continuous configuration space to a motion-planner-guided discrete task model. The learned reward is in turn used as guidance to generate robot motion using a novel motion planner. We demonstrate BLIND using the Fetch robot and perform two simulation experiments with partial observability. Our experiments demonstrate that, despite the challenge of partial observability and high dimensionality, BLIND is capable of generating safe robot motion and outperforms baselines on metrics of teaching efficiency, success rate, and path quality.
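As rough intuition for turning critiques into motion-level guidance, the sketch below assumes the configuration space has already been projected onto a small set of discrete regions (the motion-planner-guided task model) and nudges a per-region reward up or down for each approved or disapproved path segment. This preference-count heuristic merely stands in for BLIND’s inverse-reinforcement-learning formulation.

```python
# Per-region reward update from one human critique (illustrative).
def update_rewards(rewards, critiqued_path, approved, lr=0.1):
    """rewards: dict region -> value; critiqued_path: regions visited."""
    sign = 1.0 if approved else -1.0
    for region in critiqued_path:
        rewards[region] = rewards.get(region, 0.0) + lr * sign
    return rewards
```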
2021
-
Towards Interactively Improving Human Users’ Understanding of Robot Behavior
Extended Abstract at the Workshop on Robotics for People at R:SS, 2021
-
Learning Dense Rewards for Contact-Rich Manipulation Tasks
In International Conference on Robotics and Automation (ICRA), 2021
Rewards play a crucial role in reinforcement learning. To arrive at the desired policy, the design of a suitable reward function often requires significant domain expertise as well as trial-and-error. Here, we aim to minimize the effort involved in designing reward functions for contact-rich manipulation tasks. In particular, we provide an approach capable of extracting dense reward functions algorithmically from robots’ high-dimensional observations, such as images and tactile feedback. In contrast to state-of-the-art high-dimensional reward learning methodologies, our approach does not leverage adversarial training, and is thus less prone to the associated training instabilities. Instead, our approach learns rewards by estimating task progress in a self-supervised manner. We demonstrate the effectiveness and efficiency of our approach on two contact-rich manipulation tasks, namely, peg-in-hole and USB insertion. The experimental results indicate that the policies trained with the learned reward function achieve better performance and faster convergence compared to the baselines.
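The self-supervised progress idea admits a compact illustration: in a successful demonstration, normalized time elapsed is a free label for task progress, and a regressor fit to it yields a dense reward r(o) ≈ predicted progress. The linear model below is an assumed stand-in for the paper’s networks over images and tactile signals.

```python
import numpy as np

# Label each step of a successful demo with its normalized progress.
def progress_labels(demo):
    """demo: list of observation vectors for one successful execution."""
    T = len(demo)
    return [(o, t / max(T - 1, 1)) for t, o in enumerate(demo)]

# Fit a linear progress estimator; its prediction serves as dense reward.
def fit_linear_progress(demos):
    pairs = [p for d in demos for p in progress_labels(d)]
    X = np.vstack([o for o, _ in pairs])
    y = np.array([p for _, p in pairs])
    A = np.c_[X, np.ones(len(X))]  # affine features
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda o: float(np.dot(np.append(o, 1.0), w))
```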
-
Motion Planning via Bayesian Learning in the Dark
Workshop on Machine Learning for Motion Planning at ICRA, 2021
-
Towards an AI Coach to Infer Team Mental Model Alignment in Healthcare
In International Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), 2021
Shared mental models are critical to team success; however, in practice, team members may have misaligned models due to a variety of factors. In safety-critical domains (e.g., aviation, healthcare), lack of shared mental models can lead to preventable errors and harm. Towards the goal of mitigating such preventable errors, here, we present a Bayesian approach to infer misalignment in team members’ mental models during complex healthcare task execution. As an exemplary application, we demonstrate our approach using two simulated team-based scenarios, derived from actual teamwork in cardiac surgery. In these simulated experiments, our approach inferred model misalignment with over 75% recall, thereby providing a building block for enabling computer-assisted interventions to augment human cognition in the operating room and improve teamwork.
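A minimal version of the Bayesian inference described above: maintain a posterior over whether a team member’s mental model is aligned, by scoring their observed actions under behavior models conditioned on each hypothesis. The likelihood function is an assumed input; the paper’s models are derived from cardiac-surgery scenarios.

```python
import numpy as np

# Sequential log-odds update for P(aligned | observed actions).
def misalignment_posterior(observations, likelihood, prior=0.5):
    """observations: [(state, action)]; likelihood(s, a, aligned) -> prob."""
    log_odds = np.log(prior) - np.log(1.0 - prior)
    for s, a in observations:
        log_odds += np.log(likelihood(s, a, True))
        log_odds -= np.log(likelihood(s, a, False))
    return 1.0 / (1.0 + np.exp(-log_odds))  # posterior P(aligned)
```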
-
A Bayesian Approach to Identifying Representational Errors
arXiv, 2021
Trained AI systems and expert decision makers can make errors that are often difficult to identify and understand. Determining the root cause for these errors can improve future decisions. This work presents Generative Error Model (GEM), a generative model for inferring representational errors based on observations of an actor’s behavior (whether a simulated agent, a robot, or a human). The model considers two sources of error: those that occur due to representational limitations – "blind spots" – and non-representational errors, such as those caused by noise in execution or systematic errors present in the actor’s policy. Disambiguating these two error types allows for targeted refinement of the actor’s policy (i.e., representational errors require perceptual augmentation, while other errors can be reduced through methods such as improved training or attention support). We present a Bayesian inference algorithm for GEM and evaluate its utility in recovering representational errors on multiple domains. Results show that our approach can recover blind spots of both reinforcement learning agents as well as human users.
Prior Publications
The following research was conducted before PI Unhelkar joined Rice University.
-
Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks
In ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2020
Communication is critical to collaboration; however, too much of it can degrade performance. Motivated by the need for effective use of a robot’s communication modalities, in this work, we present a computational framework that decides if, when, and what to communicate during human-robot collaboration. The framework, titled CommPlan, consists of a model specification process and an execution-time POMDP planner. To address the challenge of collecting interaction data, the model specification process is hybrid: part of the model is learned from data, while the remainder is manually specified. Given the model, the robot’s decision-making is performed computationally during interaction and under partial observability of the human’s mental states. We implement CommPlan for a shared workspace task, in which the robot has multiple communication options and needs to reason within a short time. Through experiments with human participants, we confirm that CommPlan results in the effective use of communication capabilities and improves human-robot collaboration.
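The if/when/what decision can be pictured as a one-step comparison under the robot’s current belief: score every communication option, including staying silent, by its expected value net of cost. In CommPlan these values come from the execution-time POMDP planner; here q_value and cost are assumed callables.

```python
# Pick the best communication option (None = stay silent, assumed to
# have zero cost). All inputs are assumed interfaces, not CommPlan's.
def choose_communication(belief, options, q_value, cost):
    """belief: dict human_mental_state -> probability."""
    def net_value(option):
        ev = sum(p * q_value(m, option) for m, p in belief.items())
        return ev - cost(option)
    return max(options, key=net_value)
```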
-
Effective Information Sharing for Human-Robot Collaboration
Doctoral dissertation, Massachusetts Institute of Technology, 2020
Humans and machines often possess complementary skills. The recognition of this fact is leading to a steadily growing interest in collaborative robots. Despite the growing interest, however, a fundamental question remains to be answered: "How does one develop effective collaborative robots?" Three entities need to be considered while answering this question – namely, the collaborative robot itself, the human teammate whom the robot interacts with, and, equally importantly, the robot developer who is tasked with designing the machine. Each of these entities possesses different information. Effective sharing of this information is essential for developing collaborative robots and achieving fluent collaboration. In this dissertation, I present models and algorithms to enable effective information sharing between the robot, the human, and the developer. I begin by presenting the Agent Markov Model (AMM), a Bayesian model of sequential decision-making behavior, and Constrained Variational Inference (CVI), a hybrid learning algorithm that can learn generative models both from data and domain expertise. By utilizing AMM and CVI, the developer can specify decision-making models both for the human teammate and the collaborative robot with reduced labeling effort. Next, I present ADACORL, a framework to generate the collaborative robot’s policy for interaction. By leveraging algorithms for planning under uncertainty, ADACORL can generate fluent robot behavior for human-robot collaborative tasks with state spaces significantly larger than prior art (> 1 million states) and short planning times (< 1 s). Finally, I provide an approach for deciding if, when, and what to communicate during human-robot collaboration. Through human-robot interaction studies, I demonstrate that the proposed decision-making approaches result in the effective use of the robot’s action and communication capabilities during collaboration with a human teammate.
-
Semi-Supervised Learning of Decision-Making Models for Human-Robot Collaboration
In Conference on Robot Learning, 2019
We consider human-robot collaboration in sequential tasks with known task objectives. For interaction planning in this setting, the utility of models for decision-making under uncertainty has been demonstrated across domains. However, in practice, specifying the model parameters remains challenging, requiring significant effort from the robot developer. To alleviate this challenge, we present ADACORL, a framework to specify decision-making models and generate robot behavior for interaction. Central to our approach are a factored task model and a semi-supervised algorithm to learn models of human behavior. We demonstrate that our specification approach, despite significantly fewer labels, generates models (and policies) that perform equally well or better than models learned with supervised data. By leveraging pre-computed performance bounds and an online planner, ADACORL can generate robot behavior for collaborative tasks with large state spaces (> 1 million states) and short planning times (< 0.5 s).
-
Learning Models of Sequential Decision-Making with Partial Specification of Agent Behavior
In AAAI Conference on Artificial Intelligence (AAAI), 2019
Artificial agents that interact with other (human or artificial) agents require models in order to reason about those other agents’ behavior. In addition to the predictive utility of these models, maintaining a model that is aligned with an agent’s true generative model of behavior is critical for effective human-agent interaction. In applications wherein observations and partial specification of the agent’s behavior are available, achieving model alignment is challenging for a variety of reasons. For one, the agent’s decision factors are often not completely known; further, prior approaches that rely upon observations of agents’ behavior alone can fail to recover the true model, since multiple models can explain observed behavior equally well. To achieve better model alignment, we provide a novel approach capable of learning aligned models that conform to partial knowledge of the agent’s behavior. Central to our approach are a factored model of behavior (AMM), along with Bayesian nonparametric priors, and an inference approach capable of incorporating partial specifications as constraints for model learning. We evaluate our approach in experiments and demonstrate improvements in metrics of model alignment.
-
Learning and Communicating the Latent States of Human-Machine Collaboration
In International Joint Conference on Artificial Intelligence (IJCAI), 2018
Artificial agents (both embodied robots and software agents) that interact with humans are increasing at an exceptional rate. Yet, achieving seamless collaboration between artificial agents and humans in the real world remains an active problem [Thomaz et al., 2016]. A key challenge is that the agents need to make decisions without complete information about their shared environment and collaborators. For instance, a human-robot team performing a rescue operation after a disaster may not have an accurate map of their surroundings. Even in structured domains, such as manufacturing, a robot might not know the goals or preferences of its human collaborators [Unhelkar et al., 2018]. Algorithmically, this challenge manifests itself as a problem of decision-making under uncertainty in which the agent has to reason about the latent states of its environment and human collaborator. However, in practice, quantifying this uncertainty (i.e., the state transition function) and even specifying the features (i.e., the relevant states) of human-machine collaboration is difficult. Thus, the objective of this thesis research is to develop novel algorithms that enable artificial agents to learn and reason about the latent states of human-machine collaboration and achieve fluent interaction.
-
Mobile Robots for Moving-Floor Assembly Lines
Vaibhav Unhelkar, Stefan Dörr, Alexander Bubeck, Przemyslaw A Lasota, Jorge Perez, Ho Chit Siu, James C Boerkoel Jr, Quirin Tyroller, Johannes Bix, Stefan Bartscher, Julie Shah
IEEE Robotics & Automation Magazine (RA-M), 2018
Robots that operate alongside or cooperatively with humans are envisioned as the next generation of robotics. Toward this vision, we present the first mobile robot system designed for and capable of operating on the moving floors of automotive final assembly lines (AFALs). AFALs represent a distinct challenge for mobile robots in the form of dynamic surfaces: the conveyor belts that transport cars throughout the factory during final assembly.
-
Human-Aware Robotic Assistant for Collaborative Assembly: Integrating Human Motion Prediction with Planning in Time
Vaibhav Unhelkar*, Przemyslaw A Lasota*, Quirin Tyroller, Rares-Darius Buhai, Laurie Marceau, Barbara Deml, Julie Shah
IEEE Robotics and Automation Letters (RA-L) 3 (3), 2018
Introducing mobile robots into the collaborative assembly process poses unique challenges for ensuring efficient and safe human-robot interaction. Current human-robot work cells require the robot to cease operating completely whenever a human enters a shared region of the given cell, and the robots do not explicitly model or adapt to the behavior of the human. In this work, we present a human-aware robotic system with single-axis mobility that incorporates both predictions of human motion and planning in time to execute efficient and safe motions during automotive final assembly. We evaluate our system in simulation against three alternative methods, including a baseline approach emulating the behavior of standard safety systems in factories today. We also assess the system within a factory test environment. Through both live demonstration and results from simulated experiments, we show that our approach produces statistically significant improvements in quantitative measures of safety and fluency of interaction.
-
Reports of the AAAI 2017 Fall Symposium Series
Arjuna Flenner*, Marlena R Fraune*, Laura M Hiatt*, Tony Kendall*, John E Laird*, Christian Lebiere*, Paul S Rosenbloom*, Frank Stein*, Elin A Topp*,
Vaibhav Unhelkar*, and others
AI Magazine 39 (2), 2018
-
Evaluating Effects of User Experience and System Transparency on Trust in Automation
In ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2017
Existing research assessing human operators’ trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human’s entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of "area under the trust curve" than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the "cry wolf" effect – wherein human operators begin to reject an automated system due to repeated false alarms.
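The paper’s preferred measure is easy to state concretely: integrate the real-time trust ratings over the interaction and normalize by its duration. The sketch below assumes trust was sampled at known times; the paper’s exact normalization may differ.

```python
import numpy as np

# Time-averaged "area under the trust curve" from sampled ratings.
def trust_of_entirety(times, ratings):
    t = np.asarray(times, dtype=float)
    y = np.asarray(ratings, dtype=float)
    auc = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))  # trapezoid
    return auc / (t[-1] - t[0])
```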
-
Human-robot co-navigation using anticipatory indicators of human walking motion
In International Conference on Robotics and Automation (ICRA), 2015
Mobile, interactive robots that operate in human-centric environments need the capability to safely and efficiently navigate around humans. This requires the ability to sense and predict human motion trajectories and to plan around them. In this paper, we present a study that supports the existence of statistically significant biomechanical turn indicators of human walking motions. Further, we demonstrate the effectiveness of these turn indicators as features in the prediction of human motion trajectories. Human motion capture data is collected with predefined goals to train and test a prediction algorithm. Use of anticipatory features results in improved performance of the prediction algorithm. Lastly, we demonstrate the closed-loop performance of the prediction algorithm using an existing algorithm for motion planning within dynamic environments. The anticipatory indicators of human walking motion can be used with different prediction and/or planning algorithms for robotics; the chosen planning and prediction algorithm demonstrates one such implementation for human-robot co-navigation.
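As a toy illustration of an anticipatory indicator at work, consider thresholding a short window of a biomechanical signal (say, pelvis yaw rate) shortly before a turn. Both the feature and the threshold are hypothetical; the paper identifies the statistically significant indicators and feeds them into a full trajectory predictor.

```python
# Classify an upcoming turn from a pre-turn window of yaw-rate samples.
def predict_turn(yaw_rate_window, threshold=0.2):
    mean_rate = sum(yaw_rate_window) / len(yaw_rate_window)
    if mean_rate > threshold:
        return "left"
    if mean_rate < -threshold:
        return "right"
    return "straight"
```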
-
Challenges in Developing a Collaborative Robotic Assistant for Automotive Assembly Lines
In ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, HRI’15 Extended Abstracts, 2015
Industrial robots are on the verge of emerging from their cages and entering final assembly to work alongside humans. Towards this, we are developing a collaborative robot capable of assisting humans in final automotive assembly. Several algorithmic as well as design challenges arise when robots enter the unpredictable, human-centric, and time-critical environment of final assembly. In this work, we briefly discuss a few of these challenges along with developed solutions and proposed methodologies, and their implications for improving human-robot collaboration.
-
Spacecraft Attitude Determination with Sun Sensors, Horizon Sensors and Gyros: Comparison of Steady-State Kalman Filter and Extended Kalman Filter
In Advances in Estimation, Navigation, and Spacecraft Control, 2015
Attitude determination, along with attitude control, is critical to the functioning of every space mission. In this paper, we investigate and compare, through simulation, the application of two autonomous sequential attitude estimation algorithms, adopted from the literature, for attitude determination using attitude sensors (sun sensor and horizon sensors) and rate-integrating gyros. The two algorithms are: the direction cosine matrix (DCM) based steady-state Kalman Filter, and the classic quaternion-based Extended Kalman Filter. To make the analysis realistic, as well as to improve the attitude determination accuracies, detailed sensor measurement models are developed. Modifications in the attitude determination algorithms for estimation of additional states to account for sensor biases and misalignments are presented. A modular six degree-of-freedom closed-loop simulation, developed in-house, is used to observe and compare the performances of the attitude determination algorithms.
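The predict/correct structure shared by both estimators can be shown on a single-axis toy problem: integrate the gyro to predict, then correct toward the attitude-sensor reading, with a fixed gain standing in for the steady-state Kalman gain (an EKF would instead recompute its gain each step). Real attitude filters operate on DCMs or quaternions with bias states; this scalar sketch only conveys the structure.

```python
import numpy as np

# Scalar fixed-gain (steady-state) attitude filter: gyro predicts,
# attitude sensor corrects with constant gain K.
def steady_state_filter(gyro_rates, sensor_angles, dt, K=0.05, theta0=0.0):
    theta, estimates = theta0, []
    for w, z in zip(gyro_rates, sensor_angles):
        theta += w * dt            # predict: integrate gyro rate
        theta += K * (z - theta)   # correct: fixed steady-state gain
        estimates.append(theta)
    return np.array(estimates)
```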
-
Towards control and sensing for an autonomous mobile robotic assistant navigating assembly lines
In International Conference on Robotics and Automation (ICRA), 2014
There exists an increasing demand to incorporate mobile interactive robots to assist humans in repetitive, non-value added tasks in the manufacturing domain. Our aim is to develop a mobile robotic assistant for fetch-and-deliver tasks in human-oriented assembly line environments. Assembly lines present a niche yet novel challenge for mobile robots; the robot must precisely control its position on a surface which may be either stationary, moving, or split (e.g. in the case that the robot straddles the moving assembly line and remains partially on the stationary surface). In this paper we present a control and sensing solution for a mobile robotic assistant as it traverses a moving-floor assembly line. Solutions readily exist for control of wheeled mobile robots on static surfaces; we build on the open-source Robot Operating System (ROS) software architecture and generalize the algorithms for the moving line environment. Off-the-shelf sensors and localization algorithms are explored to sense the moving surface, and a customized solution is presented using PX4Flow optic flow sensors and a laser scanner-based localization algorithm. Validation of the control and sensing system is carried out both in simulation and in hardware experiments on a customized treadmill. Initial demonstrations of the hardware system yield promising results; the robot successfully maintains its position while on, and while straddling, the moving line.
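The core compensation idea reduces to one line in a single-axis simplification: the wheels must be commanded relative to the surface, so the measured surface velocity (here, from the optic-flow sensors) is subtracted from the desired world-frame velocity. Frames and signs are simplified; the paper’s controller handles the full (and split-surface) case.

```python
# Robot ground velocity = wheel velocity relative to surface + surface
# velocity, so command wheels at the desired velocity minus the belt's.
def wheel_velocity_command(v_desired_world, v_surface_measured):
    return v_desired_world - v_surface_measured

# Example: holding station over a belt moving at +0.1 m/s requires
# driving at -0.1 m/s relative to the belt.
assert wheel_velocity_command(0.0, 0.1) == -0.1
```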
-
Comparative Performance of Human and Mobile Robotic Assistants in Collaborative Fetch-and-deliver Tasks
In ACM/IEEE International Conference on Human-Robot Interaction, HRI, 2014
There is an emerging desire across manufacturing industries to deploy robots that support people in their manual work, rather than replace human workers. This paper explores one such opportunity, which is to field a mobile robotic assistant that travels between part carts and the automotive final assembly line, delivering tools and materials to the human workers. We compare the performance of a mobile robotic assistant to that of a human assistant to gain a better understanding of the factors that impact its effectiveness. Statistically significant differences emerge based on type of assistant, human or robot. Interaction times and idle times are statistically significantly higher for the robotic assistant than the human assistant. We report additional differences in participants’ subjective responses regarding team fluency, situational awareness, comfort, and safety. Finally, we discuss how results from the experiment inform the design of a more effective assistant.