Human-AI Interaction
We aim to design AI-based systems, both predictive and prescriptive, that are not only technically sound but also accepted and productively used in the workplace.
While state-of-the-art AI systems are indeed promising, they often lack transparency and may be trained on biased human decisions. To combat transparency issues, we augment our systems with Explainable AI (XAI), which helps users understand the system’s decisions and enables us to diagnose unwanted behavior in certain edge cases. A key focus is to extend XAI to automated decision systems using explainable reinforcement learning. Understanding and predicting an agent’s behavior — for example in ambulance dispatch and relocation — is crucial not only to comply with laws but also to ensure trust from both system users (e.g., dispatchers) and affected individuals (e.g., patients).
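To illustrate the kind of post-hoc explanation technique this involves, here is a minimal permutation-importance sketch: each feature is scored by how much the model's predictions shift when that feature's values are shuffled. The model, its weights, and the data are all hypothetical.

```python
import random

# Hypothetical "black-box" model: a linear score over three features.
def model(x):
    return 0.7 * x[0] + 0.2 * x[1] + 0.1 * x[2]

def permutation_importance(model, data, n_repeats=10, seed=0):
    """Score each feature by the mean prediction shift when it is shuffled."""
    rng = random.Random(seed)
    baseline = [model(row) for row in data]
    importances = []
    for j in range(len(data[0])):
        shifts = []
        for _ in range(n_repeats):
            col = [row[j] for row in data]
            rng.shuffle(col)  # break the link between feature j and the rest
            permuted = [row[:j] + [c] + row[j + 1:] for row, c in zip(data, col)]
            shifts.append(sum(abs(model(row) - b)
                              for row, b in zip(permuted, baseline)) / len(data))
        importances.append(sum(shifts) / n_repeats)
    return importances

# Random feature vectors in [0, 1); feature 0 carries the largest weight,
# so shuffling it should perturb predictions the most.
rng = random.Random(42)
data = [[rng.random() for _ in range(3)] for _ in range(200)]
imp = permutation_importance(model, data)
```

Since all three features share the same scale here, the resulting importances mirror the weight ordering of the toy model, which is exactly the sanity check one would run before trusting an explanation method on a real system.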
We also address fairness issues, where biased training data can lead to unfair AI outcomes (e.g., a bias toward men in hiring decisions). Furthermore, we study the effects of AI on people, such as how decisions, well-being, and cognitive skills change with AI use or exposure.
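One common way to quantify such bias is the demographic parity difference: the gap in positive-decision rates between groups. A minimal sketch, using toy hiring data that is purely hypothetical:

```python
def demographic_parity_difference(decisions, groups):
    """Gap between the highest and lowest positive-decision rate across groups."""
    by_group = {}
    for d, g in zip(decisions, groups):
        by_group.setdefault(g, []).append(d)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Toy hiring decisions (1 = hired): men hired at rate 0.75, women at 0.25.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
gap = demographic_parity_difference(decisions, groups)  # 0.5
```

A gap of zero would indicate equal selection rates; larger values flag exactly the kind of disparity that biased training data can produce.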
Example Research Projects:
- Perception of Generative AI in Creative Work
- Acceptance of AI-based Human Resource Systems
- Impact of (X)AI on Human Decision Making
- Algorithm Aversion and Epistemic Authority in Health Care
- Management of AI in Organizations
- Societal Impact of AI-based Systems
Publications:
- Heinrich, K., Janiesch, C., Krancher, O., Stahmann, P., Wanner, J., & Zschech, P. (2025). Decision factors for the selection of AI-based decision support systems - The case of task delegation in prognostics. PLoS One, 20(7), e0328411. https://doi.org/10.1371/journal.pone.0328411
- Herm, L-V., Heinrich, K., Wanner, J., & Janiesch, C. (2023). Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability. International Journal of Information Management, 69, 102538. https://doi.org/10.1016/j.ijinfomgt.2022.102538
- Wanner, J., Herm, L-V., Heinrich, K., et al. (2022). The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electronic Markets, 32, 2079-2102. https://doi.org/10.1007/s12525-022-00593-5