Date: January 19th, 2026
Time: 16:00 (CET)
Title: The Trustworthiness Assessment Model: Conceptual Insights for Trust Research
Abstract: The Trustworthiness Assessment Model (TrAM) is a conceptual model that explains how trustors form their perception of an AI system's trustworthiness. It thereby extends previous models, which mainly start at the point where trustors have already formed this perception. In this talk, I will present the main concepts of the TrAM and initial evidence from a qualitative field study suggesting that the TrAM's concepts are practically useful for describing human interactions with AI systems. Building on these insights, I will conclude with a discussion of the model's implications for trust research in human-computer interaction.
Nadine Schlicker is a psychologist specializing in human-centered design and trustworthy AI. She is a PhD candidate at the Institute for AI in Medicine at Marburg University. Drawing on experience in usability and user experience research across academia and industry, her work focuses on how users assess AI trustworthiness in medical contexts. Her interdisciplinary research bridges Human Factors, HCI, Medicine, and Engineering Psychology. She collaborates with experts from computer science, philosophy, psychology, and medicine to explore how AI can support clinical decision-making and enhance the well-being of healthcare professionals and patients.