Generative AI has transformed how people create, communicate, and collaborate, opening new opportunities across industries such as education, entertainment, and business. However, as these technologies become more embedded in everyday life, critical concerns arise about their trustworthiness, transparency, and accessibility. Ensuring these systems truly benefit everyone requires democratizing access while fostering trust in their use. This means designing AI systems that prioritize fairness, inclusivity, and explainability, so that individuals can confidently interact with and influence their outputs. The challenge lies in making these technologies accessible and equitable while embedding mechanisms for ethical oversight and accountability. By addressing these challenges, we can empower diverse communities to use generative AI responsibly, paving the way for meaningful and trustworthy interactions that align with human values and aspirations.
At the heart of this edition's theme, “Trust in the Times of Generative AI: Of Planning, Reasoning, and Collaborating,” lies a critical examination of how intelligent systems can be designed to act as reliable partners in complex cognitive tasks. As generative AI systems increasingly participate in planning workflows, reasoning through ambiguous scenarios, and collaborating with humans and other agents, questions around trust become central. Trust must be earned not only through performance but also through transparency, consistency, and alignment with human intent. This involves developing models that can explain their reasoning, adapt to diverse user needs, and work synergistically with human collaborators. This TAFF series will delve into how we can build AI systems that are not just tools, but trustworthy teammates capable of pursuing shared goals, building mutual understanding, and making ethical decisions in complex and dynamic environments.
To receive announcements of upcoming presentations and events organised by TAFF, check out the registration page.
Abstract: The anticipated large-scale deployment of AI systems in knowledge work will impact not only productivity and work quality but also workers' values and workplace dynamics. I argue that how we design and deploy AI-infused technologies will shape people's skills and competence, their sense of agency, collaboration with others, and even the meaning they derive from their work. I design human-AI interaction techniques that complement people and amplify their values in AI-assisted work. My research focuses on (1) understanding how people make AI-assisted decisions and (2) designing novel interaction paradigms, explanations, and systems that optimize human-centric outcomes (e.g., human skills) and output-centric outcomes (e.g., decision accuracy) in AI-assisted tasks. In this talk, I will present a suite of interaction techniques I have introduced to optimize AI-assisted decision-making. These include cognitive forcing interventions that reduce overreliance on AI, adaptive AI support that enables human-AI complementarity in decision accuracy, and contrastive explanations that improve both decision accuracy and users’ task-related skills.
4 pm CET (click to convert to your own timezone)
10th November, 2025
Abstract: AI systems are increasingly deployed to assist socially complex forms of work (e.g., involving emotional, care-oriented, connective labor). These forms of work are amongst those projected to be the most “automation proof,” yet they remain chronically undervalued within organizations and in society more broadly. AI deployments in occupations grounded in socially complex work have often perpetuated these tendencies, harming workers’ performance and wellbeing. In my research, I collaborate closely with workers and organizations to design toward a more positive future for AI-augmented work—one where AI deployments meaningfully augment and enhance worker capabilities, rather than diminish them. I take a social ecological approach, identifying challenges at the worker, organization, and society levels and pinpointing opportunities for technology and policy interventions to address them. My research so far explores what effective interventions could look like in the design of training interfaces for AI-assisted decision-making, deliberation-based toolkits for AI adoption decisions, and alternative workflows for worker-centered AI measurement approaches.
3 pm CET (click to convert to your own timezone)
17th November, 2025
Research Scientist @ Nokia Bell Labs
Talk: Trust at First Sight: Visual Tools for Understanding AI’s Impacts
Abstract: Generative AI is rapidly reshaping how people create, communicate, and collaborate, but its societal impacts remain difficult to grasp, especially for non-experts. Building trust in these systems therefore requires new methods for making AI impacts visible and navigable. This talk presents two such methods: the Atlas of AI Risks, a narrative visualization that communicates the broad landscape of AI harms and benefits, and the Impact Assessment Card, an accessible alternative to long-form reports that improves understanding and supports governance tasks. Through empirical studies with diverse populations, I show how these tools make AI impacts legible, equitable, and actionable. These qualities help create the foundation for public trust and for more informed participation in decisions about how AI systems are used.
4 pm CET (click to convert to your own timezone)
24th November, 2025
Assistant Professor @ Department of Computer Science & Engineering, University of Minnesota
Talk: Cognitive Scaffolding in the Age of AI: Design Principles for Appropriate Reliance
Abstract: Human-AI partnerships are increasingly commonplace, yet often ineffective as people over- or under-rely on AI for support, resulting in harmful outcomes such as propagation of biases, missed edge cases, and homogenization of ideas and skillsets. My work follows the belief that for human-AI partnerships to be effective and reliable, AI should be a tool for thought—a cognitive scaffold that helps us appropriately and effectively reflect on the information we need—rather than displace human cognition. In this talk, I will first motivate this belief by sharing work that demonstrates the cognitive underpinnings of how people differently use and trust AI. Then, through use-cases spanning explainable AI, data science workflows, and scholarly research, I will present design principles for AI as an effective cognitive scaffold. For novices learning to work with AI systems, this means designing interfaces grounded in pedagogical principles—using narrative structures and progressive disclosure to build genuine understanding rather than superficial familiarity. For domain experts, effective scaffolding looks different—preserving agency and providing granular mechanisms for provenance to calibrate trust. I will conclude by examining a persistent challenge: even well-designed scaffolds face systemic barriers of time pressure and competing cognitive demands in real-world contexts.
4 pm CET (click to convert to your own timezone)
4th December, 2025
Abstract: Many people now trust humanlike AI chatbots to write their documents, make important decisions, and listen to their personal emotions and secrets. The term “artificial intelligence,” coined 70 years ago, fails to capture the complex social roles played by these new systems. In this talk, I will show how we can make sense of them as “digital minds,” general-purpose agents that appear to think and feel. With surveys, interviews, and discourse analysis, we find that 20% of U.S. adults see some current AI as sentient, and we characterize emergent forms of human-computer interaction, such as digital companionship, with significant consequences for mental health. With AI-assisted benchmark evaluations, we quantify risks of algorithmic bias and human disempowerment. Responsible coexistence with digital minds requires acknowledging their incredible capabilities but also their fundamentally alien nature, and AI safety will require a new field of sociotechnical research to continuously map this emergent social landscape.
4 pm CET (click to convert to your own timezone)
8th December, 2025
Abstract: The Trustworthiness Assessment Model (TrAM) is a conceptual model that explains how trustors reach their perceived trustworthiness of AI systems. It thereby extends previous models that mainly start at the point where trustors have already formed this perception. In this talk, I will present the main concepts of the TrAM and first evidence from a qualitative field study that suggests the practical usefulness of the TrAM's concepts in describing human interactions with AI systems. Building on these insights, I will conclude with a discussion of the model’s implications for trust research in human-computer interaction.
4 pm CET (click to convert to your own timezone)
19th January, 2026
Abstract: The integration of LLMs into society necessitates a broader understanding of alignment—one that accounts for bidirectional influence between humans and AI. This talk introduces a bidirectional human-AI alignment framework and presents a series of studies to understand and measure it. Our research operationalizes this concept through three critical lenses: 1) Value Misalignment: We introduce ValueCompass, a method to quantify contextual value alignment across cultures, and expose systematic value-action gaps in LLMs; 2) Perceptual Manipulation: We document user experiences with LLM dark patterns that manipulate belief and behavior; and 3) Dynamic Influence: We provide empirical evidence of bidirectional opinion dynamics in conversation, where both agent and human stances co-adapt. Together, our work provides new lenses to measure alignment, exposes critical risks, and charts a path for developing human-centered, responsible AI systems that are truly aligned through mutual understanding and adaptation.
4 pm CET (click to convert to your own timezone)
26th January, 2026
Abstract: Recent meta-work in HCI has called for deeper and more sustained engagement with policymaking. This presentation responds to that call by framing AIxHCI research as a critical bridge between the realities of AI development and use and the goals of AI governance. After briefly highlighting a small set of research works that exemplify this perspective, I present my ongoing research at the intersection of empirical HCI methods and policy-relevant questions about AI and generative AI. On the AI development side, I introduce a literature survey of AIxHCI studies examining the practices and values of AI developers, and reflect on what these findings imply for policymakers seeking to regulate AI systems in a more grounded manner. On the AI use side, I present a desk-research-based study (CHI’26) that examines how disclosures about generative AI use—provided by GenAI providers—can be structured and expanded to better support accountability and oversight. This work identifies systematic skews and recurring pitfalls in current disclosure practices and discusses their implications for emerging AI governance regimes. Finally, I briefly outline follow-up projects that extend these contributions by engaging policymakers directly. Together, these projects illustrate how AIxHCI can contribute not only to improved AI systems, but also to more informed, grounded, and actionable AI policy.
4 pm CET (click to convert to your own timezone)
16th February, 2026
Abstract: Since the emergence of generative AI, creative workers have voiced concerns about career-based harms stemming from this technology. A recurring issue is that generative AI models are trained on their creative output without consent, credit, or compensation. The "3Cs framework"—Consent, Credit, and Compensation—has emerged as a proposed governance approach for responsible training of GenAI. This talk presents findings from our recent study with creative workers that explores the complexities of implementing this framework for GenAI data governance. Building on these insights, I examine the consent dimension more closely, discussing what lessons for GenAI can be learned from another domain: consent to personal data processing under the GDPR.
4 pm CET (click to convert to your own timezone)
23rd February, 2026
Senior Researcher, Microsoft Research
February, 2026