Generative AI has transformed how people create, communicate, and collaborate, opening new opportunities across industries such as education, entertainment, and business. However, as these technologies become more embedded in everyday life, critical concerns arise about their trustworthiness, transparency, and accessibility. Ensuring these systems truly benefit everyone requires democratizing access while fostering trust in their use. This involves designing AI systems that prioritize fairness, inclusivity, and explainability, allowing individuals to confidently interact with and influence their outputs. The challenge lies in making these technologies accessible and equitable while embedding mechanisms for ethical oversight and accountability. By addressing these challenges, we can empower diverse communities to use generative AI responsibly, paving the way for meaningful and trustworthy interactions that align with human values and aspirations.
At the heart of this edition's theme, “Trust in the Times of Generative AI: Of Planning, Reasoning, and Collaborating,” lies a critical examination of how intelligent systems can be designed to act as reliable partners in complex cognitive tasks. As generative AI systems increasingly participate in planning workflows, reasoning through ambiguous scenarios, and collaborating with humans and other agents, questions around trust become central. Trust must be earned not only through performance but also through transparency, consistency, and alignment with human intent. This involves developing models that can explain their reasoning, adapt to diverse user needs, and work synergistically with human collaborators. This TAFF series will delve into how we can build AI systems that are not just tools, but trustworthy teammates capable of pursuing shared goals, sustaining mutual understanding, and making ethical decisions in complex and dynamic environments.
To receive announcements of upcoming presentations and events organised by TAFF, check out the registration page.
Abstract: The anticipated large-scale deployment of AI systems in knowledge work will impact not only productivity and work quality but also workers' values and workplace dynamics. I argue that how we design and deploy AI-infused technologies will shape people's skills and competence, their sense of agency, collaboration with others, and even the meaning they derive from their work. I design human-AI interaction techniques that complement people and amplify their values in AI-assisted work. My research focuses on (1) understanding how people make AI-assisted decisions and (2) designing novel interaction paradigms, explanations, and systems that optimize human-centric outcomes (e.g., human skills) and output-centric outcomes (e.g., decision accuracy) in AI-assisted tasks. In this talk, I will present a suite of interaction techniques I have introduced to optimize AI-assisted decision-making. These include cognitive forcing interventions that reduce overreliance on AI, adaptive AI support that enables human-AI complementarity in decision accuracy, and contrastive explanations that improve both decision accuracy and users’ task-related skills.
4 pm CET (click to convert to your own timezone)
10th November, 2025
Abstract: AI systems are increasingly deployed to assist socially complex forms of work (e.g., involving emotional, care-oriented, connective labor). These forms of work are amongst those projected to be the most “automation proof,” yet they remain chronically undervalued within organizations and in society more broadly. AI deployments in occupations grounded in socially complex work have often perpetuated these tendencies, harming workers’ performance and wellbeing. In my research, I collaborate closely with workers and organizations to design toward a more positive future for AI-augmented work—one where AI deployments meaningfully augment and enhance worker capabilities, rather than diminish them. I take a social ecological approach, identifying worker-, organization-, and society-level challenges, and I identify opportunities for technology and policy interventions to address these challenges. My research so far explores what effective interventions could look like in the design of training interfaces for AI-assisted decision-making, deliberation-based toolkits for AI adoption decisions, and alternative workflows for worker-centered AI measurement approaches.
3 pm CET (click to convert to your own timezone)
17th November, 2025
Research Scientist @ Nokia Bell Labs
Talk: Trust at First Sight: Visual Tools for Understanding AI’s Impacts
Abstract: Generative AI is rapidly reshaping how people create, communicate, and collaborate, but its societal impacts remain difficult to grasp, especially for non-experts. Building trust in these systems therefore requires new methods for making AI impacts visible and navigable. This talk presents two such methods: the Atlas of AI Risks, a narrative visualization that communicates the broad landscape of AI harms and benefits, and the Impact Assessment Card, an accessible alternative to long-form reports that improves understanding and supports governance tasks. Through empirical studies with diverse populations, I show how these tools make AI impacts legible, equitable, and actionable. These qualities help create the foundation for public trust and for more informed participation in decisions about how AI systems are used.
4 pm CET (click to convert to your own timezone)
24th November, 2025
Assistant Professor @ Department of Computer Science & Engineering, University of Minnesota
Talk: Cognitive Scaffolding in the Age of AI: Design Principles for Appropriate Reliance
Abstract: Human-AI partnerships are increasingly commonplace, yet often ineffective as people over- or under-rely on AI for support, resulting in harmful outcomes such as propagation of biases, missed edge cases, and homogenization of ideas and skillsets. My work follows the belief that for human-AI partnerships to be effective and reliable, AI should be a tool for thought—a cognitive scaffold that helps us appropriately and effectively reflect on the information we need—rather than displace human cognition. In this talk, I will first motivate this belief by sharing work that demonstrates the cognitive underpinnings of how people differently use and trust AI. Then, through use-cases spanning explainable AI, data science workflows, and scholarly research, I will present design principles for AI as an effective cognitive scaffold. For novices learning to work with AI systems, this means designing interfaces grounded in pedagogical principles—using narrative structures and progressive disclosure to build genuine understanding rather than superficial familiarity. For domain experts, effective scaffolding looks different—preserving agency and providing granular mechanisms for provenance to calibrate trust. I will conclude by examining a persistent challenge: even well-designed scaffolds face systemic barriers of time pressure and competing cognitive demands in real-world contexts.
4 pm CET (click to convert to your own timezone)
4th December, 2025
Abstract: Many people now trust humanlike AI chatbots to write their documents, make important decisions, and listen to their personal emotions and secrets. The term “artificial intelligence,” coined 70 years ago, fails to capture the complex social roles played by these new systems. In this talk, I will show how we can make sense of them as “digital minds,” general-purpose agents that appear to think and feel. With surveys, interviews, and discourse analysis, we find that 20% of U.S. adults see some current AI as sentient, and we characterize emergent forms of human-computer interaction, such as digital companionship, with significant consequences for mental health. With AI-assisted benchmark evaluations, we quantify risks of algorithmic bias and human disempowerment. Responsible coexistence with digital minds requires acknowledging their incredible capabilities but also their fundamentally alien nature, and AI safety will require a new field of sociotechnical research to continuously map this emergent social landscape.
4 pm CET (click to convert to your own timezone)
8th December, 2025
Tenure-track Faculty @ Max Planck Institute for Security and Privacy
TBD
Researcher @ Institute for AI in Medicine, Philipps-University of Marburg
19th January, 2026
Abstract: The integration of LLMs into society necessitates a broader understanding of alignment—one that accounts for bidirectional influence between humans and AI. This talk introduces a bidirectional human-AI alignment framework and presents a series of studies to understand and measure it. Our research operationalizes this concept through three critical lenses: 1) Value Misalignment: We introduce ValueCompass, a method to quantify contextual value alignment across cultures, and expose systematic value-action gaps in LLMs; 2) Perceptual Manipulation: We document user experiences with LLM dark patterns that manipulate belief and behavior; and 3) Dynamic Influence: We provide empirical evidence of bidirectional opinion dynamics in conversation, where both agent and human stances co-adapt. Together, our work provides new lenses to measure alignment, exposes critical risks, and charts a path for developing human-centered, responsible AI systems that are truly aligned through mutual understanding and adaptation.
4 pm CET (click to convert to your own timezone)
26th January, 2026
Senior Researcher @ Microsoft Research
February, 2026
Postdoctoral Researcher @ Microsoft Research
February, 2026