Keynote Speakers

Rada Mihalcea

TITLE: The Agent Paradox: Can Multi-Agent Systems Replicate the Complexity of Human Cognition and Social Behavior?

DATE & TIME: Wednesday, May 21, 9:00AM – 10:00AM

LOCATION: TBD

CHAIR: Yevgeniy Vorobeychik

ABSTRACT: Recent advancements in multi-agent systems have sparked a rapidly growing research area focused on simulating increasingly complex human behaviors — group consensus, implicit bias, and, in some cases, even cooperation or conflict. Yet, these systems exist in a paradox: they are computational and artificial, entirely lacking the intrinsic consciousness, emotions, and social intuition that define human individuals and societies. In this talk, I will explore the evolving relationship between AI agents and human behavior, drawing on large-scale generative agent experiments, studies on bias in multi-agent interactions, and insights into misinformation and group behavior. I will also discuss the broader implications of these systems — not only for the future of AI but also for human-centered disciplines such as psychology, sociology, and ethics, where they can challenge or facilitate our understanding of intelligence, agency, and collective decision-making.

BIO: Rada Mihalcea is the Janice M. Jenkins Professor of Computer Science and Engineering at the University of Michigan and the Director of the Michigan Artificial Intelligence Lab. Her research interests are in computational linguistics, with a focus on lexical semantics, multilingual natural language processing, and computational social sciences. She was a program co-chair for EMNLP 2009 and ACL 2011, and a general chair for NAACL 2015 and *SEM 2019. She is an ACM Fellow, an AAAI Fellow, and a former President of the ACL. She is the recipient of a Sarah Goddard Power Award for her contributions to diversity in science, an honorary citizen of her hometown of Cluj-Napoca, Romania, and the recipient of a Presidential Early Career Award for Scientists and Engineers, awarded by President Obama.

Jeffrey Rosenschein

TITLE: Multiagent Systems, and the Search for Appropriate Foundations: A Personal Journey and Retrospective

DATE & TIME: Thursday, May 22, 9:00AM – 10:00AM

LOCATION: TBD

CHAIR: Ariel Procaccia

ABSTRACT: Research is a highly personal endeavor shaped by a researcher’s own experience, personality, inclinations, and memory. This talk provides a personal interpretation of my 43 years as a researcher in the multiagent systems community, from my first published paper in AAAI’82 to the present. By walking through a timeline of research topics and results, I will highlight connections and research emphases through the decades.

Along the way, I will provide a subjective take on related topics, including tips for early researchers, guiding principles for advisors, AI systems’ historical over-emphasis on (narrowly defined) performance, theory vs. practice, and the importance of finding a (scientific) community.

BIO: Jeffrey S. Rosenschein is the Samuel and Will Strauss Professor of Computer Science in the School of Engineering and Computer Science at the Hebrew University of Jerusalem; he served as Head of the School from 2011 to 2014. He received his undergraduate degree in Applied Mathematics from Harvard University (1979), and his master’s degree (1982) and PhD (1986) in Computer Science from Stanford University. He has published widely in the field of Multiagent Systems, including co-authoring the book “Rules of Encounter” (MIT Press, 1994), which influenced the adoption of game-theoretic techniques within the field of artificial intelligence. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence (EurAI), and is the recipient of the 2013 ACM/SIGART (now ACM/SIGAI) Autonomous Agents Research Award. He was co-editor-in-chief of the Journal of Autonomous Agents and Multiagent Systems from 2008 to 2014. He also served as General Conference Chair of the combined 27th International Joint Conference on Artificial Intelligence and 23rd European Conference on Artificial Intelligence (IJCAI/ECAI 2018), held in Stockholm, Sweden.

Virginia Dignum

TITLE: Responsible AI and Autonomous Agents: Governance, Ethics, and Sustainable Innovation

DATE & TIME: Friday, May 23, 9:00AM – 10:00AM

LOCATION: TBD

CHAIR: Ann Nowé

ABSTRACT: As AI systems become increasingly autonomous and embedded in socio-technical environments, balancing innovation with social responsibility grows increasingly urgent. Multi-agent systems and autonomous agents offer valuable insights into decision-making, coordination, and adaptability, yet their deployment raises critical ethical and governance challenges. How can we ensure that AI aligns with human values, operates transparently, and remains accountable within complex social and economic ecosystems? This talk explores the intersection of AI ethics, governance, and agent-based perspectives, drawing on my work in AI policy and governance, as well as prior research on agents, agent organizations, formal models, and decision-making frameworks. Recent advancements are reshaping AI not just as a technology but as a socio-technical process that functions in dynamic, multi-stakeholder environments. As such, addressing accountability, normative reasoning, and value alignment requires a multidisciplinary approach. A central focus of this talk is the role of governance structures, regulatory mechanisms, and institutional oversight in ensuring AI remains both trustworthy and adaptable. Drawing on recent AI policy research, I will examine strategies for embedding ethical constraints in AI design, the role of explainability in agent decision-making, and how multi-agent coordination informs regulatory compliance. Rather than viewing regulation as a barrier, I will show that responsible governance is an enabler of sustainable innovation, driving public trust, business differentiation, and long-term technological progress. By integrating insights from agent-based modeling, AI policy frameworks, and governance strategies, this talk underscores the importance of designing AI systems that are both socially responsible and technically robust. Ultimately, ensuring AI serves the common good requires a multidisciplinary approach—one that combines formal models, ethical considerations, and adaptive policy mechanisms to create AI systems that are accountable, fair, and aligned with human values.

BIO: Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations and chair of the ACM’s Technology Policy Council. She has a PhD in Artificial Intelligence from Utrecht University (2004), was appointed Wallenberg Scholar in 2024, is a member of the Royal Swedish Academy of Engineering Sciences (IVA), and is a Fellow of the European Artificial Intelligence Association (EurAI) and of ELLIS (European Laboratory of Learning and Intelligent Systems). She is also co-chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2.0, a member of the Global Partnership on AI (GPAI), of UNESCO’s expert group on the implementation of its AI recommendations, and of the OECD’s expert group on AI, and founder of ALLAI, the Dutch AI Alliance. She has been a member of the United Nations Advisory Body on AI and the EU’s High-Level Expert Group on Artificial Intelligence, co-chair of the WEF’s Global Future Council on AI, and leader of UNICEF’s guidance for AI and children. Her new book, “The AI Paradox”, is planned for publication in 2025.