Tutorials
NOTE: The timetable for workshops, tutorials, the doctoral consortium (DC), and competitions can be viewed at a glance here. We will keep adding further details as and when they become available.
T1. Reinforcement Learning for Automated Negotiation
Presenter: Yasser Mohammad (NEC Corporation)
Time: May 19 morning
Contact email(s): [email protected]
Target audience/expected background: The target audience is postgraduate students and researchers in the fields of multi-agent systems, game theory, simulation, and practical applications of MAS. Specifically, the tutorial is designed to target two subgroups: RL researchers interested in a new and challenging problem (for whom we introduce automated negotiation) and MAS researchers interested in applying machine learning methods to the classic automated negotiation problem (for whom we introduce RL and the NegMAS-RL library for simplifying the process).
Learning outcomes/takeaway skills: (1) Basics of automated negotiation, Pareto-optimality, the Nash bargaining solution, and the Rubinstein solution. (2) RL formulations of automated negotiation problems. (3) Knowledge of state-of-the-art RL solutions to the automated negotiation problem. (4) Practical knowledge of how to implement, train, and test these (and new) solutions using the NegMAS-RL library.
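For reference, the Nash bargaining solution named above admits a compact statement: among the feasible utility pairs, it selects the one that maximizes the product of the agents' gains over their disagreement utilities.

```latex
% Nash bargaining solution for a bilateral negotiation:
% F is the feasible set of utility pairs, (d_1, d_2) the disagreement point.
(u_1^*, u_2^*) = \arg\max_{(u_1, u_2) \in F,\; u_i \ge d_i} \;(u_1 - d_1)(u_2 - d_2)
```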
Brief description: Automated negotiation is a long-standing research problem with a history extending back to the 1950s. It is becoming more important for real-world applications with the recent explosion of AI utilization in business operations and the increased need to coordinate the behavior of AI agents across institutional boundaries, as is clearly visible from the number of startups and companies working in the area over the last couple of years. Nevertheless, several challenges stand in the way of successful adoption of this technology in real-world applications. Reinforcement learning is one of the most successful machine learning approaches for dealing with strategic games with complete and incomplete information, and automated negotiation provides a new and boundary-expanding challenge for RL researchers. The tutorial introduces attendees to the problem of building effective negotiation strategies using RL and MARL methods. After motivating this problem, the tutorial presents the theoretical background in automated negotiation and reinforcement learning needed to appreciate different approaches. Building on this background, the tutorial then introduces a unifying framework and an accompanying open-source library (NegMAS-RL) that can represent most existing research on RL for automated negotiation as well as new solutions not yet attempted. This framework is then used to cast several recent advances in applying RL to automated negotiation and to motivate new approaches. Attendees will then learn through a live demonstration (with optional follow-along) how to use the framework to represent, solve, and evaluate solutions to a specific automated negotiation problem in supply chains.
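To make the RL formulation concrete, the following is a minimal sketch of how a bilateral alternating-offers negotiation can be cast as an RL environment. Every name here (the class, its methods, the opponent model) is an illustrative assumption for exposition, not the actual NegMAS-RL API.

```python
import random

class BilateralNegotiationEnv:
    """Toy alternating-offers negotiation as an RL environment (illustrative
    sketch only, not the NegMAS-RL API). The learning agent is a seller whose
    actions are price offers or acceptance; the opponent is a buyer following
    a simple time-based concession strategy."""

    def __init__(self, prices=tuple(range(1, 11)), max_rounds=10):
        self.prices = prices          # discretized offer space
        self.max_rounds = max_rounds
        self.ACCEPT = len(prices)     # last action index means "accept standing offer"

    def reset(self):
        self.round = 0
        self.opponent_offer = min(self.prices)   # buyer opens low
        return self._obs()

    def _obs(self):
        # Observation: opponent's latest offer and relative negotiation time.
        return (self.opponent_offer, self.round / self.max_rounds)

    def step(self, action):
        self.round += 1
        if action == self.ACCEPT:
            # Agreement at the opponent's standing offer; reward = seller utility.
            return self._obs(), self.opponent_offer / max(self.prices), True, {}
        my_offer = self.prices[action]
        # Opponent's acceptance threshold concedes linearly with time.
        reservation = max(self.prices) * (1 - self.round / self.max_rounds)
        if my_offer <= reservation:
            return self._obs(), my_offer / max(self.prices), True, {}
        if self.round >= self.max_rounds:
            return self._obs(), 0.0, True, {}    # timeout: disagreement, zero utility
        self.opponent_offer = min(self.opponent_offer + 1, max(self.prices))
        return self._obs(), 0.0, False, {}

# Usage: roll out one episode with a uniformly random policy.
env = BilateralNegotiationEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done, _ = env.step(random.randrange(len(env.prices) + 1))
print("episode reward:", reward)
```

Any standard single-agent RL algorithm can then be trained against such an environment, with the opponent model folded into the environment dynamics; this is the kind of formulation the tutorial's unifying framework generalizes.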
T2. Decision Making in Open Agent Systems
Presenters: Adam Eck (Oberlin College), Prashant Doshi (University of Georgia), Leen-Kiat Soh (University of Nebraska)
Contact email(s): [email protected], [email protected], [email protected]
Target audience/expected background: We expect two target audiences for this tutorial. First, researchers studying multiagent decision making (e.g., planning, reinforcement learning, and game theory practitioners) interested in exploring additional challenging environment complexities created by real-world applications of multiagent reasoning. Second, AAMAS attendees who work in applied multiagent systems, developing solutions in the types of applications that involve open agent systems (OASYS) and seeking decision-making solutions for such problems. In this way, we hope to provide useful background information for both audiences, as well as bring together researchers across the decision-making and application spaces of AAMAS for cross-pollination of ideas and development of new collaborations.
Learning outcomes/takeaway skills: Participants will enhance their understanding of (1) the challenges of multiagent decision-making in open agent systems caused by the sets of agents, tasks, and/or agent capabilities changing over time in dynamic and unpredictable ways, (2) state-of-the-art planning and reinforcement learning solutions for autonomous reasoning in open agent systems, including a comparison of their relative strengths and weaknesses in different scenarios, and (3) active areas of current and future research for improving reasoning in such challenging multiagent environments.
Brief description: Many real-world applications of multiagent systems (MAS) are open agent systems (OASYS) where the sets of agents, tasks, and capabilities can dynamically change over time. Often, these changes are unpredictable and unknown in advance by the decision making agents utilizing their capabilities to accomplish tasks. In contrast, most methods for autonomous decision making (whether planning, reinforcement learning, or game theory) assume that the set of agents, tasks, and capabilities are static throughout the lifetime of the system. Mismatches between the assumptions of the agents’ reasoning and models of the environment vs. the underlying dynamics of the environment can risk critical failure of agents deployed to real-world applications. In this tutorial, we will (1) introduce OASYS as a challenging complexity of decision making in multiagent systems, illustrating different sources of openness in several real-world applications of MAS, (2) summarize state-of-the-art solutions for decision making in OASYS within both the multiagent planning and multiagent reinforcement learning paradigms, and (3) highlight several promising avenues of future research that would enhance the ability of agents to reason within OASYS.
T3. Theoretical Foundations for Markov Games
Presenters: Shuai Li (Shanghai Jiao Tong University), Canzhe Zhao (Shanghai Jiao Tong University)
Time: May 19 morning
Contact email(s): [email protected]
Target audience/expected background: This tutorial is designed for researchers in both the fields of reinforcement learning (RL) and game theory, as well as practitioners who seek a deeper understanding of the theoretical foundations of Markov games (MGs). For researchers from the domain of RL, this tutorial expounds on the challenges and the key algorithmic designs arising from the interactions between the agents involved in MGs. For those from the domain of game theory, it also provides valuable illustrations of how to deal with unknown agent utilities and system dynamics. Prior familiarity with the basics of multi-armed bandits (MABs), reinforcement learning, and the concept of equilibrium would be beneficial for a thorough understanding of the content covered in this tutorial.
Learning outcomes/takeaway skills: This tutorial will provide attendees with a rigorous exploration of the theoretical underpinnings and practical implications of MGs, offering insights that are directly applicable to challenges in many core themes in the AAMAS community including multi-agent coordination, decision-making, and strategic planning, among others. Given the rapid growth of applications of MGs in real-world multi-agent contexts (e.g., robotics, gaming, economics, and distributed AI), this tutorial offers invaluable insights for researchers and practitioners eager to understand and leverage the principles of MGs in their own work.
Brief description: Markov games (MGs), also known as stochastic games, encompass a broad spectrum of multi-agent reinforcement learning applications across diverse fields such as robotics, autonomous vehicles, finance, and economics. In MGs, given full knowledge of the system (i.e., the utilities of the agents and the dynamics of the system), computing an equilibrium, in which no agent benefits from unilaterally deviating from its own strategy, is an important problem in game theory and has been extensively studied since the seminal work of Shapley. In practice, however, such knowledge is often not available a priori. In these cases, MGs are typically addressed by learning an equilibrium through repeated plays of the system in an online manner, which has recently attracted substantial research attention in sequential decision-making and game theory. The primary goal is to devise algorithms with provable sample complexities for finding an approximate equilibrium. This tutorial offers a comprehensive introduction to the latest advancements and pioneering discoveries in MGs, with particular emphasis on both algorithmic design and theoretical results for this problem.
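For readers new to the formalism, a finite n-agent Markov game and the approximate equilibrium notion discussed above can be stated as follows.

```latex
% An n-agent Markov game: states S, action sets A_i, transition kernel P,
% per-agent rewards r_i, and discount factor \gamma.
\mathcal{G} = \big(S, \{A_i\}_{i=1}^{n}, P(s' \mid s, a_1, \dots, a_n), \{r_i\}_{i=1}^{n}, \gamma\big)

% A product policy \pi = (\pi_1, \dots, \pi_n) is an \epsilon-approximate Nash
% equilibrium if no agent gains more than \epsilon by deviating unilaterally:
V_i(\pi) \;\ge\; V_i(\pi_i', \pi_{-i}) - \epsilon
\qquad \text{for every agent } i \text{ and every alternative policy } \pi_i'.
```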
T4. Multi-Agent CoPilot in Industrial AI Application
Presenters: Chathurangi Shyalika (Artificial Intelligence Institute, University of South Carolina), Renjith Prasad (Artificial Intelligence Institute, University of South Carolina), Utkarshani Jaimini (Artificial Intelligence Institute, University of South Carolina), Cory Henson (Bosch Center for Artificial Intelligence), Fadi El Kalach (Department of Automotive Engineering, Clemson University), Amit Sheth (Artificial Intelligence Institute, University of South Carolina)
Time: May 19 afternoon
Contact email(s): [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
Target audience/expected background: The target audience includes academic researchers, data scientists, and practitioners working in industrial AI, particularly those who apply machine learning techniques in complex industrial environments.
This tutorial welcomes participants who are eager to explore advanced techniques for building CoPilots for challenging use cases in industrial applications. While a basic understanding of AI and machine learning concepts and of programming in Python is recommended, the tutorial is designed to accommodate a wide range of expertise levels. Participants with foundational knowledge in these areas will benefit from a smoother learning experience, but the content is structured so that all attendees can actively engage in hands-on exercises and build practical skills, regardless of their prior background.
Learning outcomes/takeaway skills: The audience will gain a solid foundational understanding of CoPilot systems, including their background, principles, and the role of multiagent systems in developing industry-specific solutions. They will explore why CoPilots are particularly well-suited to address industry challenges, with practical examples from manufacturing. A deep dive into the methods and technologies behind Multiagent CoPilot development will provide attendees with the technical knowledge needed to build such systems. Hands-on experience with the SmartPilot Multiagent CoPilot demonstration will allow participants to engage directly with its capabilities. Additionally, they will develop insights into adapting and extending Multiagent CoPilots for various manufacturing-specific applications. Finally, attendees will learn about the real-world challenges associated with deploying these systems in industrial settings and discover effective strategies for overcoming them.
Brief description: In the era of smart automation and digital transformation, achieving efficiency, precision, and adaptability is essential for industries to remain competitive. Sectors including manufacturing, supply chain and logistics, healthcare, finance, and retail face significant challenges in deploying Artificial Intelligence (AI) solutions tailored to their unique needs, particularly in critical, resource-constrained applications. According to Gartner's 2024 Hype Cycle for Artificial Intelligence, composite AI, which integrates techniques like machine learning, knowledge graphs, and rule-based systems, is becoming foundational for industries, enhancing predictions, decisions, and scalability across complex environments. The complexity of real-world systems requires Industrial AI solutions to be customizable to business needs, compact for efficient deployment on resource-constrained devices, and agile enough to adapt to changing requirements. By being neurosymbolic, such solutions integrate data, knowledge, and human expertise to create robust, explainable, and trustworthy AI that supports planning and reasoning. In this tutorial, we will introduce a Multiagent CoPilot for Industrial AI applications, focusing on the primary use case of manufacturing (which offers rich requirements, data, knowledge, and human expertise). The use cases we describe are inspired by collaborations with, or similar efforts at, Bosch, Hewlett Packard Enterprise, Siemens, and others. The AAMAS audience will learn about human-in-the-loop CoPilots as we explore how multiagent coordination, collaboration, and decision-making can enhance the functionality of industrial AI models. With our primary use case, we will demonstrate how to address the unique challenges faced by the manufacturing industry, from improving operational efficiency to enhancing adaptability in critical tasks. The knowledge and insights gained from this tutorial are nevertheless applicable and generalizable to various industries, such as transportation and healthcare, offering valuable perspectives for researchers and professionals across domains seeking to adopt these technologies in real-world applications.
T5. General Evaluation of AI Agents
Presenters: Manfred Diaz (Mila, University of Montreal), Marc Lanctot (Google DeepMind), Kate Larson (Google DeepMind/University of Waterloo), Ian Gemp (Google DeepMind)
Time: May 19 afternoon
Contact email(s): [email protected]
Target audience/expected background: Evaluating AI agents, models, and systems is a universal problem for practitioners, researchers, and academics in AI, machine learning, and their applications. The tutorial foundations are cross-disciplinary, borrowing ideas from central areas of interest for the AAMAS community, such as decision theory, game theory, social choice theory, and statistics. Thus, practitioners and researchers in these areas would also be keenly interested in the connections to ML evaluation. Prior experience, even if brief, with theories of individual decision-making (e.g., decision theory, game theory) and collective decision-making (e.g., social choice theory) is desirable but not strictly required.
Learning outcomes/takeaway skills: The learning outcomes of this tutorial include 1) an understanding of some of the challenges and pitfalls that arise with the evaluation of AI systems, 2) an introduction to methodologies for the evaluation problem, and 3) the pros and cons of each methodology including insights as to when and how to apply them.
Brief description: Progress in AI is often measured indirectly by assessing tangible research artifacts such as models, agents/algorithms, and architectures on specific tasks the field has designed to test wide-ranging capabilities. Today, with the development and deployment of increasingly complex models, agents, and systems evaluated over many ever-more challenging tasks, there is a growing need to execute principled and transparent evaluations. This tutorial covers the fundamentals of the AI evaluation problem. In Part I, we thoroughly review existing methodologies, including statistics, probabilistic choice models, game theory, social choice theory, and graph theory. Then, Part II presents a unifying decision-theoretic perspective of the problem, reviews common pitfalls originating from the unprincipled applications of different methodologies introduced in Part I, and offers principled recipes to avoid these issues in practice.
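As one concrete instance of the probabilistic choice models reviewed in Part I, the sketch below fits ratings to agents from pairwise win/loss records with an Elo-style online update, whose expected-score rule is the Bradley-Terry (logistic) choice model. It is a minimal illustration of the family of methods, not a reference implementation from the tutorial.

```python
from collections import defaultdict

def elo_update(ratings, winner, loser, k=32.0):
    """One Elo-style update: shift both ratings toward the observed outcome.
    The expected score follows the Bradley-Terry (logistic) choice model."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1.0 - expected_win)
    ratings[loser] -= k * (1.0 - expected_win)

# Usage: rate three agents from a small record of (winner, loser) matches.
matches = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
ratings = defaultdict(lambda: 1000.0)
for winner, loser in matches:
    elo_update(ratings, winner, loser)
print({agent: round(r, 1) for agent, r in ratings.items()})
```

Such rating schemes are one of several methodologies whose pros, cons, and pitfalls the tutorial compares.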
T6. Strategic Reasoning and Strategy Representation
Presenters: Munyque Mittelmann (University of Naples Federico II), Laurent Perrussel (Université Toulouse Capitole)
Time: May 19 afternoon
Contact email(s): [email protected]
Target audience/expected background: This tutorial is geared towards researchers in Knowledge Representation, Logics for Multi-agent Systems, and Game Theory.
Learning outcomes/takeaway skills: Attendees will learn what a strategy is, how it can be represented in a logical language, and how such encodings support reasoning about strategies.
Brief description: This tutorial will give an overview of the representation of game strategies, a key issue in strategic reasoning. Strategic reasoning consists of representing and reasoning about players' strategies: how an agent should move while considering other players' possible moves. Several proposals have been made for representing strategies in a logical language, either as first-class objects or only at the semantic level. The tutorial will survey several models for representing strategies and will explore different variants and extensions (handling uncertainty, incomplete knowledge, etc.). It will also dedicate time to the reasoning dimension and the relation with planning.
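As a flavor of the logical languages surveyed, alternating-time temporal logic (ATL) is one standard formalism in which strategies live only at the semantic level; the modality below reads "coalition A has a joint strategy guaranteeing that φ eventually holds, whatever the other agents do."

```latex
% ATL strategic-ability modality: coalition A can enforce "eventually \varphi".
\langle\langle A \rangle\rangle \, \Diamond \varphi
```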
T7. Fairness in AI/ML via Social Choice
Presenters: Evi Micha (University of Southern California), Nisarg Shah (University of Toronto)
Time: May 20 morning
Contact email(s): [email protected], [email protected]
Target audience/expected background: The intended audience broadly includes researchers working on or interested in the topic of algorithmic fairness. The tutorial will not assume any prior knowledge of social choice theory or AI/ML: fairness notions from social choice and the AI/ML application domains will be introduced from the ground up. As such, we envision the tutorial to be well-suited for everyone from undergraduate students interested in working on algorithmic fairness to established faculty members already working on it. Attendees can expect to walk away with knowledge of mathematical notions of fairness stemming from social choice and of how to apply them to a variety of AI/ML domains.
Learning outcomes/takeaway skills: The main goal of this tutorial is to bring to the attention of both ML and Computational Social Choice communities recent works that provide compelling fairness guarantees in AI/ML problems by utilizing notions and techniques from computational social choice. Since fairness in AI/ML (and more broadly, AI ethics) is quickly emerging to be a topic of paramount importance for the future, this is a timely tutorial for a researcher to catch up with the advances and get involved in this promising approach to understanding and ensuring fairness of AI/ML tools.
Brief description: Today, machine learning and AI are becoming important tools for automating decision-making at an unprecedented rate. This has naturally raised questions regarding whether these decision-making tools treat individuals or groups fairly. While fairness is a nascent subject within the AI/ML literature, it has a long history in economics, specifically in social choice theory (and more recently, in computational social choice), where compelling mathematical notions of fairness, such as envy-freeness and the core, have played an important role in the design of fair collective decision-making algorithms. The tutorial will spotlight a recent emerging literature on adopting fairness notions from social choice to design provably fair AI/ML tools. The advantages of such notions over fairness notions proposed in the ML literature will be discussed, and applications such as classification, clustering, federated learning, multi-armed bandits, and rankings will be covered.
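For concreteness, the two social-choice notions named above can be written down for an allocation (A_1, ..., A_n) of resources among n agents with utility functions u_i; the core in particular has several variants across settings, so the second line below is one common formulation rather than the only one.

```latex
% Envy-freeness: no agent prefers another agent's bundle to its own.
u_i(A_i) \ge u_i(A_j) \qquad \text{for all agents } i, j.

% Core (one common formulation): no coalition S can reallocate among itself
% so that every member is at least as well off and some member strictly gains.
\nexists\, S \subseteq N \text{ and feasible } (B_i)_{i \in S}:\;
u_i(B_i) \ge u_i(A_i) \;\; \forall i \in S, \text{ with strict inequality for some } i.
```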
T8. Hands-on Interaction-oriented Programming
Presenters: Amit K. Chopra (Lancaster University), Matteo Baldoni (University of Torino), Samuel H. Christie V (North Carolina State University), Munindar P. Singh (North Carolina State University)
Time: May 20 morning
Contact email(s): [email protected]
Target audience/expected background: This tutorial is presented at a senior undergraduate student level. It is accessible to developers from industry and to students. Typical attendees for our past tutorials have been researchers and practitioners from industry and government, developers, graduate and senior undergraduate students, and university faculty.
Learning outcomes/takeaway skills: After attending the tutorial, attendees will be able to:
- Model multiagent systems via abstractions for modeling interactions, such as norms and protocols.
- Verify protocols against requirements.
- Implement flexible software agents that enact protocols and reason about norms.
- Develop a critical understanding of related work in diverse areas such as programming languages, distributed systems, and microservices.
- Install and use the IOP software suite, including in teaching.
Brief description: Interaction protocols, commitments, and, more generally, norms are major themes in multiagent systems (MAS) research. This tutorial will cover the latest advances on these topics, especially from the perspectives of information modeling (think databases) and decentralized enactment (think messaging and asynchrony). Understanding these perspectives is crucial to designing practical and correct MAS. Specifically, we will introduce Interaction-Oriented Programming (IOP) as an approach for engineering decentralized MAS. We will introduce languages for modeling MAS in terms of commitments and protocols; techniques for verifying protocols; and programming models, both in Python and BDI, for implementing agents informed by the MAS models. We will contrast IOP with traditional agent communication approaches (e.g., AUML, FIPA ACL, and KQML) and mainstream programming approaches. Concepts will be supported by software available in a public repository (https://gitlab.com/masr/). Attendees will be able to install the necessary software and use it to create multiagent systems. The tutorial is presented at the level of senior undergraduate and beginning graduate students and is suitable for researchers and practitioners. Beyond multiagent systems, this tutorial should be valuable to anyone interested in modeling and programming distributed systems and related themes such as asynchrony, concurrency, and fault tolerance.
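To give a feel for the commitment abstraction at the heart of IOP, here is a minimal, purely pedagogical Python sketch of a social commitment C(debtor, creditor, antecedent, consequent) and part of its lifecycle; the class and its states are an illustrative assumption, not part of the IOP software suite.

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    """C(debtor, creditor, antecedent, consequent): the debtor commits to the
    creditor that if the antecedent comes to hold, the consequent will too."""
    debtor: str
    creditor: str
    antecedent: str
    consequent: str
    state: str = "conditional"

    def observe(self, fact: str):
        # Antecedent holds: the commitment detaches (becomes unconditional).
        if self.state == "conditional" and fact == self.antecedent:
            self.state = "detached"
        # Consequent holds: the commitment is satisfied (discharged).
        elif self.state in ("conditional", "detached") and fact == self.consequent:
            self.state = "satisfied"

# Usage: a purchase commitment -- if the buyer pays, the seller ships.
c = Commitment("Seller", "Buyer", "paid", "shipped")
c.observe("paid")
print(c.state)      # detached
c.observe("shipped")
print(c.state)      # satisfied
```

A violated state (a detached commitment whose consequent never materializes) would complete the lifecycle; the tutorial's languages and tools handle such norms, and the protocols that convey them, in full generality.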
T9. Foundations of Cooperative AI
Presenters: Vincent Conitzer (Carnegie Mellon University) and Caspar Oesterheld (Carnegie Mellon University)
Time: May 20 morning
Contact email(s): [email protected]
Target audience/expected background: The tutorial should be accessible to the AAMAS audience at large; in particular, we will not require background in game theory. The audience should be familiar with probability and very basic decision theory (maximizing expected utility). While attendees who do have a background in game theory will probably catch on a bit quicker, especially early in the tutorial, there are more than enough ideas here that are nonstandard from a game-theoretic perspective that they should find plenty that is new.
Learning outcomes/takeaway skills: The main takeaway is an introduction to the emerging area of cooperative AI — standard game-theoretic solution concepts and approaches towards achieving cooperation in game theory, other approaches informed by other disciplines (ethics, social networks, etc.), the philosophical foundations of cooperative AI with technical consequences, its practical aspects, and community resources.
Brief description: AI systems can interact in unexpected ways, sometimes with disastrous consequences. As AI gets to control more of our world, these interactions will become more common and have higher stakes. As AI becomes more advanced, these interactions will become more sophisticated, and multiagent systems and game theory will provide the tools for analyzing these interactions. In particular, bad interactions may be driven by game-theoretic dynamics, such as the tragedy of the commons. However, AI agents are in some ways unlike the agents traditionally studied in game theory. This introduces new challenges, as human ways to maintain cooperation may not apply; but it also introduces new opportunities, as we can design agents in ways that facilitate cooperation and trust, for example by making their reasoning transparent to other agents. These developments have led to the nascent area of cooperative AI. In this tutorial, we will give an introduction to the foundations of this area and lay out future directions for research (no previous background required).
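The kind of game-theoretic dynamic alluded to above is exemplified by the one-shot Prisoner's Dilemma: mutual defection is the unique Nash equilibrium even though both players would prefer mutual cooperation (payoffs listed as row player, column player).

```latex
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\ \hline
\text{Cooperate} & (3,\,3) & (0,\,5) \\
\text{Defect}    & (5,\,0) & (1,\,1)
\end{array}
```

Mechanisms that escape such outcomes, including ones available specifically to designed agents, are the tutorial's central subject.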
T10. A Concise Introduction to Cooperative Multi-Agent Reinforcement Learning
Presenters: Chris Amato (Northeastern University), Frans Oliehoek (TU Delft)
Time: May 20 afternoon
Contact email(s): [email protected]
Target audience/expected background: This tutorial is meant for people who have at least some familiarity with single-agent, fully observable reinforcement learning, but it will be accessible to the general AAMAS audience. The target audience is researchers who are interested in or have begun working in the area and would like deeper knowledge.
Learning outcomes/takeaway skills: The goals of this tutorial are to introduce novices to the field of cooperative MARL and provide a survey of the area. Participants will learn about the standard assumptions, frameworks, algorithms, and benchmarks in the field. The resulting knowledge will allow them to navigate this seemingly complex research area with confidence.
Brief description: Multi-agent reinforcement learning (MARL) has exploded in popularity in recent years. Cooperative MARL, in which all agents share a single, joint reward, is potentially its most popular form. Cooperative MARL has been used to train teams of cooperative agents in video games, robots for applications such as warehouses or the home, and autonomous vehicles. Many approaches have been developed, but they can be divided into three main types: centralized training and execution (CTE), centralized training for decentralized execution (CTDE), and decentralized training and execution (DTE). This tutorial explains the setting, basic concepts, and common methods for the CTE, CTDE, and DTE settings. We will discuss the cooperative MARL problem formulation (the Dec-POMDP model), as well as standard CTE algorithms, DTE algorithms based on Independent Q-Learning (IQL) and Independent Actor-Critic (IAC), CTDE value factorization methods (e.g., VDN, QMIX, QPLEX), and CTDE centralized critic methods (e.g., MADDPG, COMA, MAPPO). We will also discuss misconceptions about current methods, relationships between them, and open questions.
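As a taste of the value factorization methods listed above: VDN represents the joint action value as a sum of per-agent utilities, while QMIX generalizes this to any monotonic, state-conditioned mixing of them, so that the joint greedy action decomposes into per-agent greedy actions.

```latex
% VDN: additive factorization over per-agent utilities.
Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a}) \;=\; \sum_{i=1}^{n} Q_i(\tau_i, a_i)

% QMIX: monotonic mixing conditioned on the global state s.
Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a}) \;=\; f_{\mathrm{mix}}\big(Q_1(\tau_1, a_1), \dots, Q_n(\tau_n, a_n);\, s\big),
\qquad \frac{\partial Q_{\mathrm{tot}}}{\partial Q_i} \ge 0 \;\; \forall i.
```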
T11. Multi-Agent and AI Techniques for Decentralised Energy Systems
Presenters: Valentin Robu (CWI Amsterdam/TU Eindhoven), Zita Vale (GECAD, Polytechnic of Porto), Sarah Keren (Technion)
Time: May 20 afternoon
Contact email(s): [email protected], [email protected], [email protected]
Target audience/expected background: The tutorial is aimed both at AI/multi-agent systems researchers (with a particular focus on PhD students and young researchers) wanting to learn more about the challenges of applying their work to energy systems, and at practitioners from the energy sector exploring the potential of multi-agent solutions. While basic knowledge of multi-agent systems or AI concepts is useful, no specific prior knowledge is required.
Learning outcomes/takeaway skills: The main learning outcome is for researchers with a background in multi-agent systems or AI to understand the specific challenges and issues in energy systems and how techniques from their area of expertise could be used to address them. The tutorial should also give researchers and industry participants from the energy sector an understanding of how multi-agent systems and AI can help address some of the challenges they face.
Brief description: Energy systems are undergoing fundamental changes, having to deal with increasing uncertainty, digitalization, and decentralization. This makes them a natural area for multi-agent systems research, given the increasingly decentralized way in which they are planned and operated. Decentralized, agent-based solutions are playing an increasingly important role in energy systems management and control, yet building successful AI applications for this domain requires an understanding both of the fundamental AI techniques and of the features and constraints of energy systems. This tutorial will provide a broad overview of topics where multi-agent systems can play a key role in energy systems, including the design and simulation of electricity markets, coordination of electric vehicle charging, multi-agent reinforcement learning and its uses in, e.g., network management and demand-side response, and new market mechanisms such as peer-to-peer energy trading, virtual power plants, and community energy coalitions.
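As a small, self-contained illustration of one mechanism named above, the sketch below clears a peer-to-peer energy double auction in merit order: buy bids sorted by descending price, sell offers by ascending price, matched while bids meet offers, with a midpoint pricing rule. The mechanism and numbers are simplified assumptions for exposition, not material from the tutorial.

```python
def clear_double_auction(buy_bids, sell_offers):
    """Match buy bids and sell offers, each given as (price per kWh, kWh),
    in merit order; each trade executes at the midpoint of bid and offer."""
    buys = sorted(buy_bids, key=lambda b: -b[0])     # highest willingness-to-pay first
    sells = sorted(sell_offers, key=lambda s: s[0])  # cheapest supply first
    trades, i, j = [], 0, 0
    while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
        qty = min(buys[i][1], sells[j][1])
        price = (buys[i][0] + sells[j][0]) / 2       # midpoint pricing rule
        trades.append((price, qty))
        buys[i] = (buys[i][0], buys[i][1] - qty)
        sells[j] = (sells[j][0], sells[j][1] - qty)
        if buys[i][1] == 0:
            i += 1
        if sells[j][1] == 0:
            j += 1
    return trades

# Usage: two household bids vs. two prosumer offers.
print(clear_double_auction(buy_bids=[(0.30, 5), (0.25, 3)],
                           sell_offers=[(0.20, 4), (0.28, 6)]))
# -> 5 kWh traded in two lots, at ~0.25 and ~0.29 per kWh; the 0.25 bid is unmatched.
```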
T12. Strategic AI: Bridging Game Theory and Multi-Agent Systems via Autoformalization
Presenters: Agnieszka Mensfelt (Royal Holloway, University of London), Kostas Stathis (Royal Holloway, University of London), Vince Trencsenyi (Royal Holloway, University of London)
Time: May 20 afternoon
Contact email(s): [email protected]
Target audience/expected background: This tutorial is primarily intended for researchers and practitioners in areas such as agent-based modelling, advanced multi-agent frameworks, agentic models of beliefs, autoformalization, automation of multi-agent system simulations, extensions of standard game theory, game definition languages, the development of agents with advanced and human-like reasoning processes, practical and game-theoretic agent applications, and multi-agent systems powered by large language models, as well as those working in adjacent fields. It is designed for intermediate participants with a background in artificial intelligence, computer science, or related disciplines. Participants should have a good grasp of basic mathematical concepts such as linear algebra, probability, and statistics, as well as basic familiarity with formal logic and programming.
Learning outcomes/takeaway skills: By the end of this tutorial, participants will understand the basics of game theory, including different game types and key concepts such as Nash equilibrium and Pareto optimality. They will also gain foundational knowledge of multi-agent systems, covering agent types, coordination mechanisms, and decision-making processes. Additionally, participants will learn about the Game Description Language (GDL) and its extensions, enabling them to formally describe and analyse game environments. They will develop an understanding of belief hierarchies, theory of mind, and hypergame theory as frameworks for modelling agent reasoning and handling misaligned perceptions. The tutorial will introduce autoformalization techniques using large language models to translate natural language into mathematical and logical representations. This will expand participants’ knowledge of how AI systems can reason about strategic interactions. Finally, participants will acquire the basics of building and deploying multi-agent simulations that integrate logic-based decision-making, strategic reasoning, and AI-driven interactions.
Brief description: This tutorial explores the intersection of game theory, multi-agent systems, and AI-driven formalization techniques for modelling strategic interactions between agents, both human and artificial. Participants will build a strong foundation in game theory and MAS through hands-on, interactive examples. The course will then introduce Game Description Languages (GDLs) and their temporal and situational extensions, emphasizing their role in formalizing complex game-theoretic scenarios. Subsequently, participants will examine advanced concepts, including belief hierarchies, theory of mind, and recursive reasoning within agentic contexts. A novel highlight is the integration of large language models for autoformalizing game-theoretic scenarios, bridging subsymbolic and symbolic AI tools. Practical applications will be demonstrated through a custom MAS simulation framework and real-world case studies, showcasing the versatility and relevance of game-theoretic autoformalization in AI research.
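As a preview of the game-theoretic foundations covered, the snippet below enumerates the pure-strategy Nash equilibria of a two-player normal-form game by checking mutual best responses; it is a minimal teaching sketch in plain Python, not the tutorial's simulation framework.

```python
def pure_nash_equilibria(payoffs_row, payoffs_col):
    """Return all pure-strategy Nash equilibria (row action, column action) of a
    bimatrix game; payoffs_row[r][c] / payoffs_col[r][c] are the players' payoffs."""
    n_rows, n_cols = len(payoffs_row), len(payoffs_row[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            # Row player cannot gain by switching rows against column c...
            row_best = all(payoffs_row[r][c] >= payoffs_row[r2][c] for r2 in range(n_rows))
            # ...and column player cannot gain by switching columns against row r.
            col_best = all(payoffs_col[r][c] >= payoffs_col[r][c2] for c2 in range(n_cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Usage: the Stag Hunt (action 0 = stag, 1 = hare) has two pure equilibria.
row = [[4, 0], [3, 3]]
col = [[4, 3], [0, 3]]
print(pure_nash_equilibria(row, col))  # [(0, 0), (1, 1)]
```

A GDL-style formal description of the same game is the kind of target representation that the autoformalization techniques discussed in the tutorial aim to produce from natural-language accounts.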