

The TutORials in Operations Research series is published annually by INFORMS as an introduction to emerging and classical subfields of operations research and management science. These chapters are designed to be accessible for all constituents of the INFORMS community, including current students, practitioners, faculty, and researchers. The publication allows readers to keep pace with new developments in the field and serves as augmenting material for a selection of the tutorial presentations offered at the INFORMS Annual Meeting.

Advances for Quantum-Inspired Optimization

In recent years we have discovered that a mathematical formulation known as QUBO, an acronym for a Quadratic Unconstrained Binary Optimization problem, can embrace an exceptional variety of important optimization problems found in industry, science, and government.

The QUBO model has emerged as an underpinning of the quantum computing areas known as quantum annealing and digital annealing and has become a subject of study in neuromorphic computing. Through these connections, QUBO models lie at the heart of experimentation carried out with quantum computers developed by D-Wave Systems and neuromorphic computers developed by IBM. Computational experience is being amassed by both the classical and the quantum computing communities that highlights not only the potential of the QUBO model but also its effectiveness as an alternative to traditional modeling and solution methodologies.

We illustrate the process of reformulating important optimization problems as QUBO models through a series of explicit examples. We then go further by describing important QUBO-Plus and PUBO models (where “P” stands for “Polynomial”) that go beyond QUBO models to embrace a wide range of additional important applications. These innovations, embodied in software made available through Entanglement, Inc., have produced an ability to solve dramatically larger problems and to obtain significantly better solutions than software offered by D-Wave, IBM, Microsoft, Fujitsu, and other groups pursuing this area.
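As a small, hedged illustration of the reformulation process described above, the classic max-cut problem can be cast as a QUBO. The tiny graph instance below is our own invented example, not one from the chapter, and the brute-force search stands in for the heuristic solvers (tabu search, annealing) used in practice:

```python
import itertools
import numpy as np

# Hypothetical instance: max-cut on a 4-node cycle graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Build Q so that x^T Q x = -(cut value):
# Q[i][i] = -degree(i), Q[i][j] = Q[j][i] = 1 for each edge (i, j).
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 1
    Q[j, i] += 1

# Brute-force minimization of x^T Q x over binary vectors
# (fine for tiny n; real QUBO solvers use heuristics instead).
best_x, best_val = None, float("inf")
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    val = x @ Q @ x
    if val < best_val:
        best_val, best_x = val, x

print(int(-best_val))  # size of the maximum cut
```

For the 4-cycle, the alternating partition cuts all four edges, so the QUBO minimum is -4.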

Speakers: Fred Glover, Gary Kochenberger

Choosing Opponents in Tournaments

We consider the design of tournaments that use a preliminary stage followed by several rounds of single-elimination play. Many sports, including most major U.S. sports, use this format. The tournament design problem involves determining the sequence of matchups required to determine a winner, and it has been extensively studied in the operations research and economics literatures. The conventional design of the single-elimination rounds is a “bracket” based on a prior ranking or seeding of the players. However, this design suffers from several deficiencies, insofar as natural expectations about its results are not satisfied. First, the expectation that higher-ranked players have a higher probability of winning is not satisfied. Second, the probability that the top two players meet is not maximized. Third, there is the widely observed issue of tanking or shirking at the preliminary stage, where there are disincentives for a player to win. Fourth, top-ranked players randomly incur unfortunate matchups against other players, which introduces an unnecessary element of luck. Finally, the use of a conventional fixed bracket is limiting, in that it fails to allow players to consider information that develops during the tournament. Our proposed solution allows higher-ranked players at the single-elimination stage to choose their next opponent at each round. Using data from 1,902 men’s professional tennis tournaments during 2001–2016, we demonstrate the reasonableness of the results obtained. We also perform sensitivity analysis for the effect of increasing irregularity in the pairwise win probability matrix on three traditional performance measures. Finally, we consider strategic shirking behavior and show how our opponent choice design can eliminate or reduce such behavior, including in some famous examples.
In summary, compared with the conventional design, the opponent choice design provides higher probabilities that the best player wins and that the two best players meet, and reduces shirking by both individual players and groups.
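The mechanism can be sketched with a tiny invented example (the win probabilities below are hypothetical, not drawn from the tennis data): in a four-player single-elimination tournament with an “irregular” matchup, letting the top seed choose its semifinal opponent raises its probability of winning the event relative to the fixed 1-vs-4, 2-vs-3 bracket.

```python
# p[(i, j)] = probability that seed i beats seed j (0-indexed seeds).
# Invented, deliberately irregular: seed 1 has a bad matchup vs seed 4.
p = {
    (0, 1): 0.7, (0, 2): 0.8, (0, 3): 0.4,
    (1, 2): 0.6, (1, 3): 0.7,
    (2, 3): 0.6,
}

def beats(i, j):
    return p[(i, j)] if (i, j) in p else 1.0 - p[(j, i)]

def p_top_seed_wins(opponent):
    """P(seed 1 wins) when seed 1 plays `opponent` in the semifinal."""
    a, b = [x for x in (1, 2, 3) if x != opponent]
    win_semi = beats(0, opponent)
    win_final = beats(a, b) * beats(0, a) + beats(b, a) * beats(0, b)
    return win_semi * win_final

conventional = p_top_seed_wins(3)                    # fixed bracket: 1 vs 4
choice = max(p_top_seed_wins(o) for o in (1, 2, 3))  # seed 1 picks its opponent

print(round(conventional, 3), round(choice, 3))  # prints 0.296 0.488
```

Here the top seed avoids its unfavorable matchup by choosing seed 3, lifting its tournament win probability from 0.296 to 0.488.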

The tutorial session is based on a full chapter written by Nicholas G. Hall and Zhixin Liu.

Speakers: Nicholas G. Hall,  Zhixin Liu

Epidemic Modeling, Prediction and Control

The COVID-19 pandemic has disrupted almost every facet of life globally and has posed new challenges to policymakers and questions to researchers. In this article, we discuss some of the key considerations and challenges in modeling epidemics, predicting their diffusion within and across populations, and evaluating policies for their control. Epidemic prediction is challenging due to the uncertain nature of spatial and temporal diffusion, the co-evolution of latent confounding factors, the sparsity of signals particularly during the initial stages of a pandemic, and the complex interactions of individual- and group-level behaviors with mitigating policy interventions. We explain, illustrate, and comment on the strengths and weaknesses of commonly used epidemic models. We classify existing models by the methodologies used (compartmental versus agent-based models), by the nature of the model uncertainties considered (deterministic versus stochastic models), and by the factors included (network effects, disease characteristics, and control actions). We highlight some of the common behavioral traits exhibited by individuals and discuss the theoretical sources of such behavior. Based on our work, we illustrate the formulation of a specific compartmental model that accounts for asymptomatic spread of COVID-19 and the effect of control actions such as testing and lockdowns. We also demonstrate the nature of optimal actions based on analytical and agent-based simulation methodology. Finally, we conclude by discussing lessons learned from the COVID-19 pandemic to better manage any future pandemic.
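A minimal compartmental sketch in this spirit, with an asymptomatic compartment and a contact-reduction control, might look as follows. All parameters are hypothetical placeholders, not the chapter's calibrated values, and simple Euler stepping stands in for a proper ODE solver:

```python
# Compartments: S -> E -> {I (symptomatic), A (asymptomatic)} -> R.
beta, sigma, gamma = 0.4, 0.2, 0.1   # transmission, incubation, recovery rates
frac_asym = 0.4                      # fraction of cases that are asymptomatic
lockdown = 0.5                       # contact reduction from a control action

S, E, I, A, R = 0.99, 0.01, 0.0, 0.0, 0.0   # population fractions
dt, days = 0.1, 180
for _ in range(int(days / dt)):
    new_inf = lockdown * beta * S * (I + A)  # asymptomatics also transmit
    dS = -new_inf
    dE = new_inf - sigma * E
    dI = (1 - frac_asym) * sigma * E - gamma * I
    dA = frac_asym * sigma * E - gamma * A
    dR = gamma * (I + A)
    S += dS * dt; E += dE * dt; I += dI * dt; A += dA * dt; R += dR * dt

print(round(R, 3))  # final recovered fraction (attack rate)
```

Tightening `lockdown` (smaller values) lowers the effective reproduction number and shrinks the final attack rate, which is the kind of trade-off the control analysis above studies.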

Speakers: Ujjal Kumar Mukherjee, Sridhar Seshadri

Five Starter Pieces: Quantum Information Science via Semi-definite Programs

As the title indicates, this manuscript presents a brief, self-contained introduction to five fundamental problems in Quantum Information Science (QIS) that are especially well suited to formulation as Semi-definite Programs (SDPs). We have in mind two audiences. The primary audience comprises Operations Research (and Computer Science) graduate students who are familiar with SDPs but have found it daunting to become even minimally conversant with the prerequisites of QIS. The second audience consists of Physicists (and Electrical Engineers) already familiar with modeling QIS problems via SDPs but interested in computational tools that are applicable more generally. For both audiences, we strive for rapid access to the unfamiliar material. For the first, we provide just enough of the required background material (from quantum mechanics, treated via matrices and expressed in Dirac notation); for the second, we recreate known closed-form solutions computationally in Jupyter notebooks. We hope you will enjoy this little manuscript and gain an understanding of the marvelous connection between SDPs and QIS through self-study or a short seminar course. Ultimately, we hope this disciplinary outreach will fuel advances in QIS through their fruitful study via SDPs.
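One classic QIS problem with an SDP formulation and a known closed-form optimum is minimum-error discrimination of two quantum states; whether it is among the chapter's five problems is our assumption, but it illustrates the "recreate closed-form solutions computationally" approach. The SDP optimum for two equiprobable states is the Helstrom bound, reproduced numerically here:

```python
import numpy as np

# Two equiprobable qubit states as density matrices.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])        # |0><0|
rho1 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # |+><+|

# Helstrom bound: p_success = 1/2 + (1/4) * ||rho0 - rho1||_1,
# where ||.||_1 is the trace norm (sum of absolute eigenvalues).
eigvals = np.linalg.eigvalsh(rho0 - rho1)
p_success = 0.5 + 0.25 * np.abs(eigvals).sum()

print(round(p_success, 4))  # ~0.8536, i.e., cos^2(pi/8)
```

A general SDP solver applied to the measurement-optimization formulation should recover this same value, which is the kind of cross-check the notebooks perform.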

The tutorial session is based on a full chapter written by Vikesh Siddhu and Sridhar Tayur.

Speakers: Vikesh Siddhu, Sridhar Tayur

Integration of Prediction and Optimization with Applications in Operations Management

Big data provides new opportunities to tackle one of the main difficulties in decision-making systems: uncertain behavior that follows an unknown probability distribution. Standard data-driven approaches usually consist of two steps. The first step predicts or estimates the uncertain behavior using data. The second step then finds decisions that optimize an objective function that depends on the output of the first step. Instead of the classical two-step predict-then-optimize (PTO) procedure, this tutorial examines data-driven solutions that integrate these two steps. We first introduce the problem formulation as a contextual stochastic optimization, in which the objective function depends on the unknown uncertainty and the distribution of the uncertainty is associated with contextual information. Massive data is often available to solve this problem, including historical observations of the uncertainty and the contextual information. Machine learning tools have therefore become an important technique for achieving integrated data-driven solutions. It is noteworthy, however, that the goal of an integrated data-driven solution is very different from that of traditional machine learning prediction tasks. Moreover, integrated data-driven methods have shown applicability and effectiveness in many real-world decision-making situations, such as inventory management, COVID-19 pandemic response, and power systems. To demonstrate their practicality and real-world impact, we review current achievements of integrated methods in real-world applications in operations management.
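The gap between two-step PTO and an integrated data-driven decision can be sketched with a simple newsvendor example (our illustration with invented numbers, not one of the chapter's applications). A naive PTO approach predicts mean demand and orders that point forecast; the integrated approach chooses the order quantity that directly minimizes the decision loss, which for the newsvendor is the critical-ratio quantile of the empirical distribution:

```python
import numpy as np

# Hypothetical skewed demand data.
rng = np.random.default_rng(0)
demand = rng.exponential(scale=100.0, size=20000)

price, cost = 10.0, 3.0
critical_ratio = (price - cost) / price  # 0.7

def profit(order, d):
    return price * np.minimum(order, d) - cost * order

# Two-step PTO with a point forecast: predict mean demand,
# then "optimize" as if the forecast were exact.
order_pto = demand.mean()

# Integrated / sample-average approximation: pick the order that
# optimizes the downstream objective on the data directly.
order_int = np.quantile(demand, critical_ratio)

print(profit(order_pto, demand).mean(), profit(order_int, demand).mean())
```

Because demand is skewed, the decision-aware quantile order earns higher average profit than the forecast-then-optimize order, even though the mean is a perfectly good "prediction" by standard accuracy metrics.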

The tutorial session is based on a full chapter written by Max Shen and Meng Qi.

Speakers: Max Shen, Meng Qi

Operations Research on Mobile-Enabled Financial Inclusion: A Roadmap to Impact

In the developing world, the lack of access to high-quality financial instruments has severely hampered the health and general well-being of poor people. This article explores the causes, consequences, and emerging solutions to “financial exclusion,” as well as how Operations Research (OR) scholars can help. It provides a primer on the financial lives of the poor and the promise of “mobile money,” along with academically fruitful and practically important research avenues that OR scholars (and scholars of adjacent fields) can pursue to make an impact. It then outlines three mobile money topics that are ripe for further research: mobile money agent incentives, explored with a game-theoretic framework; inventory decision support, explored with a regulated Brownian motion framework; and agent network optimization, described in terms of classical OR problems such as facility location, assignment, and the traveling salesman problem.

Speaker: Karthik Balasubramanian

Product Recall Research: Dimensions, Methods, and Regulator Implications

Product recalls create complications for manufacturers, inconveniences for regulators, and losses for investors, all while indicating potentially serious health hazards to customers. From a scholarly perspective, recalls are a unique and complex phenomenon that enables novel research explorations and contributions. In this tutorial, we evaluate the recall research domain across three dimensions: causes, decision-making, and effects and across three methods: empirical data, analytical modeling, and behavioral experiments. We provide a particular focus on recall decision-making, as this often-voluntary decision on the part of the manufacturer is fraught with intriguing behavioral biases that can stimulate further research. We conclude by detailing what we perceive to be the key research implications for the three primary U.S. product recall regulators, which includes pulling back the curtain on our research partnership with one of these regulators, the Food and Drug Administration.

Speakers: George P. Ball, Kaitlin D. Wowak, Ujjal K. Mukherjee 

Supply Chain Resilience: Impact of Stakeholder Behavior and Trustworthy Information Sharing with a Case Study on Pharmaceutical Supply Chains

Recent disruptions in many different supply chains have brought the critical issue of supply chain resilience into focus. Despite the notion that most economic markets should adjust to shifts in supply and demand through the entry and exit of competitors, we have seen that even sectors that are not as heavily regulated as the pharmaceutical sector are vulnerable and prone to severe shortages. Although there are many aspects of a supply chain, from design to last-mile logistics, that affect resilience, in this chapter we highlight the importance of incorporating the concepts of (i) stakeholder behaviors and (ii) information availability into future OR/MS models of supply chain resilience. We show how the pharmaceutical industry, which has been plagued by supply chain shortages, offers a strong case study for exploring these concepts. Further, within this context we present a research framework that incorporates these elements. Informed by initial results obtained with this framework, we highlight important new research directions.

The TutORial session is based on a full chapter written by Özlem Ergun, Jacqueline Griffin, Noah Chicoine, Min Gong, Omid Mohaddesi, Zohreh Raziei, Casper Harteveld, David Kaeli, Stacy Marsella.

Speakers: Özlem Ergun, Jackie Griffin

Teaching Supply Chain Analytics: from Problem Solving to Problem Discovery

Mainstream teaching of supply chain analytics focuses on model-driven predictive and prescriptive analytics to solve problems. Data-driven descriptive and diagnostic analytics to define and discover problems is almost entirely missing from the curriculum. The reason, as some believe, is that the latter is easier and of lower value. But Steve Jobs once said: “If you can define the problem correctly, you almost have the solution.” Problem discovery by descriptive and diagnostic analytics is not only highly valuable but can also be difficult; it is simply difficult in a different way from problem solving. One key challenge is data interpretation, that is, transforming data into insights (the INFORMS definition of analytics). In this tutORial, I summarize recent developments and education modules that use descriptive and diagnostic analytics to define and discover problems from data across supply chain domains, from source, make, move, and sell to integration. I showcase the value and methodology through inventory analytics, sourcing analytics, and competitive intelligence.

Speaker: Yao Zhao

The Role of Microgrids in Advancing Energy Equity Through Access and Resilience

Microgrids can play a role in advancing energy equity by (i) extending access to electricity in areas where national grids do not reach, and (ii) enhancing a power system’s resilience (the ability to adapt to and rebound from unanticipated shocks) in times of disaster, such as extreme weather events or power outages on the centralized grid. In the developing world, access to electricity remains a challenge in the most remote rural areas, where incomes are low and grid connection costs are prohibitive. In both developing and developed economies, the rise of extreme weather events has made the resilience of power systems a concern. Wildfires, for example, are becoming widespread: the United States alone saw over 71,000 wildfires burn 10 million acres and more than 12,000 buildings in 2017. The economic burden of wildfires on the U.S. economy is estimated to be between $71.1 billion and $347.8 billion annually. In addition, there is a social cost incurred by vulnerable populations who (i) may be unable to evacuate from the location of a disaster, or (ii) may not have access to mitigating strategies for failed power systems. In this tutorial, we examine the role of microgrids in electricity access and resilience through a systematic review. With respect to electricity access, we investigate the impact of electricity provision through microgrids on outcomes in rural areas of developing countries. For electricity resilience, we assess the effectiveness of microgrids in providing support to power grids in the aftermath of a disaster. We find that microgrids can provide significant benefits in both settings.

Speakers: Alexandra M. Newman, Destenie Nock

Using Simple Games to Teach Supply Chain Management

Classroom simulations (sometimes referred to as games) are often used in operations and supply chain management courses to improve student involvement. Many popular games are quite complicated and require a significant amount of class time. But there is also value in using simple games that quickly illustrate one key point and motivate material. I discuss games that I use in the classroom, drawn almost directly from my research, for three topics: inventory and contracting, competitive bidding, and trust and collaboration. For each topic, I explain the specific goals the games are designed to accomplish. I also discuss the game setup and how to modify games designed for research for use in the classroom. Where appropriate, I also share my typical experience with student reactions and feedback.

Speaker: Elena Katok