-
Joining two popular areas, this track features talks on the use of artificial intelligence (AI) and analytics in healthcare, rather than AI in healthcare alone.
Monday, April 15, 9:10–10:00am
Optimal Classification and Regression Trees
Jack Dunn
For the past 30 years, decision tree methods have been one of the most widely used approaches in machine learning across industry and academia, due in large part to their interpretability. However, this interpretability comes at a price—the performance of classical decision tree methods is typically not competitive with state-of-the-art methods like random forests, boosted trees, and neural networks. We present Optimal Classification and Regression Trees, a novel method that leverages the improvements in optimization over the past three decades to produce decision trees that deliver interpretability and state-of-the-art performance simultaneously. We show comprehensive evidence that this method is tractable and performs competitively with random forests, boosting, and neural networks. We also show how the interpretability of these trees has led to transformational business impact with a variety of cases in healthcare, insurance, financial services, cybersecurity, and more.
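For readers who want a concrete point of comparison, the sketch below is a hedged illustration, not the speaker's Optimal Trees method (which relies on mixed-integer optimization): a greedy scikit-learn CART stands in for the interpretable model and a random forest for the black-box benchmark, showing the interpretability/accuracy trade-off the abstract describes.

```python
# A hedged sketch, not the speaker's Optimal Trees method (which relies on
# mixed-integer optimization): a greedy scikit-learn CART stands in for the
# interpretable model, and a random forest for the black-box benchmark.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=0)   # interpretable
forest = RandomForestClassifier(n_estimators=200, random_state=0)    # black box

for name, model in [("depth-3 tree", shallow_tree), ("random forest", forest)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")

# The fitted tree can be printed as a handful of human-readable rules.
shallow_tree.fit(X, y)
print(export_text(shallow_tree, feature_names=list(X.columns)))
```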
Monday, April 15, 10:30–11:20am
Identifying Suspicious Financial Activity Using Machine Learning
Sanjay Melkote
Detecting unusual and suspicious money laundering and terrorist financing activity is an important problem faced by all financial institutions. Banks are required by regulation to report suspicious activity related to wire transfers and cash structuring. Traditionally, banks have used rules-based programs to monitor transactions for suspicious activity. Although simple, these programs may not result in good accuracy, can generate redundant alerts, need to be manually updated, and cannot easily adapt to changes in the data. In this talk, we explore a machine learning-based approach to detecting suspicious financial activity. Inspired by the literature on class imbalance learning, we develop a hybrid method called EasyEnsembleRF that deeply explores the data while retaining fast training speeds. Using 1-2 years of transaction data for training and testing, we compare EasyEnsembleRF to several benchmark predictive models, including random forests, logistic regression, and neural networks. The results show EasyEnsembleRF has by far the lowest false negative rates of all the models tested, while maintaining low false positive rates. Production versions of EasyEnsembleRF are currently being piloted and are expected to result in enhanced detection of suspicious wire transfer and cash structuring activity.
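The implementation details of EasyEnsembleRF are not given in the abstract; the sketch below illustrates the general EasyEnsemble idea it builds on, with random forests as base learners: train one forest per balanced undersample of the majority (non-suspicious) class and average their scores. The class labels and synthetic data are illustrative assumptions.

```python
# A hedged sketch of the general EasyEnsemble idea with random forests as base
# learners (the speaker's EasyEnsembleRF details are not public): train one
# forest per balanced undersample of the majority class and average the scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def easy_ensemble_rf(X, y, n_subsets=10, random_state=0, **rf_kwargs):
    rng = np.random.default_rng(random_state)
    idx_min = np.flatnonzero(y == 1)          # assumes 1 = suspicious (rare class)
    idx_maj = np.flatnonzero(y == 0)          # assumes 0 = normal
    models = []
    for _ in range(n_subsets):
        # Undersample the majority class down to the minority-class size.
        sample_maj = rng.choice(idx_maj, size=len(idx_min), replace=False)
        idx = np.concatenate([idx_min, sample_maj])
        rf = RandomForestClassifier(n_estimators=100, random_state=random_state,
                                    **rf_kwargs)
        rf.fit(X[idx], y[idx])
        models.append(rf)
    return models

def suspicion_score(models, X):
    # Average the predicted probability of the rare class across the ensemble.
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

# Tiny synthetic demo with roughly a 2% positive rate.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 5000) > 3.2).astype(int)
models = easy_ensemble_rf(X, y, n_subsets=5)
print("scores for first 10 transactions:", suspicion_score(models, X[:10]).round(2))
```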
Monday, April 15, 11:30am–12:20pm
Delivering Impact and Developing The Analytics Roadmap at Memorial Sloan Kettering’s Department of Surgery
Christopher Stromblad
In three years we established a vision, built a team and delivered impact through analytics at the department of surgery of a leading cancer center. Upon interviewing 30 stakeholders at all levels of the organization and receiving support from the Chairman of Surgery at Memorial Sloan Kettering, we have been able to organize and kick off eight multi-phased analytics products that consistently guide decisions across several time horizons. Specific projects include strategic planning for the year 2030, successful management of the tactical surgical block schedule, and the path to leveraging individual predictive modeling for 25,000 patients/year.
Monday, April 15, 1:50–2:40pm
Developing Machine Learning Applications in an Agile Environment
Mary McGuire
Two years ago, the Big Data Services team in the Institutional Analytics and Informatics group at the University of Texas MD Anderson Cancer Center (MDACC) in Houston, Texas, started using Agile methodology for the development and implementation of machine learning (ML) applications to support both clinical and business needs. Agile methodology offered the flexibility and speed to create the latest analytical approaches that met the increasing demands of stakeholders to consolidate data to support decision-making. In this session, attendees will learn how three major ML projects were developed and implemented in the Agile environment at MDACC.
Monday, April 15, 3:40–4:30pm
Novel Computer-Aided Detection of Breast Cancer: Stalking the Serial Killer
Hamparsum Bozdogan
Breast cancer is the second leading cause of death among women worldwide and, because preventing it is beyond current medical abilities, much research attention has been focused on early detection and post-diagnostic treatment. But early detection has flaws. Even mammography, the most effective tool for detecting the cancer, misses up to 30 percent of breast lesions. The missed evidence is attributed to poor-quality radiographic images and to eye fatigue and oversight on the part of radiologists who read the images. In this presentation, we introduce several novel statistical modeling and machine learning techniques for computer-aided detection (CAD) of breast cancer, applied to 1,269 Italian patients, by developing flexible supervised and unsupervised classification methods based on the information complexity criterion. The efficiency and robustness of our approach are demonstrated in computer-aided diagnostic tools that show promise in increasing the ability to spot cancerous lesions in the digital images collected during mammography.
-
Operations research and analytics are driving advancements in government that touch nearly every part of our lives. From improving disaster relief efforts following a storm, to enhancing access to healthcare, to criminal justice and immigration reforms, to ensuring our national security, analytics is saving lives, reducing costs, and improving productivity across the private and the public sectors. Just as business leaders have used O.R. and analytics to make smart business decisions, policymakers in government have increasingly turned to these modern tools to analyze important policy questions. Come see how the latest applications of analytics are solving public policy problems.
Tuesday, April 16, 9:10–10:00am
Interactive Simulations in Support of Warfighters, Intel Analysts and Policy Makers
Sam Savage
The Open SIPmath™ Standard from 501(c)(3) ProbabilityManagement.org allows simulations in any environment to be networked by communicating uncertainties as arrays of Monte Carlo realizations called SIPs. This presentation will show how to roll up operational risk in native Excel or other computer environments that support arrays. This sort of analysis is particularly applicable to infrastructure such as roads, bridges, communications networks, pipelines, etc. Examples will include portfolios of mitigations for gas pipeline risk, military communications networks, and protection against flooding of coastal regions. The presentation is for all Excel users who make decisions under uncertainty, so bring your laptop. No statistical background is assumed, but for those with extensive training in the area, this session should repair the damage. We encourage all participants to download some of the companion models for our presentation in advance; they are available at https://www.probabilitymanagement.org/models. We also encourage you to read our public article in ORMS Today, “Probability Management: Rolling up operational risk at PG&E,” which can be downloaded here: https://www.informs.org/ORMS-Today/Public-Articles/December-Volume-43-Number-6/Probability-Management-Rolling-up-operational-risk-at-PG-E.
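The roll-up idea is easy to prototype outside Excel as well; a minimal sketch in Python, with purely illustrative distributions and numbers, is shown below: each uncertainty is a SIP (an array of Monte Carlo trials), and risks roll up by trial-wise arithmetic, which preserves relationships across trials.

```python
# A minimal sketch of the SIP roll-up idea: each uncertainty is stored as an array
# of Monte Carlo realizations (a SIP), and risks are "rolled up" by element-wise
# arithmetic across trials. All numbers and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(2019)
n_trials = 10_000

# SIPs for three hypothetical operational risks (annual loss, $M).
pipeline_risk = rng.lognormal(mean=1.0, sigma=0.8, size=n_trials)
network_risk  = rng.gamma(shape=2.0, scale=1.5, size=n_trials)
flood_risk    = rng.exponential(scale=2.0, size=n_trials) * rng.binomial(1, 0.3, n_trials)

# Rolling up: trial-by-trial addition of the SIPs.
total_risk = pipeline_risk + network_risk + flood_risk

print("mean total loss:", total_risk.mean())
print("95th percentile:", np.percentile(total_risk, 95))
print("P(total loss > $10M):", (total_risk > 10).mean())
```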
Tuesday, April 16, 10:30–11:20am
Using Multiattribute Decision Analysis for Public-Sector Decisions
Roger Burk
Since businesses exist to make money, the primary decision criterion for business decisions usually comes down to money (or occasionally proxies for future streams of money, such as market share). In the public sector this is not the case. There are typically many stakeholders with many diverse and divergent values that have to be taken into account. The techniques of multiattribute decision analysis (MADA) may not be able to make this problem actually easy, but they can help avoid certain common pitfalls. MADA can help define the value differences so they can be understood and dealt with explicitly, identifying the necessary tradeoffs. This talk will lay out a technically sound and easy-to-apply approach for multiattribute problems, based on an additive value model. The emphasis will be on sound and practical methods that are understandable by clients without special training in analytics or operations research, and on clear methods of presenting results. Several common technical errors, some of them surprisingly popular, will be pointed out. Other issues addressed will include cost issues, uncertainty, portfolio decisions, and sensitivity analysis.
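As a concrete illustration of the additive value model mentioned above (a generic sketch with made-up alternatives, attributes, and weights, not material from the talk), each alternative's overall value is a weighted sum of single-attribute value functions:

```python
# A minimal sketch of an additive multiattribute value model: each alternative's
# score is a weighted sum of single-attribute value functions scaled to [0, 1].
# Alternatives, attributes, and weights are purely illustrative.
import numpy as np

attributes = ["cost ($M)", "lives improved", "environmental impact"]
weights = np.array([0.4, 0.4, 0.2])          # swing weights, must sum to 1

# Raw scores for three hypothetical public-sector alternatives.
raw = np.array([
    [12.0, 50_000, 3.0],   # Alternative A
    [ 8.0, 30_000, 6.0],   # Alternative B
    [15.0, 65_000, 2.0],   # Alternative C
])

def value(col, higher_is_better):
    # Linear single-attribute value function scaled to [0, 1].
    lo, hi = col.min(), col.max()
    v = (col - lo) / (hi - lo)
    return v if higher_is_better else 1.0 - v

scaled = np.column_stack([
    value(raw[:, 0], higher_is_better=False),   # lower cost is better
    value(raw[:, 1], higher_is_better=True),
    value(raw[:, 2], higher_is_better=False),   # lower impact is better
])

total_value = scaled @ weights
for name, v in zip("ABC", total_value):
    print(f"Alternative {name}: overall value = {v:.2f}")
```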
Tuesday, April 16, 11:30am–12:20pm
Starting from Scratch – An Army “start-up”
LTC Cade Saie
New organizations and companies are created every day in the private sector; in the public sector, however, this is much rarer. The creation of Army Futures Command was the most significant Army reorganization since 1973. The concept of the command is simple: build a better Army for years to come by harnessing artificial intelligence and big data analysis to quickly process information and identify trends that will shape modernization efforts. This presentation will share lessons learned from starting from scratch and address the pitfalls associated with developing an enduring strategy and capability for a command managing a $30-plus billion modernization portfolio from day one.
Tuesday, April 16, 1:50–2:40pm
Process Mining: The Capability Every Organization Needs
John Bicknell
Process Mining is an emerging AI/ML technique which may be thought of as an x-ray capability for your organization’s processes. It allows you to see where process challenges reside, simulate change assumptions, make corrections with confidence, and quickly re-measure the upgraded ecosystem — capturing return on investment every step of the way. Your organization has next-level strategic advantage hidden within your IT systems. All systems create “data exhaust” which is rich with process activity trails documenting the actions of users or machines while performing business activities. When process ecosystems are not optimized towards meaningful goals, your organization hemorrhages costs unnecessarily. Failing to adopt cutting edge artificial intelligence to optimize your processes places you at a competitive disadvantage. In this session, you will learn process mining fundamentals, hear impactful cross-industry use cases, and understand why it is the capability your organization needs to compete and transform continually.
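As a small illustration of what the "data exhaust" looks like once mined (a generic sketch with a made-up event log, not the speaker's tooling), the snippet below derives a directly-follows graph, a basic building block of process mining:

```python
# A minimal sketch of a core process-mining step: deriving a directly-follows
# graph (which activity follows which, and how often) from an event log with
# case id, activity, and timestamp columns. The log below is illustrative.
from collections import Counter
import pandas as pd

log = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2, 2, 2, 3, 3],
    "activity":  ["Receive", "Approve", "Ship",
                  "Receive", "Check", "Approve", "Ship",
                  "Receive", "Reject"],
    "timestamp": pd.to_datetime([
        "2019-04-01 09:00", "2019-04-01 10:00", "2019-04-02 08:00",
        "2019-04-01 09:30", "2019-04-01 11:00", "2019-04-02 09:00", "2019-04-03 10:00",
        "2019-04-02 14:00", "2019-04-02 15:00",
    ]),
})

directly_follows = Counter()
for _, trace in log.sort_values("timestamp").groupby("case_id"):
    acts = trace["activity"].tolist()
    directly_follows.update(zip(acts, acts[1:]))

for (a, b), count in directly_follows.most_common():
    print(f"{a} -> {b}: {count}")
```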
Tuesday, April 16, 4:40–5:30pm
Adopt Mission Focused Analytics to Drive Rapid and Comprehensive Data Discovery
Amy Hauck and Savita Raghunathan
Most commercial and government organizations have massive collections of useful data spanning decades. However, with that quantity comes the inability to quickly analyze patterns and discern insights. Performing a holistic assessment may require checking and resolving entities across siloed systems, which can take days. In today’s technology-driven era, there are strong solutions on the market that leverage advances in big data to help uncover hidden or complex relationships, visualize them, and make connections within seconds. In addition to reviewing the principles and use cases for rapid data discovery, this session also dives deep into a successful case study: the Texas Department of Public Safety’s Intelligence and Counterterrorism (ICT) Division. ICT tackled these challenges head-on by developing a state-of-the-art modern analytical data mart. The solution provides ICT analysts a data platform with sophisticated data visualization capabilities that enables them to obtain results from a number of large data sets in a user-friendly, proficient, accurate, and expedient manner on a continuous (24/7) basis.
-
The Analytics Leadership Track was formed to bring together leaders of analytics efforts and analytics groups to discuss: i) how to build a world-class analytics team for your company; ii) how to define, solve, deliver, and communicate an analytics solution; and iii) what an analytics leader must do to succeed within the organization and help the organization/company succeed.
Monday, April 15, 9:10–10:00am
Data: The Fuel of Tomorrow
Aziz Safa
Data is the new fuel that powers the experiences of the future and gives businesses critical insights we once thought were impossible. Data is disrupting and transforming all industries; it truly is the fuel of tomorrow. Control and security will be key for businesses that need to know what’s being collected, how it’s being used, and how personal data can be deleted as required. End-to-end computing, powered by technology like 5G, sets the foundation for delivering critical insights and the incredible experiences of tomorrow. Discover how to drive business transformation with a smart data strategy that is optimized for artificial intelligence (AI) to deliver faster time-to-insight. Learn how to unlock the power trapped in immense data volumes, how quality data enables better application of data science for effective decisions, and the next-generation, future-ready infrastructure platforms needed to make the impossible possible.
Monday, April 15, 10:30–11:20am
Analytics and the Dell Digital Way
Jim Roth
There’s no question that technology is transforming the way our customers live and work at an ever-increasing pace. Dell EMC believes that if we do not digitally transform to meet customers’ needs, we risk becoming irrelevant. Join Jim Roth to hear how Dell Digital is using its award-winning Analytics Framework and the “Dell Digital Way” to drive digital transformation in each step of the software development lifecycle.
Monday, April 15, 11:30am–12:20pm
Why Effective Data Science Needs Human-centered Design
James Guszcza
While data science and AI are often framed in technical terms, the economic success of data science projects often hinges on organizational and human factors. This talk will explore what might be called a “Greater Data Science” perspective: beyond statistics and computer science, ideas from such domains as psychology, behavioral economics, and human-centered design are often needed to bridge the gaps that exist between algorithmic outputs and improved business outcomes. This talk will provide examples in which such considerations have made the difference between project success and failure; and will articulate a framework to guide future applications.
Monday, April 15, 1:50–2:40pm
Setting Up the Analytics Leader for Success
Noha Tohamy
Successful analytics adoption hinges upon strong leadership. In this session, Gartner will share a proven framework that analytics leaders can use to guide their organizations to analytics success. To demonstrate the framework in action, Gartner will discuss its stages, best practices and lessons learned from a supply chain analytics leader’s perspective.
Monday, April 15, 3:40–4:30pm
Lessons for Leaders: Reinventing Your Business with AI and OR
Michael Watson
In this talk, we will share how the AI mindset is helping top companies transform their businesses, using practical examples from a number of industry leaders. How did we arrive at AI as a term? How are companies implementing artificial intelligence to reduce cost, mitigate risk, automate complex processes, and more? How are successful teams built around AI efforts? How does reinforcement learning work, and how can it help leaders make better business decisions and drive better outcomes? Hear about the future of AI and how companies should be thinking and talking about the term externally as they embark on their AI journey.
-
Treating analytics as a process, not just a collection of tools, leads to better outcomes, which is reflected in the 7 Domains of the Certified Analytics Professional (CAP) Job Task Analysis.
Tuesday, April 16, 9:10–10:00am
Why Most Analytics Projects Fail and How To Engage With Your Client For Success
Max Henrion
According to Gartner Research, 85% of data analytics projects fail. Their results don’t get used. Analysts often blame “management resistance”. But maybe we shouldn’t blame our clients. Experienced analysts know that the key to success is close engagement with clients — your boss, senior executives, and other decision makers. Most analytics professionals are skilled with numbers and software, but get little training on how to work with clients. Fortunately, you can learn these “soft skills” — often more easily than your hard-won technical skills. I will explain and illustrate keys to successful engagement, how to:
- Discover your real “client” – who actually makes the decisions, and with what process?
- Ask good questions, listen effectively, and gain clients’ trust.
- Draw influence diagrams to help clients frame and scope their real objectives and decisions.
- Use sensitivity analysis to help clients understand what data and assumptions matter and why.
- Employ agile modeling methods to build decision tools that users find usable and useful.
- Design compelling visualizations to help clients make informed and confident decisions.
Tuesday, April 16, 10:30–11:20am
Understanding The Impact Of Virtual Mirroring-based Learning On Collaboration In A Data And Analytics Function: A Resilience Perspective
Nabil Raad
Large multinational organizations are struggling to adapt and innovate in the face of increasing turbulence, uncertainty, and complexity. The lack of adaptive capacity is one of the major risks facing such organizations as the rapid change in technology, urbanization, socio-economic trends, and regulations continues to accelerate and outpace their ability to adapt. This is a resilience problem that organizations are addressing by investing in Data and Analytics to improve their innovation and competitive capabilities. However, Data and Analytics projects are more likely to fail than to succeed. Competing on Data and Analytics is not only a technical challenge but also a challenge in promoting collaborative innovation networks that are based on two key characteristics of resilient systems: the ability to learn and the ability to foster diversity. In this study, we examine how a newly established Data and Analytics function has evolved over a one-year period. First, we conduct a baseline survey with two sections. The first section captures the structure of the Innovation, Expertise, and Projects networks using network science techniques. In the second section we extract four resilience-based workstyles that provide a behavioral representation of each phase of the Adaptive Cycle Theory. Following the survey, we conduct a controlled experiment in which the Data and Analytics population is divided into four groups. One group acts as a control while the remaining three groups are exposed to three different Virtual Mirroring-Based Learning (VMBL) interventions using simulation techniques. A virtual mirror is a visualization of an employee’s own social network that provides self-reflection as a learning process; the premise is that exposure to self-insights leads to a change in collaborative behavior. After a period of nine months, the baseline survey is repeated and the effects of the interventions are analyzed. The findings provide original insights into the evolution of the Data and Analytics function, the characteristics of an effective VMBL design, and the relationship between resilience-based workstyles and brokerage roles in social networks. The applied and theoretical contributions of this research provide a template for practitioners in Data and Analytics functions while advancing the theory and measurement of resilience.
Tuesday, April 16, 11:30am–12:20pm
Pattern Identification and Analysis of Sensor Outputs by Combining Pattern Markov Chains and Explainable Machine Learning
Teddy Ko
For a highly complex system such as a major military weapons platform, it is difficult to establish a Prognostics and Health Management (PHM) program with predictive and sustainment maintenance capabilities. These platforms often monitor and interpret thousands of sensor statuses and error codes. Error codes from various sensors are often reported simultaneously, in bursts, or in sequences, and it can be difficult to separate the underlying root error condition from sympathetic error codes. In addition, the systems are frequently updated and may be reconfigured for different mission assignments. For such systems, applying analytical models without understanding the complex details of the underlying health monitoring and reporting system usually will not yield accurate health predictions. In this presentation, we describe an approach, “Pattern Identification and Analysis of Sensor Outputs by Combining Pattern Markov Chains (PMC) and Explainable Machine Learning,” that addresses this challenge. PMCs are used to provide humans with a visual and intuitive understanding of the underlying error codes and to correlate sequences of codes across platforms, configurations, and updates. Machine learning can then be more intelligently employed to assess the state of the equipment and to predict remaining useful life. By creating a PMC from the sensor codes, integrated with association rule learning, we can identify associations, correlations, and likely root causes of event sequences with unspecified time/event gaps between events. The PMC makes visual and interactive understanding of error reporting in a very complex system possible and enables us to diagnose and refine the monitoring and reporting system. Using the PMC together with machine learning prediction of the remaining useful life of a part in a system/subsystem can enable us to progressively establish good predictive and sustainment maintenance capabilities for a complex system.
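As a toy illustration of the Markov-chain building block (the codes and sequences below are invented, and the talk's PMC formulation is richer), transition probabilities between error codes can be estimated directly from observed sequences, making likely follow-on codes of a root error visible:

```python
# A hedged sketch of the Markov-chain idea: estimate transition probabilities
# between error codes from observed code sequences. Sequences are illustrative.
from collections import defaultdict

sequences = [
    ["E101", "E204", "E204", "E309"],
    ["E101", "E204", "E309"],
    ["E550", "E101", "E204"],
]

counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1

transition_probs = {
    prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for prev, nxts in counts.items()
}

for prev, nxts in transition_probs.items():
    for nxt, p in nxts.items():
        print(f"P({nxt} | {prev}) = {p:.2f}")
```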
Tuesday, April 16, 1:50–2:40pm
Changing Cisco’s Marketing Behavior through Data Analytics
Viswanath Srikanth
Moving a 2,000-person organization from behavior driven by anecdotes, gut instinct, and siloed data analysis to one that relies on data for planning and ongoing optimization is a multi-year, gut-wrenching exercise in well-grounded, often challenged Marketing Analytics, supported by unstinting executive sponsorship in driving change. This presentation details how Data Analytics has transformed Cisco Marketing’s internal culture, a transformation that has yielded significant growth in Customer Engagement, Marketing Leads, and Marketing Sourced Revenue over the last three years. Bringing together a million online customer sessions a day, several dozen conference engagements, and ongoing conversations through contact centers and social media, and distilling all of these into a few key predictive performance indicators, requires depth of skills and perseverance in Data Analytics, Engineering, and Process Change before meaningful results emerge. Apply these learnings to your Data Analytics initiatives to drive change and results in your organization, and welcome the new doors that a successful Data Analytics initiative opens for you and your organization.
Tuesday, April 16, 4:40–5:30pm
Analytics in Research and Practice: Finding Common Ground
Michael Gorman
It has often been said that academia in general, and research in the fields of analytics in particular, has become too remote and inaccessible to practitioners. On the other hand, practitioners often don’t know how to develop productive relationships with academics. In this talk, I describe fundamental differences between academic and practitioner approaches to problem solving, with examples from my consulting and applied research experience. I describe ways that researchers can better and more frequently bridge this gap and overcome hurdles in such projects. Opportunities for and advantages of “practice-based research” are discussed. Practical suggestions for successful collaboration are made, with an aim toward fostering greater academic-practice collaboration. I demonstrate how both academics and practitioners can benefit from such collaboration.
-
The INFORMS Section on Data Mining has been one of the fastest-growing subdivisions in recent years. This track provides a survey of applications in data mining and knowledge discovery.
Tuesday, April 16, 9:10–10:00am
Diagnosing Device Failures to Propose Appropriate Actions
Deepthi Dastari
Gogo is the inflight internet company whose worldwide inflight Wi-Fi services have made internet and video entertainment a regular part of flying. To provide connectivity on a flight, Gogo installs equipment on the aircraft, which, when it breaks, is replaced by the maintenance crew and sent to Gogo’s testing facility. Sometimes when this equipment is tested at the test bench, no faults are found with it. Based on the data (airborne logs) we have access to, we devised a method to identify which devices are truly broken and need replacement, and we validated it against real cases by conducting a proof of concept. In the process, we built a fault isolation tree that recommends specific actions to be performed by the field technicians. To put this process into production, we built a cloud-based solution that allows the technicians to answer a series of questions based on the data presented to them and then presents recommendations based on their selections. Building this solution included identifying several data sources that contain information about different attributes of the tail, setting up a data pipeline in the AWS cloud to build the features that are part of the fault isolation tree, and creating reports to expose the data. We are also exploring the opportunity of automating this solution by applying machine learning techniques like Bayesian belief networks.
Tuesday, April 16, 10:30–11:20am
Contracts Analytics on Cognitive Data Platform to Reduce Risk from Revenue Leakage: Consolidate, Convert & Classify
Pitipong Lin
This paper lays out business requirements and identifies today’s technology to support the consolidation of contracts into a cognitive data platform and the application of analytics to quickly gain insights into sales contracts. Companies are seeking better, faster ways to analyze contracts to understand obligations and risks that will help close deals faster during contract negotiation. A significant root cause of revenue leakage is risky language in contracts. Often it is due to clients requiring special contract clauses that deviate from the standard template, like “bank guarantee,” or sending contracts written by their legal team for signing. This not only introduces risks but also requires a tremendous amount of time in iterative contract legal reviews. Advanced analytics and cognitive/artificial intelligence technologies add value in pre- and post-contract signing. However, there are many gaps to be addressed throughout the technology pipeline. Consolidating contracts from fragmented repositories into one, converting picture ‘pdf’ contracts into text for processing, enhancing the metadata associated with each contract, and making it easily searchable by sellers are examples of today’s challenges and prerequisites before we can even run any text analytics. Furthermore, we need to address technologies to extract metadata and compare clauses for the various use cases from the legal, procurement, delivery, and accounting perspectives to reduce risk exposure and speed up contract signing. The audience will learn how we: i) consolidated multiple contract repositories from all over the world into one data platform, ii) developed an algorithm to quickly select the most appropriate OCR (optical character recognition) technology to convert scanned contracts into text for the cognitive pipeline, and iii) conducted text analytics to classify the language and type of contracts to produce quality data that allows end users to find and analyze contracts easily.
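As a hedged sketch of the final classification step only (not the speakers' pipeline; the documents and labels below are invented), contract text produced by OCR can be classified with TF-IDF features and a linear model:

```python
# A minimal sketch of contract-type classification after OCR, using TF-IDF
# features and a linear classifier. Documents and labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "The supplier shall provide a bank guarantee covering advance payments.",
    "This master services agreement governs delivery of consulting services.",
    "Licensee is granted a non-exclusive license to the software.",
    "The contractor shall furnish a bank guarantee prior to commencement.",
    "This software license agreement grants usage rights to the licensee.",
    "The parties agree to a statement of work under the services agreement.",
]
labels = ["risky-clause", "services", "license", "risky-clause", "license", "services"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)

print(clf.predict(["Supplier must post a bank guarantee before the first milestone."]))
```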
Tuesday, April 16, 11:30am–12:20pm
Optimal Imputation: Automated Data Quality Assurance and Improvement
Daisy Zhuo
Data quality issues such as missing values and outliers remain a key roadblock to deriving value from big data and machine learning deployments. We developed Optimal Imputation, a novel framework to impute missing data by jointly optimizing the values and the model (KNN, SVM, or decision-tree based models) on the data. In large-scale synthetic and real data experiments, we show Optimal Imputation produces the best overall imputation in the majority of all datasets benchmarked against state-of-the-art methods, with an average reduction of imputation errors by 10-15%. It further leads to significant improvement in regression (0.05 increase in R2) and classification (2% improvement in accuracy) tasks. We demonstrate the impact in real-world applications in insurance, banking, and health care settings.
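Optimal Imputation itself jointly optimizes the imputed values and the model; as a simpler, publicly available stand-in, the sketch below uses scikit-learn's KNN imputer on artificially masked data and measures imputation error against the known ground truth:

```python
# A hedged sketch using KNN imputation as a stand-in (not the Optimal Imputation
# framework described in the talk): hide entries at random, impute, and score.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.impute import KNNImputer

X, _ = load_diabetes(return_X_y=True)
rng = np.random.default_rng(0)

# Hide 20% of entries at random to simulate missing data.
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

imputer = KNNImputer(n_neighbors=5)
X_imputed = imputer.fit_transform(X_missing)

mae = np.abs(X_imputed[mask] - X[mask]).mean()
print(f"mean absolute imputation error on held-out entries: {mae:.4f}")
```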
Tuesday, April 16, 1:50–2:40pm
Anomaly Detection in the Wild
Bradford Tuckfield
Anomaly detection remains crucial for organizations in a variety of industries. It can be used to detect fraud, find data entry errors, optimize revenue streams, and cut costs, among other things. This presentation will analyze and explore several machine learning methods that can improve the speed and accuracy of anomaly detection. This presentation will also include practical suggestions for implementing effective anomaly detection programs in real-world organizations. The presentation will begin with the simplest type of anomaly detection: statistical outlier detection. We will quickly cover the best way to make a Gaussian model of a variable, calculate z-scores, and find outliers. We will also cover nonparametric methods that depend on quantiles and interquartile ranges.
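A minimal sketch of these two baseline detectors, the Gaussian z-score rule and the nonparametric IQR rule, on an illustrative sample:

```python
# Baseline outlier detection: Gaussian z-scores and the IQR (Tukey fence) rule.
# The sample is simulated, with two planted outliers.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(100, 10, size=1000), [160, 35]])

# Gaussian z-score rule.
z = (x - x.mean()) / x.std()
z_outliers = x[np.abs(z) > 3]

# Nonparametric IQR rule.
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
iqr_outliers = x[(x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)]

print("z-score outliers:", z_outliers)
print("IQR outliers:", iqr_outliers)
```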
We will move on to data transformations. Raw outlier detection is not always a workable solution, since some data naturally follows a fat-tailed distribution. In order to successfully perform outlier detection on fat-tailed data, it is necessary to first apply suitable transformations. We explore log-normal distributions, which are heavy-tailed and which commonly occur in nature, and we show that an easy logarithm transformation enables simple Gaussian outlier detection to successfully detect anomalies.
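A short sketch of this transformation step, using simulated log-normal data: the same Gaussian rule that over-flags on the raw, fat-tailed values behaves as expected after taking logs.

```python
# Log transform before outlier detection on fat-tailed (log-normal) data.
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=3.0, sigma=1.0, size=5000)

def gaussian_outliers(v, threshold=3.0):
    z = (v - v.mean()) / v.std()
    return np.flatnonzero(np.abs(z) > threshold)

print("flagged on raw data: ", len(gaussian_outliers(x)))          # many tail points flagged
print("flagged on log(data):", len(gaussian_outliers(np.log(x))))  # near the Gaussian ~0.3% rate
```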
We will move on to seasonality decomposition, which enables the analysis of temporal trends in data – both long-term, directed trends, and also seasonal, cyclic trends. Seasonality decomposition “breaks down” data into component parts, consisting of a trend component, a cyclic component, and a noise component. Statistical outliers in the noise component of decomposed data provide strong evidence for anomalies in the underlying data, but they are not at all obvious before the decomposition is performed. We show how to perform this type of decomposition and how it enables anomaly detection using retail sales data of automobiles.
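A minimal sketch of this decomposition step, using a simulated monthly series rather than the retail data from the talk: decompose, then flag statistical outliers in the residual component.

```python
# Seasonality decomposition for anomaly detection: split a monthly series into
# trend, seasonal, and residual parts, then z-score the residual. Simulated data.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(3)
months = pd.date_range("2015-01-01", periods=60, freq="MS")
trend = np.linspace(100, 160, 60)
seasonal = 20 * np.sin(2 * np.pi * np.arange(60) / 12)
noise = rng.normal(0, 3, 60)
sales = pd.Series(trend + seasonal + noise, index=months)
sales.iloc[30] -= 40          # plant an anomaly masked by trend and seasonality

result = seasonal_decompose(sales, model="additive", period=12)
resid = result.resid.dropna()
z = (resid - resid.mean()) / resid.std()
print("anomalous months:", list(resid[np.abs(z) > 3].index.strftime("%Y-%m")))
```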
Finally, we will cover two advanced types of anomalies: contextual anomalies and collective anomalies. Contextual anomalies are not anomalies except when considered in the context of their immediate neighbors (temporal or otherwise). Collective anomalies are situations in which one individual data point is not an anomaly, but the occurrence of many such data points together constitutes an anomaly. Both types of anomaly require specific data transformation techniques in order to reliably detect them.
We will not only present these anomaly detection methods, but also show ways that we have optimized and improved their performance in innovative ways.
The presentation will include extensive examples of real, working code that enables the implementation of these anomaly detection techniques. We will show code written in R and Python, including a discussion of the differences between them. We will also show examples of some implementation methods that can be written strictly with SQL. We will conclude the presentation with practical considerations drawn from our experience implementing and optimizing these anomaly detection methods.
Tuesday, April 16, 4:40–5:30pm
Applications of Machine Learning to IOT
Adam McElhinney
The proliferation of sensor technologies has resulted in more connected machines than ever before. This change is resulting in huge quantities of sensor data becoming available for analysis. Machine learning algorithms have had a mixed track record of success with these data sources. This talk will give an overview of the state of machine learning as applied to IoT and industrial equipment. It will discuss some of the challenges with current approaches, exciting theoretical advancements, and some “lessons learned” from the field. Specifically:
- What do we mean by IoT?
- What is failure prediction and prognostics?
- What is the value of IoT?
- Differences between physics-based and data-driven approaches to IoT
- What are the challenges from applying data-driven approaches to IoT?
- How can recent advances in machine learning help with the unique challenges of IoT?
- A real case study that illustrates the application of deep learning, gradient boosting, transfer learning, and other machine learning techniques in IoT applications
- What are the opportunities for future enhancements and exciting research in this area?
-
The Decision & Risk Analysis track describes effective ways to aid those who must make complex decisions. In particular, the talks reference systematic, quantitative, and interactive approaches to address choices, considering the likelihood and impact of unexpected, often adverse consequences.
Tuesday, April 16, 9:10–10:00am
The Art of Decision Framing and Uncertainty Analysis for Clarity of Action
Ellen Coopersmith
How can we ensure that teams operate at peak efficiency while enabling managers to make high-quality, informed decisions? Powerful ingredients combine to achieve this outcome, Decision Quality, across global industries: decision framing, insightful uncertainty analysis, and timely dialogue between decision makers and project teams. So, what is a decision frame and why is it important? A decision frame is a group’s bounded viewpoint of a decision problem. It’s important because while all projects have their share of issues and complexity, not all issues are created equal. Structured framing achieves clarity and consensus quickly, which is not only important but critical in defining the range of alternatives that should be considered.
Practical uncertainty analysis brings scenario thinking to the evaluation of alternatives, allowing teams to imagine different possible futures, gain insight and brainstorm hybrid solutions to consider. The result is well thought out clarity of action.
Tuesday, April 16, 10:30–11:20am
Can We Do Better than Garbage-In – Garbage-Out?
Dennis Buede
Can an analytics approach that receives poor-quality (aggregated) data produce useful outputs, outputs that have low mean squared error and are calibrated? A team from IDI confronted this question as part of a research project with the Intelligence Advanced Research Projects Activity (IARPA) to mitigate insider threats. Here, insider threats are people who are driven by rage, national loyalty, or profit to steal, destroy, or sabotage data from an organization. This talk describes the motives and behaviors of insider threats and the details of our multi-modeling solution, which includes data elicitation activities to address missing data (e.g., correlations). The modeling techniques used range from discrete event simulation to copulas to stochastic optimization for simulating populations, and from random forests to support vector machines to naïve Bayesian networks to neural networks for down-selecting the potential threats.
Tuesday, April 16, 11:30am–12:20pm
Precision Medicine vs Accurate Medicine? A Critical Decision Fraught With Risk
Michael Liebman
While scientists can rigorously define the terms accuracy and precision, in healthcare they are used colloquially, and this affects us personally through clinical decisions that involve friends, family, and ourselves. To appropriately use these concepts to guide critical risk management and decision making, we must consider the diverse perspectives and priorities of patients, physicians, payers, pharma, and regulators. We develop comprehensive models of this complexity that can be objectively applied to any disease, implement the model in a “learning environment,” and then use it to identify, prioritize, and quantify elements of risk to improve personal and system-based decision making. We utilize a broad range of analytical tools, e.g., graph theory, stochastic modeling, signal processing, etc., which we apply as appropriate to the specific question that needs to be addressed and the data that are available, across multiple scales. This approach should be transferable to the analysis of other complex networks and system-level problem areas.
Tuesday, April 16, 1:50–2:40pm
Enrich Your Data with Better Questions
Katherine Rosback
What if the secret to creating value with analytics is not so much about the analytics you employ, but more about the opportunities you pursue? Too often, organizations pursue opportunities based on their solution du jour, a hammer looking for a nail. In this session, you will learn how to identify and frame opportunities by doing something that is not as easy as it sounds–asking better questions.
Tuesday, April 16, 4:40–5:30pm
Analyzing Social Media Data To Identify Cybersecurity Threats: Decision Making With Real-time Data
Theodore “Ted” Allen
In 2018, 27.9% of businesses experienced a cybersecurity breach, losing over 10,000 documents and $3M according to the Ponemon Institute. Of breaches known to Ponemon, 77% involve the exploitation of existing bugs or vulnerabilities. In our work, we found that incidents occur in narrow time windows around when vulnerabilities are publicized. Can you optimally adjust your cybersecurity policies and decisions to address emerging threats? Analyzing social media will help you preemptively identify major medium-level vulnerabilities, which managers often ignore, but which contribute to a large fraction of the incidents and warnings. Success requires transforming textual information into numbers, and I present a method, called K-means latent Dirichlet allocation, that identified the Heartbleed vulnerability. I will describe a Bayesian approach as well, and with both methods, you can adjust your cybersecurity as social media identifies new hazards. Related opportunities for closed-loop control using Fast Bayesian Reinforcement Learning are also briefly described. The experimental nature of these methods also offers qualitative benefits, enabling improved maintenance options.
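The speaker's exact K-means latent Dirichlet allocation formulation is not spelled out in the abstract; the sketch below shows the general combination it suggests, with invented posts: fit an LDA topic model to security-related text, then cluster the documents' topic distributions with k-means so emerging vulnerability themes surface as clusters.

```python
# A hedged sketch combining LDA topics with k-means clustering of documents.
# The posts are illustrative; real inputs would be social media streams.
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "openssl heartbeat bug leaks server memory patch now",
    "heartbleed openssl vulnerability exposes private keys",
    "new phishing campaign targets bank customers with fake login pages",
    "credential phishing emails spoof payroll portal",
    "ransomware encrypts hospital file shares demands bitcoin",
    "ransomware outbreak spreads via unpatched smb servers",
]

counts = CountVectorizer(stop_words="english").fit_transform(posts)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(topics)

for post, c in zip(posts, clusters):
    print(c, post[:50])
```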
-
The profession of operations research and advanced analytics is constantly developing, growing, and expanding. This track is intended to bring together practitioners and researchers who are working at the edges of the profession to share new areas, explain open problems, formalize new problem areas that are just coming to the fore, and define challenging questions for further development.
Monday, April 15, 9:10–10:00am
A Tutorial on Robust Optimization
Dick den Hertog
In this presentation we explain the core ideas in robust optimization and show how to successfully apply them in practice. Real-life optimization problems often contain parameters that are uncertain, due to, e.g., estimation or implementation errors. The idea of robust optimization is to find a solution that is immune to these uncertainties. Over the last two decades, efficient methods have been developed to find such robust solutions. The underlying idea is to formulate an uncertainty region for the uncertain parameters against which one would like to safeguard the solution. In the robust paradigm it is then required that the constraints hold for all parameter values in this uncertainty region. It can be shown that, e.g., for linear programming, for the most important choices of the uncertainty region, the final problem can be reformulated as a linear or conic quadratic optimization problem, for which very efficient solvers are available nowadays. Robust optimization is valuable in practice, since it can solve large-scale uncertain problems and it only requires crude information on the uncertain parameters. Some state-of-the-art modeling packages have already incorporated robust optimization technology.
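As an illustration of the reformulation idea described above (a standard result from the robust optimization literature, not material taken from the talk), a single uncertain linear constraint with box uncertainty has an explicit robust counterpart:

```latex
% Uncertain constraint a^T x \le b, with each coefficient allowed to deviate
% from its nominal value \bar a_j by at most \delta_j (box uncertainty):
a^T x \le b \quad \forall a \in U,
\qquad U = \{\, a : \bar a_j - \delta_j \le a_j \le \bar a_j + \delta_j \ \ \forall j \,\}
\;\Longleftrightarrow\;
\bar a^T x + \sum_j \delta_j \, |x_j| \le b .
```

The absolute values can be linearized with auxiliary variables $y_j \ge |x_j|$, so the robust counterpart remains a linear program; an ellipsoidal uncertainty set instead yields a conic quadratic constraint of the form $\bar a^T x + \Omega \lVert \mathrm{diag}(\delta)\, x \rVert_2 \le b$.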
In this tutorial we restrict ourselves to linear optimization. We will treat the basics of robust linear optimization, and also show the huge value of robust optimization in (dynamic) multistage problems. Robust optimization has already shown its high practical value in many fields: logistics, engineering, finance, medicine, etc. In this tutorial we will discuss some of these applications. We will also highlight some of the most important (recent) papers on Robust Optimization.
Monday, April 15, 10:30–11:20am
Interpretable AI
Dimitris Bertsimas
We introduce a new generation of machine learning methods that provide state-of-the-art performance and are very interpretable. We introduce optimal classification (OCT) and regression (ORT) trees for prediction and prescription, with and without hyperplanes. We show that (a) these trees are very interpretable, (b) they can be computed at large scale in practical times, and (c) in a large collection of real-world data sets they give comparable or better performance than random forests or boosted trees. Their prescriptive counterparts have a significant edge on interpretability and comparable or better performance than causal forests. Finally, we show that optimal trees with hyperplanes have at least as much modeling power as (feedforward, convolutional, and recurrent) neural networks and comparable performance in a variety of real-world data sets. These results suggest that optimal trees are interpretable, practical to compute at large scale, and provide state-of-the-art performance compared to black-box methods. We apply these methods to a large collection of examples in personalized medicine, financial services, organ transplantation, and other areas.
Monday, April 15, 11:30am–12:20pm
Driving Transparency and Eliminating Reconciliations Across Your Enterprise Value Chain Using Blockchain
Sharad Malhautra
Global supply chains thrive on innovations in their business models and technology stacks to help drive growth and reduce costs for the enterprise. The session will cover the impact of tokenizing assets and inventory on driving transparency and automation across your value chain. The session will also include several demos of blockchain solutions driving transparency and eliminating reconciliations across enterprise networks.
Monday, April 15, 1:50–2:40pm
Analyzing Everyday Language to Understand People
James Pennebaker
The words people use in everyday language reveal parts of their social and psychological thoughts, feelings, and behaviors. An increasing number of studies demonstrate that the analysis of the most common and forgettable words in English — such as pronouns (I, she, he), articles (a, an, the), and prepositions (to, of, for) — can signal honesty and deception, engagement, threat, status, intelligence, and other aspects of personality and social behavior. The social psychology of language goes beyond machine learning and, instead, identifies the underlying links between word use and thinking styles. Implications for using text analysis to understand and connect with customers, employees, managers, friends, and even yourself will be discussed.
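A minimal sketch of the word-counting idea behind this kind of analysis (the category lists are abbreviated for illustration; dictionary-based tools such as LIWC use far larger ones):

```python
# Count the relative frequency of function-word categories in a text.
import re
from collections import Counter

CATEGORIES = {
    "pronouns":     {"i", "me", "my", "we", "you", "she", "he", "they", "it"},
    "articles":     {"a", "an", "the"},
    "prepositions": {"to", "of", "for", "in", "on", "with", "at", "by"},
}

def function_word_profile(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {cat: sum(counts[w] for w in vocab) / total for cat, vocab in CATEGORIES.items()}

sample = "I went to the store with my friend, and we talked about the plans for the trip."
print(function_word_profile(sample))
```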
Monday, April 15, 3:40–4:30pm
Robotic Process Automation
Russell Malz
The rate of change of new disruptive technologies continues to accelerate, and the impact of AI and ML will be bigger and much faster than the impact the Internet had on business. While startups can be built from the ground up based on new AI and ML capabilities, established companies face unique challenges when they look to leverage AI and ML offerings to innovate. Nowhere is this more true than in the analytics domain, where both skills and task automation are critical for success. In this talk, learn how innovative companies have successfully used Robotic Process Automation (RPA) to improve efficiency, and how the most forward-looking companies are accelerating innovation by establishing the right organizational mindset and leveraging enabling technologies to gain competitive advantage. Working with big data, analytics, and AI strategies at Acxiom, Ayasdi, and Blue Prism, Russ Malz has helped dozens of F500 clients establish and accelerate cognitive strategies.
Tuesday, April 16, 9:10–10:00am
Quantum Computing – Why, What, When, How
Yianni Gamvros
Why Quantum Computing has the potential to significantly disrupt Business Analytics. When will real-world Business Analytics tasks move from classical computers to quantum computers? What are the use cases that can be addressed by Quantum Computing, in the short and medium term? What are current industry and thought leaders working on today? How do Quantum Computers solve optimization problems?
Tuesday, April 16, 10:30–11:20am
Intuition is Unreliable, Analytics is Incomplete
Karl Kempf
In today’s environment, there are many cases where the difference between a good decision and a poor one can be hundreds of millions if not billions of dollars. Decision makers strive to apply their intuition, but intuition is unreliable. Sometimes it is useful, other times misleading. Analytics practitioners want to apply their computational tools, but given the complexity of these cases their models are inescapably incomplete. Nobel Laureates have explored one side of this situation from the perspective of human psychology. Simon pointed out the bounded rationality available to decision makers while Kahneman described the plethora of biases afflicting that same population. But they failed to supply answers to two important questions crucial to analytics professionals. 1) How bad are the decision makers left to rely only on their intuition? Stated a different way, when we start to apply analytics, how much benefit can we reliably expect? 2) Can we benefit from utilizing the intuition? Can analytics inform intuition AND intuition inform analytics to supply a solution superior to either technique applied alone?
We briefly supply an answer to the first question based on projects at Intel Corporation over the past 30+ years. Our answer to the second question occupies the bulk of our presentation. This includes evaluation of ideas from the literature, including “pre-mortems” and “nudges,” but will focus on two related approaches we have found to be especially powerful. At one extreme, we will describe support systems for decision makers in operations, with examples drawn from manufacturing and supply chain. At the other extreme, we address systems that support senior management in deciding product development funding to maximize profits.
Tuesday, April 16, 11:30am–12:20pm
Explainable AI
Jari Koister
Financial services firms are increasingly deploying AI models and services for a wide range of applications. These applications span the credit life cycle, including credit onboarding, transaction fraud, and identity fraud. In order to confidently deploy such models, these organizations require models to be interpretable and explainable. They also need to be resilient to adversarial attacks. In some situations, regulatory requirements apply and prohibit the application of black-box machine learning models. This talk describes tools and infrastructure that FICO has developed as part of its platform to support these needs. The support is uniquely forward-looking, and the platform is one of the first to support these aspects of applying AI and ML for any customer.
What we will cover: (1) examples of financial services applications of AI/ML; (2) specific needs for explainability and resiliency; (3) approaches for achieving explainability and resiliency; (4) regulatory requirements, and how to meet them; (5) a platform that provides support for xAI and Mission Critical AI; (6) further research and product development directions.
Tuesday, April 16, 1:50–2:40pm
Detecting Tax Evasion: A Co-evolutionary Approach
Sanith Wijesinghe
We present an algorithm that can anticipate tax evasion by modeling the co-evolution of tax schemes with auditing policies. Malicious tax non-compliance, or evasion, accounts for billions in lost revenue each year. Unfortunately, when tax administrators change the tax laws or auditing procedures to eliminate known fraudulent schemes, another potentially more profitable scheme takes its place. Modeling both the tax schemes and auditing policies within a single framework can therefore provide major advantages. In particular, we can explore the likely forms of tax schemes in response to changes in audit policies. This can serve as an early warning system to help focus enforcement efforts. In addition, the audit policies can be fine-tuned to help improve tax scheme detection. We demonstrate our approach using the iBOB tax scheme and show it can capture the co-evolution between tax evasion and audit policy. Our experiments show the expected oscillatory behavior of a biological co-evolving system.
Tuesday, April 16, 4:40–5:30pm
Optimal Pay Determination to Reach Diversity Goals
Margret Bjarnadottir
People analytics is a fast-growing field; quantitative methods are becoming mainstream in HR departments. There is a great opportunity for the operations research community to play a significant role in how HR decisions are made in the 21st century. In this talk we will review the growing field of people analytics and take a deep dive into how data-driven decision making can support salary decisions, focusing on demographic pay gaps. The gender pay gap (and other demographic pay gaps) is a topic of discussion in the boardroom, in the media, and among policy makers, with new legislation being passed in a number of states as well as across Europe: in Great Britain, France, and Iceland, to name a few. While the methodology for determining pay discrimination is known and mostly agreed upon (a log-regression model), how to close a pay gap has remained an open question. Who should get raises, and how much? We apply optimization and descriptive analytics to address this knowledge gap. We first describe a cost-optimal approach based on statistics and optimization that can meet the “equal pay for equal work” standard for less than half the cost of the naive method of increasing all female workers’ wages equally. In order to balance cost efficiency with fairness, we discuss other fairness-driven algorithmic approaches that address and close the gender pay gap. These approaches, while more expensive than the cost-optimal approach, can still save significant costs compared to the naïve approach. We further explore the impacts of closing the gap based solely on cost efficiency, which in some cases are surprising; for example, we can show that there may exist men within a firm who, if they receive salary increases, will reduce the gender pay gap. These men strongly typify male employees in terms of traits. We demonstrate the above algorithmic approaches, savings, and costs using real data from our development partners.
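A hedged sketch of the cost-optimal idea, on simulated data and under simplifying assumptions (raises expressed in log-wage units, cost approximated linearly; this is a simplified reading of the general approach, not the authors' implementation): because the OLS gender coefficient is linear in the log wages, driving it to zero at minimum cost can be posed as a linear program.

```python
# A hedged sketch on simulated data: estimate the gender coefficient in a
# log-wage regression, then choose non-negative raises that drive it to zero
# at minimum approximate dollar cost via a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 200
experience = rng.uniform(0, 20, n)
female = rng.integers(0, 2, n)
log_wage = 10 + 0.03 * experience - 0.05 * female + rng.normal(0, 0.1, n)
wage = np.exp(log_wage)

# OLS design: intercept, experience, gender indicator.
Z = np.column_stack([np.ones(n), experience, female])
H = np.linalg.solve(Z.T @ Z, Z.T)      # maps outcomes to estimated coefficients
gamma_hat = (H @ log_wage)[2]          # estimated (negative) gender coefficient
g_weights = H[2]                       # the coefficient is linear in the log wages

# LP: minimize approximate dollar cost sum_i wage_i * r_i over raises r >= 0,
# subject to the re-estimated gender coefficient being non-negative:
#   gamma_hat + g_weights @ r >= 0   <=>   -g_weights @ r <= gamma_hat
res = linprog(c=wage, A_ub=[-g_weights], b_ub=[gamma_hat],
              bounds=[(0, 0.25)] * n, method="highs")
raises = res.x

print(f"estimated gap before: {gamma_hat:.4f}")
print(f"estimated gap after:  {gamma_hat + g_weights @ raises:.4f}")
print(f"employees receiving a raise: {(raises > 1e-6).sum()} of {n}")
print(f"approximate annual cost: {(wage * raises).sum():,.0f}")
```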
-
The purpose of the Franz Edelman competition is to bring forward, recognize, and reward outstanding examples of operations research, management science, and advanced analytics in practice in the world. Finalists will compete for the top prize in this “Super Bowl” of O.R., showcasing analytics projects that had major impacts on their client organizations.
-
Too many analytics projects never get implemented or used. In most cases the analytics and recommendations were based on great work. But impact comes from effective implementation. This track’s speakers will share their experiences in implementing analytic solutions in their organizations and tips for success.
Monday, April 15, 9:10–10:00am
Analytics in Healthcare – A Dose of Reality
Janine Kamath
Browse headline articles of the Wall Street Journal, the New York Times or business magazines and you will find stories of dynamic changes and challenges facing the healthcare industry. Patient experience, affordability, quality and safety of care, provider burnout, the technology revolution, and skyrocketing administrative costs are all components integral to the transformation of the healthcare system. Leveraging applied analytics to enable transformation is a topic of significant interest across healthcare and related industries. The availability of data, sophisticated models and dashboards alone will not result in high-quality, cost-effective and consumer-focused health care. It is critical that integrated, agile, humanistic, and sustainable analytics implementations be enabled in partnership with consumers and key organizational stakeholders. This session includes sharing implementation experiences, lessons learned and key success factors in the realm of applied analytics to address the formidable challenges of health care today and tomorrow.
Monday, April 15, 10:30–11:20am
Analytics in Action – From Concept to Value
Anne Robinson
Delivering proven value through your analytical assets can often be a challenge. As more and more companies are leveraging the initial insights from their analytics investments, differentiation and competitive advantage need to be realized through a more comprehensive approach. Complementing a rigorous end-to-end analytic process with a focus on change management, storytelling, and the right key performance indicators will enable the required elements for success.
Monday, April 15, 11:30am–12:20pm
Analytics Impact: Like A Marathon, The Last Mile Is The Hardest
Jeffrey Camm
Data availability, data storage, processing speed, and algorithms are not what is keeping analytics from reaching its full potential. It is the less technical side of analytics that hinders impact and adoption. In this session, we discuss success factors and impediments to analytics impact. We review the operations research/management science literature on this topic, discuss what is different in the age of analytics, and also draw on our own research and experience with analytics impact.
Monday, April 15, 1:50–2:40pm
Scaling and Sustaining an Analytics Function
Stefan Karisch
Building an analytics function can be challenging, but there are many helpful sources available for getting started and measuring progress. Once an analytics function is established, scaling to achieve results and impact will require ongoing attention. The real measure of success is whether analytics is embedded into all areas of your company. And once that is accomplished, you will need to be prepared to adapt to a constantly changing environment and continually stay engaged with company leaders to share how your team contributes to your stakeholders’ business objectives. This presentation will review some of the risks and challenges involved in a company’s journey to scale and sustain an analytics function, provide some lessons learned through first-hand experience, and share some thoughts on how to keep analytics relevant in dynamically changing environments.
Monday, April 15, 3:40–4:30pm
Implementing Analytics to Automate Decisions at Scale
Carolyn Mooney and Sagar Sahasrabudhe
How can we leverage analytics and its implementation to make decisions in real time and at scale in an online food delivery marketplace? There are three key actors involved in food delivery: diners (ordering food), restaurants (preparing food), and delivery providers (transporting food from restaurants to diners). In orchestrating the actions of these actors, there are a number of key challenges involved: demand prediction, contracting the right number of delivery providers, coordinating handoff of food at restaurants, accurately communicating timing estimates to all the actors, and smart routing that can account for multiple business needs. Efficient food delivery systems require automation of these tactical and operational decisions at scale. This is achieved through effective use of data and analytics to power the systems that make those decisions. We always focus on empowering execution-driven systems that make consistent and repeatable decisions. During this talk, attendees will hear how we manage this at Grubhub using various technologies and analytics platforms, and the process by which they add value to the overall end-to-end workflow.
-
INFORMS grants several prestigious institute-wide prizes and awards for meritorious achievement each year. This track will feature presentations on the Wagner Prize, INFORMS Prize, and the UPS George D. Smith Prize winner. Innovative Applications in Analytics Award (IAAA) and Hackathon finalists will also present. Special sessions will include presentations about INFORMS Education & Industry Outreach and the CAP program.
Monday, April 15, 9:10-10:00am
2018 Wagner Prize Winner Reprise
Analytics and Bikes: Cornell Rides Tandem with Motivate to Improve Mobility
Cornell University
Bike-sharing systems are now ubiquitous across the United States. We have worked with Motivate, the operator of the systems in, for example, New York, Chicago, and San Francisco, to innovate a data-driven approach both to manage their day-to-day operations and to provide insight into several central issues in the design of their systems. This work required the development of a number of new optimization models, characterizing their mathematical structure, and using this insight in designing algorithms to solve them. Here, we focus on two particularly high impact projects, an initiative to improve the allocation of docks to stations, and the creation of an incentive scheme to crowdsource rebalancing. Both of these projects have been fully implemented to improve the performance of Motivate’s systems across the country; for example, the Bike Angels program in New York City yields a system-wide improvement comparable to that obtained through Motivate’s traditional rebalancing efforts, at far less financial and environmental cost.
Monday, April 15, 10:30-11:20am
Freestyle O.R. Supreme Data Hackathon Finalists
Monday, April 15, 11:30-12:20pm
Certified Analytics Professional: CAP® Program Overview Panel
Moderator Zahir Balaporia, FICO
Panelists: Alan Briggs, Data Robot; Additional Panelists TBD
Monday, April 15, 1:50-2:40pm
Freestyle O.R. Supreme Data Hackathon Finalists
Monday, April 15, 3:40-4:30pm
The Value of CAP®
Moderator: Anne Robinson, Kinaxis
Panelists: Norm Reitter, CANA Advisors, Ranganath Nuggehalli, UPS and Aaron Burciaga, Analytics2Go
Tuesday, April 16, 9:10-10:00am
2019 Innovative Applications in Analytics Award Finalist
A Machine Learning Approach to Shipping Box Design
jet.com/Walmart Labs
Having the right assortment of shipping boxes in the fulfillment warehouse to pack and ship customers' online orders is an indispensable and integral part of today's eCommerce business, as it not only helps maintain a profitable business but also creates great experiences for customers. However, it is an extremely challenging operations task to strategically select the best combination of tens of box sizes, from thousands of feasible ones, to serve hundreds of thousands of orders placed daily on millions of inventory products. We present a machine learning approach that formulates the box design problem prescriptively as a generalized version of the weighted k-medoids clustering problem, where the parameters are estimated through a variety of descriptive analytics. The resulting assortment of box sizes is also thoroughly tested on both real and simulated customer orders before deployment into production. Our machine learning approach to designing shipping box sizes has been adopted quickly and widely across the Walmart eCommerce family. Within a year, the methodology has been applied to jet.com, walmart.com, and samsclub.com. The new box assortments have achieved a 1%-2% reduction in the number of boxes, a 5%-8% increase in overall utilization rate, a 7%-12% reduction in order split rate, and 3%-5% savings in transportation cost.
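The abstract describes the method at a high level; as a rough illustration of the underlying idea only (not the authors' actual formulation), the toy sketch below picks k box sizes from a candidate pool so that the weighted "wasted volume" over a set of order profiles is minimized. All volumes, frequencies, and the cost function are invented for illustration.

```python
# A minimal, self-contained sketch (not the production model) of choosing k box sizes
# from a candidate pool so that weighted order "cost" is minimized.
from itertools import combinations

orders = [(1.0, 120), (2.5, 80), (4.0, 60), (7.5, 30), (12.0, 10)]   # (volume, weekly frequency)
candidate_boxes = [1.0, 2.0, 3.0, 5.0, 8.0, 10.0, 13.0]              # candidate box volumes
k = 3                                                                 # number of box sizes to stock

def assignment_cost(order_volume, box_volume):
    """Cost of packing an order into a box: infeasible if too small, else wasted space."""
    return float("inf") if box_volume < order_volume else box_volume - order_volume

def total_weighted_cost(boxes):
    """Each order is assigned to its cheapest feasible box (a k-medoids-style objective)."""
    return sum(freq * min(assignment_cost(vol, b) for b in boxes) for vol, freq in orders)

# Exhaustive search is fine at this toy scale; the generalized weighted k-medoids
# formulation in the talk handles thousands of candidates and millions of orders.
best = min(combinations(candidate_boxes, k), key=total_weighted_cost)
print("chosen box sizes:", best, "weighted wasted volume:", total_weighted_cost(best))
```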
Tuesday, April 16, 9:10-10:00am
2019 Innovative Applications in Analytics Award Finalist
InnoGPS: Innovation Global Positioning System
Singapore University of Technology and Design
Traditionally, the ideation and exploration of innovation opportunities and directions rely on human expertise or intuition and are fraught with high uncertainty. Many historically successful firms (e.g., Kodak, Motorola) lost direction for innovation and declined. To de-risk innovation ideation, we have developed a cloud-based, data-driven, computer-aided ideation system, InnoGPS, at the Data-Driven Innovation Lab at the SUTD-MIT International Design Centre. InnoGPS integrates an empirical network map of all technology domains in the patent database with map-based functions to position innovators, explore neighbourhoods, and find directions to far fields in the technology space. Our inspiration comes, by analogy, from Google Maps for positioning, nearby search, and direction finding in the physical space. The descriptive, predictive and prescriptive analytics in InnoGPS fuse innovation, information and network sciences with interactive visualization. InnoGPS is the first of its kind and may disrupt the intuitive tradition of innovators (e.g., individuals, companies) in innovation ideation by providing rapid, data-driven, scientifically grounded and visually engaging computer aids.
Tuesday, April 16, 10:30-11:20am
2019 Innovative Applications in Analytics Award Finalist
Machine Learning: Multi-site Evidence-based Best Practice Discovery
Georgia Institute of Technology and the Care Coordination Institute
This study establishes interoperability among electronic medical records from 737 healthcare sites and performs machine learning for best practice discovery. A novel mapping algorithm is designed to disambiguate free text entries and provide a unique and unified way to link content to structured medical concepts despite the extreme variations that can occur during clinical diagnosis documentation. Redundancy is reduced through concept mapping. A SNOMED-CT graph database is created to allow for rapid data access and queries. These integrated data can be accessed through a secured web-based portal. A classification machine learning model (DAMIP) is then designed to uncover discriminatory characteristics that can predict the quality of treatment outcome. We demonstrate system usability by analyzing Type II diabetic patients among the 2.7 million patients. DAMIP establishes a classification rule on a training set which results in greater than 80% blind predictive accuracy on an independent set of patients. By including features obtained from structured concept mapping, the predictive accuracy is improved to over 88%. The results facilitate evidence-based treatment and optimization of site performance through best practice dissemination and knowledge transfer.
Tuesday, April 16, 10:30-11:20am
2019 Innovative Applications in Analytics Award Finalist
Taking Assortment Optimization from Theory to Practice: Evidence from Large Field Experiments on Alibaba
Washington University in St. Louis
We compare the performance of two approaches for finding the optimal set of products to display to customers landing on Alibaba’s two online marketplaces, Tmall and Taobao. Both approaches were placed online simultaneously and tested on real customers for one week. The first approach we test is Alibaba’s current practice. This procedure embeds hundreds of product and customer features within a sophisticated machine learning algorithm that is used to estimate the purchase probabilities of each product for the customer at hand. The products with the largest expected revenue (revenue * predicted purchase probability) are then made available for purchase. The downside of this approach is that it does not incorporate customer substitution patterns; the estimates of the purchase probabilities are independent of the set of products that eventually are displayed. Our second approach uses a featurized multinomial logit (MNL) model to predict purchase probabilities for each arriving customer. In this way we use less sophisticated machinery to estimate purchase probabilities, but we employ a model that was built to capture customer purchasing behavior and, more specifically, substitution patterns. We use historical sales data to fit the MNL model and then, for each arriving customer, we solve the cardinality-constrained assortment optimization problem under the MNL model online to find the optimal set of products to display. Our experiments show that despite the lower prediction power of our MNL-based approach, it generates 28% higher revenue per visit compared to the current machine learning algorithm with the same set of features. We also conduct various heterogeneous-treatment-effect analyses to demonstrate that the current MNL approach performs best for sellers whose customers generally only make a single purchase. In addition to developing the first full-scale, choice-model-based product recommendation system, we also shed light on new directions for improving such systems for future use.
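As a concrete illustration of the two ingredients named above (MNL purchase probabilities and a cardinality-constrained assortment choice), the following toy sketch uses made-up utilities and revenues and simply enumerates small assortments; the production system described in the talk fits a featurized MNL and solves the problem at vastly larger scale.

```python
# Minimal sketch of the MNL ingredients described above, with hypothetical utilities
# and revenues; enumeration is only viable at toy scale.
import math
from itertools import combinations

utilities = {"A": 1.2, "B": 0.8, "C": 0.5, "D": 0.1}   # hypothetical fitted MNL utilities
revenues  = {"A": 10.0, "B": 14.0, "C": 18.0, "D": 25.0}
cardinality = 2                                         # display at most 2 products

def mnl_probs(assortment):
    """MNL purchase probabilities; the no-purchase option has utility 0."""
    weights = {p: math.exp(utilities[p]) for p in assortment}
    denom = 1.0 + sum(weights.values())                 # 1.0 = exp(0) for no-purchase
    return {p: w / denom for p, w in weights.items()}

def expected_revenue(assortment):
    return sum(revenues[p] * prob for p, prob in mnl_probs(assortment).items())

# Cardinality-constrained assortment optimization by enumeration.
best = max(
    (set(s) for r in range(1, cardinality + 1) for s in combinations(utilities, r)),
    key=expected_revenue,
)
print("best assortment:", best, "expected revenue per visit:", round(expected_revenue(best), 2))
```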
Tuesday, April 16, 11:30-12:20pm
2019 Innovative Applications in Analytics Award Finalist
Transparent Machine Learning Models for Predicting Seizures in ICU Patients from cEEG Signals
University of Wisconsin, Duke University, Harvard University, and Massachusetts General Hospital, Westover
Continuous electroencephalography (cEEG) technology was developed in the 1990s and 2000s to provide real-time monitoring of brain function in hospitalized patients, such as critically ill patients suffering from traumatic brain injury or sepsis. cEEG technology has permitted physicians to characterize electrical patterns that are abnormal but are not seizures. As it turns out, these subtle signals recorded by cEEG monitoring are indicative of damage to the brain, of worse outcomes in the future, and, in particular, of true seizures. If we can detect in advance that a patient is likely to have seizures, preemptive treatment is likely to prevent additional brain injury and improve the patient’s overall condition. However, predicting whether a patient is likely to have a seizure (and trusting a predictive model well enough to act on that recommendation) is a challenge for analytics, and in particular, for machine learning. This project is a collaboration of computer scientists from Duke and Harvard with expertise in transparent machine learning, and neurologists from the University of Wisconsin School of Medicine and Public Health and the Massachusetts General Hospital. The predictive model developed from this collaboration for predicting seizures in ICU patients is currently in use, and it stands to have a substantial impact in practice. Our work is the first serious effort to develop predictive models for seizures in ICU patients.
Tuesday, April 16, 11:30-12:20pm
2019 Innovative Applications in Analytics Award Finalist
Using Advanced Analytics to Rationalize Tail Spend Suppliers at Verizon
Verizon
The Verizon Global Supply Chain organization currently governs thousands of active supplier contracts. These contracts account for several billion dollars of annualized Verizon spend. Managing thousands of suppliers, controlling spend, and achieving the best price per unit (PPU) through negotiations are costly and labor-intensive tasks within Verizon strategic sourcing teams. Large organizations often engage a plethora of suppliers for many reasons – best price, diversity, short-term requirements, etc. While a few larger-spend suppliers can be managed manually by dedicated sourcing managers, managing thousands of smaller suppliers at the tail spend is challenging, can often introduce risk, and can be expensive. At Verizon, we leveraged a unique blend of descriptive, predictive and prescriptive analytics, as well as Verizon-specific sourcing acumen, to tackle this problem and rationalize tail spend suppliers. Through the creative application of Operations Research, Machine Learning, Text Mining, Natural Language Processing and Artificial Intelligence, Verizon reduced spend by multiple millions of dollars and secured the lowest price per unit (PPU) for the sourced products and services. Other benefits realized are centralized and transparent contract and supplier relationship management, overhead cost reduction, decreased contract execution lead time, and service quality improvement for Verizon’s strategic sourcing teams.
Tuesday, April 16, 1:50-2:40pm
UPS George D. Smith Prize winner
Tuesday, April 16, 4:40-5:30pm
2018 INFORMS Prize Reprise – BNSF Railway
Pooja Dewan and Juan Morales, BNSF Railway
-
This track features leaders from companies and academia sharing the application of analytics in marketing functions such as promotions, pricing, advertising, and market forecasting, along with best practices for analytics in overall marketing and a look at emerging technologies. The track provides an open forum for participants to connect with their peers and the invited speakers. Come learn from industry and academic experts how to use advanced analytics and operations research, share and network, and take away valuable techniques to grow your marketing analytics capability.
Tuesday, April 16, 9:10–10:00am
Avnet
Nishant Nishant
The ‘Ask Avnet’ intelligent agent was conceived to solve two complex business problems: first, how do you connect an ecosystem comprising different websites without destroying value; and second, how do you leverage analytics to provide a better customer experience with finite resources? Join Nishant to learn how what began as an idea on a post-it note has morphed into a successful customer service channel powered by continuous analysis of customer interaction data.
Tuesday, April 16, 10:30–11:20am
Pricing and Revenue Management: Different Flavors for Different Industries
Daniel Reaume
Pricing and revenue management analytics have driven billions of dollars of increased profits across every business sector. But business and technical challenges differ greatly between industries and companies and success depends on tailoring solutions appropriately. This talk presents an overview of such solutions for six verticals – service retail, B2B, automotive (and other OEMs), media, hotels, and cruises. For each vertical, it will address some of the key challenges and present examples of analytics used to address them. Moreover, it will highlight how further company-specific tailoring is often critical to maximizing value.
Tuesday, April 16, 11:30am–12:20pm
Managing All Pricing Levers
Maarten Oosten
One of the challenges of price optimization is that the price that the manufacturer lists for a product is not the same as the price the buyer pays (sales price) or the price that the seller receives (net price). Besides various types of costs related to delivery and sales, there can be many price levers in play. Examples are end customer rebates, distributor discount programs, distributor charge-backs, royalties, channel rebates, and sales commission. These price levers are also controlled by the sellers, but at different levels, not necessarily the transaction level. For example, a distributor rebate applies to a subset of products for a specific period. When optimizing prices, all these levers should be taken into account. After a brief discussion of the general concepts, we will illustrate them by means of an example: trade promotion optimization. Trade promotion optimization is similar to promotion optimization, except that it approaches the problem from the perspective of the manufacturer. The manufacturer negotiates the promotions with the various retailers. Therefore, the manufacturer should model the behavior of the end users as well as that of the retailer. After all, if the promotion is not attractive for the retailer, there won’t be a promotion. In this paper we discuss the challenges this poses in both the estimation of the promotion effects and the optimization of the promotion schedule, and propose models that address these challenges.
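To make the list-price/sales-price/net-price distinction concrete, here is a toy price-waterfall calculation; every lever value below is invented, and the set of levers is only illustrative of the kinds named above.

```python
# Toy "price waterfall" illustrating list price vs. the price the buyer pays vs. the
# price the seller nets; all lever values are hypothetical.
list_price = 100.00

# Levers visible to the buyer (applied at or before the transaction).
on_invoice_discount = 8.00          # negotiated discount on the invoice
promotion = 5.00                    # temporary promotional reduction
sales_price = list_price - on_invoice_discount - promotion

# Levers settled off-invoice, often at a different level than the transaction.
end_customer_rebate = 3.00          # accrued per unit, paid out quarterly
distributor_chargeback = 2.50       # reimburses the distributor's contract price gap
channel_rebate = 1.50
sales_commission = 0.04 * sales_price
net_price = sales_price - (end_customer_rebate + distributor_chargeback
                           + channel_rebate + sales_commission)

print(f"list {list_price:.2f} -> sales {sales_price:.2f} -> net {net_price:.2f}")
```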
Tuesday, April 16, 1:50–2:40pm
Aligning to Improve Data Analytics
Lora Cecere
Drowning in data. Short on insights. This is the dilemma of most companies. However, analytics projects, and the use of new technologies, are fraught with issues. Data scientists full of energy and speaking a new language need to be cultivated and aligned to drive value in mainstream processes. Aligning business and data analytics groups is tough, but not impossible. In this session, Lora Cecere, Founder of Supply Chain Insights, shares personal experiences from design thinking sessions coupled with quantitative research.
Tuesday, April 16, 4:40–5:30pm
Deploying Product Recommendation Engines at Scale
Kenneth Sanford
Deploying data science projects is difficult for any organization. For marketers looking to compete with the disruptive forces of GAFA and other data science-first companies, the successful deployment of data science projects is imperative to long-term success. In this talk we will discuss people and process strategies to quickly prototype and deploy data science at scale. We will discuss several strategies for data science team organization and several alternative methods of deploying models. The talk will close with a detailed description of how a company builds and deploys product recommendations as a REST API at scale.
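As a purely illustrative sketch of the "model behind a REST API" pattern the talk closes with (not the company's actual deployment), the snippet below serves a stand-in recommendation model through a Flask endpoint; the route name, catalog, and scoring rule are hypothetical.

```python
# Minimal sketch of serving a recommendation model as a REST API; the real deployment
# described in the talk is far more involved.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a trained model: score = popularity, filtered by the user's segment.
CATALOG = [
    {"sku": "P1", "segment": "value",   "popularity": 0.91},
    {"sku": "P2", "segment": "premium", "popularity": 0.84},
    {"sku": "P3", "segment": "value",   "popularity": 0.65},
]

@app.route("/recommendations", methods=["GET"])
def recommendations():
    segment = request.args.get("segment", "value")
    top_k = int(request.args.get("k", 2))
    ranked = sorted(
        (item for item in CATALOG if item["segment"] == segment),
        key=lambda item: item["popularity"],
        reverse=True,
    )
    return jsonify({"segment": segment, "items": ranked[:top_k]})

if __name__ == "__main__":
    app.run(port=5000)   # e.g. GET http://localhost:5000/recommendations?segment=value&k=2
```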
-
While optimization is one of the core methods in the O.R. toolkit, it can also be viewed as “prescriptive analytics.” Presentations in this track use optimization to solve practical problems encountered by those in practice, in order to achieve the best outcome.
Tuesday, April 16, 9:10–10:00am
The Benefits of Lean Cell Design on Business Processes
Daria Migounova
Lean Six Sigma methodology has been making its way into conversations across manufacturing and industrial giants for years. A way to streamline processes and eliminate waste, Lean Six Sigma concepts allow us to combine data analysis with effective implementation to build lasting change into production. But what about people? This talk will cover a case study about executing sustainable change in a human-focused process using Lean Six Sigma and data analytics, with a touch of psychology.
Sun Life Financial’s “Plan Change” team processes changes to in-force insurance policies. The team had been struggling with elongated cycle times and an ever-growing backlog of requests. In 2018, the Best Practices team initiated a Lean Six Sigma improvement project to identify root causes and implement a new structure. Based on the data findings, we set up a “work cell” – a concept borrowed from lean manufacturing – that brought four teams together for 6 weeks to identify improvement opportunities and track data.
Tuesday, April 16, 10:30–11:20am
Combining Choice Modeling and Nonlinear Programming to Support Business Strategy Decisions
John Colias
Using a case study with simulated data, we demonstrate how to integrate a choice model into a customer lifetime value (CLV) simulation and optimization tool. While the methodology is validated with AT&T data, due to the proprietary nature of the results, only results using simulated data will be presented. Because the typical choice modeling study includes both nominal and numeric attributes as drivers of customer value, purchase probabilities, market share, and revenue, the nonlinear programming problem becomes non-trivial, requiring the use of state-of-the-art algorithms. Our solution makes use of several nonlinear programming algorithms through AMPL software.
From this presentation, industry experts will understand the features and benefits of choice modeling, required resources to implement and combine choice modeling and nonlinear programming, and the types of business strategy objectives that can be supported by combining Choice Modeling and Nonlinear Programming.
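As a stand-in for the kind of calculation described (the talk itself uses AMPL and richer CLV models), the sketch below plugs a simple logit choice model into a one-dimensional nonlinear optimization over price; the utility coefficients are hypothetical.

```python
# A small sketch of wiring a choice model into a nonlinear program: pick a price that
# maximizes expected revenue per customer under a binary-logit purchase probability.
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 4.0, 0.08          # hypothetical fitted choice-model coefficients (intercept, price slope)

def purchase_prob(price):
    """Binary-logit purchase probability as a function of price."""
    u = a - b * price
    return 1.0 / (1.0 + np.exp(-u))

def negative_revenue(price):
    return -price * purchase_prob(price)

result = minimize_scalar(negative_revenue, bounds=(1.0, 200.0), method="bounded")
print(f"optimal price ~ {result.x:.2f}, expected revenue ~ {-result.fun:.2f}")
```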
Tuesday, April 16, 11:30am–12:20pm
Optimizing Product-Site Qualifications in Western Digital’s Supply Network
Karla Hernandez
Western Digital Corporation (WDC) designs, develops, and manufactures a broad range of data storage products. Before any of these products can be manufactured, they must first be qualified at one or more manufacturing sites (qualifying a product is a time-consuming process that ensures that a site is capable of manufacturing the product in accordance with all quality requirements). Although expensive, qualifying a product at multiple sites increases the likelihood that 1) demand can be met for a reasonably broad range of demand scenarios while 2) satisfying site capacity constraints and 3) attempting to meet minimum site-utilization targets. This talk describes an optimization algorithm used by WDC to find a balance between minimizing the number of new qualifications required and satisfying objectives 1-3 above. The algorithm begins with a set of existing product-site qualification pairs and proceeds by adding new pairs one at a time in a greedy manner.
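A minimal sketch of that greedy flavor is shown below, with hypothetical products, sites, capacities, and demand scenarios; WDC's actual algorithm also handles utilization targets and far larger data.

```python
# Greedy sketch: starting from existing product-site qualifications, repeatedly add the
# single new qualification that most reduces unmet demand across scenarios.
import itertools

products, sites = ["P1", "P2"], ["S1", "S2", "S3"]
capacity = {"S1": 100, "S2": 80, "S3": 60}
scenarios = [{"P1": 90, "P2": 70}, {"P1": 140, "P2": 40}]      # demand scenarios
qualified = {("P1", "S1"), ("P2", "S2")}                        # existing qualifications

def unmet_demand(quals):
    """Fill each scenario's demand using qualified sites (a rough proxy objective)."""
    total = 0
    for demand in scenarios:
        remaining_cap = dict(capacity)
        for product, qty in demand.items():
            for site in sites:
                if (product, site) in quals and qty > 0:
                    used = min(qty, remaining_cap[site])
                    remaining_cap[site] -= used
                    qty -= used
            total += qty
    return total

budget = 2                                                       # new qualifications allowed
for _ in range(budget):
    candidates = [p for p in itertools.product(products, sites) if p not in qualified]
    best = min(candidates, key=lambda pair: unmet_demand(qualified | {pair}))
    qualified.add(best)
    print("added qualification", best, "-> unmet demand", unmet_demand(qualified))
```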
Tuesday, April 16, 1:50–2:40pm
Innovative Rolling Stock Scheduling in Railroad Operations: Optimizing Seat Probability
Erwin Abbink
In this presentation, we discuss a new, innovative rolling stock scheduling approach in which smart card data are used to estimate the number of passengers per train between every pair of stops. Instead of a fixed estimate (e.g. the mean, or the median), we use the complete distribution of expected passenger demand in our rolling stock optimization model. With this integration of passenger routing and rolling stock scheduling, we can optimize the seat probability. The potential benefits of this approach are equivalent to cost savings of about 50 million euros per year. Seat probability is one of the main KPIs in the contract between the Dutch government and NS, the main Dutch railway operator. In 2019, a mid-term review of the current concession will take place and all KPIs have a set target for this specific year. For the seat probability KPI, a significant improvement needed to be achieved. As a pilot, the sprinter light train (SLT) fleet was scheduled with the new approach and evaluated in April 2018. Results of this pilot looked very promising for the complete set of schedules. By using this new approach to compute the complete rolling stock schedule starting from December 2018, NS will be able to achieve the seat probability target. Using this new optimization method also required changing working methods to a large extent, both at the level of the planning experts and at the level of company policies. We will discuss how we supported this transformation.
Tuesday, April 16, 4:40–5:30pm
Optimizing Power Mix Trajectories for Long-Term Policy Making
Violette Berge
In a world of transition, planning the future of our power system becomes a key issue. What mix of renewable energy, storage, distributed resources, and thermal resources will compose the future generating portfolio, in order to satisfy a growing need for electricity accompanied by energy efficiency measures? Answering these questions and computing the optimal mix trajectory requires detailed modeling of the power system and networks, with fine time and spatial granularity and joint optimization of the deployment of generation solutions and flexibility solutions (storage, demand management, etc.). Through several years of research and development, Artelys developed an optimization method based on decomposition techniques to compute optimal trajectories. These techniques, run on HPC clusters, enable Artelys to carry out a number of studies for long-term policy makers. After introducing our optimization model, we will present an application to the French case, showing how to optimize the development of renewable energy sources by 2050.
-
This track features analytics leaders from top companies in traditional and nontraditional industries such as travel, hospitality, transportation, entertainment, high tech, media, and retail. The track promotes and disseminates the latest developments in pricing, revenue management, and distribution by providing an open forum that allows participants to connect with their peers and the invited speakers. Come learn from industry experts how to use advanced analytics and operations research to better understand and target your customers, improve your pricing practices and demand forecasts, and drive revenue and market share growth.
Monday, April 15, 9:10–10:00am
Behavioral Influences in Procurement Auctions and Pricing Decisions
Wedad Elmaghraby
This tutorial will cover research in market design for procurement and competitive bidding, with an emphasis on designing procurement auctions and understanding how human behavior affects their performance. We will then explore how online marketplaces have enlarged the set of market design challenges, and discuss recent research in online Business-to-Business and Business-to-Consumer electronics markets, along with key insights that are relevant for other market sectors.
Monday, April 15, 10:30–11:20am
Fandango 360: Data Driven Movie Marketing and Content Recommendations Platform
Reeto Mookherjee
Fandango has built a targeting, attribution and personalization platform, Fandango360. In this talk, we outline the building blocks of this platform. These blocks fall into three groups:
- Stitching together cross-device, cross-platform digital and offline interactions of users and creation of a probabilistic user graph,
- Surfacing predictive behavioral and affinity segments with millions of micro-segments at scale, and
- Generation of propensity scores for known and unknown (cookie) users to future movie slates.
We will conclude the talk with some of the performance marketing results, insights, and learnings from the studios that have used this platform over the past 12 months or so across 70+ movie marketing campaigns.
Monday, April 15, 11:30am–12:20pm
Dynamic Pricing of Omni-Channel Inventories
Pavithra Harsha
Omnichannel retail refers to a seamless integration of an e-commerce channel and a network of brick-and-mortar stores. An example is cross-channel fulfillment, which allows a store to fulfill online orders in any location. Another is price transparency, which allows customers to compare the online price with store prices. This paper studies a new and widespread problem resulting from omnichannel retail: price optimization in the presence of cross-channel interactions in demand and supply, where cross-channel fulfillment is exogenous. We propose two pricing policies that are based on the idea of “partitions” of the store inventory that approximate how this shared resource will be utilized. These policies are practical because they rely on solving computationally tractable mixed integer programs that can accept various business and pricing rules. In extensive simulation experiments, they achieve a small optimality gap relative to theoretical upper bounds on the optimal expected profit. The good observed performance of our pricing policies results from managing substitutive channel demands in accordance with partitions that rebalance inventory in the network. A proprietary implementation of the analytics, which also includes demand estimation, is commercially available as part of the IBM Commerce markdown price solution. The system results in an estimated 13.7% increase in clearance-period revenue based on causal model analysis of the data from a pilot implementation for clearance pricing at a large U.S. retailer.
Monday, April 15, 1:50–2:40pm
Revenue Management and Pricing in the Car Rental Industry
Montgomery Blair
Years ago the field of revenue management progressed from yield management to a more holistic approach that includes dynamic pricing. As the evolution continues, it is now seeking even tighter integration with marketing, sales, e-commerce, customer experience, all things digital, and the supply side. This expanding scope of “RM” is primed to capitalize on advancements in computing power and the proliferation of data. RM should be among the first disciplines not only to benefit greatly from advancements in artificial intelligence but also to push the frontier of its application to decision science. We will share a glimpse into our journey and highlight some key areas, steps we are taking, and the barriers we face as we expand our cognitive technologies within car rental:
- Descriptive analytics with big data
- Predictive analytics and the need for solid demand models. The semantics matter, as there is a difference between demand, forecast, and plans: unconstrained vs. constrained, input vs. output, etc.
- Prescriptive analytics – advancing beyond traditional optimization
- Change management – As machines do more of the granular day-to-day work, the role that people play and the overall process will change.
Monday, April 15, 3:40–4:30pm
Intelligent Retailing Decision Support – A Bold New Vision for the Industry
Hunkar Toyoglu
Customers today are empowered by the internet more than ever. They have new expectations and hence are forcing pricing and revenue management business strategies to change. As customers become more demanding, intelligent retailing and decision support are increasingly essential to optimizing revenue in an end-to-end personalized retailing environment. Innovation is required to optimize beyond only the room/seat and move toward total revenue optimization. Applying Artificial Intelligence and Machine Learning models to enable smarter retailing decision support could unlock a novel set of insights on the market that companies can leverage to grow revenue and share. Learn how to leverage data inside and outside your organization to augment traditional retailing wisdom with Machine Learning capabilities. Specifically, we will talk about designing recommender systems to find the best items to display from a large list of candidate retail products and to determine their display order based on customer segments.
-
Analytics leaders from top companies share the latest developments in pricing, revenue management, and distribution. Use advanced analytics & O.R. to better understand & target your customers, improve pricing practices & demand forecasts, and increase revenue & market share.
Monday, April 15, 9:10–10:00am
Increasing Economic Sustainability of Electric Power Planning Under Uncertainty
Gianmaria Leo
Electric power planning is a critical decision-making process that aims to achieve the right trade-off between safety, continuity of energy supply, and sustainability. This business practice is often challenging, since uncertain demand and operational conditions have a remarkable impact. The problem often becomes more complex in the presence of renewable sources: increased risks of supply disruption or energy spillage arise from the high variability of renewable generation. Our work focuses on a system serving a restricted, isolated electric grid managed by a major European electricity provider. Our Predictive-Prescriptive pipeline supports the entire process. We introduced a Robust Optimization approach that reduces costs while improving sustainability. We compared this new approach with more typical solutions adopted in production by performing an ex-post analysis of different planning recommendations over twenty days of operations. The optimization model is computationally effective, providing high-quality daily plans in less than one second.
Monday, April 15, 10:30–11:20am
Crowdsourcing Analytics: Case Study of the Fishing for Fishermen Maritime Data Challenge
Paul Shaprio
Illegal, unreported, and unregulated (IUU) fishing is a global problem that threatens ocean ecosystems and sustainable fisheries. By applying innovative analytic techniques to existing data sources, the worldwide crowdsourcing algorithm development competition, the Fishing for Fishermen Maritime Data Challenge, sought to develop a method to more effectively identify and react to the global IUU fishing threat. The use of crowdsourcing yielded algorithms that were surprisingly accurate and reliable in their ability to identify fishing activity (98%) and then to help identify the type of that activity (91% to 98%). The algorithms are now publicly available for analysts and law enforcement authorities globally to support combating IUU fishing. This case study highlights the different approaches and technology used to solve the challenges, and illustrates how to gain access to some of the world’s leading algorithmic scientists for a fraction of the cost required to either hire or contract such talent.
Monday, April 15, 11:30am–12:20pm
Improving Traceability to US Air Force Capability Assessments
Calvin Bradshaw
Issue: Not all US Air Force (USAF) resources that are planned are programmed (i.e., resourced and budgeted); the delta between the two translates into capability gaps and a level of strategic risk. With limited personnel resource funding available, senior decision makers need to be able to objectively articulate personnel capability gaps, assess risk, and prioritize funding.
Background: An enterprise responsible for 60% of the USAF portfolio manages five distinctive core capabilities. A task library database was created to link enterprise core capabilities to Program Element Codes (PECs). Although the PECs are linked to tasks, the number of specific personnel (by career field) needed to accomplish the tasks is not connected to funded personnel requirements. This makes it difficult for capability experts to defend their inputs to annual funding assessments.
Question: Is there a way to link Enterprises to Core Capabilities to Tasks to PECs to Career Fields?
Methodology: For the first time, a linkage between enterprises, core capabilities, PECs, tasks, and manpower has been developed. We can now provide an objective, consistently defined way to compute risk. The classic approach to calculating risk is to combine the likelihood of failure and the associated consequence of a given outcome. An enterprise personnel baseline capability demonstration study is conducted examining over 275 career fields using binomial and sigmoid functions.
Insights: The linkage of Enterprises to Core Capabilities to Tasks to PECs to Career Fields allows senior planners and programmers to assess personnel capability by specific expertise and funding levels. This allows enterprise staff and capability experts to develop objective, defensible core capability assessments.
Application: This analysis can be used as an objective way to compute risk and prioritize personnel resource allocation at the enterprise level. Understanding potential personnel shortfalls at the career field level should better inform core capability analysis, and thus increase credibility and defensibility of strategic risk assessments.
Monday, April 15, 1:50–2:40pm
Application of Text Analysis to Quality Control of Operational Document Sets
Thor Osborn
Business operations are typically guided by procedural and policy-oriented document sets that were developed over time by many contributors. As a document set grows and begins to tax the memory capacity of individuals, the risk increases that additional documents offer little incremental information. Overcrowding of the conceptual space in a definitional document set tends to confound classification. This confounding risk is especially important in Human Resource Management with regard to job definition documents, because distinctions in compensation absent clear differentiation of qualifications and job duties expose the firm to legal, financial, and reputational risks. This presentation addresses the business case for differentiating the job description set and demonstrates an algorithmic, text analytics-based approach for comparing document differentiation against a contextually derived minimum standard using the Kolmogorov-Smirnov test. Analysis, quality improvement actions, and resulting impacts to differentiability are shown using a corpus of 250 job descriptions representing a large healthcare services organization.
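As a small illustration of the kind of check described above, the sketch below compares a document set's pairwise TF-IDF cosine similarities against a contextually derived baseline using a two-sample Kolmogorov-Smirnov test; the documents, the baseline sample, and the interpretation are all invented for illustration.

```python
# Toy check: is this set of job descriptions differentiated enough relative to a baseline?
from itertools import combinations
from scipy.stats import ks_2samp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_descriptions = [
    "Registered nurse providing bedside care and medication administration",
    "Nurse practitioner diagnosing patients and prescribing treatment plans",
    "Financial analyst preparing budgets, forecasts, and variance reports",
    "Facilities technician maintaining HVAC and building equipment",
]
reference_similarities = [0.05, 0.08, 0.10, 0.12, 0.15, 0.18]   # contextually derived baseline

tfidf = TfidfVectorizer().fit_transform(job_descriptions)
sims = cosine_similarity(tfidf)
pairwise = [sims[i, j] for i, j in combinations(range(len(job_descriptions)), 2)]

stat, p_value = ks_2samp(pairwise, reference_similarities)
print(f"KS statistic={stat:.3f}, p-value={p_value:.3f}")
# A large statistic with a small p-value flags that this set's similarity profile departs
# from the baseline, prompting a closer look at weakly differentiated descriptions.
```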
Monday, April 15, 3:40–4:30pm
Dynamic Call Center Simulation at JPMorgan Chase: Reducing Disruptive Business Resiliency Testing
Jay Carini
The JPMorgan Chase Operations Research and Data Science Center of Excellence (ORDS CoE) commenced a multi-year project to provide the internal Business Resiliency team with a discrete event simulation-based application to 1) support strategic and tactical planning and 2) reduce the number of required physical shutdown tests (which increase operational costs and negatively impact customer service). In the event of an outage, Business Resiliency seeks key insights about impacted locations:
- What will happen to service level during an outage?
- How will mitigation strategies (e.g., adding headcount, reducing volume and/or processing time) impact service level?
The approach leverages discrete event simulation modeling to estimate the expected impacts to service level due to an outage. The dynamic design of the model, combined with an integrated user interface and dashboard, allows users to simulate any combination of 100+ call centers and/or 50+ locations, and to customize mitigation scenarios to compare with the “do-nothing” scenario. The presentation will highlight the following key components of the analytics, methodology, and considerations required to provide an end-to-end, data-driven solution to the Business Resiliency team (a brief sketch of some of these techniques follows the list):
- Development and validation of the simulation engine
- Overview of underlying techniques required to support the simulation including:
  - Survival functions to model abandonment behavior
  - Fitting historical handle time data to probability distributions by call type and building a handle time function for use in model parameterization
  - Generating output statistics through bootstrapping
- Overview of the Tableau-based dashboard used to broadly communicate required model insights to the business partners
- Lessons learned through collaboration with internal IT teams, project managers, and the competing interests within the business to develop a viable solution
- Next steps as ORDS leverages simulation and analytics to support Business Resiliency
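The sketch below illustrates, with simulated stand-in data, three of the techniques listed above: an exponential survival model of caller abandonment, fitting handle times to a lognormal distribution, and bootstrapping a service-level estimate. It is not the ORDS CoE implementation.

```python
# Hypothetical sketch of simulation ingredients: abandonment survival, handle-time
# fitting, and a bootstrapped service-level confidence interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# 1) Survival function for abandonment: probability a caller is still waiting at t seconds.
mean_patience = 90.0
def abandonment_survival(t):
    return np.exp(-t / mean_patience)

# 2) Fit observed handle times (seconds) to a lognormal distribution by call type.
handle_times = rng.lognormal(mean=5.5, sigma=0.4, size=1000)      # stand-in for history
shape, loc, scale = stats.lognorm.fit(handle_times, floc=0)

# 3) Bootstrap a confidence interval for "share of calls answered within 60 seconds".
wait_times = rng.exponential(scale=45.0, size=500)                # stand-in simulation output
boot = [np.mean(rng.choice(wait_times, size=wait_times.size, replace=True) <= 60)
        for _ in range(2000)]
low, high = np.percentile(boot, [2.5, 97.5])

print(f"P(still waiting at 30s) = {abandonment_survival(30):.2f}")
print(f"lognormal handle-time fit: sigma={shape:.2f}, median={scale:.0f}s")
print(f"service level (<=60s): 95% bootstrap CI [{low:.2%}, {high:.2%}]")
```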
-
Features O.R. and analytics with a security focus and a keen eye toward influencing policymakers in the national defense and security space, including state and local organizations. This track is the first in a series of events culminating in the 2020 INFORMS Conference on Security.
Tuesday, April 16, 9:10–10:00am
CHUPPET: Learn Simply with Simple Data Science
Matthew Powers
Data overload occurs when organizations are so awash in available information that they gain little insight under current business practices. Commercial machine learning software offers solutions to this overload, but bureaucratic, budgetary, and/or technical limitations may prevent organizations from leveraging such software. A solution to these limitations is the Excel-based Content Heuristic Unstructured Parsing and Predictive Electronic Tool (CHUPPET), and the follow-on tool CHUPPET Next. CHUPPET and CHUPPET Next identify relevant themes within relatively large sets of text collections while mitigating the effect of analyst bias or lack of subject matter experience. Widely accepted machine learning, data mining, and classification techniques discriminate between relevant terms while quantifying relevance and document sentiment so that objective trends are identifiable. The CHUPPET tools enable analysts with varying levels of technological experience to apply rigorous computational methods to unstructured, textual data. Excel is commonly available and familiar to many; CHUPPET and CHUPPET Next require no special installation, require minimal training, and can be tailored by programmers familiar with Visual Basic for Applications. The CHUPPET tools are available for use on request at no cost. As of this writing, the CHUPPET tools are regularly used in the Joint Lessons Learned Division and the Center for Army Lessons Learned to generate periodic reports and to rapidly respond to leadership questions. CHUPPET earned its developer personal recognition from the Chairman of the Joint Chiefs of Staff, General Joseph Dunford, USMC.
Tuesday, April 16, 10:30–11:20am
Modeling Vehicle Fleet Readiness: The Challenges of Mixed-Fidelity Simulation
Abstract to be added.
Tuesday, April 16, 11:30am–12:20pm
Optimizing Army Cyber Branch Readiness and Manning Under Uncertainty: Stochastic and Robust Goal Programming Approaches
Colonel Andrew Hall
The Department of Defense (DoD) Cyber Mission Force (CMF) was established in 2012 to carry out DoD’s cyber missions. The CMF consists of cyber operators with the mission to augment traditional defensive measures and defend priority DoD networks and systems against priority threats; defend the US and its interests against cyberattacks of significant consequence; and support combatant commands by generating integrated cyberspace effects in support of operational plans and contingency operations. Given the unique expertise required of military personnel to execute the DoD cyber mission, the US Army created the Army Cyber Branch (ACB) to establish managed career fields for Army cyber warriors, while providing a force structure with successive opportunities for career development and talent management via leadership and broadening positions, technical training, and advanced education. In order to optimize readiness and manning levels across the Army’s operating and generating forces, the Army Cyber Proponent (Office Chief of Cyber) at the Cyber Center of Excellence sought analytical decision support to project the optimal number of accessions, promotions, and personnel inventory for each cyber specialty across the Army cyber enterprise needed to support a 30-year career life cycle.
We proffer the Cyber Force Manning Model (CFMM), an advanced analytics framework that uses stochastic and robust goal programming approaches to enable the modeling, experimentation and optimization necessary to help solve the Army’s Cyber Workforce Planning Problem under uncertainty. The stochastic and robust optimization variants of the CFMM provide tremendous value by enabling useful decision-support to senior cyber leaders and force management technicians, while optimizing ACB readiness by effectively projecting the optimal number of personnel needed to meet the demands of the current force structure.
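For readers unfamiliar with the goal-programming machinery mentioned above, here is a deliberately tiny, deterministic sketch of the general idea (the CFMM itself is stochastic/robust and far larger): two hypothetical cyber specialties compete for a limited accession budget, and weighted deviations from authorized manning goals are minimized.

```python
# Toy deterministic goal program, not the CFMM: penalize under-manning deviations from
# authorized strength subject to an accession budget.
from scipy.optimize import linprog

# Decision vector: [x1, x2, under1, over1, under2, over2]
goals = [50, 30]                       # authorized positions per specialty (hypothetical)
accession_budget = 70                  # total accessions available this cycle
c = [0, 0, 3.0, 0.1, 2.0, 0.1]         # weight under-manning (3 and 2) far above over-manning

A_eq = [[1, 0, 1, -1, 0, 0],           # x1 + under1 - over1 = goal1
        [0, 1, 0, 0, 1, -1]]           # x2 + under2 - over2 = goal2
b_eq = goals
A_ub = [[1, 1, 0, 0, 0, 0]]            # x1 + x2 <= accession budget
b_ub = [accession_budget]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
x1, x2, under1, _, under2, _ = res.x
print(f"accessions: specialty 1 = {x1:.0f}, specialty 2 = {x2:.0f}")
print(f"projected shortfalls: {under1:.0f} and {under2:.0f}")
```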
Tuesday, April 16, 1:50–2:40pm
A Modeling Framework for Scenario-Based Portfolio Planning with Uncertainty
Shaun Doheney
In order to get the most warfighting capability out of a limited operating budget, the military must make analytically defensible cost-benefit tradeoffs between various programs. To do so, military planners generally use a capabilities-based assessment (CBA) process. While this process varies by Service within the Department of Defense, it usually begins with the examination of strategic and operational guidance, followed by the execution of numerous studies, wargames, and field experiments. This analysis leads to the identification of capabilities required by the military to execute its responsibilities across a range of military operations. As a part of this CBA process, each Service conducts enterprise-wide risk analysis to compare the costs and benefits of various capability solutions in order to deliver a draft budget necessary to fund the development and sustainment of the best possible fighting force. The development of a draft budget is a complex problem fraught with difficult-to-measure quantities, strong advocacy for existing programs, and political sensitivities. In this talk, we present an interactive modeling approach in which key assumptions may be adjusted by diverse stakeholders to create a conversation. This is accomplished through a SIPmath simulation in native Excel, in which thousands of Monte Carlo trials are run per keystroke without the need for external simulation software. The model showcases a number of analytical features that we have found useful in many contexts and that can be re-assembled in numerous ways: scenario-based portfolio planning; SIPmath interactive simulation using the Excel Data Table; S-curve models of effectiveness; animated graphical displays; and the ability to roll up results from lower-level models into higher-level models. Our modeling framework offers traceable, defensible linkages between the various mandated products of the CBA process and the draft budget developed and submitted by the Service.
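As a rough Python stand-in for the kind of calculation described (the talk's model runs as a SIPmath simulation in native Excel), the sketch below runs Monte Carlo trials over uncertain scenario demands and scores two hypothetical portfolios through an S-curve effectiveness function; all numbers are invented.

```python
# Hypothetical Monte Carlo scoring of capability portfolios with S-curve effectiveness.
import numpy as np

rng = np.random.default_rng(42)
trials = 10_000

def s_curve(investment, midpoint, steepness=0.15):
    """Effectiveness (0-1) as an S-shaped function of investment level."""
    return 1.0 / (1.0 + np.exp(-steepness * (investment - midpoint)))

portfolios = {"A": {"ISR": 40, "Logistics": 60}, "B": {"ISR": 70, "Logistics": 30}}
midpoints = {"ISR": 50, "Logistics": 45}

# Uncertain scenario demand weights per capability, drawn once per trial.
demand = {cap: rng.triangular(0.5, 1.0, 1.5, size=trials) for cap in midpoints}

for name, alloc in portfolios.items():
    effectiveness = sum(demand[cap] * s_curve(alloc[cap], midpoints[cap]) for cap in alloc)
    shortfall_risk = np.mean(effectiveness < 1.0)     # chance of falling below a target
    print(f"portfolio {name}: mean effectiveness {effectiveness.mean():.2f}, "
          f"P(below target) {shortfall_risk:.1%}")
```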
Tuesday, April 16, 4:40–5:30pm
Security and Critical Infrastructure
Matthew Carlyle
Damage to critical infrastructure systems makes headlines, whether it is a consequence of deliberate acts by people trying to gain attention, or of accidents, failures, or natural disasters. The most obvious effects of such damage are the short-term, high-cost, high-visibility consequences, but these should not necessarily be the primary concerns when deciding how to protect infrastructure systems. We discuss protecting critical infrastructure systems from the point of view of protecting the long-term function that the system provides, and determining the security consequences of the loss of that function. We review some basic modeling techniques, give an overview of the insights we derive from modeling critical infrastructure from this viewpoint, and conclude with some cautions against drawing “obvious” conclusions about system security without performing the appropriate modeling and analysis.
-
Learn how newer developments are being applied in practice in various industries, to make sure that the right things get to manufacturers, wholesalers, retailers, and consumers when they are needed.
Monday, April 15, 9:10–10:00am
Dynamic Pricing for Varying Assortments
Kris Ferreira
Most demand learning and price optimization approaches in academia and practice rely on learning the demand of each product in an assortment over time via price experimentation. Although these approaches may work well when the retailer offers a static assortment, they fail to learn demand and price optimally when retailers change their assortment frequently. With the growth of e-commerce as well as fast fashion business models, retailers are changing their assortments more frequently. In this research, we develop a demand learning and price optimization approach for retailers whose assortments change frequently. Our approach can be described as a “learning-then-earning” approach that uses conjoint analysis and optimal experimental design to learn attribute-based demand, and subsequently uses this information to price optimally. We test our algorithm in a field experiment at an e-commerce company, which demonstrates that our algorithm quickly learns demand and sets prices that significantly increase revenue.
Monday, April 15, 10:30–11:20am
Analytics in Compliance Risk Management – An Adaptive and Recursive Approach Using Machine-Learning Methods
Jonathan Yan
Compliance risk in supply chain management is one of the most critical risks that businesses are exposed to. While leading the industry in commercial credit and operational risk analytics, Dun & Bradstreet (D&B) has worked with multiple globally diversified enterprises and formulated an adaptive and recursive approach for proactive compliance risk management of global suppliers. This talk will explain what this analytics approach entails and how this approach has successfully helped businesses in today’s challenging compliance landscape. In this talk, we will start with data and its various aspects, which is the base of analytics, and then move on to methods of analytics as well as method comparisons. Finally, we will explain in depth how analytical results can be applied to and incorporated as an input in an adaptive and recursive process over time.
Monday, April 15, 11:30am–12:20pm
Closing the Gap Between Forecasting and Inventory Management
Joshua Hale
Although methods for forecasting and inventory optimization are well established, the intersection of the two is less developed. Nevertheless, minimizing forecast error and inventory cost separately may lead to sub-optimal overall performance. When forecasts are employed for inventory decisions, it is advantageous to consider the resulting inventory performance instead of more commonly used forecast error metrics. Forecasting methods are typically evaluated based on statistical properties without taking into account lead times, inventory cost, or service levels. Similarly, inventory methods are optimized independently of the forecasting process and treat the forecast as simply an input. In this talk, we discuss methods developed to improve the integration between forecasting and inventory management. The goal of this work is to move beyond sequential optimization of forecast and inventory models to a framework in which forecasting and inventory management are treated as an integrated cycle where each one influences the other.
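A small sketch of the point being made: the order quantity implied by an error-minimizing point forecast is generally not the cost-minimizing order once asymmetric shortage and holding costs enter. The demand model and cost parameters below are hypothetical.

```python
# Toy comparison of an error-minimizing forecast vs. a cost-aware order quantity.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.gamma(shape=4.0, scale=25.0, size=5000)        # stand-in lead-time demand
underage, overage = 9.0, 1.0                                 # cost per unit short / left over
critical_fractile = underage / (underage + overage)          # = 0.9

def inventory_cost(order_qty):
    return np.mean(underage * np.maximum(demand - order_qty, 0)
                   + overage * np.maximum(order_qty - demand, 0))

point_forecast = demand.mean()                               # minimizes squared error
quantile_order = np.quantile(demand, critical_fractile)      # targets the service level

print(f"order = mean forecast ({point_forecast:.0f}): cost {inventory_cost(point_forecast):.1f}")
print(f"order = 90th percentile ({quantile_order:.0f}): cost {inventory_cost(quantile_order):.1f}")
```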
Monday, April 15, 1:50–2:40pm
Variance-Damping or Variance-Amplifying? A Look at Analytics in the Supply Chain
Clark Pixton
Analytics has increased firms’ ability to adapt their decision policies in response to incoming information. The benefits of such an ability are evident. However, we focus on an important but overlooked tradeoff, which comes from the fact that adaptive decision policies may introduce more variation into a business process than do their non-adaptive counterparts. We differentiate between “variance-damping” and “variance-amplifying” analytics, giving common examples of each. Our analysis includes foundational operational decisions such as inventory ordering and pricing, both of which have been the focus of much analytics research recently. Based on our theory, we give managerial insights for a firm’s analytics and supply chain strategies.
Monday, April 15, 3:40–4:30pm
Mitigating Supply Chain Risk by Combining Internal, External, and Open Source Data into a Single User Experience
Harrison Smith
Supply chain risk can be mitigated by fusing a myriad of data sources (internal, external, and open source) into a single user experience. Big data analytics solutions can detect and monitor various supply chain risks, including counterfeiting and fraud in known vendor networks and beyond.
This session will focus on:
- Counterfeiting and fraud in the supply chain network — The insertion of counterfeit goods into licit supply chain networks is a growing multi-billion-dollar threat to all commercial and governmental sectors. Counterfeiters have been exploiting the internet, especially social media and the dark web, to introduce their products into the market. This threat is amplified when goods are produced in developing countries. Big data analytic solutions leverage natural language processing, machine learning, and data science to detect potential instances of fraud and/or counterfeiting and create decision-oriented, actionable insights for leaders at all levels and across all business functions.
- Counterfeiting and its associated role in threat convergence — Changing geopolitical dynamics and interconnected supply chains present a significant risk to all organizations. Understanding the crucial role counterfeiting plays in the alliance of espionage actors, criminals, opportunists, terrorists, state-sponsored entities, and cartel syndicates is paramount when using NLP, machine learning, and other data science techniques to actively assist in identifying these risks.
- Counterfeit detection using natural language processing — If a supply chain stretches to the developing world, particularly to factories that produce small, unsophisticated components, it is potentially more susceptible to the risk of counterfeit and/or fraudulent activities. With multilingual NLP and translation capabilities, big data analytics solutions support risk tagging in both the original language and English.
- Introduction to advanced supply chain analytics — Four fundamental attributes of counterfeit detection and monitoring technology contribute to a robust and efficient solution today: it fuses multiple data sources, automates activities, improves through machine learning, and presents information for effective decision making.
- Client cases and examples
- Other use cases
-
Emerging concepts and innovative technologies are disrupting business and driving new policies, products, services, and channels for increased revenue. Supply chain leaders are evolving their businesses to keep pace and, during this track, our accomplished speakers will share practical applications of new concepts and technologies being implemented within their supply chain analytics programs to maximize their efficiencies while minimizing business disruptions.
Tuesday, April 16, 9:10-10:00am
Reverse Logistics and Machine Learning: The Key to Predictive Repair
Thomas Maher
Tom Maher will discuss how you can improve initial diagnostics and product serviceability through the use of analytics and machine learning. The discussion will revolve around leveraging data provided from the Reverse Logistics Supply Chain, and how you can make determinations to enhance current diagnostic processes. In addition, it will explore how you can leverage the same data to predict repair outcomes before products arrive at repair centers. Benefits include a higher first-time fix rate, more efficient repair operations, and fewer service incidents.
Tuesday, April 16, 10:30–11:20am
Using Data Analytics to Challenge Conventional Thinking
Jay Young and Daniel Windle
Trinity Industries, a leader in railcar manufacturing, leasing, management, and maintenance services, has transformed itself in an industry where institutional knowledge, intuition, and rules of thumb largely drive decision making into an organization where data-driven analysis has become far more the norm.
This journey started with transforming our supply chain to enable scenario analysis and optimization across a range of scenarios. It continued with demand forecasting and analysis to build a more complete understanding of the commodities transported by rail and the railcars that carry them. The tools, processes, and vendors used to enable this change will be discussed. This session will focus on our story of data-enabled change.
Tuesday, April 16, 11:30am–12:20pm
ROMEO: A Fast and Transparent Framework for Multi-Echelon Inventory Analytics in Chemical Industries
Baptiste Lebreton
Defining the right level of inventory in multi-echelon supply chains is a key issue for commodity as well as specialty chemical companies. In the past 15 years, the Guaranteed Service Model (GSM) has gained wide adoption in planning software. While GSM-based approaches bring valuable insights in retail or discrete manufacturing supply chains, they fall short in chemical supply chains, where production wheels, tight manufacturing and warehousing capacity constraints, and variable recipes exist. We present a simulation/optimization approach called ROMEO (Rolling Optimizer for Multi-Echelon Operations) that replicates daily supply chain operations (Order Promising/ATP, Supply Planning) and hence provides analysts with more tractable inventory recommendations that users can relate to. After a quick overview of the literature and a problem statement, we’ll describe ROMEO’s logic and show how it is currently applied at Eastman Chemical Company to drive inventories down.
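As a schematic of the rolling, operations-replicating idea described above (not Eastman's ROMEO itself), the sketch below re-plans production each day over a short look-ahead window, executes only the first period, and rolls forward; demands, capacity, and the planning rule are invented.

```python
# Schematic rolling-horizon loop: plan over a short window, execute day one, roll forward.
import numpy as np

rng = np.random.default_rng(1)
horizon_days, lookahead, capacity = 30, 5, 100
inventory, inventory_trace = 150.0, []
orders = rng.poisson(lam=90, size=horizon_days + lookahead).astype(float)

for day in range(horizon_days):
    window = orders[day:day + lookahead]
    # "Plan": produce enough (within capacity) to cover the look-ahead window's net need.
    net_need = max(window.sum() - inventory, 0.0)
    plan = np.minimum(np.full(lookahead, capacity), net_need / lookahead)
    # "Execute" only the first planned period, observe actual demand, then roll forward.
    inventory += plan[0] - orders[day]
    inventory_trace.append(inventory)

print(f"ending inventory: {inventory:.0f}, minimum over the month: {min(inventory_trace):.0f}")
```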
Tuesday, April 16, 1:50–2:40pm
Using Classical Optimization and Advanced Analytics to Solve Supply Chain Problems
Steve Sommer
LLamasoft will discuss how its Applied Research group combines cutting-edge applications of classical operations research techniques with newer advanced analytical techniques to solve a broad spectrum of supply chain business problems. The talk will explore the classical applications of network optimization, vehicle route optimization, inventory optimization, and supply chain simulation to solve detailed supply chain problems at scale. Additionally, it will discuss the application of machine learning and artificial intelligence to improve demand forecasting. Finally, it will touch on the intersection between the classical and analytical techniques and how they can work together to solve more problems.
Tuesday, April 16, 4:40–5:30pm
Anticipating a World of Automated Transport: Cost, Energy, and Urban System Implications
Kara Kockelman
Connected and (fully-) automated vehicles (CAVs) are set to disrupt the ways in which we travel. CAVs will affect road safety, congestion levels, vehicle ownership and destination choices, long-distance trip-making frequencies, mode choices, and home and business locations. Benefits in the form of crash savings, driving burden reductions, fuel economy, and parking cost reductions are on the order of $2,000 per year per CAV, rising to nearly $5,000 when comprehensive crash costs are reflected. However, vehicle-miles traveled (VMT) are likely to rise, due to AVs traveling empty, longer-distance trip-making, and access for those currently unable to drive, such as those with disabilities. New policies and practices are needed to avoid CAV pitfalls while exploiting their benefits. Shared AVs (SAVs) will offer many people access to such technologies at relatively low cost (e.g., $1 per mile), with empty-vehicle travel on the order of 10 to 15 percent of fleet VMT. If SAVs are smaller and/or electric, and dynamic ride-sharing is enabled and regularly used, emissions and energy demand may fall. If road tolls are thoughtfully applied, using GPS across all congested segments and times of day, total VMT may not rise: instead, travel times – and their unreliability – may fall. If credit-based congestion pricing is used, traveler welfare may rise and transportation systems may ultimately operate near-optimally. This presentation covers research relating to all these topics, to help professionals and the public think about policies, technologies, and other tools to improve quality of life for all travelers.