Discrete Optimization Talks (DOTs)

Discrete Optimization Talks (DOTs) is a virtual seminar series from the Mixed Integer Programming Society (MIPS), a technical section of the Mathematical Optimization Society.

DOTs are virtual discrete optimization talks, organized by Silvia Di Gregorio, Aleksandr M. Kazachkov, and Elias B. Khalil. To receive updates (and the Zoom link) for upcoming DOTs, please join our mailing list. If you would like to give a DOT, please fill out this form or email us.

Topics of interest include theoretical, computational, and applied aspects of integer and combinatorial optimization.

The format is two 30-minute talks per session. From January to April 2024, the seminars are scheduled for the last Friday of every month at 12:00 p.m. ET.

A special feature of DOTs is their social component. After an in-person talk, you might grab a tea or coffee and chat with other attendees; why not here too? Join us for informal discussion after each DOT, as well as throughout the week on our Discord channel.

Videos of past DOTs are posted under Past Talks and can also be found on our YouTube Channel.

Join the mailing list to receive information about our upcoming seminars and how to virtually participate. Joining this list is necessary to receive the password for each DOT (usually sent on the Monday of the week in which there are DOTs). You may also wish to subscribe to our Google calendar (also available in ics format).

Upcoming Talks

Université de Montréal

TBD

Madeleine Udell

Stanford University

AI and the future of optimization modeling

Abstract: Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers, as the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques.

As a glimpse of the future, this talk will introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from natural language descriptions. OptiMUS can develop mathematical models, write and debug solver code, develop tests, and check the validity of generated solutions. Experimentally, OptiMUS correctly solves more than 80% of benchmark problems, more than twice as many as a basic LLM prompting strategy. More broadly, we discuss the potential for LLMs in domains where accuracy and fidelity to real-world data are critical, as well as strategies to augment and safeguard their performance.

Bio: Madeleine Udell is an Assistant Professor of Management Science and Engineering at Stanford University, with an affiliation with the Institute for Computational and Mathematical Engineering (ICME) and a courtesy appointment in Electrical Engineering, and an Associate Professor with tenure (on leave) of Operations Research and Information Engineering and Richard and Sybil Smith Sesquicentennial Fellow at Cornell University. Her research aims to accelerate and simplify large-scale data analysis and optimization, with impact on challenges in healthcare, finance, marketing, operations, and engineering systems design, among others. Her work in optimization seeks to detect and exploit novel structures, leading to faster and more memory-efficient algorithms, automatic proofs of optimality, better complexity guarantees, and user-friendly optimization solvers and modeling languages. Her work in machine learning centers on the challenges of data preprocessing, interpretability, and causality, which are critical to practical application in domains with messy data.

Her awards include a Kavli Fellowship (2023), an Alfred P. Sloan Research Fellowship (2021), a National Science Foundation CAREER Award (2020), an Office of Naval Research (ONR) Young Investigator Award (2020), a Cornell Engineering Research Excellence Award (2020), an INFORMS Optimization Society Best Student Paper Award (as advisor) (2019), and the INFORMS Doing Good with Good OR Award (2018). Her work has been supported by grants from the NSF, ONR, DARPA, and the Canadian Institutes of Health Research.

Madeleine has advised more than 50 students and postdocs, including six graduated PhD students who later joined Google, Amazon, Two Sigma, the University of Washington, and Tsinghua University. She has developed several new courses in optimization and machine learning, earning Cornell's Douglas Whitney ’61 Engineering Teaching Excellence Award in 2018.

Madeleine completed her PhD at Stanford University in Computational & Mathematical Engineering in 2015 under the supervision of Stephen Boyd, followed by a one-year postdoctoral fellowship at Caltech in the Center for the Mathematics of Information, hosted by Professor Joel Tropp. At Stanford, she was awarded an NSF Graduate Fellowship, a Gabilan Graduate Fellowship, and a Gerald J. Lieberman Fellowship, and was selected as the doctoral student member of Stanford's School of Engineering Future Committee, charged with developing a road map for the future of engineering at Stanford over the next 10–20 years. She received a B.S. in Mathematics and Physics, summa cum laude, with honors in mathematics and in physics, from Yale University.

Friday, May 17, 2024 at 12:00 p.m. ET

University of Wisconsin–Madison

TBD

TBD


And sometime later ...

Philipp M. Christophel

SAS Institute Inc.

Cutting Planes: My Résumé of Failures

Abstract: Cutting-plane generators are among the most important components of modern MILP solvers. Together with other components, like presolve and branch-and-bound, they enable us to solve a wide range of practical optimization problems. While cutting planes unquestionably work in practice, a closer look at how and why they work leaves many open questions. In this talk I will touch on three topics related to cutting planes in which I failed to understand what’s going on and, in some cases, probably continue to fail to this day. These topics are: the path mixing cuts first published as part of my thesis and now implemented in some commercial MILP solvers, the relationship among various families of cuts such as lifted flow cover cuts and mingling cuts, and the never-ending mystery of cutting-plane selection.
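For readers meeting mixing cuts for the first time, the basic mixing set and inequality of Günlük and Pochet (2001) may be useful background; this is our notation, and the path mixing cuts of the talk generalize the idea in ways we do not attempt to reproduce here. The mixing set is

\[ \mathrm{MIX} = \{ (s, y) \in \mathbb{R}_+ \times \mathbb{Z}^n \;:\; s + y_i \ge b_i, \; i = 1, \dots, n \}. \]

Writing \( f_i = b_i - \lfloor b_i \rfloor \) and choosing indices \( i_1, \dots, i_\tau \) with \( f_{i_1} \le \dots \le f_{i_\tau} \) (and \( f_{i_0} := 0 \)), the mixing inequality

\[ s \;\ge\; \sum_{t=1}^{\tau} \left( f_{i_t} - f_{i_{t-1}} \right) \left( \lceil b_{i_t} \rceil - y_{i_t} \right) \]

is valid for MIX and "mixes" the single-row MIR cuts \( s \ge f_i ( \lceil b_i \rceil - y_i ) \).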


Recent Past Talks (more here)

Jose L. Walteros

University at Buffalo

A branch-and-bound framework to solve a general class of interdependent multi-stage interdiction games

Abstract: We study a general class of three-stage interdiction games that model the sequential interaction of three decision-makers: a protector, an attacker, and a defender. In the first stage, the protector carries out a set of fortification strategies to protect the defender's course of action from the attacker; in the second stage, the attacker deploys a set of disruptive attacks, parameterized by the fortification strategies, to optimally deteriorate the defender's objective; finally, in the third stage, the defender optimizes their course of action, taking into account the effects of the previous stages. We focus on a general variant in which the actions taken by the protector can directly affect the solution space of the defender; in other words, the protection actions made in the first stage may also render some of the defender's solutions infeasible. Applications of this type of game arise in problems with an underlying network structure, where the protector can alter the composition of the network by adding or removing edges and vertices. In this presentation, we develop an exact solution framework for these problems based on a combinatorial branch-and-bound search, and we provide examples of applications arising from communications and network-design problems.
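As a rough illustration, in our own generic notation rather than the talk's, such protector-attacker-defender games are often written as the trilevel problem

\[ \min_{w \in W} \; \max_{x \in X(w)} \; \min_{y \in Y(w, x)} \; c^\top y, \]

where the fortification \( w \) restricts the attacker's feasible attacks \( X(w) \), and the defender's recourse set \( Y(w, x) \) depends on both earlier stages; the variant studied in this talk allows \( w \) itself to remove solutions from \( Y(w, x) \).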

Bio: Dr. Jose L. Walteros is an Assistant Professor in Industrial and Systems Engineering at the University at Buffalo. He holds a Ph.D. in Industrial and Systems Engineering from the University of Florida. Dr. Walteros has extensive experience in developing exact and approximate solution approaches for challenging problems in a wide variety of areas, including attacker-defender games, logistics, transportation and routing, shared-mobility systems management, homeland security, survivable network design, data association, and social network analysis, with 25+ research publications in these fields. His research has been funded by organizations including the National Science Foundation (NSF), the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR), and the Department of Transportation (DOT). Previously, he served as the President of the INFORMS Junior Faculty Interest Group and the Vice Chair for Network Optimization of the INFORMS Optimization Society (IOS). His papers have appeared in Operations Research, INFORMS Journal on Computing, Networks, Mathematical Programming, Mathematical Programming Computation, Transportation Science, the European Journal of Operational Research, and other journals.


David Bergman

University of Connecticut

Sports betting with discrete optimization

Abstract: The integration of machine learning and optimization opens the door to new modeling paradigms that have already proven successful across a broad range of industries. Sports betting is a particularly exciting application area, where recent advances in both analytics and optimization can provide a lucrative edge. In this talk we will discuss the landscape of sports betting analytics and focus on where discrete optimization fits into the puzzle.

Bio: David Bergman is an Associate Professor of Operations and Information Management at the University of Connecticut. He obtained his PhD from Carnegie Mellon University in Algorithms, Combinatorics, and Optimization. David's work focuses on developing novel algorithms for large-scale automated decision making using both machine learning and optimization, with applications both in research and in practice. His research is published in top operations research journals, and his consulting work has driven impact for organizations across a wide spectrum of industries. He is also a leader in sports betting analytics and consistently ranks among the best Daily Fantasy Sports players globally.


Bartolomeo Stellato

Princeton University

Differentiable Cutting-plane Layers for Mixed-integer Linear Optimization

Abstract: We consider the problem of optimizing a sequence of mixed-integer linear optimization problems (MIPs) with varying parameters. By combining recent advances in cutting-plane generation and differentiation through convex optimization problems, we construct a new differentiable architecture to predict the optimal cutting planes from the key parameters of each instance. During the offline phase, our method maximizes the cutting planes' efficiency by evaluating the derivative of the solution of the continuous relaxations with respect to the cut-generating parameters. Preliminary computational results show that, once trained, our architecture computes solutions with low infeasibility and suboptimality with fast and predictable execution times.
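As background on the "differentiation through convex optimization problems" ingredient, here is the standard linear-programming sensitivity fact, stated in our notation rather than the authors': if the continuous relaxation \( \min \{ c^\top x : A x \le b(\theta) \} \) has a unique, nondegenerate optimal basis \( B \) at the current cut-generating parameters \( \theta \), and \( \theta \) enters only the right-hand side, then locally

\[ x^*(\theta) = A_B^{-1} b_B(\theta) \quad \Rightarrow \quad \frac{\partial x^*}{\partial \theta} = A_B^{-1} \frac{\partial b_B}{\partial \theta}, \]

so the effect of a cut's parameters on the relaxation's optimal solution can be differentiated and used for training; a similar calculation applies when the cut coefficients themselves depend on \( \theta \).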

Biography: Bartolomeo Stellato is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. Previously, he was a Postdoctoral Associate at the MIT Sloan School of Management and Operations Research Center. He received a DPhil (PhD) in Engineering Science from the University of Oxford, an MSc in Robotics, Systems and Control from ETH Zürich, and a BSc in Automation Engineering from Politecnico di Milano. He is the developer of OSQP, a widely used solver for quadratic programming. His awards include the NSF CAREER Award, the Franco Strazzabosco Young Investigator Award from ISSNAF, the Princeton SEAS Innovation Award in Data Science, the Best Paper Award in Mathematical Programming Computation, and the First Place Prize Paper Award in IEEE Transactions on Power Electronics. His research focuses on data-driven computational tools for mathematical optimization, machine learning, and optimal control.


Karen Aardal

TU Delft

Machine-learning augmented branch-and-bound for mixed-integer linear optimization

Abstract: Mixed-integer linear optimization solvers use branch and bound as their main component. In recent years, there has been explosive development in the use of machine learning algorithms to enhance all of the main tasks involved in the branch-and-bound algorithm, such as primal heuristics, branching, cutting planes, node selection, and solver configuration decisions. In this talk, we present a selection of results in this area, together with some connections between integer linear optimization and deep learning. The talk is based on a joint paper with Lara Scavuzzo Montana, Andrea Lodi, and Neil Yorke-Smith.


Biography: Karen Aardal is a full professor of optimization at Delft University of Technology. Her research interests include integer optimization algorithms, approximation algorithms, applications of machine learning, and the modeling and resolution of problems in energy and health care. She is a former chair of the Mathematical Optimization Society and has served on the board of the INFORMS Computing Society for three terms. She has served on several editorial boards, including Mathematical Programming Series B, INFORMS Journal on Computing, and Operations Research Letters. In the Netherlands, she serves on the board of the Science domain of the Dutch Research Council (NWO).

Marleen Balvert

Tilburg University

IRELAND: Iterative Rule Extension for the Logical ANalysis of Data

Abstract: Data-driven decision making is rapidly gaining popularity, fuelled by ever-increasing amounts of available data and encouraged by the development of models that can identify non-linear input-output relationships. Simultaneously, the need for interpretable prediction and classification methods is increasing, as interpretability improves both our trust in these models and the amount of information we can extract from data. An important aspect of this interpretability is insight into the sensitivity-specificity trade-off constituted by multiple plausible input-output relationships, often shown in a receiver operating characteristic curve. Combined, these developments lead to the need for a method that can identify complex yet interpretable input-output relationships from large data, i.e., data containing large numbers of samples and features. Boolean phrases in disjunctive normal form (DNF) are highly suitable for explaining non-linear input-output relationships in a comprehensible way. Mixed-integer linear programming can be used to obtain these Boolean phrases from binary data, though its computational complexity prohibits the analysis of large datasets. This work presents IRELAND, an algorithm that allows for abstracting Boolean phrases in DNF from data with up to 10,000 samples and features. The results show that for large datasets IRELAND outperforms the current state of the art in terms of prediction accuracy. Additionally, by construction, IRELAND allows for efficient computation of the sensitivity-specificity trade-off curve, enabling further understanding of the underlying input-output relationship.
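To give a flavor of the kind of MILP involved, here is a minimal sketch in our own notation, not IRELAND's actual formulation, of selecting a single conjunctive clause from binary data \( z_{ij} \in \{0,1\} \) (sample \( i \), feature \( j \)), with positive samples \( P \) and negative samples \( N \). Literal-selection variables \( a_j \in \{0,1\} \) choose the features in the clause, coverage variables \( c_i \in \{0,1\} \) record which samples satisfy it, and \( \lambda > 0 \) trades off sensitivity against specificity:

\[ \max \; \sum_{i \in P} c_i - \lambda \sum_{i \in N} c_i \quad \text{s.t.} \quad c_i \le 1 + z_{ij} - a_j \;\; \forall i, j, \qquad c_i \ge 1 - \sum_j a_j (1 - z_{ij}) \;\; \forall i. \]

The first constraint says that a selected literal violated by sample \( i \) blocks coverage; the second forces coverage when every selected literal is satisfied. Sweeping \( \lambda \) traces out a sensitivity-specificity trade-off, and learning a full DNF combines several such clauses.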

Bio: Marleen Balvert works on optimization and data science problems that have a direct societal impact. Her research aims to answer questions that arise from other fields, such as bioinformatics and food security. As a researcher at the Zero Hunger Lab, she uses data analytics and optimization to help NGOs achieve Sustainable Development Goal 2: zero hunger. Furthermore, she works on the development of data analysis techniques for large genome data. The goal of these methods is to identify those genetic characteristics that cause individual traits, such as susceptibility to genetic diseases for humans or crop traits such as flavor and yield. Interdisciplinary collaborations are central to her work.