Discrete Optimization Talks (DOTs)

Discrete Optimization Talks (DOTs) is a virtual seminar series from the Mixed Integer Programming Society (MIPS), a technical section of the Mathematical Optimization Society (MOS). Topics of interest are theoretical, computational, and applied aspects of integer and combinatorial optimization. The typical format of each session is two 30-minute talks.

From September to December 2024, DOTs are typically scheduled for the second Friday of the month at 12:00 p.m. ET.

To receive updates (and the Zoom link) for upcoming DOTs, please join our mailing list. Joining this list is necessary to receive the password for each DOT (usually sent by the Wednesday of the week of the seminar). You may also wish to subscribe to our Google calendar (also available in ics format).

A special feature of DOTs is a social component. After an in-person talk, you might grab a tea or coffee and chat with other attendees. Why not here too? Join us for some informal discussion after each DOT, as well as throughout the week on our Discord channel (though this has recently been inactive).

Videos of past DOTs are posted under Past Talks and can also be found on our YouTube Channel.

The current organizers are Silvia Di Gregorio, Aleksandr M. Kazachkov, and Elias B. Khalil. If you would like to give a DOT, please fill out this form or email us.

Upcoming Talks

October 11, 2024

Bissan Ghaddar

Western University Ivey Business School

Learning for Nonlinear Optimization Problems

Abstract: The use of machine learning techniques to improve the performance of branch-and-bound optimization algorithms is a very active area in the context of mixed integer linear problems, but little has been done for non-linear optimization. To bridge this gap, we develop a learning framework for spatial branching and show its efficacy in the context of the Reformulation-Linearization Technique for polynomial optimization problems. The proposed learning is performed offline, based on instance-specific features and with no computational overhead when solving new instances. Novel graph-based features are introduced, which turn out to play an important role in the learning. Experiments on different benchmark instances from MINLPLib and QPLIB show that the learning-based branching selection and learning-based constraint generation significantly outperform the standard approaches.
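
For orientation, here is the textbook single-term illustration of RLT (standard material, not specific to this talk). For a bilinear term $x_i x_j$ with bounds $l_i \le x_i \le u_i$ and $l_j \le x_j \le u_j$, one introduces $w_{ij} = x_i x_j$ and multiplies nonnegative bound factors pairwise before linearizing, e.g.,

\[
(x_i - l_i)(x_j - l_j) \ge 0 \;\Rightarrow\; w_{ij} \ge l_j x_i + l_i x_j - l_i l_j,
\qquad
(u_i - x_i)(u_j - x_j) \ge 0 \;\Rightarrow\; w_{ij} \ge u_j x_i + u_i x_j - u_i u_j,
\]

with the two mixed factor products giving upper bounds on $w_{ij}$. Spatial branching splits a variable's domain and re-derives these inequalities on each child, so the choice of branching variable controls how quickly the relaxation tightens; that choice is what the framework in this talk learns.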

Bio: Bissan Ghaddar is the John M. Thompson Chair in Engineering Leadership and Innovation and an Associate Professor of Management Science and Sustainability at the Ivey Business School, working on problems at the intersection of machine learning and non-linear optimization. Before joining Ivey Business School, she worked on energy, water, and transportation network optimization at IBM Research and on inventory management problems at the Centre for Operational Research and Analysis, Department of National Defence Canada. Her work has been published in journals such as Mathematical Programming, INFORMS Journal on Computing, and SIAM Journal on Optimization. Her research has been supported by national and international grants, including NSERC, OCE, Cisco, H2020, and a Marie Curie International Incoming Fellowship. She serves as the Research Lead at the Ivey Energy Policy and Management Centre and is a fellow at the Balsillie School of International Affairs, engaged in the research cluster on AI, Global Governance, and International Public Policy. She is an Associate Editor for the EURO Journal on Computational Optimization.

Deepak Ajwani

University College Dublin

Learning-to-Prune: A Lightweight Supervised Learning Technique for Solving Combinatorial Optimization Problems

Abstract: I will present a lightweight supervised learning technique for solving combinatorial optimization problems that we refer to as learning-to-prune. In contrast to end-to-end machine learning techniques that attempt to solve optimization problems completely, the goal of learning-to-prune is merely to predict values for a subset of variables with high confidence. The remaining problem is then solved using classical optimization techniques. Thus, learning-to-prune speeds up the solution of optimization problems. The key advantages of the learning-to-prune framework are that (i) it requires very little training data, (ii) it is easier to integrate algorithmic insights into the learning, and (iii) even simple (more interpretable) classification models are quite effective in this framework. We show that this simple approach performs surprisingly well in practice. We consider a range of classical combinatorial optimization problems -- k-median, set cover, max coverage, uncapacitated facility location, the Steiner tree problem on graphs, and nurse rostering -- and show that this framework provides near-optimal solutions considerably faster than commercial ILP solvers and approximation algorithms on benchmark instances. We also demonstrate a bootstrapping approach to further boost the performance of this technique for large problem instances.
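
To make the pruning idea concrete, here is a minimal sketch under our own assumptions (a scikit-learn logistic regression as the classifier, and a hypothetical `solve_reduced` callback standing in for an ILP solver); it illustrates the framework and is not the speaker's code.

```python
# A minimal sketch of learning-to-prune (our illustration, not the speaker's
# code). A classifier trained on previously solved instances predicts, for
# each variable, whether it takes value 0 in an optimal solution; confident
# predictions are fixed, and only the residual problem goes to an ILP solver.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_pruner(X, y):
    """X: (n_vars, n_features) per-variable features pooled across solved
    training instances; y[i] = 1 if variable i was 0 at the optimum."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y)
    return clf

def prune_and_solve(clf, X_new, solve_reduced, threshold=0.95):
    """Fix variables predicted to be 0 with probability >= threshold, then
    solve the remaining problem exactly. `solve_reduced` is a hypothetical
    user-supplied callback wrapping any ILP solver."""
    p_zero = clf.predict_proba(X_new)[:, 1]      # P(variable = 0 at optimum)
    fixed = np.flatnonzero(p_zero >= threshold)  # pruned variables
    free = np.flatnonzero(p_zero < threshold)    # variables left in the model
    return solve_reduced(fixed_to_zero=fixed, free_vars=free)
```

The threshold is the whole trade-off: lowering it prunes more variables and speeds up the exact solve, at a growing risk of fixing a variable incorrectly, which is why the framework targets near-optimal rather than provably optimal solutions.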

Bio: Dr. Deepak Ajwani is an Assistant Professor at the School of Computer Science, University College Dublin. His research interests include algorithm engineering, combinatorial optimisation, and machine learning. He is a funded investigator with the Science Foundation Ireland Centre for Research Training in Machine Learning (ML-Labs). His postdoctoral research (2010-2012) was supported by a funding award from the Irish Research Council for Science, Engineering and Technology and IBM Research. He has more than 50 peer-reviewed top-tier conference and journal publications. He is on the editorial board of the Machine Learning journal (Springer) and regularly serves as a (senior) programme committee member of top conferences (WWW, IJCAI, AAAI, ECML-PKDD, ALENEX, and CP) in the areas of machine learning, algorithm engineering, and optimisation.


November 15, 2024

Ksenia Bestuzheva

Zuse Institute Berlin

A Reformulation-Linearization Technique framework for problems with bilinear terms

Abstract: The Reformulation-Linearization Technique (RLT) is a method for constructing cutting planes for mixed-integer and polynomial optimization problems. RLT yields tight linear programming relaxations, but its efficiency is often hindered by the high cost of separation and the large size of the resulting linear programs. In this talk, we present our bilinear RLT cut generation framework. For mixed-integer problems, it provides a new way of leveraging the strength of RLT relaxations by deriving bilinear relations from mixed-integer linear constraints with at least one binary variable. These implicit product relations are then filtered based on evaluations of their non-redundancy with respect to variable domains and the constraints they are derived from. At the cut separation stage, our framework runs an enhanced RLT cut generation algorithm for both derived and original product relations. This algorithm considers only a subset of variable and constraint combinations, producing relaxations of the same strength at a lower computational cost of separation. Experiments show that the enhanced separation algorithm is crucial for the application of RLT in practice, and that RLT cuts yield considerable performance improvements on MINLP problems and smaller improvements on some subsets of MILP instances. Product filtering further reduces the computational burden of constructing RLT cuts.
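
For a concrete instance of the product relations mentioned above (our simplified notation, one constraint and one binary variable): if $y \in \{0,1\}$ and the linear constraint $ax + cy \le b$ holds, then multiplying both sides by $y \ge 0$ and using $y^2 = y$ yields

\[ a\,xy + (c - b)\,y \le 0, \]

a valid relation in the product term $w = xy$ that is only implicit in the original formulation. The framework derives such relations, filters out those that are redundant, and passes the survivors to the RLT separator alongside the explicit products.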

Bio: Ksenia Bestuzheva leads the Mathematical Optimization Methods group at Zuse Institute Berlin, is the head of development of the SCIP Optimization Suite, and serves as the head of SynLab within the Research Campus MODAL. She works on algorithms for mixed-integer nonlinear optimization; the themes of her research include relaxations of convex and nonconvex MINLPs, generalized convexity, and applications of optimization to practical areas such as energy systems, water treatment, and maintenance scheduling. Prior to joining Zuse Institute Berlin, she completed a PhD in computer science at the Australian National University. She has been actively involved in the development of open-source mathematical software; prior to her work on SCIP, she contributed to the ASCEND mathematical modelling system and to Gravity, a framework for power systems optimization that was awarded the COIN-OR Cup in 2021.

Güzin Bayraksan

The Ohio State University

Bounds for Multistage Mixed-Integer Distributionally Robust Optimization

Abstract: Multistage mixed-integer distributionally robust optimization (DRO) problems are extremely challenging because their size grows exponentially with the number of stages. One way to model the uncertainty in multistage DRO is by creating sets of conditional distributions (the so-called conditional ambiguity sets) on a finite scenario tree and requiring that such distributions remain close to nominal conditional distributions according to some measure of similarity/distance. In this talk, new bounding criteria for this class of difficult decision problems are provided through scenario grouping and convolution of risk measures, using the ambiguity sets associated with various commonly used φ-divergences and the Wasserstein distance. Our approach does not require any special problem structure such as linearity, convexity, or stagewise independence. Therefore, while we focus on multistage mixed-integer DRO, our bounds can be applied to a wide range of DRO problems, including two-stage and multistage, with or without integer variables, convex or nonconvex, and nested or non-nested formulations. Numerical results on a multistage mixed-integer production problem show the efficiency of the proposed approach through different choices of partition strategies, ambiguity sets, and levels of robustness.
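
For context, a typical φ-divergence ambiguity set around a nominal (conditional) distribution $q$ over finitely many scenarios has the form (standard notation, not necessarily the speaker's):

\[ \mathcal{P} = \Big\{ p \ge 0 : \textstyle\sum_i p_i = 1, \; \sum_i q_i \, \varphi(p_i / q_i) \le \rho \Big\}, \]

where the radius $\rho$ sets the level of robustness. In the multistage setting of the talk, one such set is attached to each node of the scenario tree, and the bounds are obtained by grouping scenarios and convolving risk measures over these sets.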

Bio: Güzin Bayraksan is a Professor and Associate Chair for Research in the Integrated Systems Engineering Department and an affiliated faculty member of the Sustainability Institute and the Translational Data Analytics Institute at The Ohio State University. Her research interests are in optimization under uncertainty, particularly stochastic and distributionally robust optimization, using data-driven, contextual, and Monte Carlo simulation-based methods. She applies these models and methods to solve problems of critical societal interest in water, energy, and transportation systems. Her papers have appeared in top journals such as Mathematical Programming, SIAM Journal on Optimization, and Operations Research, and her research has been supported by multiple grants from the National Science Foundation (NSF) and the Department of Energy (DOE). She is the recipient of the INFORMS ENRE Best Publication Award in Environment and Sustainability, the Lumley Research Award (OSU), an NSF CAREER award, the Five Star Faculty Award (UA), and the INFORMS Best Case Study Award. She has been invited to give plenary, keynote, and tutorial talks at conferences such as the International Symposium on Mathematical Programming (ISMP), the International Conference on Stochastic Programming (ICSP), the INFORMS Annual Meeting, and the Industrial and Systems Engineering Research Conference (ISERC). She served as the Chair of the Stochastic Programming Society (SPS) (2019-2023), the Vice Chair of Optimization under Uncertainty of the INFORMS Optimization Society (2017-2018), and the President of the INFORMS Forum on Women in Operations Research and Management Science.

December 13, 2024

Philipp M. Christophel

SAS Institute Inc.

Cutting Planes: My Résumé of Failures

Abstract: Cutting-plane generators are among the most important components of modern MILP solvers. Together with other components, like presolve and branch-and-bound, they enable us to solve a wide range of practical optimization problems. While cutting planes unquestionably work in practice, a closer look at how and why they work leaves many open questions. In this talk I will touch on three topics related to cutting planes in which I failed to understand what’s going on and, in some cases, probably continue to fail to this day. These topics are: the path mixing cuts first published as part of my thesis and now implemented in some commercial MILP solvers, the relationship among various families of cuts such as lifted flow cover cuts and mingling cuts, and the never-ending mystery of cutting-plane selection.


Google Research / Universidad Adolfo Ibáñez

Stochastic facility location problem with outsourcing costs

Abstract: Stochastic facility location problems with outsourcing costs (SFLPOC) optimize facility placement and customer assignment under demand uncertainty. Excess demand beyond a facility’s capacity incurs outsourcing costs. This work addresses SFLPOC, aiming to minimize overall expected costs (installation, servicing, and outsourcing). We model SFLPOC as a two-stage stochastic program. While prior work focused on specific assumptions or small scenario sets, we present methods suitable for general probability distributions. For discrete scenario sets, we improve upon classic Benders decomposition by exploiting the second-stage subproblem’s structure. To handle general distributions, we partition the probability space based on incumbent integer solutions. Coupled with Benders cuts, this provides an exact solution method for common distributions (e.g., Bernoulli, Gaussian). Additionally, we introduce a compact formulation specifically for i.i.d. demand distributions, allowing us to solve even continuous-distribution problems to optimality. Computational experiments on established benchmarks demonstrate that our compact formulation consistently finds optimal solutions, while the Benders approach provides strong solutions with proven optimality gaps for general distributions, outperforming sample average approximations. Joint work with Javiera Barrera (Universidad Adolfo Ibáñez) and Ivana Ljubić (ESSEC Business School).
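
For orientation, one plausible way to write the two-stage model (our notation, not necessarily the authors'): with installation costs $f_j$, opening decisions $y_j \in \{0,1\}$, capacities $u_j$, and random demands $d_i(\xi)$,

\[ \min_{y \in \{0,1\}^n} \; \sum_j f_j y_j + \mathbb{E}_\xi\big[Q(y, \xi)\big], \]

where the second-stage value $Q(y,\xi)$ is the cheapest assignment of the realized demands to open facilities, paying a unit outsourcing cost for any demand that exceeds installed capacity. The second-stage structure that the improved Benders decomposition exploits lives entirely inside $Q$.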


Coming later...

University of Wisconsin–Madison

TBD

TBD


Recent Past Talks (more here)

Qimeng (Kim) Yu

Université de Montréal

On constrained mixed-integer DR-submodular minimization

Abstract: Submodular set functions play an important role in integer programming and combinatorial optimization. Increasingly, applications call for generalized submodular functions defined over general integer or continuous domains. Diminishing-returns (DR) submodular functions are one such generalization, encompassing a broad class of functions that are generally nonconvex and nonconcave. We study the problem of minimizing any DR-submodular function with continuous and general integer variables under box constraints and, possibly, additional monotonicity constraints. We propose valid linear inequalities for the epigraph of any DR-submodular function under these constraints. We further provide the complete convex hull of such an epigraph, which, surprisingly, turns out to be polyhedral. We propose a polynomial-time exact separation algorithm for our valid inequalities, with which we establish, for the first time, the polynomial-time solvability of this class of mixed-integer nonlinear optimization problems.
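
For reference, the diminishing-returns property over an integer box reads (standard definition): $f$ is DR-submodular if, for all $x \le y$ componentwise and every unit vector $e_i$ with $y + e_i$ in the domain,

\[ f(x + e_i) - f(x) \;\ge\; f(y + e_i) - f(y), \]

i.e., the marginal gain from increasing coordinate $i$ never grows as the point moves up. For $x, y \in \{0,1\}^n$ this reduces to classical submodularity of set functions.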

Bio: Qimeng (Kim) Yu is an Assistant Professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal. She completed her Ph.D. in Industrial Engineering and Management Sciences at Northwestern University in 2023, and she obtained her undergraduate degree in Mathematics at Carleton College in 2018. In her research, she develops theory and algorithms for mixed-integer nonlinear programming to facilitate the solution of complex models with real-world applications.

Madeleine Udell

Stanford University

AI and the future of optimization modeling

Abstract: Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers, as the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques.

As a glimpse of the future, this talk will introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from natural language descriptions. OptiMUS can develop mathematical models, write and debug solver code, develop tests, and check the validity of generated solutions. Experimentally, OptiMUS correctly solves more than 80% of benchmark problems, more than twice as many as a basic LLM prompting strategy. More broadly, we discuss the potential for LLMs in domains where accuracy and fidelity to real-world data are critical, as well as strategies to augment and safeguard their performance.

Bio: Madeleine Udell is an Assistant Professor of Management Science and Engineering at Stanford University, with an affiliation with the Institute for Computational and Mathematical Engineering (ICME) and a courtesy appointment in Electrical Engineering, and an Associate Professor with tenure (on leave) of Operations Research and Information Engineering and Richard and Sybil Smith Sesquicentennial Fellow at Cornell University. Her research aims to accelerate and simplify large-scale data analysis and optimization, with impact on challenges in healthcare, finance, marketing, operations, and engineering systems design, among others. Her work in optimization seeks to detect and exploit novel structures, leading to faster and more memory-efficient algorithms, automatic proofs of optimality, better complexity guarantees, and user-friendly optimization solvers and modeling languages. Her work in machine learning centers on challenges of data preprocessing, interpretability, and causality, which are critical to practical application in domains with messy data.

Her awards include the Kavli Fellowship (2023), an Alfred P. Sloan Research Fellowship (2021), a National Science Foundation CAREER award (2020), an Office of Naval Research (ONR) Young Investigator Award (2020), a Cornell Engineering Research Excellence Award (2020), an INFORMS Optimization Society Best Student Paper Award (as advisor) (2019), and the INFORMS Doing Good with Good OR Award (2018). Her work has been supported by grants from the NSF, ONR, DARPA, and the Canadian Institutes of Health Research.

Madeleine has advised more than 50 students and postdocs, including six graduated PhD students who later joined Google, Amazon, Two Sigma, the University of Washington, and Tsinghua University. She has developed several new courses in optimization and machine learning, earning Cornell's Douglas Whitney ’61 Engineering Teaching Excellence Award in 2018.

Madeleine completed her PhD at Stanford University in Computational & Mathematical Engineering in 2015 under the supervision of Stephen Boyd, and a one-year postdoctoral fellowship at Caltech in the Center for the Mathematics of Information hosted by Professor Joel Tropp. At Stanford, she was awarded an NSF Graduate Fellowship, a Gabilan Graduate Fellowship, and a Gerald J. Lieberman Fellowship, and was selected as the doctoral student member of Stanford's School of Engineering Future Committee to develop a roadmap for the future of engineering at Stanford over the next 10–20 years. She received a B.S. degree in Mathematics and Physics, summa cum laude, with honors in mathematics and in physics, from Yale University.

Jose L. Walteros

University at Buffalo

A branch-and-bound framework to solve a general class of interdependent multi-stage interdiction games

Abstract: We study a general class of three-stage interdiction games that model the sequential interaction of three decision-makers: a protector, an attacker, and a defender. In the first stage, the protector deploys a set of fortification strategies to protect the defender's course of action from the attacker; in the second stage, the attacker deploys a set of disruptive attacks, parameterized by the fortification strategies, to optimally deteriorate the defender's objective; finally, in the third stage, the defender optimizes their course of action, considering the effects of the previous stages. We focus on a general variation in which the actions taken by the protector can directly affect the solution space of the defender. In other words, the protection actions made in the first stage may also render solutions infeasible for the defender. Applications of this type of game arise in problems with an underlying network structure, where the protector can alter the composition of the network by adding/removing edges and vertices. In this presentation, we develop an exact solution framework for these problems based on a combinatorial branch-and-bound search. We provide some examples of applications arising from communications and network design problems.
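
Schematically, and in our notation rather than the speaker's, such a game is the tri-level problem

\[ \min_{w \in W} \; \max_{z \in Z(w)} \; \min_{x \in X(w, z)} \; c(x), \]

where $w$ collects the protector's fortifications, $z$ the attacks (parameterized by $w$), and $x$ the defender's recourse. The distinctive feature studied here is that $w$ enters the defender's feasible set $X(w, z)$ directly, for example by adding or removing vertices and edges of the underlying network, rather than only shielding assets from attack.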

Bio: Dr. Jose L. Walteros is an Assistant Professor in Industrial and Systems Engineering at the University at Buffalo. He holds a Ph.D. in Industrial and Systems Engineering from the University of Florida. Dr. Walteros has extensive experience in developing exact and approximate solutions for challenging problems in a wide variety of areas, including attacker-defender games, logistics, transportation and routing, shared-mobility systems management, homeland security, survivable network design, data association, and social network analysis, with 25+ research publications in these fields. His research has been funded by organizations such as the National Science Foundation (NSF), the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR), and the Department of Transportation (DOT), among others. Previously, he served as the President of the INFORMS Junior Faculty Interest Group and the Vice Chair for Network Optimization in the INFORMS Optimization Society (IOS). His papers have appeared in the journals Operations Research, INFORMS Journal on Computing, Networks, Mathematical Programming, Mathematical Programming Computation, Transportation Science, the European Journal of Operational Research, and others.


David Bergman

University of Connecticut

Sports betting with discrete optimization

Abstract: The integration of machine learning and optimization opens the door to new modeling paradigms that have already proven successful across a broad range of industries. Sports betting is a particularly exciting application area, where recent advances in both analytics and optimization can provide a lucrative edge. In this talk we will discuss the landscape of sports betting analytics and focus on where discrete optimization fits into the puzzle.
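
As a toy example of where the discrete optimization enters (ours, not necessarily the talk's), picking a daily-fantasy lineup is a knapsack-like integer program:

\[ \max \; \sum_i \mu_i x_i \quad \text{s.t.} \quad \sum_i s_i x_i \le S, \;\; x \in \mathcal{X}, \;\; x_i \in \{0,1\}, \]

where $\mu_i$ is a machine-learned point projection for player $i$, $s_i$ the player's salary, $S$ the salary cap, and $\mathcal{X}$ encodes roster and position rules. The analytics side produces $\mu$; the discrete optimization side turns it into lineups or bets.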

Bio: David Bergman is an Associate Professor of Operations and Information Management at the University of Connecticut. He obtained his PhD from Carnegie Mellon University in Algorithms, Combinatorics, and Optimization. David’s work focuses on developing novel algorithms for large-scale automated decision making using both machine learning and optimization, with applications both in research and in practice. His research is published in top Operations Research journals, and his work in consulting has driven impact for organizations across a wide spectrum of industries. He is also a leader in sports betting analytics and consistently ranks among the best Daily Fantasy Sports players globally.


Bartolomeo Stellato

Princeton University

Differentiable Cutting-plane Layers for Mixed-integer Linear Optimization

Abstract: We consider the problem of optimizing a sequence of mixed-integer linear optimization problems (MIPs) with varying parameters. By combining recent advances in cutting plane generation and differentiation through convex optimization problems, we construct a new differentiable architecture to predict the optimal cutting planes from the key parameters of each instance. During the offline phase, our method maximizes the cutting planes’ efficiency by evaluating the derivative of the solution of the continuous relaxations with respect to the cut-generating parameters. Preliminary computational results show that, once trained, our architecture computes solutions with low infeasibility and suboptimality, with fast and predictable execution times.
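
In spirit (our paraphrase of the abstract, not the authors' exact formulation), the layer maps instance parameters $\theta$ to cut coefficients $(\alpha(\theta), \beta(\theta))$, appends the cut to the continuous relaxation, and is trained end-to-end through the relaxation's solution map:

\[ \min_\theta \; \mathbb{E}\big[\ell(x^\star(\theta))\big], \qquad x^\star(\theta) = \arg\min_x \{\, c^\top x : Ax \le b, \; \alpha(\theta)^\top x \le \beta(\theta) \,\}, \]

where the derivative of $x^\star$ with respect to $\theta$ comes from differentiating through the convex problem, so the cut generator is tuned offline before deployment.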

Bio: Bartolomeo Stellato is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. Previously, he was a Postdoctoral Associate at the MIT Sloan School of Management and Operations Research Center. He received a DPhil (PhD) in Engineering Science from the University of Oxford, an MSc in Robotics, Systems and Control from ETH Zürich, and a BSc in Automation Engineering from Politecnico di Milano. He is the developer of OSQP, a widely used solver in mathematical optimization. Bartolomeo Stellato's awards include the NSF CAREER Award, the Franco Strazzabosco Young Investigator Award from ISSNAF, the Princeton SEAS Innovation Award in Data Science, the Best Paper Award in Mathematical Programming Computation, and the First Place Prize Paper Award in IEEE Transactions on Power Electronics. His research focuses on data-driven computational tools for mathematical optimization, machine learning, and optimal control.


Karen Aardal

TU Delft

Machine-learning augmented branch-and-bound for mixed-integer linear optimization

Abstract: Mixed-integer linear optimization solvers use branch and bound as their main component. In recent years, there has been explosive development in the use of machine learning algorithms to enhance all of the main tasks of the branch-and-bound algorithm, such as primal heuristics, branching, cutting planes, node selection, and solver configuration decisions. In this talk, we present a selection of results in this area, together with some relations between integer linear optimization and deep learning. The talk is based on a joint paper with Lara Scavuzzo Montana, Andrea Lodi, and Neil Yorke-Smith.


Bio: Karen Aardal is a full professor of optimization at Delft University of Technology. Her research interests include integer optimization algorithms, approximation algorithms, applications of machine learning, and the modeling and resolution of problems in energy and health care. She is a former chair of the Mathematical Optimization Society and has served three terms on the board of the INFORMS Computing Society. She has served on several editorial boards, including Mathematical Programming Series B, INFORMS Journal on Computing, and Operations Research Letters. In the Netherlands, she serves on the board of the Science domain of the Dutch Research Council (NWO).