Discrete Optimization Talks (DOTs)

DOTs are virtual discrete optimization talks, organized by Aleksandr M. Kazachkov and Elias B. Khalil. To receive updates (and the Zoom link) for upcoming DOTs, please join our mailing list. If you would like to give a DOT, please fill out this form or email us.

Topics of interest include theoretical, computational, and applied aspects of integer and combinatorial optimization.

The format is two 30-minute talks each session. From January to April 2021, the seminars are tentatively scheduled for the last Friday of every month at 1:00 p.m. ET.

A special feature of DOTs is a social component. After an in-person talk, you might grab a tea or coffee and chat with other attendees. Why not virtually, too? Join us for informal discussion after each DOT, as well as throughout the week on our Discord channel.

Videos of past DOTs are posted under Past Talks and can also be found on our YouTube Channel.

Join the mailing list to receive information about our upcoming seminars and how to virtually participate. Joining this list is necessary to receive the password for each DOT (usually sent on Monday mornings). You may also wish to subscribe to our Google calendar (also available in ics format).

Upcoming Talks

Robert Hildebrand
Virginia Tech

Compact mixed-integer programming relaxations in quadratic optimization

In joint work with Ben Beach and Joey Huchette, we present a technique for producing valid dual bounds for nonconvex quadratic optimization problems. The approach leverages an elegant piecewise linear approximation for univariate quadratic functions due to Yarotsky, formulating this (simple) approximation using mixed-integer programming (MIP). Notably, the number of constraints, binary variables, and auxiliary continuous variables used in this formulation grows only logarithmically in the inverse of the approximation error. Combining this with a diagonal perturbation technique to convert a nonseparable quadratic function into a separable one, we present a mixed-integer convex quadratic relaxation for nonconvex quadratic optimization problems. We study the strength (or sharpness) of our formulation and the tightness of its approximation. Further, we show that our formulation represents feasible points via a Gray code. We close with computational results on problems with quadratic objectives and/or constraints, showing that our proposed method (i) outperforms existing MIP relaxations from the literature across the board, and (ii) produces better bounds than exact solvers on hard instances within a fixed time budget.
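To make the abstract's two key ingredients concrete, here is a small pure-Python sketch (an illustration, not the authors' code). It shows Yarotsky's composed-sawtooth approximation of x^2 on [0, 1], whose worst-case error shrinks as 4^-(m+1) after m compositions (this geometric decay is why a MIP encoding needs only logarithmically many binaries for a target accuracy), and a Gershgorin-style diagonal shift, one simple way to realize a diagonal perturbation that splits a nonseparable quadratic x'Qx into a convex part plus separable univariate terms.

```python
def hat(x):
    """One sawtooth (hat) function: maps [0, 1] onto [0, 1], peaking at x = 1/2."""
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def yarotsky_sq(x, m):
    """Yarotsky's piecewise-linear approximation of x**2 on [0, 1].

    Subtracting the s-fold composed sawtooth scaled by 4**-s is exact at the
    dyadic points k / 2**s; the max error after m terms is 4**-(m + 1).
    """
    approx, g = x, x
    for s in range(1, m + 1):
        g = hat(g)             # s-fold composition of the hat function
        approx -= g / 4 ** s   # refine on a grid twice as fine as before
    return approx

def gershgorin_shift(Q):
    """A diagonal d making Q + diag(d) diagonally dominant, hence PSD.

    Then x'Qx = x'(Q + D)x - sum_i d_i * x_i**2: a convex quadratic minus
    separable univariate terms, each of which the sawtooth formulation above
    can approximate.  (Gershgorin is one simple choice of perturbation, used
    here only for illustration.)
    """
    n = len(Q)
    return [sum(abs(Q[i][j]) for j in range(n) if j != i) for i in range(n)]
```

For instance, with m = 4 the sawtooth approximation of x^2 is already within 4^-5 < 0.001 everywhere on [0, 1], and for any symmetric Q the identity x'Qx = x'(Q + D)x - sum_i d_i x_i^2 holds exactly.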

Bio: Robert is an assistant professor in the Grado Department of Industrial and Systems Engineering (ISE) at Virginia Tech. He obtained his PhD in 2013 at the University of California, Davis under the supervision of Matthias Köppe. Afterward, he spent two years in Zurich, Switzerland as a postdoctoral researcher at the Institute for Operations Research in the Department of Mathematics at ETH Zurich. Subsequently, he was a Goldstine Fellow Postdoctoral Researcher at the IBM T. J. Watson Research Center in Yorktown Heights, New York, and also participated in the semester-long Simons Institute program on Bridging Continuous and Discrete Optimization at UC Berkeley.

Robert's current research interests are Mixed-Integer Nonlinear Optimization, Cutting Plane Theory and Practice, Redistricting, Stochastic Scheduling, and Machine Learning.

Ruth Misener
Imperial College

Partial Lasserre relaxation for sparse Max-Cut

A common approach to solving, or bounding, polynomial optimization problems such as Max-Cut is to use the first level of the Lasserre hierarchy. Higher levels of the hierarchy provide tighter bounds, but solving these relaxations is usually computationally intractable. We propose to strengthen the first-level relaxation for sparse Max-Cut problems using constraints from the second-order Lasserre hierarchy. We explore a variety of approaches for adding a subset of the positive semidefinite constraints of the second-order sparse relaxation obtained from the maximal cliques of the graph's chordal extension. We apply this idea to sparse graphs of different sizes and densities, and provide evidence of its strengths and limitations compared to state-of-the-art software.
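To ground the notation, here is a brute-force computation of the exact Max-Cut value on a tiny example graph (a 5-cycle), written as a pure-Python illustration rather than anything from the talk. The Lasserre relaxations discussed above replace this exponential enumeration with tractable semidefinite programs whose optimal values bound the true cut value from above; odd cycles like this one are the classic examples where the first-level relaxation overestimates, which is the kind of gap that higher-level constraints aim to close.

```python
from itertools import product

def max_cut(n, edges):
    """Exact Max-Cut by enumerating all 2**n assignments of +/-1 labels.

    An edge (i, j) is cut when its endpoints get different labels, i.e.
    x[i] * x[j] == -1, so each edge contributes (1 - x[i] * x[j]) / 2.
    """
    best = 0
    for x in product((-1, 1), repeat=n):
        cut = sum((1 - x[i] * x[j]) // 2 for i, j in edges)
        best = max(best, cut)
    return best

# A 5-cycle: not bipartite, so at least one edge must stay uncut.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_cut(5, cycle5))  # 4
```

Enumeration is fine at this scale but grows as 2^n, which is exactly why one turns to hierarchies of convex relaxations for larger sparse graphs.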

This work, which is joint with Juan Campos and Panos Parpas, is available as a preprint here: http://www.optimization-online.org/DB_HTML/2020/10/8063.html

Bio: Dr Ruth Misener (http://wp.doc.ic.ac.uk/rmisener/) is a Professor in Computational Optimization in the Imperial College London Department of Computing. Ruth received an SB from MIT and a PhD from Princeton, both in chemical engineering. The foundations of her research are in numerical optimization algorithms. Applications include bioprocess optimization under uncertainty, petrochemical process design and operations, and explainable scheduling. Ruth's research team makes their software contributions available open source (https://github.com/cog-imperial). Ruth received the 2017 Macfarlane Medal from the Royal Academy of Engineering and the 2020 Outstanding Young Researcher Award from the AIChE Computing & Systems Technology Division. Ruth is an associate editor for Computers & Chemical Engineering and the INFORMS Journal on Computing.