Connor Lawless
@lawlessopt.bsky.social
Stanford MS&E Postdoc | Human-Centered AI & OR
Prev: @CornellORIE @MSFTResearch, @IBMResearch, @uoftmie 🌈
There's been a lot of work using LLMs to formulate MILPs, but how do we know that the formulations are correct?

Come chat with Haotian at poster W-515 to learn about our work on automatic equivalence checking for optimization models!
July 16, 2025 at 6:49 PM
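To give a flavor of the problem (this is a naive sanity check, not the equivalence-checking method from the paper): sample random objectives over the shared decision variables and compare the two formulations' optimal values. The sketch below assumes gurobipy, and build_model_a / build_model_b are hypothetical callbacks that each build one formulation and return its shared variables in a fixed order.

```python
# Toy necessary-condition check for formulation equivalence: two formulations
# that share decision variables should agree on the optimal value of any
# objective over those variables. Not the certification approach in the paper.
import random
import gurobipy as gp
from gurobipy import GRB

def optimal_value(build_model, obj_coeffs):
    m = gp.Model()
    m.Params.OutputFlag = 0
    x = build_model(m)                      # callback returns the shared decision variables
    m.setObjective(gp.quicksum(c * xi for c, xi in zip(obj_coeffs, x)), GRB.MINIMIZE)
    m.optimize()
    return m.ObjVal if m.Status == GRB.OPTIMAL else None

def looks_equivalent(build_model_a, build_model_b, dim, trials=20, tol=1e-6):
    for _ in range(trials):
        c = [random.uniform(-1, 1) for _ in range(dim)]
        va = optimal_value(build_model_a, c)
        vb = optimal_value(build_model_b, c)
        if va is None or vb is None or abs(va - vb) > tol:
            return False                    # found an objective where the models disagree
    return True                             # passed all sampled objectives (necessary, not sufficient)
```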
Our empirical results highlight that existing pointwise approaches for recourse can fail to catch potential fixed predictions, whereas our approach (provably) succeeds!
July 14, 2025 at 4:15 PM
We model the problem as a mixed-integer quadratically constrained program that runs in seconds on real-world datasets.
July 14, 2025 at 4:15 PM
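As a rough illustration of the region-level idea (not the formulation from the paper): for a linear scorer with an L2 action budget on actionable features, certification can be split into a small QCP for the best achievable score gain and a MILP for the worst-off individual in the region. The sketch below assumes gurobipy, ignores feature bounds on the acted-upon point, and all argument names (w, b, lb, ub, int_idx, actionable, budget) are illustrative.

```python
# Toy region-level recourse certification for a linear scorer f(x) = w @ x + b
# (predict positive iff f(x) >= 0), with actions limited to an L2 budget on a
# set of actionable features. The paper's single MIQCP is more general.
import gurobipy as gp
from gurobipy import GRB

def certify_recourse(w, b, lb, ub, int_idx, actionable, budget):
    n = len(w)

    # Step 1 (QCP): best score improvement any individual can buy with the budget.
    m1 = gp.Model(); m1.Params.OutputFlag = 0
    a = m1.addVars(n, lb=-GRB.INFINITY)
    m1.addConstr(gp.quicksum(a[j] * a[j] for j in range(n)) <= budget ** 2)
    for j in range(n):
        if j not in actionable:
            m1.addConstr(a[j] == 0)          # immutable features cannot change
    m1.setObjective(gp.quicksum(w[j] * a[j] for j in range(n)), GRB.MAXIMIZE)
    m1.optimize()
    best_gain = m1.ObjVal

    # Step 2 (MILP): worst-off individual in the region (box bounds, integer features).
    m2 = gp.Model(); m2.Params.OutputFlag = 0
    x = m2.addVars(n, lb=lb, ub=ub)
    for j in int_idx:
        x[j].VType = GRB.INTEGER
    m2.setObjective(gp.quicksum(w[j] * x[j] for j in range(n)) + b, GRB.MINIMIZE)
    m2.optimize()
    worst_score = m2.ObjVal

    # Recourse is certified for the whole region iff even the worst-off
    # individual can reach a non-negative score with the best possible action.
    return worst_score + best_gain >= 0
```

If the check fails, the minimizer of the second model doubles as an interpretable witness: a point in the region with a fixed prediction.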
This paradigm lets us spot fixed predictions before deploying a model, audit public models for recourse (even if we don't have any available data!), and produce interpretable summaries of regions with fixed predictions to help with debugging.
July 14, 2025 at 4:14 PM
In this paper, we introduce a new paradigm for algorithmic recourse that aims to certify recourse over an entire region of the feature space!
July 14, 2025 at 4:13 PM
Existing approaches to algorithmic recourse focus on verifying recourse on an individual-by-individual basis, which can cause model developers to miss potential fixed predictions, requires a lot of data, and makes it difficult to debug recourse issues!
July 14, 2025 at 4:12 PM
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think credit applicants who can never get a loan approved, or young patients who can never get an organ transplant - no matter how sick they are!
July 14, 2025 at 4:11 PM
In addition to a bunch of quantitative experiments, we ran a user study with a prototype system to inform design recommendations for future interactive optimization systems. Check out the paper for more details!
March 25, 2025 at 6:59 AM
We built a hybrid LLM and CP system that uses LLMs to translate user requests in chat into operations on an underlying CP optimization model to schedule a new meeting. This gets the best of both worlds - the flexibility of LLMs with the decision-making power of optimization!
March 25, 2025 at 6:58 AM
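The core loop, very roughly: the LLM maps a chat message to a structured operation, and that operation is applied to a CP model that actually picks the slot. The sketch below uses OR-Tools CP-SAT with a made-up operation schema; call_llm and the not_before/not_after operations are illustrative stand-ins, not the system's interface.

```python
# Minimal sketch of the LLM -> structured operation -> CP model loop.
import json
from ortools.sat.python import cp_model

def call_llm(message: str) -> str:
    """Placeholder for an LLM call that returns a JSON operation, e.g.
    '{"op": "add_constraint", "kind": "not_before", "hour": 10}'."""
    raise NotImplementedError

def schedule_meeting(chat_message: str, busy_hours: set[int]) -> int | None:
    model = cp_model.CpModel()
    start = model.NewIntVar(9, 17, "start_hour")          # workday 9:00-17:00

    # Base model: the new meeting must avoid existing busy hours.
    for h in busy_hours:
        model.Add(start != h)

    # The LLM turns the free-text request into an operation on the CP model.
    op = json.loads(call_llm(chat_message))
    if op["op"] == "add_constraint" and op["kind"] == "not_before":
        model.Add(start >= op["hour"])
    elif op["op"] == "add_constraint" and op["kind"] == "not_after":
        model.Add(start <= op["hour"])

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return solver.Value(start)
    return None                                            # no feasible slot
```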
Building optimization models in practice involves a ton of back and forth between optimization and domain experts to understand a decision-making problem. Can we enable domain experts to craft their own optimization models instead? We study this through the lens of scheduling.
March 25, 2025 at 6:57 AM
Surprisingly, we can get high-performing configurations from our framework - outperforming solver defaults on a number of real-world problems, without solving a single MILP!
March 16, 2025 at 5:48 PM
We introduce an LLM-based framework with some algorithmic bells and whistles (ensembling, solver-specific context...) to capitalize on LLM strengths while addressing these challenges.
March 16, 2025 at 5:47 PM
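Very roughly, the loop looks like: build a prompt with solver-specific context, sample several candidate configurations from the LLM, and ensemble them. The sketch below is a simplified illustration, not the exact recipe from the paper; call_llm, the prompt wording, and the majority-vote ensembling are all assumptions.

```python
# Rough sketch of a prompt -> sample -> ensemble loop for LLM-based solver configuration.
import json
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for an LLM API call that returns a JSON dict of parameters."""
    raise NotImplementedError

def configure(problem_description: str, solver_docs: str, n_samples: int = 5) -> dict:
    prompt = (
        "You are configuring a MILP solver.\n"
        f"Relevant parameter documentation:\n{solver_docs}\n"
        f"Problem description:\n{problem_description}\n"
        "Return a JSON object mapping parameter names to values."
    )

    # Sample several candidate configurations (LLM outputs are stochastic).
    samples = []
    for _ in range(n_samples):
        try:
            samples.append(json.loads(call_llm(prompt)))
        except json.JSONDecodeError:
            continue                       # drop malformed responses

    # Ensemble: keep a parameter setting only if a majority of samples agree on it.
    votes = Counter()
    for cfg in samples:
        for name, value in cfg.items():
            votes[(name, json.dumps(value))] += 1
    return {
        name: json.loads(value)
        for (name, value), count in votes.items()
        if count > len(samples) / 2
    }
```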
Unfortunately, LLMs aren't a natural fit for configuration. Parameters are problem-specific, LLMs have stochastic outputs, and frankly - it's a tough problem!
March 16, 2025 at 5:46 PM
Can we get better problem-specific solver configurations without the big computational price tag?

In this paper we show that we can, thanks to Large Language Models! Why LLMs? They can identify useful optimization structure and have a lot of built-in math programming knowledge!
March 16, 2025 at 5:44 PM
MILP solvers ship with a ton of parameters that can have a massive impact on solver performance (over 70% for separator configuration alone!), but are notoriously difficult to set.

Existing approaches for algorithm configuration require solving a ton of MILPs, leading to days of compute.
March 16, 2025 at 5:41 PM
Have you developed and/or implemented an optimization model to solve a real-world use case? We want to hear from you! We're extending our study on workflows in optimization modelling for a couple of final interviews: tinyurl.com/3ejdr7su
February 3, 2025 at 10:47 PM