#92 Operations Research 2.0: an open letter
Why usability beats elegance, how DecisionOps closes the gap, and a pragmatic curriculum for Master's programs.
Around this time last year, I wrote a six-part series of articles.
I don't know if it's the time of year, but I wanted to update those articles.
I think most of the blame goes to a research paper by Laura Albert et al. that explores the same topic.
You guessed it: I'm talking about the systemic issues that keep OR from mainstream adoption, and how to improve the situation for our field.
We reached similar conclusions, which is a good sign, but also a reminder that knowing what to change isn't the same as changing it.
So today in Feasible we'll cover:
🗣️ Alignment (and where to go further)
💪 The tools to make it happen
💌 An open letter to Master's programs: introducing the Product Track
Are you ready? Let's dive in… 💪
🗣️ Alignment (and where to go further)
If you haven't read my OR Reimagined series or the research paper, I strongly suggest you do.
But if you don't have time now, here are some of the points they cover:
🚪 The accessibility and transparency gaps.
OR tools require specialized knowledge and have a steep learning curve; you can see that reflected in the PhDs or advanced Master's degrees that job postings demand.
Compare that to Machine Learning, where you import a library and, 10 lines of code later, you're classifying cats and dogs.
On transparency, it's ironic: OR is mathematically transparent but practically opaque.
There are some efforts in explainable optimization (XOpt), like expopt, or the way some solvers like Timefold structure their output to explain it better.
🙊 The lack of marketing/storytelling. Most of OR's successes are invisible, either for confidentiality reasons (saving millions with route optimization is usually proprietary) or because the field itself is not widely known.
And on that latter point, the name may play a crucial role.
A year ago I argued we didn't need a change there. Now I think we don't need it urgently, as I don't believe it's the primary reason OR is little known, but it could still benefit the field.
Anyway, it seems that OR's wins are infrastructure: invisible when working, catastrophic when failing. It's like plumbing: nobody celebrates good plumbing, but everyone complains when it breaks.
⚖️ The theory-practice divide. This is also a crucial point: what academia studies is often irrelevant to industry.
So there's a big gap between the two. I see four places where it shows up:
Problem selection. Academia usually studies problems because they're theoretically interesting, but industry asks for things like optimizing routes for 50 trucks with messy data, while drivers ignore the routes and customers constantly change their minds.
Solution approach. While there might be value in achieving optimality for some problems, a solution that takes 4 hours is most probably less useful than a suboptimal one delivered in 5 minutes.
Model complexity. In academic settings, you're responsible for your own code, but in industry you'll work within a team that needs to understand it; or, even worse, you'll leave the company someday and someone else will have to maintain it. So usually, the simpler, the better.
Code quality. The industry needs to run those beautiful algorithms and models in production environments, not in a lab. So they need things like automated tests, data preparation, error handling, logging and monitoring, documentation… (the sketch after this list gives a flavor of it).
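To make that last point concrete, here is a minimal Python sketch of those guardrails. Everything in it is a hypothetical stand-in: build_model, solve, and their rules are invented for illustration, not any specific library's API.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("routing_engine")

def build_model(raw_data: dict) -> dict:
    # Hypothetical data preparation: validate inputs before the solver runs.
    if not raw_data.get("tasks"):
        raise ValueError("no tasks provided")
    return {"tasks": raw_data["tasks"]}

def solve(model: dict, time_limit_seconds: int = 300) -> dict:
    # Stand-in for a real solver call (HiGHS, CP-SAT, Timefold, ...).
    return {"objective": len(model["tasks"]), "assignments": []}

def solve_with_guardrails(raw_data: dict) -> dict | None:
    try:
        model = build_model(raw_data)
    except ValueError as err:
        logger.error("Rejected input data: %s", err)  # fail loudly, not silently
        return None
    solution = solve(model)
    logger.info("Objective value: %s", solution["objective"])
    return solution

None of this is mathematically interesting, and that's exactly the point: it's the part the lab never forces you to write.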
There's one point in the paper that, at least to me, needs a bit more clarification.
As I read it, it seemed that open-source ecosystems were the main reason ML succeeded and the main bottleneck in OR, and that we don't have open datasets.
On the contrary, the field is full of really good open-source solvers like HiGHS, Timefold, or CP-SAT (from Google OR-Tools), and there are open datasets for the most common problems like the VRP, QAP, or CSPs. You can see a list of 40 solvers and 9 datasets (or pages with datasets of all kinds) in my Notion database of OR resources.
The bottleneck isn't code availability anymore; it's usable pipelines and shared, decision-centric benchmarks. We need something like a Kaggle for OR.
And all of those points are a diagnosis, but they're not a cure.
In summary, what I think now is that academia doesn't prepare engineers well enough to i) build models easily (nor easy models, for that matter), or ii) productize those models so that other systems (backend and/or frontend) can consume them.
So if we agree on the why, the next step is the how: what tools make this change possible?
💪 The tools to make it happen
If you remember Part II of OR Reimagined, I showed the layers that are missing when solving optimization problems, and I identified two in particular.
🗣️ Problem definition.
There's a usability layer here, where math-heavy frameworks dominate the OR world.
One of the main issues I saw when I started solving optimization problems is that you needed either strong math foundations or strong algorithmic foundations. If we want to appeal to those outside our field the way ML does, we need the "import a library and, 10 lines of code later, classify cats and dogs" kind of experience.
But we don't have that.
The most engineering-friendly modeling approach I've used so far is Timefold, and I'd be very happy to see more solvers like it. Why? Because it lets any software engineer model optimization problems without being a mathematician or an algorithms expert.
Instead of writing…
minimize ∑_{i,j} c_ij · x_ij
subject to ∑_j x_ij = 1 for all i
You can simply write…
(...)
private Constraint assignEachTask(ConstraintFactory factory) {
    // Penalize every task that is left without an assignment.
    return factory.forEach(Task.class)
            .ifNotExists(Assignment.class,
                    Joiners.equal(Task::getId, Assignment::getTaskId))
            .penalize(HardSoftScore.ONE_HARD)
            .asConstraint("Assign each task once");
}
(...)
The constraint is more readable.
The model itself is more testable.
And the developer doesn't need a PhD.
♻️ Deployment: from prototype to DecisionOps.
DecisionOps is to OR what MLOps is to ML.
And we need the same building blocks here:
🟢 Versioning and experimentation. Each model run is tracked so we can compare solutions across different parameterizations (see the sketch after this list).
🧪 Testing. So that we can confidently add new constraints or change existing ones without fear of breaking everything.
👀 Monitoring. What we need in order to watch KPIs over time and catch performance degradation.
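Here is a minimal sketch of the versioning piece, assuming a plain JSONL log and hypothetical track_run/best_run helpers; dedicated DecisionOps platforms do this far more robustly, but the core idea fits in twenty lines.

import json
import pathlib
import time

RUNS_FILE = pathlib.Path("runs.jsonl")

def track_run(params: dict, objective: float, runtime_s: float) -> None:
    # One JSON line per solve: enough to compare parameterizations later.
    record = {"timestamp": time.time(), "params": params,
              "objective": objective, "runtime_s": runtime_s}
    with RUNS_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def best_run() -> dict:
    # Replay the log and return the run with the lowest objective.
    runs = [json.loads(line) for line in RUNS_FILE.read_text().splitlines()]
    return min(runs, key=lambda r: r["objective"])

# Example: two runs with different time limits, then pick the winner.
track_run({"time_limit_s": 60}, objective=1250.0, runtime_s=58.2)
track_run({"time_limit_s": 300}, objective=1190.0, runtime_s=291.7)
print(best_run()["params"])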
If you don't want to write an entire DecisionOps pipeline yourself, there are proprietary tools like Nextmv and open-source ones like Cornflow.
For both (problem definition and deployment), the tools exist today. What's missing is connecting them into a reusable pipeline. The technology is ready; the mindset isn't.
How can we change that?
💌 An open letter to Master's programs: introducing the Product Track
The other day I had ChatGPT do some research on Master's programs worldwide. The output was pretty long, but it can be summarized as:
Graduates are highly employable for modeling and analytics, but a truly product-aware OR engineer usually layers on extra training in software engineering and business communication & product sense.
To be fair, that was my main hypothesis. In fact, you can see it reflected in this very newsletter, as I usually link my posts to those topics. So…
If I could write one letter to any OR department in the world, it would say something like the following.
"Mathematical rigor and algorithmic excellence are our identity, our assets.
Those are the things companies in every industry pay a lot of money for.
Cost savings, margin improvements, or a reduced carbon footprint can all be achieved thanks to our specific knowledge.
We just need one more step towards offering the industry what it deserves: engineers able to build, ship, and adopt that knowledge in any organization:
🏗️ Build
The goal at this phase should be to translate optimization models into usable software. A mathematical model or an algorithm is not the same thing as an optimization engine that other parts of the software can connect to.
We can leverage tools like FastAPI or Flask to develop APIs so the models become callable (sketched below), and if we hold to reproducibility the way we do with algorithms, we should also teach containerization tools like Docker.
This is what moves algorithms from "it works on my machine" to "it works everywhere".
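To make the Build phase concrete, here is a minimal FastAPI sketch; the /solve endpoint, the schemas, and the round-robin solve stand-in are all illustrative, not a real engine's API.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Routing engine")

class SolveRequest(BaseModel):
    tasks: list[str]
    vehicles: int

class SolveResponse(BaseModel):
    objective: float
    assignments: dict[str, int]

def solve(tasks: list[str], vehicles: int) -> SolveResponse:
    # Stand-in for the real engine: round-robin instead of optimization.
    return SolveResponse(
        objective=float(len(tasks)),
        assignments={task: i % vehicles for i, task in enumerate(tasks)},
    )

@app.post("/solve", response_model=SolveResponse)
def solve_endpoint(request: SolveRequest) -> SolveResponse:
    if request.vehicles < 1:
        raise HTTPException(status_code=422, detail="need at least one vehicle")
    return solve(request.tasks, request.vehicles)

Assuming the file is saved as main.py, running uvicorn main:app serves it, and any backend can now call POST /solve over HTTP.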
🚢 Ship
The goal here should be to integrate the best practices from software engineering so that we can safely move the optimization engine to production environments.
We can include here things like automated testing with tools like pytest or JUnit (see the sketch below), plus some CI/CD ideas to productize engines more easily with GitHub Actions and the like.
This creates a working pipeline that runs tests automatically before trustworthy code moves to production.
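For instance, a couple of pytest tests over a hypothetical is_feasible helper that mirrors a model's hard constraints; the function and its rules are invented for illustration.

# test_constraints.py — run with: pytest
def is_feasible(assignments: dict[str, int], vehicles: int) -> bool:
    # Hypothetical hard constraint: every task sits on an existing vehicle.
    return all(0 <= v < vehicles for v in assignments.values())

def test_valid_assignment_is_feasible():
    assert is_feasible({"t1": 0, "t2": 1}, vehicles=2)

def test_unknown_vehicle_is_infeasible():
    assert not is_feasible({"t1": 5}, vehicles=2)

Once constraints have tests, changing them stops being scary: the suite tells you immediately when a refactor breaks the model's logic.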
🪔 Adopt
The goal here should be to connect optimization outcomes to business language, bringing all the stakeholders into the conversation.
Clear communication through business KPIs and simple dashboards is key to explaining results to non-technical users.
Tools like Streamlit or Plotly Dash may help here; the sketch below shows the idea.
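As an illustration, a minimal Streamlit sketch with a hypothetical solve stand-in for the engine; note that it leads with a business KPI rather than the objective function.

import streamlit as st

def solve(n_tasks: int) -> dict:
    # Stand-in for a call to the optimization engine (e.g., via its API).
    return {"cost_savings_eur": n_tasks * 120.0,
            "assignments": [{"task": f"task_{i}", "vehicle": i % 3}
                            for i in range(n_tasks)]}

st.title("Route optimization: daily plan")
n_tasks = st.slider("Number of tasks", min_value=1, max_value=50, value=10)
solution = solve(n_tasks)

# Lead with the business KPI, not the math.
st.metric("Estimated cost savings", f"{solution['cost_savings_eur']:.0f} EUR")
st.subheader("Assignments")
st.table(solution["assignments"])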
In this Master's program, the math and algorithms remain. The difference is that students graduate knowing how to put their models into motion."
⚡ If you can't wait for your university to build this track, build it yourself:
Take a FastAPI course.
Containerize one of your models with Docker.
Develop tests with pytest.
Set up CI/CD for automated testing with GitHub Actions.
Build a simple UI in Streamlit that connects to the optimization engine and shows the results.
That way, you'll have all the tools in your arsenal to be the OR Engineer the industry is looking for.
💭 Conclusions
One year ago, I described some of the most pressing issues in our field: commoditization, mindset, marketing, and practicality.
A couple of months ago, Laura Albert et al. validated it academically, in a way, by identifying those same issues and comparing Operations Research to Machine Learning, a comparison that can be incredibly useful as long as we approach it with intention. They also suggested 10 actions to improve the situation.
Now I'm updating my thoughts and providing a path forward built on usability and education reform.
The tools exist. The education path is clear. Let's hope everything aligns in the next few years.
But one of the tools we've seen today, DecisionOps (treating OR like software, with versioning, testing, and monitoring), deserves its own deep dive. I'll explore that soon.
And then there's Agentic AI. I recently read a job posting from one of the biggest retailers worldwide asking for expertise in both OR and Agentic AI. Think about it carefully: it's a signal of what's coming. Agents are here to make automated decisions and assist users in a new way. And those decisions will need OR at their core.
If we succeed, OR will become invisible for the right reason: not because it's forgotten, but because it's everywhere, powering the agents that reason, plan, and decide.
Let's keep optimizing,
Borja.