📈 #74 The engineering side of Operations Research: tactics to amplify your modeling value through better software practices
Ensuring reliability, maintainability, and business impact
How many times have you read that Operations Research has three pillars?
You know them by heart: optimization knowledge, software engineering skills, and business acumen.
I’ve repeated this multiple times.
But I’ve never gone into much detail.
So today I want to focus on engineering and how learning from software developers can make you a faster, more reliable OR professional.
We’ll cover:
🔧 Why OR needs better engineering
📐 3 Software Engineering strategies for OR Engineers
🔭 Beyond code: becoming better engineers
Are you ready? Let’s dive in… 🪂
🔧 Why OR needs better engineering
Many OR projects succeed in the mathematical modeling phase but fail in implementation.
This is usually because they're treated as academic exercises rather than software products.
The mathematical model might be elegant and theoretically sound, but without proper software engineering practices, it remains trapped on a researcher's laptop… Inaccessible to actual business users and unable to handle real-world data variations.
This brings us to the “works on my laptop” syndrome.
When an optimization model only "works on your laptop", several critical issues arise:
Knowledge transfer is difficult when the original developer leaves
Updates require manual intervention from the original developer
No automated testing exists to verify results with new data
Business users can't access the solution independently
Models aren't connected to production data systems
Scaling to larger datasets becomes problematic
Thus, your work becomes a proof of concept instead of a business asset.
And there are hidden costs of poor software practices in OR, like technical debt (messy, unstructured code that’s hard to update), fragile models (one change breaks everything), maintenance overhead (only the original author can debug it), and loss of trust (stakeholders lose confidence when bugs happen).
In short:
A brilliant model with bad code dies quietly.
Operations Research isn't just about solving hard problems; it's about solving the right problems reliably under practical constraints.
The difference between a good OR solution and a great one isn't just mathematical elegance but the ability to consistently deliver value in production environments with changing data, evolving business needs, and multiple stakeholders.
Reliability and usability often matter as much as theoretical performance.
Traditionally, OR Engineers were modelers working in isolation or within research teams, while Software Engineers handled deployment, systems, and tools.
The gap between modeling and software caused friction: models were hard to integrate, and engineers struggled to understand solver logic.
But now OR Engineers are expected to bridge both worlds and speak both math and code.
How can we, as OR Engineers, improve our value delivery?
📐 3 Software Engineering strategies for OR Engineers
I’m sure there are more strategies, but I wanted to highlight three.
Embrace them to be ahead of 90% of OR Engineers:
💡 Think in code design following SOLID principles
The SOLID principles from software engineering can help OR engineers design cleaner, more flexible code.
Especially as models grow more complex and business needs evolve.
Here’s how each one applies:
S - Single responsibility. Each module should do one thing. Separate your data loading, model building, constraints, objectives, and solution analysis. This makes your models easier to debug and update.
O - Open/closed. Your model should be easy to extend (e.g. new constraints) without changing the core logic. Use modular components that can be added or removed like building blocks.
L - Liskov substitution. If you swap a Gurobi solver with an OR-Tools one, nothing should break. Design around interfaces for easy substitution.
I - Interface segregation. Avoid bloated “God classes.” Split responsibilities: one interface for solving, another for solution analysis, another for tuning. Each part stays lean and focused.
D - Dependency inversion. Your business logic shouldn’t rely directly on a solver like Gurobi. Depend on abstract interfaces and inject specific implementations: it keeps your code flexible and testable.
Applying these principles can create more maintainable, extensible systems that evolve smoothly with changing business requirements.
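To make this concrete, here’s a minimal sketch in Python. All the names (KnapsackInstance, SolverBackend, GreedyBackend, select_projects) are hypothetical, and the greedy backend is just a stand-in: the point is that the business logic only sees an abstract solve() interface, so you can swap in Gurobi, OR-Tools, or a test double without touching it.

```python
# A sketch of dependency inversion and substitution for an OR code base.
# All names here are hypothetical; a Gurobi- or OR-Tools-backed class would
# simply implement the same solve() interface.
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class KnapsackInstance:
    values: list[float]
    weights: list[float]
    capacity: float


class SolverBackend(Protocol):
    """Anything that can solve an instance: a MIP solver, a heuristic, a mock for tests."""

    def solve(self, instance: KnapsackInstance) -> list[int]:
        """Return the indices of the selected items."""
        ...


class GreedyBackend:
    """Toy backend: highest value/weight ratio first. Stands in for a real solver."""

    def solve(self, instance: KnapsackInstance) -> list[int]:
        order = sorted(
            range(len(instance.values)),
            key=lambda i: instance.values[i] / instance.weights[i],
            reverse=True,
        )
        chosen, remaining = [], instance.capacity
        for i in order:
            if instance.weights[i] <= remaining:
                chosen.append(i)
                remaining -= instance.weights[i]
        return chosen


def select_projects(instance: KnapsackInstance, backend: SolverBackend) -> list[int]:
    """Business logic depends on the abstract interface, never on a specific solver."""
    return backend.solve(instance)


if __name__ == "__main__":
    data = KnapsackInstance(values=[10, 7, 4], weights=[3, 2, 1], capacity=4)
    print(select_projects(data, GreedyBackend()))  # swap the backend without touching this line
```

Swapping in a real MIP backend then only means writing another class with the same solve() method; nothing upstream changes.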
🧪 Test-Driven Modeling
You’ll find many resources on Test-Driven Development (TDD) out there, but here’s what it means for OR: you write the tests before the code, so the tests define what a correct model looks like before you build it.
Developing this way has real benefits: you catch errors early, the tests double as documentation of the model’s intended behavior, and you’re forced to think clearly about what a correct solution looks like.
With that in mind, how can you take advantage of that strategy?
Define the success criteria before coding the model. Write small examples with known solutions. Did you just develop a new constraint? Test it with a minimal example. Not only that, create corner cases so that you can easily catch strange behaviors.
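For instance, a test-first sketch with pytest could look like this, reusing the toy knapsack idea from the sketch above. solve_knapsack is a self-contained, hypothetical stand-in; in a real project you would test your own build and solve functions.

```python
# A minimal sketch of test-first modeling with pytest.
# solve_knapsack is a self-contained stand-in (exact solve by enumeration,
# fine for tiny test instances); in practice you'd import your own model code.
from itertools import combinations

import pytest


def solve_knapsack(values, weights, capacity):
    """Stand-in for the real model: brute-force the optimum on tiny instances."""
    best_value, best_items = 0, []
    for r in range(len(values) + 1):
        for subset in combinations(range(len(values)), r):
            weight = sum(weights[i] for i in subset)
            value = sum(values[i] for i in subset)
            if weight <= capacity and value > best_value:
                best_value, best_items = value, sorted(subset)
    return best_value, best_items


def test_known_optimum():
    # A tiny instance whose optimum you can check by hand: items 0 and 2 fit exactly.
    value, items = solve_knapsack(values=[10, 7, 4], weights=[3, 2, 1], capacity=4)
    assert value == pytest.approx(14)
    assert items == [0, 2]


def test_nothing_fits_corner_case():
    # Corner case: capacity too small for any item, so the model must select nothing.
    value, items = solve_knapsack(values=[5, 5], weights=[10, 10], capacity=1)
    assert value == 0
    assert items == []
```

Run the tests before every change: if a new constraint breaks one of these expectations, you’ll know immediately instead of discovering it in production.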
If you want more examples of writing small unit tests for data transformations, constraints, or cost functions, take a look at the Timefold docs: they show how to test constraints for Timefold models, but the ideas apply to your models too.
🤖 Automations with CI/CD pipelines
Your model shouldn’t be a black box you run manually once a week. It shouldn’t be run via notebooks or brittle scripts that are hard to use.
Really.
With the myriad of software tools available, it’s easier than ever to:
Deploy your model on any kind of machine.
Pass automatic tests before deploying.
Deploy automatically in the cloud.
Tools like Docker and GitHub Actions can handle that for you, so you can focus on delivering value.
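A good first step is giving your model a single entrypoint that a container or a CI job can call. Here’s a minimal sketch (the file name, argument names, and output format are hypothetical); the same command works locally, inside a Docker image, or in a GitHub Actions step.

```python
# run_model.py -- a minimal, hypothetical command-line entrypoint.
# The same command works locally, inside a Docker image, or in a CI job:
#   python run_model.py --input instance.json --output solution.json
import argparse
import json
from pathlib import Path


def run(input_path: Path, output_path: Path) -> None:
    data = json.loads(input_path.read_text())
    # Placeholder for your real pipeline: load data -> build model -> solve -> analyze.
    solution = {"status": "solved", "num_items": len(data.get("items", []))}
    output_path.write_text(json.dumps(solution, indent=2))


def main() -> None:
    parser = argparse.ArgumentParser(description="Run the optimization model end to end.")
    parser.add_argument("--input", type=Path, required=True, help="Path to the instance file.")
    parser.add_argument("--output", type=Path, default=Path("solution.json"),
                        help="Where to write the solution.")
    args = parser.parse_args()
    run(args.input, args.output)


if __name__ == "__main__":
    main()
```

Once the model runs from a single command like this, wiring it into Docker and a GitHub Actions workflow (run the tests, build the image, deploy) is mostly configuration.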
If you want a deeper dive on CI/CD for Operations Research projects, take a look at this post by Nextmv.
🔭 Beyond code: becoming better engineers
Adopting software engineering strategies isn't about turning you into a software developer.
It's about making you a better engineer who can deliver robust solutions that create lasting business value.
The goal is to enhance your work by giving you tools to make it more reliable, maintainable, and impactful.
Not every OR project needs a full CI/CD pipeline or strict adherence to the SOLID principles, but knowing when and how to apply these tools can be the difference between a successful project and a forgotten prototype.
If you want to reduce the barrier and start applying the previous strategies, I recommend you start small.
→ Write one test for a function you consider crucial.
→ Break your code into something more manageable, separating data loading, model building, solver calls, and output analysis into different parts.
→ Validate your model across edge cases, even manually, so that you’re more confident in delivering high-quality code before it’s deployed.
After that, you can start thinking about bigger things: creating dozens of automated tests (not only unit tests but also functional and integration tests), deploying easily to a production environment, and modularizing everything so your code is easier to read and maintain.
Adopting these principles will significantly improve your ability to deploy models quickly, reduce bugs, and make updates with ease.
By embracing both the mathematical rigor of Operations Research and the engineering discipline of software development, you’ll unlock the full potential of optimization to solve the most complex business challenges.
Not just once, but reliably, repeatedly, and at scale.
🏁 Conclusions
I’ve said it multiple times, but I’ll repeat it just in case…
Operations Research is about getting models and algorithms into the hands of users.
Today we’ve seen the importance of embracing good software engineering habits so that you can accelerate your work, reduce errors when deploying into production, and increase your impact.
The pattern is clear: engineering maturity amplifies modeling value.
Which of these habits are already part of your workflow?
Let’s keep optimizing,
Borja.