#94 Vibe-coding Operations Research: what gets better, what gets harder
AI coding agents make development faster, but debugging trickier. Experimentation cheaper, but quality unpredictable. Here's my honest view.
I spent one hour building an OR agent last week. Five years ago, that would've taken me a day.
What happened?
Last week I posted “I vibe-coded an OR agent”.
That sentence hides two big shifts.
The first one is that I vibe-coded something, that is, I used AI coding tools to write the code for me. That has implications for you and me as developers of models, algorithms, and systems that work.
The second one is that I built an OR agent, an AI agent that actually does Operations Research. That has implications for the field itself, and for the way we think about optimization in a world of agents.
Today, let's focus on the first one.
AI coding companions like Claude Code, Codex CLI, or Gemini CLI make development dramatically easier by writing the code themselves, not you.
But this has consequences.
Do you need to be an expert to effectively vibe-code OR, or does vibe-coding make expertise less necessary?
When AI writes 80% of your code, what are you actually contributing? How do you stay sharp as an OR engineer when you're typing less? Are we becoming better problem solvers, or just better at delegating to AI?
Let's unpack that in Feasible:
Skills that matter more
Automated, not obsolete
The open questions
Are you ready? Let's dive in… 💪
Skills that matter more
When people started using GenAI to write code, there were catastrophic predictions of developers being replaced by AI.
It's true that an AI model has accumulated more knowledge than any single human, and that lowers the barrier to building products.
But that doesn't mean you'll instantly have more competition.
It means you'll need to focus on other areas.
Which ones, you may ask? Let's look at some of them, though this is not an exhaustive list.
🧪 Experimentation
Experimentation becomes cheap with GenAI. You can test infinite approaches to solve your problems in less time.
If you take my previous article as an example, I myself tested an idea (agents + solvers) in less than one hour.
I don't know exactly how long that automation would've taken me to write by hand, but I can guarantee it would've been more than one hour.
That means that before GenAI, I'd only test ideas I was really confident about.
But now I can explore more of them.
What if instead of using Python + Google OR-Tools I want to test Java + Timefold? What if instead I wanted to code my own heuristics and local search methods? What if now I want to get the board with computer vision?
We can iterate more on formulations and ideas, and validate (or invalidate) assumptions faster.
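As a sketch of what cheap experimentation looks like in practice, here is a minimal harness that runs two interchangeable approaches on the same instance and times them. Everything here is made up for illustration: the toy knapsack instance, the two approaches, and the function names. The point is that trying a third approach is one more line.

```python
import time

# Hypothetical toy instance: (weight, value) items and a weight budget.
ITEMS = [(5, 10), (4, 7), (3, 6)]
BUDGET = 7

def brute_force(items, budget):
    """Try every subset; exact but exponential."""
    best = 0
    for mask in range(1 << len(items)):
        w = v = 0
        for i, (wi, vi) in enumerate(items):
            if mask >> i & 1:
                w += wi
                v += vi
        if w <= budget:
            best = max(best, v)
    return best

def greedy(items, budget):
    """Take items by value/weight ratio; fast but only approximate."""
    total_v = total_w = 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if total_w + w <= budget:
            total_w += w
            total_v += v
    return total_v

def benchmark(approaches, items, budget):
    """Run each approach once and report (name, answer, seconds)."""
    results = []
    for name, fn in approaches:
        start = time.perf_counter()
        answer = fn(items, budget)
        results.append((name, answer, time.perf_counter() - start))
    return results

for name, answer, secs in benchmark(
    [("brute force", brute_force), ("greedy", greedy)], ITEMS, BUDGET
):
    print(f"{name}: value={answer} in {secs:.6f}s")
```

On this instance the greedy heuristic returns 10 while brute force finds 13, which is exactly the kind of assumption ("greedy is good enough here") you can now invalidate in minutes instead of days.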
🧠 Design thinking
As a consequence of the previous one, design thinking will become more relevant, as you'll need to think in terms of business solutions, not just solutions to optimization problems.
And here I see two kinds of design thinking.
One goes to architecture design, where you'll design systems with clear interfaces: this module reads data, this one solves, this other one presents results. The AI can implement each module, but you'll be the one designing them.
The other goes to product thinking, where you'll spend more time on which problem you're actually solving, who's going to use it, and what success looks like, and less time on how to code the thing itself.
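A minimal sketch of that architecture split, assuming hypothetical `DataReader`/`Solver`/`Presenter` interfaces (the names and the toy implementations are mine, not from any particular library):

```python
from typing import Protocol

# Hypothetical interfaces for the three modules: read data, solve, present.
class DataReader(Protocol):
    def read(self, source: str) -> dict: ...

class Solver(Protocol):
    def solve(self, instance: dict) -> dict: ...

class Presenter(Protocol):
    def present(self, solution: dict) -> str: ...

# Minimal stand-ins showing that each piece is independently swappable.
class CsvReader:
    def read(self, source: str) -> dict:
        # A real reader would parse a file; this stub just returns fixed data.
        return {"demand": [3, 5, 2], "source": source}

class GreedySolver:
    def solve(self, instance: dict) -> dict:
        # Toy "solve": serve demands largest first.
        return {"plan": sorted(instance["demand"], reverse=True)}

class TextPresenter:
    def present(self, solution: dict) -> str:
        return "Serve in order: " + ", ".join(map(str, solution["plan"]))

def run(reader: DataReader, solver: Solver, presenter: Presenter, source: str) -> str:
    """The pipeline depends only on the interfaces, not the implementations."""
    return presenter.present(solver.solve(reader.read(source)))

print(run(CsvReader(), GreedySolver(), TextPresenter(), "orders.csv"))
```

Swapping `GreedySolver` for an AI-written CP model changes nothing in `run`: the interfaces are yours to design, the bodies are the AI's to fill.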
Problem formulation
If you want to let Claude Code build the Queens solver, you wouldn't say “write a constraint programming model”.
You'd rather say “the Queens can't touch each other, can't be in the same color region, can't be in the same row or column”.
The clearer your business rules, the better the code.
The AI can translate formulation → syntax. But it can't ask your stakeholders the right questions to get that formulation.
Thatâs still on you.
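To make that concrete, those plain-language rules map almost one-to-one to checks in code. This is a toy brute-force sketch in plain Python rather than a real CP model, and the 4×4 board and its color regions are made up for illustration:

```python
from itertools import permutations

# Hypothetical 4x4 color regions (letters are region labels, invented for this sketch).
REGIONS = [
    "AABB",
    "ABBB",
    "CCDB",
    "CDDD",
]

def solve_queens(regions):
    """One queen per row, column, and region; no two queens may touch."""
    n = len(regions)
    for cols in permutations(range(n)):  # one queen per row and per column
        # "Can't touch each other": queens in consecutive rows must not be
        # diagonally adjacent (same-row/column touches are already impossible).
        if any(abs(cols[r] - cols[r + 1]) < 2 for r in range(n - 1)):
            continue
        # "Can't be in the same color region": all regions must be distinct.
        if len({regions[r][c] for r, c in enumerate(cols)}) == n:
            return [(r, c) for r, c in enumerate(cols)]
    return None

print(solve_queens(REGIONS))  # -> [(0, 1), (1, 3), (2, 0), (3, 2)]
```

A real CP model (in OR-Tools or Timefold) would encode the same three rules as constraints; the mapping from business rule to constraint is the part you own.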
🐛 Debugging
Here's the uncomfortable truth: debugging vibe-coded OR models is harder because you didn't write them. You don't have a mental model of how the code works unless you personally review every single change the AI makes, which makes development slow (slower than writing the code yourself).
But debugging also gets better in some ways: I can ask Claude “why is this constraint infeasible?” and it can explain the logic back to me, or create a plan to understand why.
It's like pair debugging.
You'll talk with the AI to create plans that find the bug, and over time you'll build your own sense of what to do and a better intuition for working with it.
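One plan the AI might propose for an infeasible model is the classic deletion filter: remove constraints one at a time and re-check feasibility, keeping only those whose removal makes the model feasible again. Here's a toy sketch on a hypothetical one-variable model; the constraint names and bounds are invented:

```python
# Toy "model": each constraint is a (low, high) interval on one variable x.
# The model is feasible iff all the intervals share a common point.
CONSTRAINTS = {
    "demand >= 10": (10, float("inf")),
    "capacity <= 8": (float("-inf"), 8),
    "x >= 0": (0, float("inf")),
}

def feasible(constraints):
    lo = max(low for low, _ in constraints.values())
    hi = min(high for _, high in constraints.values())
    return lo <= hi

def deletion_filter(constraints):
    """Keep a constraint in the conflict set only if removing it
    makes the remaining model feasible (it's needed for infeasibility)."""
    conflict = dict(constraints)
    for name in list(conflict):
        trial = {k: v for k, v in conflict.items() if k != name}
        if not feasible(trial):
            conflict = trial  # still infeasible without it, so drop it
    return sorted(conflict)

print(deletion_filter(CONSTRAINTS))  # -> ['capacity <= 8', 'demand >= 10']
```

The output isolates the two clashing rules and clears `x >= 0` of blame; real solvers expose similar machinery (e.g. irreducible infeasible subsets), but the plan itself is what you and the AI iterate on.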
♻️ DecisionOps
The gap between a prototype and production shrinks as vibe-coded models can go from idea to working demo in hours.
But does that mean they're production-ready?
I don't think so. Most of the time, they aren't. You'll need *more* discipline not to ship vibe-coded experiments into production.
And that discipline has a name: DecisionOps. Vibe-coding accelerates creation, DecisionOps safeguards adoption.
Automated, not obsolete
Just as there are skills that matter more, there are skills that matter less.
It's not that they're going to disappear; it's that they're being automated.