📈 #72 ICLR 2025 meets Operations Research: 10 papers to take a look at
From LLM-powered routing to generalist neural solvers
Three days left for the International Conference on Learning Representations (ICLR) 2025.
Three days for one of the top three conferences in AI and ML.
Three days for Operations Research content too.
I know I told you the other day that I see optimization everywhere. And now I see it at AI and ML conferences too.
Today in Feasible you’ll read about 10 ICLR papers worth a look. Most are posters; others are interesting conference-paper submissions. For each one, I’ll give you:
📌 1-liner
💡 Why that paper matters
🛠️ Proposal
And, of course, a link to the submitted paper.
Ready? Let’s dive in… 🪂
🤖 1. Learning-Guided Rolling Horizon Optimization for Long-Horizon Flexible Job-Shop Scheduling
📌 1-liner:
AI-powered scheduling for long-term, flexible manufacturing operations.
💡 Why it matters:
Integrating learning into Rolling Horizon Optimization (RHO) allows for faster and better solutions to long-horizon scheduling problems by reducing redundant computation across overlapping decision windows.
Flexible Job-Shop Scheduling is a core problem in manufacturing and operations. Improving solve time and quality here has direct ROI in production efficiency and resource usage.
🛠️ Proposal:
The authors propose a neural network that identifies the parts of the schedule that don’t need to be re-optimized in every window, reducing compute while improving solution quality (a rough sketch of the pattern follows below).
🔗 Paper
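To make the pattern concrete, here’s a minimal Python sketch under my own assumptions, not the authors’ implementation: a hypothetical `freeze_score` model decides which already-scheduled operations in the overlapping part of the window stay fixed, and a stubbed `solve_window` stands in for the real MILP/CP subproblem solve.

```python
# Sketch only: learning-guided rolling horizon optimization (RHO).
# `freeze_score` and `solve_window` are hypothetical placeholders, not the paper's models.
from dataclasses import dataclass

@dataclass
class Operation:
    job: int
    machine: int | None = None   # machine assignment, filled in by the solver
    start: float | None = None   # start time, filled in by the solver

def freeze_score(op: Operation) -> float:
    """Hypothetical learned model: how likely this operation's current assignment
    is to survive re-optimization. A constant stub here."""
    return 0.9 if op.start is not None else 0.0

def solve_window(free: list[Operation], frozen: list[Operation]) -> None:
    """Stub for the per-window subproblem solve (in practice a MILP/CP model that
    respects the frozen assignments). Here: a toy sequential schedule."""
    t = max((op.start for op in frozen), default=0.0)
    for op in free:
        op.machine = op.job % 2
        t += 1.0
        op.start = t

def rolling_horizon(ops: list[Operation], window: int, step: int, tau: float = 0.8) -> list[Operation]:
    for w_start in range(0, len(ops), step):
        w_ops = ops[w_start:w_start + window]
        # Operations in the overlap already carry an assignment from a previous window.
        frozen = [op for op in w_ops if op.start is not None and freeze_score(op) >= tau]
        free = [op for op in w_ops if op not in frozen]
        # Re-optimize only the free operations: a smaller subproblem than the full window.
        solve_window(free, frozen)
    return ops

schedule = rolling_horizon([Operation(job=j) for j in range(12)], window=6, step=4)
print([(op.job, op.start) for op in schedule])
```

The key knob is the freeze threshold `tau`: raising it re-optimizes more of the overlap (better quality, more compute), lowering it freezes more of the previous solution (faster, riskier).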
🚛 2. Boosting Neural Combinatorial Optimization for Large-Scale Vehicle Routing Problems
📌 1-liner:
Smart, scalable AI for solving massive delivery routing problems.
💡 Why it matters:
VRPs are central to logistics, delivery, and ride-sharing platforms. Better large-scale performance directly translates to cost savings and customer satisfaction.
Carefully designed architectures and decoding strategies let neural solvers scale to much larger vehicle routing instances.
🛠️ Proposal:
The paper improves decoding strategies and uses grouped node embeddings so neural solvers can scale to thousands of nodes (see the decoding sketch below).
🔗 Paper
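For intuition, here’s a minimal sketch of the masked autoregressive decoding loop that neural routing solvers build on. The random embeddings stand in for a trained encoder and the dot-product scoring is a placeholder; the paper’s grouped node embeddings and refined decoding strategies aren’t reproduced here.

```python
# Sketch only: greedy autoregressive decoding with a visited-node mask.
# Random embeddings stand in for a trained encoder; not the paper's architecture.
import numpy as np

rng = np.random.default_rng(seed=0)
n_nodes = 1000
embeddings = rng.normal(size=(n_nodes, 64))   # placeholder for a trained encoder's output

def decode_route(emb: np.ndarray) -> list[int]:
    """Score every node against the current context, forbid revisits via the mask,
    and greedily pick the best remaining node until all are visited."""
    visited = np.zeros(len(emb), dtype=bool)
    route = [0]                   # node 0 plays the depot
    visited[0] = True
    context = emb[0]
    while not visited.all():
        scores = emb @ context    # toy compatibility score (dot product)
        scores[visited] = -np.inf # mask already-visited nodes
        nxt = int(np.argmax(scores))
        route.append(nxt)
        visited[nxt] = True
        context = emb[nxt]        # condition the next step on the last chosen node
    return route + [0]            # close the tour back at the depot

tour = decode_route(embeddings)
print(len(tour), tour[:5])
```

In real neural VRP solvers the scoring step is typically a learned attention mechanism, and decoding is batched, sampled, or beam-searched rather than purely greedy; the masking trick is what keeps solutions feasible at any scale.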
🗣️ 3. Decision Information Meets LLMs: The Future of Explainable Operations Research
📌 1-liner:
Talking optimizers: using LLMs to explain complex OR decisions clearly.