Over time, optimization solvers have become dramatically faster and more powerful—but paradoxically, model run-times haven’t always decreased. Why? Because as computational power increases, so do our ambitions. We add more products, more time periods, more constraints, and more real-world nuance. Models that once included a dozen decision variables now include thousands or millions. So even as solvers improve, the run-times often stay roughly the same—within the boundaries of what people are willing to tolerate, whether that’s a few seconds, a few minutes, or overnight.
This is a common pattern in technology. Think about how graphics have evolved: from 640×480 resolution, to 800×600, then 720p, 1080p, 2K, 4K, and now 8K. As hardware got faster, we didn’t use the power to speed up what we already had—we raised the bar. We demanded richer visuals and more immersive experiences. Optimization is no different. As solvers improve, we don’t just solve the same models faster—we build more realistic, impactful, and integrated models. It’s not about shrinking runtime. It’s about unlocking value within the runtime we’re willing to accept.
Consider the challenge of building an optimization model for a national retailer tasked with optimizing its supply chain. From the outset, it’s clear that the full-scale problem is far too large to solve directly—so simplification becomes the first step. We spend weeks or even months preparing the data to fit into a model that will solve in a reasonable amount of time.
- Instead of modeling every individual store, the network is aggregated into zones—perhaps grouped by the first three digits of a ZIP code.
- Then, we encounter edge cases: certain zones operate under unique constraints, so they must be modeled separately.
- We can’t include every SKU, so we select a representative subset and group similar items together.
- And if the model is still too large, we turn to the 80/20 rule—retaining just the subset of decisions that drive most of the value, and discarding the rest. (A rough sketch of this kind of data reduction follows the list.)
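To make those trade-offs concrete, here is a minimal Python sketch of that kind of preprocessing. The data, column names, ZIP-prefix zoning, and 80% revenue cutoff are all illustrative assumptions rather than details from any real project:

```python
import pandas as pd

# Hypothetical store-level demand data; in practice this would come from
# the retailer's order history or demand-planning system.
demand = pd.DataFrame({
    "store_zip":  ["30305", "30308", "60614", "60622", "98101"],
    "sku":        ["A123",  "A124",  "B200",  "B201",  "C310"],
    "sku_family": ["A",     "A",     "B",     "B",     "C"],
    "units":      [120,     80,      400,     150,     60],
    "revenue":    [2400.0,  1600.0,  8000.0,  3000.0,  900.0],
})

# 1) Aggregate stores into zones by the first three digits of the ZIP code.
demand["zone"] = demand["store_zip"].str[:3]

# 2) Collapse individual SKUs into representative families.
zone_family = (
    demand.groupby(["zone", "sku_family"], as_index=False)[["units", "revenue"]].sum()
)

# 3) Apply an 80/20 cut: keep only the zone/family rows that account for
#    roughly 80% of total revenue, and drop the long tail.
zone_family = zone_family.sort_values("revenue", ascending=False)
cumulative_share = zone_family["revenue"].cumsum() / zone_family["revenue"].sum()
reduced_model_input = zone_family[cumulative_share <= 0.80]

print(reduced_model_input)
```

Every one of those steps makes the model smaller, but each also throws away information the solver could otherwise have used.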
These aren’t mistakes—they’re trade-offs made out of necessity. But what if they weren’t necessary?
What happens when you don’t need to shrink the problem?
Now imagine giving that modeler access to a solver that’s 100 times faster. The biggest benefit isn’t getting the same answer in 6 seconds instead of 10 minutes. The real breakthrough is eliminating the need to spend months simplifying the model just to make it solvable.
Faster solvers don’t just save time at runtime—they save time upstream. They reduce or even eliminate the countless hours spent aggregating data, excluding edge cases, trimming variables, and making assumptions just to get the model to run. That’s where the hidden cost lives—not in solver speed, but in the labor and compromises required to make a complex problem solvable.
Just as importantly, modeling problems in fuller detail doesn’t just reduce the labor involved—it drives better business outcomes. When you stop simplifying or aggregating away critical data, you make better decisions. That can mean reducing total miles driven, improving warehouse space utilization, better aligning staff with demand, or boosting service levels. The real prize isn’t just speed—it’s the measurable improvement in KPIs that comes from making fewer assumptions and solving the problem more precisely.
Organizations gain the freedom to stop simplifying the world just to fit it into a model.
- We can stop aggregating products or locations and represent them in full detail.
- We can stop discarding the “last 20%” of demand data that doesn’t fit cleanly but still matters.
- We can move beyond placeholder assumptions—like assuming a fixed percentage of warehouse space is allocated to certain product groups—because now we can include everything directly in the model (see the sketch below).
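As one illustration of that last point, the warehouse-space assumption can become a decision the solver makes itself. Below is a minimal sketch using the open-source PuLP library; the product groups, space requirements, margins, and capacity figures are entirely made-up numbers, not a real formulation:

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Hypothetical data: product groups, per-unit space needs, and per-unit margin.
# All numbers are illustrative assumptions.
groups = ["frozen", "dry_goods", "apparel"]
space_per_unit  = {"frozen": 0.8, "dry_goods": 0.5, "apparel": 0.3}   # sq ft
margin_per_unit = {"frozen": 4.0, "dry_goods": 1.5, "apparel": 6.0}   # dollars
demand_cap      = {"frozen": 5000, "dry_goods": 20000, "apparel": 8000}  # units
total_space = 12000  # sq ft of warehouse space

model = LpProblem("warehouse_allocation", LpMaximize)

# Instead of hard-coding "frozen gets 30% of the floor", let the solver decide
# how many units of each group to stock, subject to one shared space constraint.
stock = LpVariable.dicts("stock", groups, lowBound=0)

model += lpSum(margin_per_unit[g] * stock[g] for g in groups)            # objective
model += lpSum(space_per_unit[g] * stock[g] for g in groups) <= total_space
for g in groups:
    model += stock[g] <= demand_cap[g]

model.solve()
for g in groups:
    print(g, stock[g].value())
```

The point is not this tiny model itself but the pattern: with enough solver headroom, quantities that used to be fixed inputs can become decision variables.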
When someone says, “My models already solve in under five minutes,” it’s easy to overlook what’s being left on the table. The real cost isn’t the runtime; it’s the opportunity cost of the time and insight lost when models are oversimplified to fit an artificial runtime constraint. A solver that unlocks better models, faster decisions, and fewer assumptions delivers far more business value than one that just runs a bit faster.
Rather than asking whether a model solves quickly enough, it’s more powerful to ask: What are we leaving out to make it solve that quickly? A solver that’s dramatically faster doesn’t just improve performance—it transforms the entire modeling process. Faster solvers don’t just save time. They change what’s possible.
Curious what a faster solver could unlock for your team? Let’s talk.