Over the years, service organizations have built elaborate optimization models to drive improved operational performance, particularly when it comes to scheduling team members or assets. Conceptually this makes a lot of sense and has driven significant savings and business opportunities. Nevertheless, I’ve seen too many cases where these models don’t adequately factor in operational realities and business variability, leaving real opportunities on the table.
The first time I came across such a situation, an organization had invested heavily in a state-of-the-art scheduling system that attempted to mathematically optimize various scenarios to create the tightest possible plan for an expensive resource pool. When implemented, the model demonstrated significant savings and no one questioned the validity of the approach. However, an inquisitive leader had noticed that in months when the system had done a stellar job of optimizing the deployment of resources, his month-end costs were typically well above target. And in months when the system couldn’t squeeze the resources as tightly, his month-end financials were typically better than target.
As we probed, we found that in the months that weren’t as “optimized”, there were fewer irregular operations, less rework and fewer customer misses, all of which otherwise drove up hidden costs. In other words, an optimally planned solution drove additional operational costs into the business and was more difficult to manage. As we analyzed the data further, we discovered that it wasn’t that the business was reaching a tipping point, but rather that two key locations were stress points in the overall operation. With high volumes at key points during the day, these locations experienced very high variability in performance. All that was needed was to reduce the pressure on these locations and build in sub-optimal buffers at key times of day, which improved performance across the entire system.
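To make the mechanism concrete, here is a minimal sketch of how a small, deliberately sub-optimal buffer can tame cascading delays at a stress point. This is not the organization’s actual model; the single-server queue, the roughly 10-minute lognormal service time and the slot gaps are all illustrative assumptions.

```python
import random
import statistics

def simulate_day(slot_gap_minutes, n_jobs=200, seed=0):
    """Single resource at a stress-point location.

    Jobs are scheduled slot_gap_minutes apart; actual service time is
    lognormal with a ~10-minute mean, so tightly packed schedules let
    one overrun cascade into the next job's start time.
    """
    rng = random.Random(seed)
    free_at = 0.0          # time at which the resource next becomes free
    delays = []
    for i in range(n_jobs):
        scheduled = i * slot_gap_minutes
        service = rng.lognormvariate(2.2, 0.5)   # mean ~10.2 min, heavy tail
        start = max(scheduled, free_at)          # wait if resource is busy
        delays.append(start - scheduled)
        free_at = start + service
    return statistics.mean(delays), max(delays)

# "Optimal" plan packs slots at the mean duration; buffered plan adds ~20%.
for gap in (10.0, 12.0):
    mean_delay, worst = simulate_day(gap)
    print(f"slot gap {gap:4.1f} min -> "
          f"mean delay {mean_delay:6.1f} min, worst {worst:6.1f} min")
```

Running this, the tightly packed plan lets delays snowball through the day, while the modestly buffered plan keeps the queue near zero, which is the essence of what we saw at the two stress-point locations.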
A similar situation arose a few years later. In this case, a state-of-the-art system optimized the overall schedule but didn’t factor in the variability associated with key tasks, working instead with average durations. As the variability of these tasks was driven by the time of day (i.e. traffic patterns) and the type of activity, the system would create an “optimal” solution that simply couldn’t be achieved at key times during the day, causing significant customer challenges. Regrettably, the optimization algorithm had been hard-coded into the system, making it very difficult to adjust to operational needs.
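A hedged sketch of the underlying flaw: planning every task at the flat average hides the time-of-day effect, whereas planning to a high percentile per departure hour keeps the schedule achievable. The hours, durations and the p80 choice below are invented for illustration only.

```python
import statistics

# Hypothetical historical durations (minutes) for the same task, keyed by
# departure hour. The evening peak has a higher mean *and* higher variance.
observed = {
    9:  [32, 35, 31, 38, 34],
    12: [30, 33, 29, 31, 32],
    17: [45, 62, 41, 75, 58],
}

all_durations = [d for hour in observed.values() for d in hour]
flat_average = statistics.mean(all_durations)   # what the original system used

def planning_duration(hour, quantile=0.8):
    """Time-of-day-aware estimate: plan to a high percentile rather than
    the mean, so the schedule stays achievable when variability spikes."""
    cut_points = statistics.quantiles(observed[hour], n=100)
    return cut_points[int(quantile * 100) - 1]

for hour in observed:
    print(f"{hour:02d}:00  flat average {flat_average:5.1f} min vs "
          f"p80 for this hour {planning_duration(hour):5.1f} min")
```

The flat average overbooks the evening peak and pads the midday lull. Just as importantly, because the estimate lives in a plain function rather than being hard-coded into the optimizer, it can be retuned as operational reality changes.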
In both of these cases, there were some common elements, namely:
- The system was too big to adequately visualize and understand;
- There were no indicators or processes built in to surface these opportunities – in both cases they were stumbled upon by accident;
- There was no way to contrast the plan with day-of-operations scenarios;
- These systems were not self-healing or adaptive by design – in other words, the optimization never factored in an understanding of what had actually happened;
- Front-line team members raised concerns about occurrences they had noticed, but these were dismissed because of blind trust in the system.
In a service or transactional environment, these opportunities often go unnoticed because the constraints become very complex to visualize. With huge amounts of data, the inefficiencies caused by an optimization system become very difficult to find and demonstrate, while the impact on operational reliability, costs and customer experience can be quite high.
Have you encountered similar situations where an exercise to optimize caused more operational challenges? How was this counteracted with a view of the operational reality?