In a recent Industry Week article, “Finding Manufacturing Performance Gaps with OEE”, author Louis Columbus discusses the value of Overall Equipment Effectiveness (OEE) as a metric for driving manufacturing operational excellence. He plays devil’s advocate to the widespread enthusiasm for making OEE the centerpiece of manufacturing improvement programs. I agree with his conclusion that “if OEE is not applied properly and supported by accurate and unambiguous data, it could hinder an organization from achieving its true operational excellence potential.”
This post is part 3 in a series I wrote in response. In the second post, I challenged the first two “lessons learned” Mr. Columbus shares in his article. First, he cautioned against comparing OEE results across multiple facilities or production lines. Second, he discouraged using OEE in a company’s pay-for-performance plan, if such a compensation plan is in place. I hope you will read Mr. Columbus’ article and my last post if either of these considerations applies to your situation.
In this final segment, I want to consider the last two “lessons learned” the author discusses.
Lesson Learned #3: Don’t just trust the aggregated number
An investment advisor will tell you not to look at historical returns without considering risk and volatility. A procurement specialist won’t recommend a ‘buy’ of a critical component on price alone. As I pointed out in the second segment of this series, no single metric ever tells the whole story, and that is certainly true of the aggregate OEE metric. Any operations scorecard that assumes otherwise is misguided. Even knowing the OEE component scores and their trends is insufficient.
Again, the real “lesson learned” here is that metric scores need to be backed by data and analytics that enable the manufacturing team to drill into root cause issues and use that insight to address recurring situations that impact operational excellence. Understanding the drivers that impact OEE results is critically important to achieving optimal performance. Building a comprehensive, data-informed program around OEE that educates and empowers all members of the production team on what influences its outcome is a worthwhile undertaking.
A robust IIoT platform like Spyglass provides the foundation to analyze and identify underlying OEE performance issues. Spyglass scales to hold the large quantities of historical data often needed to spot true anomalies and persistent performance trends. It puts that intelligence into the hands of front-line manufacturing team members so they can evaluate properly and act quickly. As Mr. Columbus states, “there is a growing potential for Industrial Internet of Things technology to provide trusted, up-to-the-moment OEE data, enabling additional insights into performance fluctuations based on equipment effectiveness and efficiency.”
Lesson Learned #4: Factor in equipment setup times
I completely agree with Mr. Columbus here. Keep in mind that planned setup time is only one of the “six big losses” captured in OEE, but I believe it is worth calling out because it is a potential “cheat” if not monitored closely.
Here is one way that “cheating” can occur. Imagine a manufacturing plant with several machines (or lines) capable of producing the same product. If the plant is not running at full capacity, one or more of these machines may be left unscheduled, with no “planned” production for some time. While idle, setup or adjustment activities can take place, sometimes even concurrently with maintenance efforts. When Planned Production Time resumes, no lost time is captured in the OEE calculation for the setup already completed.
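A small sketch, using entirely made-up shift numbers, illustrates the effect on the Availability component of OEE (Availability = Run Time / Planned Production Time). The function and figures here are hypothetical, not from Mr. Columbus’ article:

```python
# Hypothetical sketch: how performing setup work outside Planned
# Production Time inflates the Availability component of OEE.

def availability(planned_minutes, downtime_minutes):
    """Availability = Run Time / Planned Production Time."""
    return (planned_minutes - downtime_minutes) / planned_minutes

PLANNED = 480   # an 8-hour shift scheduled for production
SETUP = 60      # one hour of setup/adjustment work

# Setup performed during the scheduled shift counts as downtime.
honest = availability(PLANNED, SETUP)    # 420/480 = 0.875

# The same setup done while the line was unscheduled records no downtime.
shifted = availability(PLANNED, 0)       # 480/480 = 1.0

print(f"Setup inside planned time:  {honest:.1%}")   # 87.5%
print(f"Setup outside planned time: {shifted:.1%}")  # 100.0%
```

Same work performed, yet the second line appears 12.5 points “better” on Availability, which is exactly why unmonitored setup timing can quietly distort the metric.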
In isolation, this practice is not inappropriate. It makes more sense to make setups and adjustments when no production is scheduled. It could even be considered a “best practice” because otherwise unanticipated issues – longer than expected setup, multiple adjustment cycles, or even stoppages at startup – can delay scheduled production output and potentially negatively impact customer delivery performance.
But if OEE is used as a benchmark over time or across similar production assets, variation in capacity utilization can distort the results, specifically the Availability percentages. That is why some manufacturers also track TEEP, or “Total Effective Equipment Performance,” which is simply OEE multiplied by the capacity utilization percentage. The unscheduled portion of capacity is sometimes referred to as schedule loss: the loss of productive output from an asset because it was intentionally not scheduled to produce anything during a specific period.
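The relationship can be shown with a short worked example. The figures and line names below are invented for illustration; the only assumption carried over from the text is the definition TEEP = OEE × capacity utilization:

```python
# Hedged sketch with made-up figures: TEEP folds schedule loss
# (unscheduled calendar time) into the OEE score.

def teep(oee, planned_hours, calendar_hours):
    """TEEP = OEE x capacity utilization."""
    utilization = planned_hours / calendar_hours
    return oee * utilization

OEE = 0.80                 # assume both lines post the same OEE
CALENDAR_HOURS = 24 * 7    # 168 hours in the week

# Line A is scheduled around the clock; Line B only half the week.
line_a = teep(OEE, 168, CALENDAR_HOURS)  # utilization 100% -> TEEP 80%
line_b = teep(OEE, 84, CALENDAR_HOURS)   # utilization  50% -> TEEP 40%

print(f"Line A TEEP: {line_a:.0%}")
print(f"Line B TEEP: {line_b:.0%}")
```

Both lines run equally well while scheduled, so their OEE is identical; only TEEP reveals that Line B’s capacity sat idle half the week.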
Production teams don’t always support the use of TEEP as a performance metric. They rightly argue that capacity utilization is often not a factor over which they have primary influence or control. For this reason, TEEP is not a good measure on which to base performance reviews or compensation for manufacturing personnel, unless the operational strategy is to achieve maximum production output and optimize asset utilization. When leaders are empowered to schedule capacity, increasing or reducing it as appropriate, TEEP can be a more effective tool than OEE alone.
Mr. Columbus rightly points out that setup time reduction can be a powerful area of focus for OEE improvement. And his emphasis on getting all team members involved in root cause analysis and improvement efforts is spot on. That applies to setup time reduction as well as to out-of-the-box thinking aimed at the other five of the “six big losses.”
Final thoughts on OEE “Lessons Learned”
Operational excellence arises from a culture of openness and empowerment. It’s a willingness to try new approaches. It’s recognizing that yesterday’s problem, or opportunity, isn’t necessarily tomorrow’s. Yes, you need the right data, and the tools to analyze it properly, made accessible to all team members. But in many ways that is the easy part. Getting the team energized about digging in, asking good questions, listening to others, and using data to learn and inform the conversation is where leadership “rock stars” make the difference. Supporting your teams to take risks and learn along the way is the true difference maker.
There is no single, perfect performance metric. Setting a goal and choosing a measuring stick is only the beginning of the operational excellence journey. But as metrics go, OEE is pretty “SMART.”
- It is Simple and Specific, which is to say easily explained and understood.
- Measurable. The formula is straightforward even if it requires lots of clean data to calculate results consistently and accurately.
- If implemented properly it is definitely Attainable. With a continuous improvement approach, any team can start at their current level of performance and improve going forward.
- In “Lesson Learned #2”, I made a case for Relevance. If operational excellence is strategic to your company and its business performance, OEE has a place on your scorecard.
- And lastly, with the right data analytics platform supporting the program, it can be Timely and actionable.
With fast-changing manufacturing conditions, market dynamics, and supply chains, your operational excellence improvement program needs clear alignment to the here and now, not the last period or the prior year. Investing in strong OEE measurement and operating analytics can be a competitive differentiator and disrupter for your organization.
We appreciate hearing your perspective on this topic and love learning how leading manufacturers are making “digital transformation” and “Industry 4.0” real in their organizations. If you’d like to author the next chapter of innovation and success for your organization, let us hear from you. We’d love to help!
This post was authored by Mark Adelhelm