

All of the items on your list are deficiencies in the process that should be resolved by policy and supported and enforced by management. SOPs or BPs need to be written so the workforce knows what is expected of them. This goes for every item on your bullet list.

Firstly, it's really hard to measure what you don't track. You need to figure out a way to compile the missing data to compare against.

If you're good at tracking parts leaving the warehouse you can compare work order parts against the parts leaving.

For maintenance work and failure codes you rely on your techs to enter that data.
RM
I suppose as a starting KPI you could graph the number of failure codes entered against the number of work orders written. You could measure improvement with that.

I already mentioned parts leaving vs parts used on work orders.

You could measure work order time entered in the CMMS vs total man-hours worked for the period.

Charting each of these will give you the means to track improvement.
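The three coverage ratios above could be charted per period along these lines. This is a minimal sketch on hypothetical data; the record shapes and field names are invented for illustration, not taken from any particular CMMS:

```python
# Sketch: three CMMS data-capture coverage KPIs for one period.
# All data structures and field names are hypothetical examples.

def coverage_kpis(work_orders, parts_issued, total_manhours):
    """work_orders: list of dicts with 'failure_code', 'parts', 'hours'."""
    n = len(work_orders)
    # 1. Failure codes entered vs. work orders written
    with_code = sum(1 for wo in work_orders if wo.get("failure_code"))
    failure_code_pct = 100.0 * with_code / n if n else 0.0
    # 2. Parts charged to work orders vs. parts issued from stores
    parts_on_wos = sum(len(wo.get("parts", [])) for wo in work_orders)
    parts_pct = 100.0 * parts_on_wos / parts_issued if parts_issued else 0.0
    # 3. Work order hours in the CMMS vs. total man-hours worked
    hours_on_wos = sum(wo.get("hours", 0) for wo in work_orders)
    hours_pct = 100.0 * hours_on_wos / total_manhours if total_manhours else 0.0
    return failure_code_pct, parts_pct, hours_pct

wos = [
    {"failure_code": "BRKD", "parts": ["bearing"], "hours": 4},
    {"failure_code": None, "parts": [], "hours": 2},
]
print(coverage_kpis(wos, parts_issued=2, total_manhours=8))  # (50.0, 50.0, 75.0)
```

Plotting each ratio week by week gives the improvement trend suggested above.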
RM
Terry O',
IMHO, people don't use the CMMS data well because they don't believe it.
That is because data entry quality is generally very poor.
One cause of this is 'long' drop-down lists, with many users choosing the top one or two items rather than sifting through a long list.
The available selection of entries in drop-down lists does not match what people see in real life.
Errors made at the time of entry are hard to 'catch' and rectify.
CMMS programs do not always give 'instant feedback' to those entering data, so it feels like they are filling a bottomless pit.
People may feel less motivated to do 'meaningless' tasks like data entry, when there is 'real work' out there.
Continuous reduction in staff numbers does not help retain motivation.
Classroom training is invariably the preferred option for people entering data. This has its place, but has very poor retention. It is much better to mentor and provide active help-desks ("what do I do now?" questions).
In the old days, when most entries were in long text, these errors were less - though we had a good dose of Broken & Fixed kind of entries then.
If we can't fix the root causes, we should not be surprised at the results.
RM
Isn't it better to specify a minimum mandatory data entry which would include all the necessary data, as pointed out by Wallygator below? This will prevent delinquent data or work orders.


quote:
Originally posted by Wally gator:
I suppose as a starting KPI you could graph the number of failure codes entered against the number of work orders written. You could measure improvement with that.

I already mentioned parts leaving vs parts used on work orders.

You could measure work order time entered in the CMMS vs total man-hours worked for the period.

Charting each of these will give you the means to track improvement.
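One way to enforce such a minimum mandatory data set is a completeness check before a work order may be closed. A rough sketch, with a hypothetical mandatory-field list and work-order shape:

```python
# Sketch: reject work-order closure when mandatory fields are missing.
# The mandatory-field list and work-order fields are illustrative only.

MANDATORY_FIELDS = ["asset_id", "failure_code", "labor_hours", "completed_by"]

def missing_fields(work_order):
    """Return the mandatory fields that are empty or absent."""
    return [f for f in MANDATORY_FIELDS if not work_order.get(f)]

def can_close(work_order):
    """Allow closure only when all mandatory fields are populated."""
    gaps = missing_fields(work_order)
    if gaps:
        print(f"Cannot close WO {work_order.get('id')}: missing {gaps}")
        return False
    return True

wo = {"id": "WO-1001", "asset_id": "P-101", "failure_code": "LEAK",
      "labor_hours": 3.5, "completed_by": "tech_07"}
print(can_close(wo))  # True
```

Most CMMS packages let an administrator flag fields as mandatory natively; the sketch simply shows the logic behind that gate.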
RM
My replies are embedded in bold below:

quote:
Originally posted by Vee:
Terry O',
IMHO, people don't use the CMMS data well because they don't believe it. - Quite true when they know the data do not reflect reality.

That is because data entry quality is generally very poor. - Can be improved or solved by having minimum mandatory data entry. The CMMS engineer should be able to specify the mandatory fields. Then followed by user training and CMMS expert user support.

One cause of this is 'long' drop-down lists, with many users choosing the top one or two items rather than sifting through a long list.
The available selection of entries in drop-down lists does not match what people see in real life. - This can be avoided for new cases, or improved for existing codes, by drawing up component and failure code lists that are reviewed by equipment experts, or by letting the equipment experts provide the code lists themselves. Btw, ISO 14224 has a good code list which API has tried to adopt and adapt; API seems to extend the ISO 14224 failure codes for upstream oil & gas equipment to downstream equipment. In view of this, there is no need to reinvent the wheel.


Errors made at the time of entry are hard to 'catch' and rectify. - True, but the technicians should be trained upfront and well-versed in their respective failure codes. Also, the equipment reliability engineer should pay serious attention to data entries in work orders, which contain useful data for later analyses, i.e. proper data/information gathering.

CMMS programs do not always give 'instant feedback' to those entering data, so it feels like they are filling a bottomless pit. People may feel less motivated to do 'meaningless' tasks like data entry, when there is 'real work' out there.
True if the CMMS is implemented without the KPI section specified and rolled out. E.g. for SAP PM, the PMIS (Plant Maintenance Information System) should be ready when rolling out the CMMS to plant users, who can then generate the respective KPIs themselves if they wish to see their work or equipment performance.

Continuous reduction in staff numbers does not help retain motivation. - True, but maintenance managers should keep watch of the KPI report.

Classroom training is invariably the preferred option for people entering data. This has its place, but has very poor retention. It is much better to mentor and provide active help-desks ("what do I do now?" questions). - Yes, a CMMS support engineer should be on hand and reachable anytime during office hours. Another method is online training, which can provide demos or tutorials whenever necessary.

In the old days, when most entries were in long text, these errors were fewer - though we had a good dose of 'Broken & Fixed' kinds of entries then. - In fact, CMMS packages nowadays still allow free-text entries after each failure code when a perfect match is not found, but these are not compulsory, because data in text form cannot be easily analyzed. Some people say it can be analyzed by a special program using keywords, but I haven't seen its effectiveness and speed.

If we can't fix the root causes, we should not be surprised at the results. - I hope the above root causes can be remedied with a bit of effort, and CMMS implementation can be brought to the next level.
RM
Terry O, your bullet points below indicate that the CMMS implementation is not comprehensive and is defective. The work is half-baked! I can sense that any CMMS implementation with that kind of bullet points is very basic. The view that the CMMS is merely a maintenance accounting system should be discarded as well.

quote:
Originally posted by Terrence O'Hanlon:
Our research is showing many companies fail to use the CMMS/EAM to the fullest by:


  • Failing to track all maintenance work
  • Failing to track all spares
  • Not using effective failure codes


Are there good key performance indicators for use of CMMS/EAM?

Terry O
RM
Expectations, training and rationale are very important to your chances of success. Roll them out with, at the very least, the maintenance manager and craft supervisors present, to show technicians the effort has full support from management. I work in an extremely regulated industry, in which it is probably easier to get these expectations implemented. Also, follow the meeting with KPIs so your tech groups know it is being paid attention to. And when there are successes resulting directly from the new data, display those too.
RM
Josh,
You have a lot of experience with the use of CMMS, so I shall defer to your views.
However, I am not sure that 'mandates' work, in this context. People find workarounds, so the intent of the mandates is often defeated. If we cannot get people motivated to enter data correctly, it becomes a continuous uphill struggle.
Many companies fail to recognize the importance of good implementation of CMMS. Like all implementation work, it follows the iceberg model - 90% of the work is not visible!
Two steps that will improve the CMMS quality
1. The top Management Team should enter the CMMS at least once or twice a month and query it to check KPIs or Dashboards. Once people know the bosses are interested, they will pay more attention. But ..... how many of the Top Team including the Maintenance Manager are fluent in navigating through the CMMS?
2. Peer audits work well; in one Company, field supervisors/engineers are encouraged to audit once a quarter, relatively limited scopes of CMMS entries in areas other than their own. The idea is to look for missing, incomplete or incorrect entries, closing out of work orders when completed etc. Their reports are sent to and discussed with the CMMS focal point. She/he arranges training, usually one-to-one to those responsible.
RM
I agree with your 2 suggestions below.

1) Top mgt involvement - it's quite sufficient if the Maint Mgr reviews the KPIs from the CMMS once a month. I have heard "we haven't got time to close our work orders, that's why our KPIs are poor." So we gave them one or two weeks' advance notice to close all work orders before generating the KPIs, to get buy-in to the data.

2) Peer audits - Yes, in fact publishing the KPIs for the respective disciplines, e.g. static, rotating, electrical & instrument, would encourage competition and good data entry.

In addition, it's possible to cascade the KPIs down to the individual level in the CMMS, which can be used for staff performance.
RM
About workarounds in the CMMS: once the CMMS is used, it's easy for the supervisor to get his worklist from the CMMS every morning for the morning meeting or discussion, and he has to ensure its closure by the technician who does the work and/or himself.

So it's not really a question of mandating work. We can agree on what data to enter; otherwise the data in work orders are not complete enough for KPI or reliability analyses. In fact, those guys would be happier once they see their work results after the KPIs are generated. It shows we have done some work here.

Without the minimum mandatory data entry, the data in CMMS is quite useless when analyzed.
RM
Vee,

Some interesting points you’ve highlighted there. As a vendor, we’ve certainly seen increased interest in KPI reporting abilities. Dashboard tools are an area our clients are particularly keen on exploring.

Supported by advances in technology, we’ve been quietly developing an interactive tool for top management. Our user-friendly Dashboard tool offers the ability to drill through data, analyse performance and raise awareness of any maintenance department. Provided the data has been entered correctly, data analysis through dashboard tools will enable top management to react as required to improve services.

Regards

Rob
Tabs FM Ltd
www.tabsfm.com
RM
Work order closure is important, but most important is how to ensure the quality of input. Best to show the benefit and audit regularly. We issue an NC if the input does not comply with reality; audits and checks work well for this, and help keep you 90% accurate.

No doubt it should be made simple, but that is not always possible. One more important point: the focus must be on failure items and on information arising out of preventive maintenance tasks, because that is where you would go back to find most of the history.
Spares should be included in the cost analysis KPI; you must know how much you are investing in spares/services. It works. We even have internal rates to keep the checks and balances!
Capacity utilisation, as already mentioned by someone above, is really helpful.
But the most important thing is that a process is created in the department where these issues are discussed and given importance. ROI is a big factor; you must be able to show that by monitoring these things you are achieving results in numbers!!!
RM
I want to add one thing here: these are only tools.
If a tool is not used properly, it will not function, or will function improperly.
If a tool is not properly prepared, it will not work properly.
If the person using the tool doesn't know how, why, etc., it won't work properly.
If the tool is not cared for, checked and repaired constantly, it will not work properly.
Summing this up briefly: expect less from it than you put in; it has its own degree of quality.

Also be warned!
If it were a perfect 100% relationship, then when the bean counters start hacking $$$ there would be no room for readjustment.
KPIs are just that: indicators, not justifications.
A KPI just nudges you in a direction; the follow-up and investigation are what make it work.

Example: a worker says your coffee pot doesn't work. It makes sense to see if the switch is in the on position and the pot is plugged in before implying it should be replaced...
RM
Several years ago I set up a series of indicators which feed a KPI called "Work Order Content Evaluation". This was originally developed to aid in ensuring that failure codes were being added to all demand maintenance work orders (Emergency - ER & Corrective - CM). It checked for four criteria: Labor Hours, Failure Class Code, System Code, and Asset # and/or Location.

Our implementation of MAXIMO 6.0 processes 30,000-plus work orders per month, and we initially attempted a multi-level screening process. This consisted of the lead for the tech writing the work order, followed by the planner for the group, followed by the supervisor. The result of this manual process was work orders with missing data 30-40% of the time. Not very impressive. As a result, I then developed a web-based interface that uses an Oracle view of the back-end tables to run queries and provide graphical gauge displays showing the current % of all work orders in the system meeting each of the four criteria above. These collectively feed the overall "Work Order Content Evaluation" KPI. This raised the content evaluation to 95%+ and produced a major shift in the attitude of the maintenance org regarding data collection strategy, well supported by upper management.

After approximately one year I expanded the criteria from 4 to 10, adding Sub-System Code, Remedy Code, Component Code, Problem Log Entry Present, Solution Log Entry Present, and Materials Charged (when Replace was a remedy). This new requirement saw content levels drop to the 25-30% range for a period of one month, after which they quickly rebounded to the 95%+ range. The driving force behind this was that management had adopted the metric as a performance goal for the group leads and supervisors. I consider this a successful KPI that had a positive effect on the overall quality of data in our EAM. The lessons learned from this experience include:
Automation of the monitoring process is critical. This is the most challenging activity, as you MUST develop measurements within your application that indicate whether you are adhering to the processes outlined in your documentation (SOP, DOP, BUS documents). These documents alone will not ensure compliance.
The EAM is not a silver bullet; it is merely a tool to monitor the performance of your maintenance system. One of the biggest ADVANTAGES of many EAM applications is the ability to customize the functionality of the app. One of the biggest DISADVANTAGES of many EAM applications is the ability to customize the functionality of the app. No, that wasn't a typo - it is the proverbial double-edged sword. Another problem is that many of the failure-coding examples provided by the major players are poor at best and downright counterproductive at worst. Most completely ignore the concept of data normalization, and this results in the MEGA lists of codes discussed in other posts. I have achieved failure segregation levels of less than 2% using 3 levels with no more than five selections on any level; by collecting just three pieces of data we limit the area of effect to less than 2% of the make-up of a complex piece of automation machinery. If anyone is interested, I have attached a Word doc with screenshots of the metrics I track for work order content.
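The content-evaluation idea can be sketched as a per-criterion completeness calculation. This is an illustrative Python outline only, not the actual Oracle view or MAXIMO schema described above; criterion and field names are hypothetical stand-ins:

```python
# Sketch: per-criterion work-order content evaluation, as percentages.
# Criterion names mirror the post's first four; records are hypothetical.

CRITERIA = ["labor_hours", "failure_class", "system_code", "asset_or_location"]

def content_evaluation(work_orders):
    """Percent of work orders with each criterion populated, plus overall."""
    n = len(work_orders)
    per_criterion = {
        c: round(100.0 * sum(1 for wo in work_orders if wo.get(c)) / n, 1)
        for c in CRITERIA
    }
    # Overall: a work order passes only if every criterion is populated.
    complete = sum(1 for wo in work_orders if all(wo.get(c) for c in CRITERIA))
    per_criterion["overall"] = round(100.0 * complete / n, 1)
    return per_criterion

wos = [
    {"labor_hours": 2, "failure_class": "ELEC", "system_code": "S1",
     "asset_or_location": "A-100"},
    {"labor_hours": 1, "failure_class": None, "system_code": "S2",
     "asset_or_location": "A-200"},
]
print(content_evaluation(wos))
```

Feeding each percentage to a gauge display, as described above, lets the group see at a glance which criterion is dragging the overall score down.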

In conclusion, the initial goal was to develop good recording discipline that would allow for the development of other metrics and tools, such as Top 10 Trending by Equipment and Location, PM Backlog tracking, PM Performance Index, etc.


rgds,

Mark

Attachments

Files (1)
RM
I see this is a very old thread. no worries.
The question was to identify good KPIs for the use of a CMMS/EAM.

I'd like to first state some assumptions:
> Core Team and stakeholders are actively involved
> Business Analyst role exists - and regularly surveys the user level as to problems/issues
> Reliability Team exists - and has regular meetings (and utilize data/reports from CMMS)
> CMMS functional side expert position/role exists
> Benchmarking is regularly performed by stakeholders
> Core Team maintains a punchlist of future changes (software, process, organization)
> This punchlist provides content for long-range 5-year CMMS plan optimization
> Users received "blended training" at some point pre or post roll-out
> The "end game" is clearly defined, along with department goals/objectives.
> From the above KPIs and metrics are established.
> Advanced processes (where true ROI resides) are understood - and implemented
> etc....
KPIs are nice to have. I also like analytical reports, as I can see more information. Plus, with interactive dashboards (business intelligence analytics) I can drill down on the worst offenders.
But to answer the question...
1. Percent backlog "fully planned" at any moment
2. Backlog growth (show last 10 weeks) - by craft estimates
3. Percent reactive maintenance by shop/area
4. Aging report
5. Slice-and-dice by (plant) system
6. Top 10 pareto style outputs
7. PM schedule compliance
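As an illustration, two of these (PM schedule compliance and backlog growth) might be computed as follows. The record shapes and field names are invented for the sketch:

```python
# Sketch: PM schedule compliance and weekly backlog growth.
# Record shapes and field names are hypothetical examples.

def pm_schedule_compliance(pms):
    """Percent of PMs completed on or before their due week.
    pms: list of dicts with 'due_week' and 'done_week' (None = not done)."""
    n = len(pms)
    on_time = sum(1 for p in pms
                  if p["done_week"] is not None
                  and p["done_week"] <= p["due_week"])
    return round(100.0 * on_time / n, 1) if n else 0.0

def backlog_growth(weekly_backlog_hours):
    """Week-over-week change in backlog, in craft-estimate hours."""
    return [b - a for a, b in zip(weekly_backlog_hours,
                                  weekly_backlog_hours[1:])]

pms = [{"due_week": 10, "done_week": 9},
       {"due_week": 10, "done_week": 12},
       {"due_week": 11, "done_week": None}]
print(pm_schedule_compliance(pms))           # 33.3
print(backlog_growth([120, 140, 135, 160]))  # [20, -5, 25]
```

Charting the backlog deltas over the last ten weeks, by craft estimate, gives the trend view suggested in item 2 above.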

w/br
john reeve
RM
