UptimeJim

Member
  • Content Count
    15
  • Joined
  • Last visited
  • Days Won
    5

UptimeJim last won the day on January 3

UptimeJim had the most liked content!

Community Reputation

16 Good

1 Follower

About UptimeJim

  • Rank
    Member

Personal Information

  • Name
    James Reyes-Picknell
  • Headline
    Helping companies by telling them the truth, not what they want to hear
  • Current Position
    Principal Consultant
  • Company
    Conscious Asset
  • Industry
    Management Consulting
  • Location
    Barrie, ON, Canada
  • LinkedIn Profile

Recent Profile Visitors

43 profile views
  1. I like the use of "metric" as opposed to KPI. Regarding availability of "tailors" - few have them in-house, but everyone has access. Jim
  2. Andrej - I agree with the fewer-is-better premise and that the set of KPIs used will mature with the organization. I believe it is key to really think through what information those KPIs can provide and how it might be interpreted or misinterpreted. The example of misusing MTBF (above) is a case in point. I've seen "downtime" used in a way that drove massive investments in spares that were simply not needed. I've seen availability (in its various forms) used to mislead general management into thinking things were just fine when, in fact, they were not. We can choose from among many KPIs that we can measure, but it's the consequences of using them that we want to understand. They can and sometimes do drive behaviors - we want to make sure that they drive the right behaviors. Unlike an error signal in a control system, which simply adjusts an actuator, the human "actuator" also has the ability to make choices. The results are not always so easily predictable. We need to consider not only what we are measuring (error signal) and what we want to encourage (control adjustment), but also what the organization (human actuator) is likely to interpret as intent (e.g.: can this get me in trouble?) and therefore produce a response that may or may not match the desired outcome. More information can produce better outcomes to a degree, but it can also lead to confusion. Each KPI represents an observation on something that is / is not happening. What we want to address are the underlying causes when those trends are unfavorable. Single data points rarely convey the whole story. Too many data points don't inform, they distract. What is important in or to one organization may not be important in another. What helps one may harm another. KPIs are a bit like clothing - a tailored suit will fit better, at least until the wearer changes shape. One size does not fit all and certainly won't fit all over time. Thoughtful, experienced tailors who can tune into the organization's culture are needed.
  3. Greetings. The situation of a production manager challenging data validity because his experience doesn't match the number he sees on the monitor is quite common. Firstly, the use of MTBF as a form of performance measure is probably not wise - it's useful (with other parameters) in reliability work but rather meaningless on its own, particularly as an indicator of production performance. Secondly, the concept of "mean" is not well understood. The mean is but one parameter used in a continuous distribution function to describe failure experience. Using it alone is akin to describing a person by height and nothing else (see the small numerical sketch after this post list). KPIs need to be carefully selected with thought given to what they can / cannot tell you and how that information is meaningful. Jim
  4. I've done a lot of reliability work, as have a number of my colleagues. MTBF is one of the parameters needed to do proper analysis (e.g.: Weibull) so it's a valuable piece of information. Of course, knowing whether the failure is age / usage related, random or infant mortality is also very important in making decisions about failure management approaches. Unless we run our own "studies" to capture data that we can rely on, most of us will rely on data captured in the CMMS/EAM. All too often that data is not fit for purpose, at least not without a considerable amount of effort to scrub it clean. Aside from incomplete / missing records of failure events, we must determine whether or not the WO that comprises any given CMMS/EAM "record" was done to correct a failure or for some other reason. We are interested only in those events where a failure actually occurred, or clearly would have occurred in a very short time had intervention not been carried out. If we simply count up the number of times an asset was worked on, we will probably include PMs performed, proactive preventive change-outs (which are usually done well before failure), repairs in response to false alarms, etc. (a sketch of filtering work orders down to true failure events appears after this post list). In speaking with colleagues both in the field and in academia, I find that the quality of data available for reliability purposes is usually very low. Taking that just a bit further, we get into that situation because we don't set our CMMS/EAM systems up to capture data that is useful to reliability. Many of those systems aren't even capable of doing it very well. Programmers are not engineers and most engineers are not reliability engineers (or even reliability conscious). What are your chances of finding a CMMS/EAM that is actually designed with reliability in mind?
  5. My first CMMS was a home-grown system called "Dynamic Equipment Information Systems" (DEIS) at the petrochemical complex where I worked as a maintenance engineer. It was a very basic work order system that provided job plan details, parts lists and history. Each job was recorded in text fields and all the history was printed with any work order. The thickness of the work order printout was an indicator of troublesome equipment, or a long BOM. Our refinery (next door) was using a paper-based system. Both worked well for work management and, since the discipline of recording what was found wrong and corrected was actually pretty good, both systems had fairly useful information for reliability purposes. I put that down to the discipline our craftsmen had and the attention that we paid to what they wrote. My consulting days began some 7 years later, when CMMSs were still largely replacing paper-based systems. They were more complex and feature-rich and even handled spares inventory. Some handled automated buying and other functions. However, our customers seemed to be struggling far more than I did with getting good information to make reliability decisions. I've probably seen hundreds of different systems, some easy to use, some that were user-hostile, some with very basic functionality working very well and some that were rich in features and functionality but under-utilized. I've only seen one or two that I thought were bad for the job they had to do in the customer's working environment. Most work well but most are also poorly implemented, poorly (or not) supported, operated by poorly trained or untrained maintainers, and incapable of generating needed reports without extra programming, extra software bolted on or a lot of effort manipulating data on spreadsheets. Most customers today are more "data distracted" than "informed". Their CMMSs add cost but little real value. I do not blame the software (in most cases). The problems usually arise from poor implementation, poorly thought-out business processes (automating the old and not taking advantage of new functionality), poor fit of functionality to requirements, poorly stated requirements (e.g.: focused on technical specs rather than functionality), lack of training, no training, training by the person sitting next to you (learning others' bad habits), rushed implementations (out of budget, time), etc. In some cases the systems are far too complex for maintenance and reliability purposes. The systems available are not well designed to give basic failure and proactive maintenance-related history information (e.g.: did it fail? what failed? what was the failure mode? can you identify the cause of the failure? if it hadn't yet failed, would it have failed soon? was the job a result of some PdM finding? etc.) - a minimal sketch of such a record appears after this post list. Designers of these systems are not reliability engineers so the data being gathered doesn't answer the questions that need to be asked. All too often the data being gathered does not provide information that is fit for purpose. In "our world" of maintainers, too few really understand failure modes and failure management strategies. Although we are supposed to deliver "reliability" we focus on "maintenance". Arguably we have the emphasis in the wrong area - the means, not the ends. We don't use RCM as much as we probably should. We've failed to inform the programmers who design these systems of what we really need (many of us really couldn't define it well anyway) and for the most part the programmers don't know what they don't know.
The end result is a myriad of systems with a lot of unused functionality, little of which (used or unused) actually helps us to improve reliability and reduce unwanted breakdowns that in most cases (by far) could have been foreseen.
  6. Raul - I believe that having contracts for parts supply is a good idea, regardless of when executed. For fast-moving parts that should be easy to set up as the supplier will have a more or less guaranteed income. For slower-moving items it could be challenging - how does the supplier get compensated for effectively storing items on behalf of the company that may or may not use them? For items that may never be used, the problem becomes even more complex. Those latter items are "insurance spares" in every sense of the term. As for tackling the maintenance planning - it also must be done, and sooner rather than later if the company is to benefit fully. However, recognize that planning alone isn't enough. Planners and stores-persons are in most cases not equipped to handle risk-based determination of spares requirements. They will need some help, perhaps a tool that performs such calculations, to achieve that. I disagree with Wirza's third point about using the technical manual as a starting point for initial sparing. Manufacturers' manuals are often flawed in their maintenance recommendations. Getting into that is a whole different topic. The manufacturer knows what the asset can do, but not what it will be asked to do. The best place to start is with a work forecast based on RCM results, not manufacturers' forecasts. Where asset availability is being constrained by parts unavailability, regardless of the cause, and the unavailability is causing substantial loss of revenues, then I would suggest that attempts to reduce stores are only going to make the matter worse. The cost of holding spares, until such time as plant reliability can be improved and spares requirements forecast more accurately, is very likely less (even much less, considering that most of it is already a sunk cost) than the cost of lost revenues.
  7. What a topic! Lack of parts is a refrain (complaint) I hear over and over from customers all over the world. Sometimes there really are very poor spares management practices, sometimes stores has discarded needed materiel, sometimes policy does get in the way, but more often than not the real problem is planning and scheduling that is flawed and far too short-sighted. Planners expect the parts to be there, and the stores person needs to be told what spares need to be there. Even with diligence on both "sides", parts unavailability becomes a problem. Of course maintainers taking the issue into their own hands will often stash parts in shops, etc. so that they know they have their critical spares. That serves to distort the information available to the stores-keepers, so they are then left making decisions about buying on the basis of flawed, incomplete and inaccurate information. The problem is not so easily solved either, because maintainer behavior will need to change, planning forecasts need to be far better than they usually are, and stores needs to get smarter about how it determines quantities to hold and buy, and about what is NOT needed in stores any longer. Consider that parts are a form of insurance against downtime. Insurers don't use simplistic calculations to determine what coverage to offer - there's a whole array of statisticians (actuaries) looking at risks and where to and not to put their money. Most companies don't give anywhere near enough thought to spares, how much to invest in them, which spares to carry to get the most uptime, etc. You need to know what creates demand - failures and preventive work do most of that. Preventive maintenance should be planned and scheduled with a very easy-to-forecast demand for spares. Predictive and detective maintenance do not usually consume spares, but they do uncover failures that must be fixed (which usually do consume spares). That demand is also fairly easy to forecast if you use failure statistics to forecast what failures you will find. The only "surprises" should arise from those failures you allow to occur, likely randomly and (if you've done your RCM work) only on assets where some downtime is tolerable. Fast-moving items (fasteners, fittings, electric devices, some bearings, etc.) can be managed with the simpler stores calculations for min / max / EOQ / ROP. Even if you don't forecast these, you can gather usage information based on actual usage. For the slower-moving items the data won't build up for a long time, so that approach breaks down. Lead times are often longer and those will drive up the need for spares, even with low demand. You need to forecast demand. It is based on failure rates and task frequencies. Clearly, the more of your work is proactive, the more easily you can forecast demand for most items. If you want to lower your risk of stockouts, then the math gets more complex (a simplified example of that sort of calculation appears after this post list). Very few companies use the sort of spares calculation tools that are needed and few stores people seem to have the mathematical knowledge to use them. Depending on the simple algorithms built into your CMMS/EAM is just not good enough. The software tools I've seen are not part of any CMMS/EAM - they are stand-alone tools that perform the analysis. The results need to be transferred to your stores management software and then the analysis itself needs to be kept up to date. Someone needs to stay on top of the situation and that someone needs to understand what she / he is doing, not just plug numbers into formulas.
There's quite a bit of science behind getting it right. Airlines and military organizations manage this reasonably well. They put investment into making sure asset availability is a top priority - like buying insurance. Unfortunately, where failures are less critical and only cost us downtime, lost production and the occasional regulatory violation or accident, we don't typically give this nearly the attention it deserves.
  8. I am not a CMMS / EAM "user" per se, but I do work with a lot of people who are, using a variety of systems from SAP (at the high end) to some relatively unknown cloud-based packages that are best suited to a single-shop operation. I would agree with Narender. The actual software you choose / use isn't really all that important - it's all about the user and how they use it. I've seen SAP used poorly and hated (more often than not) but also where it's been used very effectively and liked (I haven't found anyone that truly "loves" it yet). Likewise for Maximo, Infor and dozens of others. When I spoke with the CEO of one of those software companies (a big one), he described his product as a box - you put stuff in, shake it up, pull stuff out. He pretty much described any database for any purpose and that was his point - it's just another tool. What I observe in many cases is that companies are data distracted. The tool is not really a tool then, and maintainers don't have a lot of time to spend mucking about with a computer that isn't helping them get their real job done. It is critical to know what you want to do with it before you commit to the tool. Implementation projects usually start with mapping business processes. Wrong! Do that before you select the tool, then get the closest match. Don't trust what the salesperson says - they must demonstrate to prove capability. Their promises about functionality are often based on what is in development, not what actually exists. Don't cheap out on implementation effort or training - skimping on either can kill the best of systems. I'd sum it up by saying that it probably doesn't really matter which software you use, so long as it has some basic work management and reporting capability and is easy to use.
  9. You might also consider RCM-Re-engineered (RCM-R). Our method includes risk and various codes associated with reliability data interchange. We also include some of the math that you will find useful. I've been doing RCM since the mid-1980s (originally using military standards) and have found the functional approach to FMEA (as built into SAE JA1011-compliant RCM methods) to be the most efficient. It is advisable to actually take some training though. Even reading the books (which are all quite good) won't be enough without practice. It's too easy to misinterpret some of what is in the books and head off in the wrong direction. Training and a few pilot projects with experienced facilitators will get you off on the right foot. Templates for PMs are somewhat dangerous unless you know they were developed in an operating context very close to your own - be careful.
  10. I took the exam and earned the CMRP in 2014. The concept that SMRP had was that it would be a recognition of experience combined with expertise. There was an extensive reading list and, at that time, I was unaware of any courses to help in preparation. The exam was intended and described as a recognition of accumulated expertise and experience. I believe it had meaning because you really needed to know your stuff to pass. SMRP did not endorse any courses that may have been aimed at getting one through the exam and, as far as I know, that is still the case (although they do endorse training providers). The exam was long (as it is now), not too difficult if you had broad and deep experience, but people without experience would have struggled - and many did. Some colleagues (typically the less experienced) took more than one attempt to pass. Since then I've seen several courses emerge to prepare people for the exam, often offered at the same venue and concurrently so that course participants could write the exam immediately. I've been asked by training organizations here in North America and overseas to teach such courses. Those organizations sell training and could earn fees on the coat-tails of a reasonably well-recognized certification that they really had nothing to do with. I have consistently refused to teach such courses. I was perplexed to see that happening. In my opinion, such prep courses, combined with the exam, cheapened the designation's meaning. It was being degraded from a recognition of experience and expertise to a recognition of the ability to remember material over a short time frame, solely to pass an exam. Such short-cuts on real experience don't result in much retention of what is taught, so how much (aside from a designation) has one gained? I renewed my membership and designation, but eventually decided to allow both to lapse. While I believe SMRP to be a terrific organization, I've seen it do nothing to arrest the degradation of the meaning behind its CMRP designation. I also see similar certifications appearing and being awarded by for-profit organizations, in recognition of the completion of courses and passing of exams. That doesn't mean the individuals don't have the expertise and experience to earn some recognition, but there is no mechanism, aside from payment of a fee and passing of an exam, to attest to that. Again, I believe the genuinely well qualified are being recognized along with those who are not so well qualified. Can such designations really be relied upon to mean very much?
  11. Raul - yes, it is challenging to keep people focused where they need to be. You need to ramp it up on the PM team, as I think I mentioned. Discipline is challenging and that's what your supervisors are there to enforce - discipline in execution and consistency with the schedule. It's about sticking to what needs to be done, not about disciplining people. Your supervisors will need to be on board. Like the techs, they need to be a part of crafting the improvement initiative.
  12. Splitting into teams makes sense. I've used that approach before and it works, but avoid the temptation to move people from PMs to reactive. You may need to ramp up the PM team effort somewhat gradually, especially if your anticipated PM workload is substantial. Estimate your hours for the PMs that you have (they might already be in standard job plans if you have them) and apportion people based on the hours of work vs. total workforce capacity (a back-of-envelope example appears after this post list). The rest will, by default, be in reactive mode. You may want to consider one or two technicians dedicated as "shift repair" who respond to operations requests and handle smaller jobs as they arise, around the clock. The bulk of your reactive team is probably going to be scrambling with bigger jobs. As you ramp up PMs you may want to perform a simple PM Review/Optimization exercise to validate that the PMs are indeed the right PMs. One possible reason for being in such a reactive state already may be that people lost faith in the PM program because they perceived it wasn't working. Be careful about perceptions though, when it comes to task frequencies. Condition monitoring should usually reveal that there are no problems and only occasionally catch them. If it catches many problems, then it is very likely also missing many. Preventive work will often seem wasteful because it results in the discard of components that appear to be just fine. If the failure modes are truly age / usage related, then that's exactly what you should expect to see. A bit of RCM-R training would be useful to help inform those decisions. The "improvement team" will need engineering talent. You may struggle to include technicians if the reactive workload is still very high, but including them is the way to go if you can swing it. Keep to simpler techniques like 5 Whys that everyone on the team will understand. The engineers should be experienced people, not juniors. The problems they are solving need to be well defined, and the engineers on that team will need to keep in mind that it is easy to get sidetracked and solve the wrong problem.
  13. Show some leadership first, insist on performance of some essentials and use reliability to give you some revenue-generation opportunities. Costs can be brought down later with better maintenance practices, but quick wins (needed for production) will come from asset reliability. You need to spend some money to help your people understand what "good" looks like - clearly you are walking into a situation where they do not. Keep that training fairly high level (overview) and get their ideas about what needs to change. Asking them for their input will help morale and begin a shift in attitude. Using those ideas you can build a longer-term improvement plan. Costs are not important at this point, so demonstrate you care by investing a small amount in training. You need to up your game on production and that means increased reliability to get out of break-down mode. Your people are used to break-then-fix, and they need to know there is a better way - training! Some quick wins will come from tackling bad actors using root cause analysis methods. Those enable increased production and revenue generation - your budget shouldn't be touched if you pick the ones that are causing the most downtime. There's no need to get fancy - use 5 Whys and make sure you can prove the answers you get with some form of evidence. Success with those will give some breathing room to get more of your workforce doing proactive maintenance. If you have a PM program - follow it. If you don't, then you need one - see Appendix C in my book, "Uptime - Strategies for Excellence in Maintenance Management" (3rd edition). If necessary, dedicate some of your people to those PMs (e.g.: an oiler or lube tech). Get your planners planning - not supervising and not chasing parts. Use your MRO support people (supply chain) to get what the plans say is needed. Don't schedule work without all the parts available.
  14. Greetings everyone. I'm Jim, co-author of "Uptime - Strategies for Excellence in Maintenance Management" (2nd and 3rd editions, 2006 and 2015) and "Reliability Centered Maintenance - Re-Engineered" (2017). I'm a professional (and certified) management consultant with over 42 years in the field of R&M and 24 in consulting, specializing in R&M and their management, mostly with larger industrial companies in just about every industry where physical assets are of prime importance. I'm an author, blogger (https://consciousasset.com/front-page/blog/), speaker (lots of conferences), trainer (public courses and in-house) and of course adviser to senior management on R&M. Happy to be here to share!
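
For post 3 above, a minimal numerical sketch (Python, with made-up figures) of why a mean on its own says so little: three hypothetical Weibull failure distributions share the same MTBF of 1,000 hours yet give very different chances of surviving 500 hours. The numbers are illustrative only, not from the post.

    from math import gamma, exp

    MTBF = 1000.0  # hours; the same "mean" for all three hypothetical cases

    def weibull_reliability(t, beta, mtbf):
        # Probability of surviving to time t for a Weibull with shape beta,
        # with the scale (eta) chosen so the distribution's mean equals mtbf.
        eta = mtbf / gamma(1.0 + 1.0 / beta)
        return exp(-((t / eta) ** beta))

    for beta in (0.8, 1.0, 3.0):  # infant mortality, random, wear-out patterns
        print(f"shape beta={beta}: chance of surviving 500 h = "
              f"{weibull_reliability(500, beta, MTBF):.2f}")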
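
For post 4 above, a rough sketch of scrubbing CMMS/EAM history before estimating time between failures: keep only the work orders that actually corrected a failure rather than counting every visit to the asset. The record layout and work-order type codes are invented for illustration and do not correspond to any particular CMMS.

    from datetime import date

    # Hypothetical work-order history for one asset (invented field names).
    work_orders = [
        {"asset": "P-101", "date": "2023-01-10", "type": "PM"},                  # routine PM
        {"asset": "P-101", "date": "2023-03-02", "type": "CM-FAILURE"},          # true failure
        {"asset": "P-101", "date": "2023-05-15", "type": "PLANNED-REPLACEMENT"}, # proactive change-out
        {"asset": "P-101", "date": "2023-08-21", "type": "CM-FAILURE"},          # true failure
    ]

    # Keep only the events where a failure actually occurred.
    failures = sorted(date.fromisoformat(w["date"]) for w in work_orders
                      if w["type"] == "CM-FAILURE")
    gaps = [(b - a).days for a, b in zip(failures, failures[1:])]

    print("work orders on the asset:", len(work_orders))
    print("true failure events:", len(failures))
    print("days between failures (input to a Weibull fit):", gaps)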
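
For post 5 above, a hypothetical sketch of the minimum failure-related fields a work-order close-out could capture in order to answer the questions listed there. The field names are invented, not taken from any real CMMS/EAM.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FailureCloseout:
        asset_id: str
        did_it_fail: bool                 # actual failure, or proactive/planned work?
        component_failed: Optional[str]   # what failed
        failure_mode: Optional[str]       # how it failed (e.g. "bearing seized")
        suspected_cause: Optional[str]    # cause, if it can be identified
        found_by_pdm: bool = False        # was the job the result of a PdM finding?
        imminent_failure: bool = False    # would it have failed soon without intervention?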
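
For post 7 above, a simplified sketch of risk-based stocking for a slow-moving spare, assuming Poisson-distributed demand over the replenishment lead time. The failure rate, lead time and service level are made-up numbers; real spares-analysis tools are considerably more sophisticated than this.

    from math import exp, factorial

    failure_rate_per_year = 0.6     # assumed demand rate for the part
    lead_time_years = 0.5           # assumed supplier lead time
    target_service_level = 0.95     # acceptable probability of no stockout

    mean_demand = failure_rate_per_year * lead_time_years  # expected demand in the lead time

    def prob_demand_at_most(s, mu):
        # P(demand <= s) for Poisson-distributed demand with mean mu.
        return sum(exp(-mu) * mu**k / factorial(k) for k in range(s + 1))

    stock = 0
    while prob_demand_at_most(stock, mean_demand) < target_service_level:
        stock += 1

    print(f"hold {stock} on hand to keep stockout risk under "
          f"{1 - target_service_level:.0%} during the lead time")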
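
For post 12 above, a back-of-envelope sketch of apportioning technicians between the PM and reactive teams from estimated PM hours versus workforce capacity. All numbers are invented for illustration.

    pm_hours_per_week = 180        # estimated from standard job plans (assumption)
    hours_per_tech_per_week = 36   # available wrench time per technician (assumption)
    total_techs = 12
    shift_repair_techs = 2         # dedicated responders for operations call-outs

    pm_techs = round(pm_hours_per_week / hours_per_tech_per_week)
    reactive_techs = total_techs - pm_techs - shift_repair_techs

    print(f"PM team: {pm_techs}, shift repair: {shift_repair_techs}, "
          f"reactive team: {reactive_techs}")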