Transcat Webinars and Online Learning

Calibration’s Role in the Manufacturing Jigsaw Puzzle:

Metrology in Manufacturing & Measurement Assurance Programs

Howard Zion | Transcat's Director of Service Application Engineering

In this Webinar, Howard Zion, Transcat’s Director of Service Application Engineering, examines the relationship between calibration and manufacturing quality measurement assurance. Howard reviews the role of calibration as it relates to the manufacturing process, and explains why the existence of a calibration program does not necessarily guarantee measurement assurance. He then lists the critical components of a true measurement assurance program, and details the steps needed to make a reliable evaluation of your measurement decisions.

Sarah: Good afternoon and thanks for joining us to hear about calibration’s role in the manufacturing jigsaw puzzle. My name is Sarah Wallace from Transcat, and I'll be your moderator this afternoon. Our presenter is Howard Zion, Transcat's Director of Service Application Engineering. So, for the first part of our time together today, we'll have the presentation, and then at the end of the call, we'll answer your questions.

You'll notice in your webinar controls to the right, there's a box for questions, so you could just put your question in there, and then at the end of the presentation, we'll go through there and answer them.

I also wanted to mention that this webinar is being recorded today. Each of you will receive a follow-up email with a link to the recorded webinar and the slides of today's presentation, probably about two hours after the presentation finishes.

So, at this time, I'm just going to turn it over to Howard.

The Big Picture: Manufacturing Goals

Howard: Thank you, Sarah. Good afternoon or good morning, wherever you are joining from. I want to talk to you about some concepts of how calibration ties to manufacturing: things that start in manufacturing and then push to the need for calibration, and making sure there are no disconnects there, because that can happen. And it does happen more often than it should.

We're going to cover what the manufacturing goals are. We'll talk about how things get segregated or split out as different people in the company are responsible for different functions and departments. And then the piece of the puzzle that's all about calibration support. And sometimes that appears to fit; sometimes it really doesn't fit, but it looks like it should. And then following up with how to make sure you get your pieces to fit properly.

Manufacturing goals, the big picture there is to make products that are:

  • In market demand because you want to sell them, and if people don't want them, you don't want to be into that type of a product. So, market demand is important.
  • You want it to be profitable.
  • You want your product to be effective.
  • And you want your product to be safe.

So, the whole big picture can be split into parts, and if we put it all together -- some of this animation is a little bit slow -- but if we put it all together, we can see how it all fits.

So, all of these functions and others are required to make this happen, to be able to manufacture products of any type. Some of the functions of the manufacturing process are necessarily split into separate departments. Some of the sub-functions, some of the sub-assemblies of the product, are either outsourced or handled by different departments internally. And that in itself sometimes causes miscommunication, or a lack of communication -- not intentional -- but it can cause things to get dropped or missed.

Puzzle Pieces: The Components of Manufacturing Quality

Example: our materials are purchased from multiple sources, depending on what is needed and in what quantities, and the purchasing people get involved to get the lowest cost -- or, rather, to make sure you have good quality at low cost. Some assembly can be farmed out to local tool shops.

Some companies enforce the quality of the parts on to their supplier to make sure they're receiving good parts. Other companies have receiving inspection to check a sample of the supplier's parts for quality, or maybe 100% inspection, to make sure they get what they pay for and that it's going to work in their product.

So, all of these things cause different functions to get split out, and then that puts a bigger burden on communication -- not only verbally, but through documentation -- to make sure people understand the job they're doing and what they need to do to ensure that you, as a corporation, get what you're asking for and what you're paying for.

So, how does that affect the calibration function, as one focal point? It goes back to understanding what the original point of calibration was in manufacturing. And that's really to make sure you don't lose the connection between designing your product, finding methodologies to manufacture the parts of your product to get it to market, and the pieces that tie to making sure you have good measurements on all of those parts of the product.

And making sure you have good measurements means that you need to select suitable instruments, among a number of other things, and make sure that your calibration is supporting what is needed for those measurements on the product.

That creates the calibration silo -- especially if calibration is outsourced, but it can happen even with an internal lab. And how does that affect manufacturing? How does that affect the company's goal of making its product?

Calibrations Don’t Always Equal Measurement Assurance

What can happen is, calibrations may be taking place, but that doesn't necessarily mean that measurement assurance is in place. And we'll get to the components of measurement assurance in a minute, because it's not just calibration.

So, as an example, I'll use an Omega HH82A, which is a temperature indicating device. One of the examples I have seen in my experience involves one of these devices, which, by the way, has a channel A and a channel B. So, there are two channels that can be used for temperature measurement. It can also be used to measure differences between temperature values at two probes. If this is being used on the production floor to make quantitative measurements about the product being good or bad, then it needs to be calibrated -- both channels. And by the way, each of the channels can handle one of four thermocouple types: J, K, T, and E.

And what I'm seeing is calibrations that have been performed on these -- in one example, a calibration where only channel A was calibrated, and only two thermocouple types. And the question to that manufacturing quality manager was, "Where are you using this? How are you using it? Let's find out for sure what you're doing with this."

And it turned out both channels were being used, and all four thermocouple types, so they no longer had any measurement traceability on channel B or on the other two thermocouple types on channel A.

That creates a problem because they're making decisions about the product being good or bad, and they really don't know if that instrument is telling them the right answer.
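The gap Howard describes -- an instrument calibrated on fewer channels and thermocouple types than production actually uses -- can be caught with a simple coverage check of the cert against floor usage. A minimal sketch in Python; the specific channel and type combinations here are hypothetical, chosen to mirror the HH82A story:

```python
# Sketch: compare the calibrated scope of an instrument against how it is
# actually used on the production floor. The combinations are hypothetical.

calibrated = {("A", "J"), ("A", "K")}          # what the cert actually covers
used_in_production = {("A", "J"), ("A", "K"),  # what the floor actually uses
                      ("A", "T"), ("A", "E"),
                      ("B", "J"), ("B", "K"), ("B", "T"), ("B", "E")}

# Any (channel, thermocouple type) combination that is used but not
# calibrated has no measurement traceability.
uncovered = used_in_production - calibrated

for channel, tc_type in sorted(uncovered):
    print(f"No traceability: channel {channel}, type {tc_type} thermocouple")
```

Even a spreadsheet version of this comparison, done at cert review time, would have flagged the missing channel B and thermocouple-type coverage before any product decisions relied on them.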

So, on the left you can see, taking it back to the analogy of puzzle pieces and how things fit together, that it looks like the piece on the left fits into the calibration portion, but the piece on the right looks like it could fit in there as well. And if you take the larger piece above it and put it into place, you find out that the piece on the left really isn't the right piece. It's the one on the right that actually fits into the calibration piece.

And so, just like the jigsaw puzzle example, some calibrations can look, to the untrained eye, like, "Yes, I got the calibration done. I have the certificate to prove it. It looks like I'm doing what I should for that requirement in my organization." Yet when you get down to the details, it wasn't fully calibrated, and it really doesn't support the operation like it should.

So, you're at risk, in that situation, of passing product that could be actually bad. Or, the alternative to that is that you could be accepting product that really should have been rejected.

So, with this example, what similar risks might you have with the calibrations that you're currently receiving? If it happened with this item, it can happen with others. And I've seen it multiple times where calibrations really aren't supporting the production process. And that breaks down the entire measurement assurance program.

So, if manufacturing of any type is interested in maximizing their profits and making sure they have good product safety and quality, then they must have an active measurement assurance program.

Measurement Assurance Programs

So, let's talk about a measurement assurance program and what that consists of.
There may be additional components to this, but this gets to the meat of it.

1. The Right Tool for the Right Job

First, you have to select the right tool for the job: an instrument whose accuracy, range, and capabilities are suitable for the process measurement it supports. We'll look at what makes an instrument suitable in the temperature example later on.

2. Regular Calibration

It has to be regularly calibrated because you're making decisions, again, about the product.

So you're basing those decisions on instrumentation that gives you quantitative values. And those calibrations have to support the process the instruments are used for. They have to apply the correct tolerances. There are situations where you can end up with incorrect tolerances by using a different procedure or a different source of specifications.

Calibrations have to be valid, meaning there's an uncertainty estimate that supports them, so you can actually quantify the values of the calibration measurements. All of that is very important in making sure you don't lose that piece of what you're trying to do in the manufacturing process.

So, an internal lab probably has a greater chance of getting it right, because they should be tied in, to some degree, with what's going on on the production floor. And if they're not talking back and forth, that's where it can get lost.

The other thing is, metrology is a fairly small world, and a lot of people have gotten their training through the military, so people are comfortable using what they've learned there -- that is, the military cal procedures. Sometimes those procedures are modified and don't cover the full calibration of the instrument, because they only fit what the military's need was. Sometimes they change the tolerances from what the manufacturer had, or they don't keep up with the manufacturer's changes as the specifications are updated.

And so, if there's any disconnect there, it follows through to whoever uses those procedures. So, you have to be very diligent in making sure that the calibration meets the customer's expectations.

The customer's -- or the instrument user's -- expectation should be, and usually is, based on what the manufacturer says that instrument can perform to. And that is the specification sheet. And usually, if the manufacturer has a valid, good-metrology-practice calibration procedure, you'll follow that as well. So, you have to be very careful that the calibration isn't drifting away from what the intent was when the person selected the right tool for the job.

3. Using the Instrument Correctly

Then once it's calibrated correctly and you've picked the right tool for the job, you've got to use it correctly. That can be a matter of training the operator. It could be gage R&R studies to figure out how to minimize variances in the use of the instrument, or the complexity of how to use the instrument. So, there's a lot that goes into that piece of measurement assurance. If you have everything calibrated right, and you pick the right instrument, and somebody uses it wrong, there goes your measurement assurance, and there goes the whole idea of manufacturing a product that's known to be good.

4. Accounting for Process Measurement Uncertainty

This is about accounting for irregularities in the production process, or process measurement uncertainties. A lot of people don't think about this concept, but there are uncertainties in every measurement that's made, not just in the calibration lab. On the production floor -- I kind of alluded to this with using the instrument correctly and gage R&R studies -- there could be a number of things you need to pay attention to, outside of just the calibration of the instrument, that can affect the measurements on the product. And you've got to make sure that you're taking those into account.

5. Out of Tolerance (OOT) Measurement Investigations

And then when you get your information back from a calibration event, you've got to determine whether an out-of-tolerance investigation needs to be performed, and you need to do it correctly, so that you're tracing the errors of the instrument back to the decisions that were made about the product, or about any process where that instrument was used.

In doing that, you'll find out if you made decisions that could have accepted bad product or rejected good product.

6. Corrective Actions for Out of Tolerance

And then there are corrective actions for that out-of-tolerance impact. If you found that there are potential problems with having accepted product that should have been rejected, or vice versa, you've got to take some corrective actions. That could be an instrument recall, it could be rework of your product or components -- a number of actions could be taken. But that was the whole point of traceability in the first place: calibrating the instruments so you know that you had good product. And if you know that you may not have, you've got to take action on that.

Temperature Measurement Example

So, I want to go through an example: a simple temperature measurement. A manufacturing engineer is designing a process where he or she needs to measure a solution used to treat the product. The process measurement is 350 degrees Celsius. The engineer determines that outside of a two-degree window around that value, you start having problems and the product isn't treated the way it needs to be.

So, if that's the determination, the manufacturing engineer needs to figure out for their product where those limits would be for a process.

Now you have to pick the right tool for the job. What instrument is suitable for this measurement? Would it be an instrument with an accuracy equal to those process limits, plus or minus two degrees? Most people would understand that's not a good idea, because the instrument is then allowed to drift that full amount over its cal interval, and that can directly impact previous process measurements and decisions about the product.

So, let's say the instrument has been calibrated today, and it was adjusted to nominal. So, it's reading perfectly -- or within the uncertainty of the measurement, anyway. The instrument is used that same day to measure the solution temperature, and the temperature is right at nominal. Everything is aligned and perfect. The sun is shining, birds are chirping, everyone is happy. That process measurement is good.

Now, on the last day of the instrument's calibration interval, before it's recalibrated, it's used to measure the solution temperature and, lo and behold, it reads right at 350 degrees Celsius, right at the nominal point.

So, over that instrument's cal interval, as we used it to make measurements about this process, on the first day and the last day, at least, we see that it was at nominal, and we're happy. The process measurement is good.

Now the instrument goes in for recalibration, and subsequently it's adjusted and returned to nominal. The as-found readings show that the instrument had drifted to the upper limit of 352 degrees C. Remember, we picked an instrument that has the same tolerance as the process measurement we're trying to make. Now it has drifted to its upper limit. What does that do?

That drift occurred over its cal interval. It probably didn't happen all at once, unless the instrument was damaged or something. But that is still called an in-tolerance reading. And there's no flag for the person getting that cert back saying, "I should check into some problem that I might have." They just see that it's in tolerance and that it was adjusted back to nominal, and they go on their way.

But did it move all at once, or a little bit over time? A little over time is typically the situation. And how does that impact the measurements, or the decisions about the process, since the last time that instrument was calibrated?

Because you don't know -- if you don't have any information that would tell you otherwise -- you have to assume the last known good condition of the instrument was the last calibration performed. Everything that instrument touched over that period of time, even though the instrument was in tolerance, you're going to take a look at how that error affected your decisions. Everything has to be reviewed for potential risk.

So, it wasn't reading at nominal in your process, as you thought it was. Go back to the first day -- since we have to go all the way back, unless you have some other checks and balances in place, like intermediate checks on the instrument to see if it was drifting, you have to assume the worst all the way back to that first day. It likely wasn't bad then, but you don't know, so you have to say it wasn't actually reading 350: the instrument I used was actually reading higher than the true value. When I correct the reading back to where it should have been, that brings my process measurement down by the same amount.

And now I'm really at the borderline of the process acceptance level; that lower tolerance limit is where I was actually sitting. Same thing with the reading on the last day, at the end of the cycle. That process measurement is good because it's within tolerance -- it's at the lower limit, but still in tolerance.

That should have you concerned, though.

What if some of those readings throughout that interval, as the instrument drifted over time, were at 348, and you said, "That's within the tolerance. I'm good to go"? Now that you know the effect of the instrument being at its upper limit -- the fact that the true value would have been lower by two degrees -- that reading would no longer be acceptable. And yet the in-tolerance calibration on the temperature instrument was not flagged as a situation that puts you at risk.
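The arithmetic behind this scenario is simple enough to sketch. Using the values from the example (350 ± 2 °C process limits, an as-found error of +2 °C at recalibration), correcting each historical reading by the as-found error shows which decisions are now suspect:

```python
# Sketch of the as-found impact evaluation described in the example above.
# All values come from the talk's scenario (350 +/- 2 C, instrument found
# reading 2 degrees high at recalibration).

process_nominal = 350.0
process_tol = 2.0              # process limits: 350 +/- 2 degrees C
as_found_error = +2.0          # instrument indicated 352 at a true 350

lower = process_nominal - process_tol   # 348.0
upper = process_nominal + process_tol   # 352.0

def corrected(reading, error=as_found_error):
    # If the instrument read high by `error`, the true value was lower.
    return reading - error

for indicated in (350.0, 348.0):
    true_value = corrected(indicated)
    ok = lower <= true_value <= upper
    print(f"indicated {indicated} -> corrected {true_value}: "
          f"{'PASS' if ok else 'FALSE ACCEPT'}")
```

A reading of 350 corrects to 348, right at the lower limit; a reading of 348 corrects to 346, outside the process window, even though the instrument itself was reported in tolerance.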

Do you see the dilemma here? It's not just out-of-tolerance situations that should flag you. You must do some evaluation of the effect on your decisions. It's about any shift in that instrument, and making sure that you understand what it did to the decisions about your process.

Because now you have a situation where you're actually well beyond the acceptable tolerance for that solution in the manufacturing process, and that process measurement was actually bad. That's what we call a false accept situation.

So, we really don't want to pick an instrument that has the same tolerance as the process we're trying to measure. We want something more accurate. What if we picked something with an accuracy twice as good as the process we're measuring? Now, if it drifts to its upper limit over its cal interval, process readings would only have been off by half of the process tolerance, and that helps you. But there is still risk.

What if an instrument was selected that is four times as good as the process tolerance? Now, if it drifts, your process readings will only be off by one quarter of the process tolerance -- again, if the instrument only drifted to its outer limit. And a manufacturer's specification of how their instruments will perform is about two things: setting the cal interval over which the instrument will hold its values, and determining what those values are for accuracy and other parameters. So the manufacturer is saying, "I expect that the majority of the instruments I make will hold these tolerances over this period of time." Because they should, it is not likely that they will go beyond that, although it does happen. And now we're using that kind of logic to determine the right instrument for the job -- the suitability of the instrument for the job.

A four-to-one ratio is traditionally where people have gone. We could look for ten times better, or a hundred times better, but the problem is there are limits to technology that won't get you there in some cases, depending on the measurement parameter, and eventually it becomes cost-prohibitive to push that concept too far. And, as I said, a four-to-one ratio is usually sufficient to reduce to an acceptable level the probability that an out-of-tolerance condition would have had an impact on the product or the process decisions during that cal interval. For some measurements this four-to-one ratio can't be achieved, so you have to live with the higher risk, and you've got to know how to manage it to your benefit.

So with this four-to-one ratio, the instrument has a tolerance limit one quarter of the process tolerance. If your reading was 348 during the use of the instrument over its cal interval, it really could have been 347.5 -- off by a quarter of the process tolerance, which is what the instrument's tolerance would be. So you are still in a situation, even at four to one, where you could have false accept decisions on the process or the product. You still have to deal with that, but it minimizes the risk and gets it to a more manageable level. Here we still have a process measurement that's bad; we still have a false accept situation.
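The effect of the accuracy ratio can be tabulated directly. A short sketch comparing the 1:1, 2:1, and 4:1 ratios discussed above, using the talk's 350 ± 2 °C process values:

```python
# Sketch of the accuracy-ratio comparison: how far a process reading could
# really be off if the instrument drifts to its full tolerance. The
# 2-degree process tolerance comes from the talk's temperature example.

process_tol = 2.0  # degrees C

for ratio in (1, 2, 4):
    instrument_tol = process_tol / ratio   # allowed drift over the cal interval
    print(f"{ratio}:1 ratio -> instrument tolerance {instrument_tol:.2f} C; "
          f"a 348.0 reading could truly be as low as {348.0 - instrument_tol:.1f}")
```

At 1:1 a 348 reading could truly be 346, well outside the process window; at 4:1 it could only be 347.5, a much smaller and more manageable exposure.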

How do you deal with that?

Lesson learned here is even in-tolerance results can impact your process measurements.

Guard Band Your Process Limits to Reduce Measurement Risk

I’d be willing to bet there are a number of people in the audience who never realized that. Hopefully this has helped you understand why all of your cal data should be reviewed against your process measurements to understand the impact on the product. And I call that not an out-of-tolerance NCR, but an in-tolerance NCR evaluation. At this point I am sure a lot of you are shaking your heads saying, “Are you kidding me? I’ve got to do all this extra work for in-tolerance results?” Hold on. I’ve got a better solution to help you out there.

Impact studies are expensive. You don’t want to have to do them. They consume valuable resources and very costly time. It is rework, which means you are not working on new product, which takes away from your profitability. It costs thousands of dollars per evaluation event. And parts or products that you’ve already passed in a false accept situation may have already been released or shipped by the time you get the calibration information back showing an out-of-tolerance condition on the instrument you used to make those decisions.

Sarah, I think I've lost the control here. I can't forward.

Sarah: Go ahead and try one more time.

Howard: No.

Sarah: Alright. I am going to take control for one moment. Sorry about this everyone. Okay. There you go.

Howard: We're back on track. Thank you. So, if product has been released or shipped to a distribution warehouse or to clients, it is out of your control. Now you might have to do a product recall. That could be very expensive; you don’t want that. If it hasn’t been released, you might have to do rework, and that too is expensive, because it consumes workers to rework the product instead of making new product, plus the cost of the materials if you have to scrap it. So the good news is this risk can be reduced. And the concept there is guard-banding the process limits.

We will talk about that here. For this guard band, first you want to determine your realistic tolerance limits for the process, as part of your normal determination of instrument suitability, comparing the instrument and process tolerances. Then you want to take the instrument tolerance -- the expected or maximum drift of the instrument over its calibration interval -- and back that off from your upper and lower process limits.

That gives you new upper and lower acceptance limits, so that if the instrument drifts over its calibration interval up to its maximum tolerance value, you negate the need to perform those in-tolerance non-conformance report evaluations, because it would still be an in-tolerance situation throughout its interval.

And that process measurement is now protected unless the instrument drifts further than its tolerance limit, in which case you still have an out-of-tolerance situation that you have to perform an NCR evaluation on. But that should significantly reduce your risks from where you are today.
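The guard-banding arithmetic described above is straightforward. A sketch using the talk's example values (350 ± 2 °C process limits and a 0.5 °C instrument tolerance at a 4:1 ratio):

```python
# Sketch of the guard-banding calculation: pull the acceptance limits in by
# the instrument's allowed drift over its calibration interval. Values
# follow the talk's temperature example at a 4:1 accuracy ratio.

process_nominal = 350.0
process_tol = 2.0        # realistic process limits: 350 +/- 2 degrees C
instrument_tol = 0.5     # maximum expected drift at a 4:1 ratio

# Guard-banded acceptance limits used on the floor:
guard_lower = (process_nominal - process_tol) + instrument_tol   # 348.5
guard_upper = (process_nominal + process_tol) - instrument_tol   # 351.5

def accept(reading):
    # A reading inside the guard-banded limits stays within the true
    # process limits even if the instrument later proves to have drifted
    # by its full tolerance, so an in-tolerance cal result needs no
    # impact evaluation.
    return guard_lower <= reading <= guard_upper
```

With these limits, a 348.0 reading is rejected at the time of measurement rather than discovered later as a possible false accept when the as-found cal data comes back.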

So, to summarize all of this:

  1. Make sure you implement a good measurement assurance program that takes all of the different components into account, because the goal there is to protect your product, to make sure you are not making bad decisions about it.
  2. Understand and exercise good suitability of the instruments, the right tool for the job.
  3. Guard-band your process tolerance limits to reduce costly NCR evaluations -- or, if you weren't aware that in-tolerance results can affect your process, mitigate that risk altogether now.
  4. Understand both the process that you are trying to run for your manufacturing and the calibration of the instrument, to ensure that the intent of preserving good measurements on the product is not lost. That’s not just the manufacturing engineer; that’s the operators who are actually performing the tests, and the person in charge of getting the calibration done, whether it's an internal lab or an outsourced one. All of those people need to be tied together with these concepts to protect the measurements and the decisions made on the product. Everyone has a role.
  5. Be thorough in your non-conformance report evaluations to make sure, again, that you are using all that information you are spending money on, to determine whether those decisions were good or bad, and to fix the process if they weren’t.

So if you are in over your head, get some help. We are here to help you. Any questions at this point?

 

Questions and Answers

1. Can you define uncertainty and just give some examples? ➩

2. My company is trying to reduce cost by lengthening the calibration intervals of some of our equipment by reviewing past cal data. Is there a general document or guidelines I can look at to approach this process? ➩

3. Do I understand you correctly? Is it possible to have a piece of equipment that has accuracy which supports the process' accuracy requirements, yet the uncertainty related to the user, determined by gage R&R, could mean it is not an appropriate piece of equipment? ➩

4. For clarification, is the 4X calibration based on the LAL limit or the process limit? ➩

5. What are the most common or more importantly, the highest risk you see when companies use their own associates to perform calibration of the company's equipment? ➩

6. How can I be sure my calibration service provider is using this practice? ➩

7. Does the calibration lab calibrate to the manufacturer spec or allow for differences? ➩

8. When you talk calibration, are you talking for the calibration of the instrument that you use to calibrate the sensor that controls the process, or are you talking the calibration of the control and controls the process? ➩

9. Is there a way to determine gauge accuracy? Example: we have a counter that measures cable length, and we have a calibrated cable. I would like to validate that the counter is at 0.6% accuracy. How should we use what we have to determine the counter's accuracy? ➩

10. What would be the recommended level or type of training of an individual tasked with determining measurement uncertainty? ➩

11. So, Transcat offers a service of auditing an existing cal program? ➩

12. Would it be easier -- I'm not sure where the comparison comes in -- but would it be easier for the engineering department to change drawings to allow for the equipment tolerances? ➩

13. Is there a specification defining how long a piece of calibrated electronic test equipment can sit on a shelf before putting into service? ➩

14. What or who do you think would be considered as a valuable certification body for calibration? ➩

15. Do you believe a three-point calibration is best in class or should it be a five-point calibration? ➩

16. In the manufacturing environment, what would be considered a minimum training level to perform calibrations in-house, i.e, the tech level? ➩

17. Okay, can you not determine the calibration frequency of the instrument based on the product you are using it on, as far as how critical it is to your process? ➩

18. Okay, how do I know when my instrument needs calibration? Do I need to wait until, or very close to, the expiration date, or can I use it a few days after the expiration date? ➩

19. Does a piece of equipment need to be calibrated if it is compared to a known measurement? ➩

20. Okay, does the calibration interval depend on the criticality level? ➩

21. Are there any regular publications, such as magazines, that address the world of measurement assurance and calibration? ➩


1. Can you define uncertainty and just give some examples?
Howard: Sure. I am not sure what level you are approaching the question from, so I will start with some basics. Uncertainty of measurement really goes to... Let’s say that you are using the recipe your mother used for making cookies, and you've decided that you are going to make cookies too. As far as you can tell, you've followed everything that she did, but you make the cookies and they just don’t turn out the same -- not as crisp, or not as soft, or they don't have the same flavor -- and you are not sure why that happened.

The reality is that there are a number of variables in play: how you measure the flour, the type of flour you used, the type of chocolate chips if they were chocolate chip cookies, the baking altitude, the temperature of the oven. There are a number of things that come into play. All of those variables that can change the outcome are uncertainties of the measurement. If that helps you, then that’s what we are talking about with measuring equipment as well. All of the things we do in the calibration lab are to determine the true value of the instrument, or as close as we can get to it. You never get to an exact value, but you get within some estimate -- and the variables that can cause you to be off in making those determinations are the uncertainties of the measurement. Same concept.

And in the production process, when you use an instrument... Let’s say, for example, you are using a dry block calibrator, if you're familiar with that. That’s an instrument that generates temperature, with one or more wells where you put temperature probes, and you are making comparisons between two temperature devices. In that process, if your probe is not the right size for the well, if there are variances in temperature from well to well, if there is variance in the depth to which you insert the probe, or in the amount of the probe's stem sticking out above the well, which causes a cooling effect -- there's a lot of variance in making a measurement in your process, which drives the determination about the process or your product.

But no matter what you are doing, there is uncertainty surrounding the values you come up with, and you need to nail those down and understand them, or control them if you can.
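Howard's point about independent sources of variation maps onto a standard calculation: independent standard uncertainties combine in quadrature (root-sum-of-squares), and an expanded uncertainty at roughly 95% confidence uses a coverage factor of k = 2. Here is a minimal Python sketch; the source names and values are purely illustrative, not from the webinar.

```python
import math

# Hypothetical, independent standard uncertainties for a dry-block
# temperature measurement (values are illustrative only)
sources = {
    "reference_probe": 0.05,    # degC, from the probe's calibration cert
    "well_uniformity": 0.08,    # degC, well-to-well temperature variance
    "immersion_depth": 0.04,    # degC, stem-conduction / depth effect
    "readout_resolution": 0.01, # degC, display resolution contribution
}

# Independent sources combine in quadrature (root-sum-of-squares)
combined = math.sqrt(sum(u**2 for u in sources.values()))

# Expanded uncertainty at ~95% confidence uses coverage factor k = 2
expanded = 2 * combined
print(f"combined u = {combined:.3f} degC, expanded U (k=2) = {expanded:.3f} degC")
```

Note that a real budget would also weigh sensitivity coefficients and distributions per the GUM; the point here is only that the well-to-well and immersion effects Howard describes each add a term under the square root.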

2. My company is trying to reduce cost by lengthening the calibration intervals of some of our equipment by reviewing past cal data. Is there a general document or guidelines I can look at to approach this process?
Howard: Yes, absolutely. NCSL International publishes recommended practices, and its Recommended Practice 1, or RP-1, document covers the different methods for determining calibration intervals. I think there are six methods in there, ranging from something as simple as whatever the manufacturer recommends, to formulas based on the history of the instrument, to very complex equations for the more sophisticated methods. So you want to look at that, see which one fits your needs, and then implement it. You can get that document on NCSL International's website.
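The simplest of the history-based approaches is purely reactive: lengthen the interval after an in-tolerance calibration, shorten it after an out-of-tolerance one, within sensible bounds. The sketch below is a toy illustration of that idea under assumed parameters (the 1.25 factor and the day limits are hypothetical choices, not values from RP-1).

```python
def adjust_interval(current_days, passed, factor=1.25, min_days=90, max_days=1095):
    """Toy reactive interval adjustment: lengthen after an in-tolerance
    result, shorten after an out-of-tolerance one. The factor and bounds
    are illustrative assumptions, not prescribed values."""
    new = current_days * factor if passed else current_days / factor
    # Clamp to a plausible operating range before rounding down to days
    return int(min(max(new, min_days), max_days))

# Example: a 365-day interval after an in-tolerance calibration
print(adjust_interval(365, passed=True))
```

RP-1's more sophisticated methods fit reliability targets to the instrument's whole calibration history rather than reacting to single results, so treat this only as a starting point.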

3. Do I understand you correctly? Is it possible to have a piece of equipment whose accuracy supports the process's accuracy requirements, yet the uncertainty related to the user, determined by gage R&R, could mean it is not an appropriate piece of equipment?
Howard: Yes and no. It could be that it's not appropriate because it's too difficult to use and to train people on, or maybe too costly to do all of that. But, that aside, that's unusual. It could be that it throws your measurement assurance program off. So you've got a good instrument that has good accuracy, is suitable for the process, and has a good calibration on it, but you can't get the operators to use it correctly, or it causes so much influence on the process that you can't control it. Then, yes, that would make it no longer valuable to you for your process, because it affects your measurement assurance and your decisions about the product.

So you might either find a way to make that one work, or find a different product.

Sarah: Okay, moving right along:

4. For clarification, is the 4X calibration based on the LAL limit or the process limit?
Howard: Repeat that again, I didn't get the part about 4X.

Sarah: Yeah, 4X. Is the 4X calibration based on the LAL limit, or the process limit?

Howard: Oh, well, that depends on what you want to do there. You can control that internally by saying, "I know that if the value that gets reported back to me is outside these limits, then I need to take action." By the way, the calibration in that case would follow the manufacturer's tolerances by default.

So you set those up internally, knowing for each instrument what those values are, and you look for them when you review the cal data. The alternative is to ask your calibration supplier for a customized calibration that sets the tolerance limits to your needs. Then, when the calibration report comes back and it says out-of-tolerance, you know you have something you've got to look at.

So there are several ways to handle that. There may be a cost to handling it that way, but that's an option.
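The internal-limit idea Howard describes can be sketched: keep tighter action limits inside the manufacturer's tolerance (here a hypothetical 4:1 split, echoing the questioner's "4X"), so a result that is still in-tolerance by the default spec can nonetheless trigger review. The function name, scheme, and numbers are illustrative assumptions, not a standard formula.

```python
def check_reading(value, nominal, mfr_tol, ratio=4.0):
    """Hypothetical screen for a reported calibration value: an internal
    action limit is set at 1/ratio of the manufacturer's tolerance."""
    action_limit = mfr_tol / ratio
    error = abs(value - nominal)
    if error > mfr_tol:
        return "out of tolerance"           # fails even the default spec
    if error > action_limit:
        return "in tolerance, but review"   # inside mfr spec, outside action limit
    return "ok"

# A 100 V point with a 0.100 V manufacturer tolerance: 0.020 V error
# is inside the 0.025 V internal action limit
print(check_reading(100.020, 100.0, mfr_tol=0.100))
```

Whether the ratio is anchored to the instrument spec or to the process tolerance is exactly the decision Howard says you have to make and document internally.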

Sarah: Okay.

5. What are the most common or more importantly, the highest risk you see when companies use their own associates to perform calibration of the company's equipment?
Howard: I would say that in the calibration process there are a lot of areas that can be a problem. The accreditation process is valuable if you use it to help your technicians understand how the things they do, the actions they take, and the way they handle the measurement impact their ability to get a good answer for their calibration measurement.

So, you can become accredited and not do that; there are ways to do that, and then you don't gain the value from it. But accreditation can help those technicians become more proficient in those concepts and aware of what they're doing in the calibration event.

That's beyond the normal calibration training you get if you're in the military or elsewhere. So, the biggest thing I see is jumping to the conclusion that you don't need to flag a limitation on an instrument when you believe you're customizing it to the client's need. In that case, there's no forewarning unless the operator is looking at the cert to see, "You didn't calibrate certain functions, or you didn't calibrate the full range of the instrument, or you changed the tolerance limits for some reason."

That to me is a huge red flag for auditors. They can find an issue where you believe you're using an instrument that's fully calibrated, but you haven't looked to see that it's limited, and there's nothing that indicates to you that it's a limited situation, either in the tolerances or the functionality of the instrument.

So I see that too much, and that is a huge risk that I don't know a lot of people understand is a risk to them.

Sarah: Okay, next question is:

6. How can I be sure my calibration service provider is using this practice?
Howard: You know, it's just like anything else you buy. You take a look at it to see if it's good quality and has what you expect. You can't just drop it and leave it. You've got to keep checking on it occasionally.

Awareness of these kinds of concepts is one way to take a look at your calibration supplier, and start looking for things that seem out of place. Another way is to hire a consultant to help you with that, somebody who's an expert at it, if you're not familiar with it or not comfortable with those decisions. It doesn't take much for somebody who really knows what they're looking at, to reveal to you what the problems may be.

Those shouldn't be biased answers given just to win business. They should be things that you yourself can go verify and say, "Yeah, that is their problem, and it's not getting me where I need to be."

Sarah: Okay

7. Does the calibration lab calibrate to the manufacturer spec or allow for differences?
Howard: Well, I'll tell you what Transcat's philosophy is and what our calibration policy states: we calibrate, as a default service, to the manufacturer's tolerances, using methodologies that support that process. The methodology piece means we're looking, hopefully first, at a good step-by-step calibration procedure from the manufacturer.

And if they do not have one, or it's not sufficient to support a good calibration, then we look at other documentation. That could be a military document, a guidance document, or ASTM, ASME, or ANSI documents... There are a number of ways you can get there.

If we don't see anything that makes sense out there, from the manufacturer or from other organizations, then we'll design and write one ourselves, using other source documents to support why we ran it the way we did. Now, not everyone does that, so that's a question you have to put directly to your calibration supplier, whether it's internal or external to your company.

The question should be, "What is your default service?"

The second part of that question was, "Can you deviate from that?" Absolutely. Customers can have us meet their needs: they can change the tolerances if they don't need the accuracy the manufacturer states for the instrument, and they can change the interval if they want. That's a risk they take when they do that, and they need to be knowledgeable enough to make those decisions. And they can change the test points included in the calibration if they want: reduce them, increase them, whatever they need.

Clearly, a calibration supplier should be there to support your needs for your process. So that should be indicated as a different cal than the default manufacturer's calibration, and you need something in your quality system that identifies it.

The way we do that is, if it's less than what the manufacturer will provide or the tolerances have been changed, then we're going to make it a limited calibration. And if it's greater than what the manufacturer would normally do, there's additional test points or whatever, we call that a customized calibration or a customer requested calibration. And there's words on both the label and the certificate that indicate that difference.

Sarah: Okay. We're getting a lot of good questions here.

8. When you talk calibration, are you talking about the calibration of the instrument that you use to calibrate the sensor that controls the process, or are you talking about the calibration of the control system that controls the process?
Howard: Calibration applies to both of those. So, the example I gave was about a process measurement for temperature, and you're using a temperature measuring instrument with a probe to make those decisions about that process. And then you send that instrument in for a calibration on the instrument.

And there are instruments out there that are calibrated and then used to control other processes that are used to make decisions. Those are both calibrations. It's more of a system cal for a control system, but it's still a calibration to get it aligned so that it's reading properly and the decisions made using that system are good decisions.

9. Is there a way to determine gauge accuracy? For example, we have a counter that measures cable length, and we have a calibrated cable. I would like to validate that the counter is at 0.6% accuracy. How should we use what we have to determine the counter accuracy?
Howard: We're talking about something like a footage counter, and you're using a single cable length that you must have measured somehow, so you've got to take a look at that measurement process to understand the uncertainty surrounding it. Then, if that uncertainty is acceptable in relation to the tolerance of the footage counter, it would be okay to use that cable length to check the accuracy of the footage counter by running it through and seeing whether it gives you the same result.

If it's not sufficient to do that, then you've got to come up with another methodology.
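Howard's two conditions, that the reference cable's own uncertainty be small relative to the counter's tolerance, and that the reading then fall within the 0.6% spec, can be sketched in a few lines. The helper name, the 4:1 ratio gate, and the numbers are illustrative assumptions, not from the webinar.

```python
def counter_within_spec(reading_m, ref_length_m, spec_pct=0.6,
                        ref_uncertainty_m=0.0, tur_min=4.0):
    """Hypothetical check of a length counter against a calibrated cable.
    Refuses the comparison if the reference cable's uncertainty is too
    large relative to the counter's tolerance (a TUR-style gate)."""
    tolerance_m = ref_length_m * spec_pct / 100.0
    if ref_uncertainty_m > 0 and tolerance_m / ref_uncertainty_m < tur_min:
        raise ValueError("reference cable uncertainty too large for this check")
    return abs(reading_m - ref_length_m) <= tolerance_m

# A 100 m calibrated cable read as 100.4 m: 0.4 m error vs 0.6 m allowed
print(counter_within_spec(100.4, 100.0, ref_uncertainty_m=0.05))
```

If the gate fails, that is Howard's "come up with another methodology" case: you need a better-characterized reference before the comparison means anything.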

10. What would be the recommended level or type of training of an individual tasked with determining measurement uncertainty?
Howard: It's an evolving process for an individual, and it starts by getting your feet wet, quite honestly. For a lab that becomes accredited, and I can tell you from experience when we went through this in our labs, it is a difficult process, because a lot of our employees didn't have the mathematical or statistics background to support the concepts and get into it.

Now, you can train someone to go through the step-by-step process of creating a budget. But that leaves a lot of room for error, because the difficulty in creating uncertainty budgets is understanding all the sources of error that can occur in the measurement. If you don't capture all of those, the budget is not complete. It's a case of you don't know what you don't know, and then your budget is incomplete.

So, part of it is having good experience. I would say somebody with three to five years' experience in the particular measurement, just to get started with those concepts. If you're doing multiple parameters across the board, you need that kind of experience in all those different measurements. And then a good math, algebra, and statistics background, and training in those, would be a huge help as well.

11. So, Transcat offers a service of auditing an existing cal program?
Howard: Yes, we do. We offer metrology consulting, and we have done that for a couple of our clients. We are more than willing to do that for you. It doesn't take long for us to walk through your facility, grab some examples, look at the certs, and help you understand where the flaws are and where things are being done well.

12. Would it be easier -- I'm not sure where the comparison comes in -- but would it be easier for the engineering department to change drawings to allow for the equipment tolerances?
Howard: Change what?

Sarah: To change drawings to allow for the equipment tolerances.

Howard: To allow for the equipment... Meaning that your current drawings don't have them and you're asking whether you should add them? I assume that's what that means. Feel free to type back to verify, but let's go with that for now.

Your drawing of your part or product should reflect the tolerances that are allowed for the design. It should then call out which ones are critical for the calibration lab to verify, if it's not all of them. Some may just be simple reference datums; some may be measurements and tolerances that are critical. If it's a one-inch radius there to make sure the product is ergonomically comfortable, you may or may not care to have that checked in calibration. But if it's something that has to align and fit with another part, that's something you'd want to identify the tolerance on. And yes, that should be on the drawing.

Good engineering change control on your drawings is critical to communicating with all the parties that need to look at them and work with them.

13. Is there a specification defining how long a piece of calibrated electronic test equipment can sit on a shelf before putting into service?
Howard: Not to my knowledge. I can't think of anything off the top of my head that would give you guidance on that. It comes down to the environment. First of all, good storage and handling of the instrument, making sure you understand what the manufacturer recommends for storing the instrument in the right conditions, is a big piece of that.

Now, it could still sit for a long time. To me, that's a situation that you're better off recalibrating it than guessing. So, running it through that process again when you're ready to use it again is a pretty good idea.

The other thing I want to say, which this kind of brings up, is that many companies, and this is another thing I see, not just with internal labs but with their quality departments, are not catching the fact that when you calibrate an instrument, it's the as-received values that are so critical to all the decisions you've been making during that last cal interval.

What I see is a company will decide, well, we don't need this instrument anymore, or that product line was shut down, whatever the reason, and they just archive it. They deactivate it without doing a close-out calibration. That close-out calibration is the end of your traceability chain. It is critical. People just discard that step, and then they have no idea whether they made bad products or bad decisions about the product. So it's critical that you do that every time.

14. What or who do you think would be considered as a valuable certification body for calibration?
Howard: Oh, I see. So there are a couple of points there. One is the training for a technician to become qualified to calibrate instruments, and that largely comes now from educational institutions; [inaudible 00:51:10] College is one that comes to mind. There are two- and four-year colleges that offer quality- and metrology-related degrees. So I would recommend that for training.

And then, for certification of a technician, the American Society for Quality has the Certified Calibration Technician certificate, and it works like any other certificate they offer. You have to take the body of knowledge, understand everything you need to know, study for it, and pass the four-hour open-book exam to be certified. Then you have to maintain a certain number of points over a period of time to stay qualified. If you don't accumulate enough points during that period, from attending seminars like this, from attending the [inaudible 00:51:55] measurement science conference, or from the work you do on a daily basis, then you must retest to maintain the certification.

It's usually pretty easy to get enough points to maintain it, especially if you're working in the industry.

15. Do you believe a three-point calibration is best in class or should it be a five-point calibration?
Howard: That all depends on the instrument design. I assume we can take that and apply it to a multimeter that has multiple ranges for voltage. On that, you're going to want to check the linearity of the device on at least one range, and then check the other ranges, and those could be single-point cals. But again, it depends on where and how you're using it; that needs to go back to the process, to see how critical it is to the measurement.

Sarah: Okay, next question.

16. In the manufacturing environment, what would be considered a minimum training level to perform calibrations in-house, i.e., the tech level?
Howard: Yeah, I think that also ties to the other question about certification. For someone to be able to make good decisions about the process of calibration being done correctly, it doesn't just have to be the technician, although it's a good idea for the tech to be aware of those concepts.

Whoever's managing that operation really needs to understand how this all ties together with the manufacturing decisions about the product, starting with the calibration process to make sure it's supporting those decisions. Then the technicians can be monitored, supervised, and trained as they come along.

A more senior technician, obviously with more years of experience, should then get involved with the same concepts, along with the manager of the lab, to understand how it all ties together. You should have a succession plan that keeps bringing people in that direction.

17. Okay, can you not determine the calibration frequency of an instrument from the product you are using it on, that is, from how critical it is to your process?
Howard: The calibration interval?

Sarah: Yeah, well they say, "Calibration frequency" but I'm wondering if that means interval.

Howard: Well, calibration frequency based on the production process [inaudible 00:54:40]... No. The calibration interval should really be focused on the instrument's ability to perform. I'd start with the manufacturer's recommendation first, then monitor the success or failure of the instrument over time, and let that guide it to where it should be.

Even to the point that if it's continually failing on the same function and you're constantly having to get it repaired or adjusted, that's creating a lot of risk in your system, and you should get a different model or product to take care of that issue.

You can use that historical information to help make decisions about the cal interval after that initial point. It shouldn't be based on what the process is doing.

18. Okay, how do I know when my instrument needs to be calibrated? Do I need to wait until, or very close to, the expiration date, or can I use it a few days after the expiration date?
Howard: Don't take this the wrong way: there's no magical time period there, and it's not going to turn into a pumpkin on its due date. The thing is that you're taking a risk the longer it goes. So if you're going to make a decision to lengthen the cal interval, or to extend it beyond its due date, or to use the instrument beyond its due date, you're taking a risk.

You've got to either build in check standards throughout that cal interval, to monitor the instrument and see that it's not changing significantly, or be willing to accept the risk. Because if you end up with an out-of-tolerance situation, even after that extension or lengthened cal interval, you now have that much more product that you've got to go back and look at, knowing you might be recalling it.
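The check-standard idea is essentially a small control chart: between calibrations, you periodically measure a stable artifact and flag any reading that strays outside a control limit. A minimal sketch, with purely illustrative readings and limits:

```python
# Hypothetical check-standard log: readings of a stable artifact taken
# between calibrations to watch for drift (all values illustrative)
readings = [10.002, 10.001, 10.003, 10.002, 10.008]
nominal, limit = 10.000, 0.005   # control limit around the nominal value

# Any reading outside nominal +/- limit suggests the instrument is drifting
drifted = [r for r in readings if abs(r - nominal) > limit]
if drifted:
    print(f"check standard out of limits: {drifted} -> investigate before use")
else:
    print("check standard stable; continued use is lower risk")
```

A real program would set the limit from the instrument's tolerance and the check standard's own uncertainty, and would also watch for trends inside the limits, but even this simple flag gives the forewarning Howard describes.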

So it's a cost decision and a risk decision, and it really depends on the cost of what that recall could be, or of the rework if the instrument is wrong.

Sarah: Okay, we have enough time for about two more questions.

19. Does a piece of equipment need to be calibrated if it is compared to a known measurement?
Howard: Okay, so it sounds like you're using it as a transfer standard, depending on what you're comparing it to. If it's truly a transfer measurement instrument, those transfers should be done on a regular enough basis that you don't allow the short-term drift or long-term instability of the instrument to affect the transfer of those measurements from the known standard to the process.

That said, that is a limited set of instruments you should be doing that with. Other than that, you should have your instrument regularly recalibrated to monitor it. And again, that decision on interval goes back to product risk, decisions about the product, where you're using it, and what those costs are. So you balance those to figure out what that optimal cal interval will be.