What is the difference between TAR and TUR?
A robust quality control system should include controls that ensure calibration accuracies and/or expanded uncertainties are small enough not to compromise the adequacy of the measurements being made. The terminology used to describe measurement adequacy has evolved over the years and can be a source of confusion.
In the past, a minimum test accuracy ratio (TAR) was deemed acceptable by most. The TAR compares the manufacturer's published accuracy specifications for the unit under test (UUT) against the manufacturer's published accuracy specifications for the calibration standards. A TAR of 4:1 is widely accepted in the calibration community; in other words, the accuracy specification of the standards used to perform the calibration is at least four times smaller than the accuracy specification of the device being calibrated. This ratio is so widely accepted because it should, in theory, be large enough to compensate for the unknown systematic and random errors associated with the measurement process.
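To make the arithmetic concrete, here is a minimal sketch of the TAR calculation; the specification values are hypothetical, not drawn from any particular instrument:

```python
# Minimal sketch of a TAR calculation with hypothetical accuracy specs.
# Both specifications must be expressed in the same units at the test point.
uut_accuracy_spec = 0.010  # e.g. +/-0.010 V from the UUT manufacturer's datasheet
std_accuracy_spec = 0.002  # e.g. +/-0.002 V from the standard's datasheet

tar = uut_accuracy_spec / std_accuracy_spec
print(f"TAR = {tar:.1f}:1")  # 5.0:1 -- meets the widely accepted 4:1 minimum
```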
More recently, the term “uncertainty” has been used to describe the adequacy of measurements. For the purpose of this discussion, uncertainty is considered to be the result of an uncertainty analysis, supported by an uncertainty budget. The uncertainty analysis identifies contributing factors that may introduce error into the process. An uncertainty budget quantifies the various random and systematic errors that have been identified. The result is expressed as an expanded uncertainty using a coverage factor of k=2 to approximate the 95% confidence level.
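As an illustration only, the sketch below shows the common root-sum-of-squares combination of an uncertainty budget followed by expansion with k=2; the contributor names and values are invented, and a real budget would come from the laboratory's own analysis:

```python
import math

# Hypothetical uncertainty budget: each value is a standard uncertainty (k=1),
# already converted to common units, as an uncertainty analysis would produce.
budget = {
    "reference standard": 0.0010,
    "UUT resolution":     0.0005,
    "repeatability":      0.0008,
    "environment":        0.0003,
}

# Combine in quadrature (assumes uncorrelated contributors), then expand.
combined = math.sqrt(sum(u**2 for u in budget.values()))
k = 2  # coverage factor approximating a 95% confidence level
expanded = k * combined
print(f"Expanded uncertainty U = {expanded:.4f} (k={k})")  # ~0.0028
```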
Some confusion has arisen because “uncertainty” is often incorrectly used interchangeably with “accuracy”. Uncertainty is the more encompassing term: it includes the manufacturer’s published accuracy specifications as well as the errors that can occur during the measurement process.
The availability of an uncertainty value allowed for the introduction of the test uncertainty ratio (TUR). The TUR compares the manufacturer’s published accuracy specifications for the unit under test against the expanded uncertainty of the process and standards used for the calibration. The next obvious question is: if a TAR of 4:1 is widely accepted, what is the correct TUR? Answering that question requires some consideration on the part of the owner of the device being calibrated.
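Continuing the hypothetical numbers from the sketches above, the TUR simply replaces the standard's accuracy specification with the expanded uncertainty of the calibration process:

```python
uut_accuracy_spec = 0.010      # UUT manufacturer's published spec (hypothetical)
expanded_uncertainty = 0.0028  # expanded (k=2) uncertainty from the budget above

tur = uut_accuracy_spec / expanded_uncertainty
print(f"TUR = {tur:.1f}:1")  # ~3.6:1 -- adequacy depends on your risk tolerance
```

Note that in these invented numbers the same device passes the 4:1 TAR check but falls just short of 4:1 once the measurement-process errors are included, which is exactly why the TUR is the more informative ratio.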
Accreditation bodies, such as A2LA, publish each laboratory’s Scope of Accreditation. That Scope includes the laboratory’s “best uncertainty” capability, based on the “expanded uncertainty” supported by an uncertainty analysis and budget. With this information you can calculate the TUR for the device you wish to have calibrated, which lets you quickly evaluate an accredited calibration laboratory’s capability against your needs. That evaluation should include the amount of risk you are willing to accept: the larger the calculated TUR, the more confident you can be in the measurements you will make.
If you have questions about TAR, TUR, uncertainties, or calibration in general, feel free to reach out. Cross Precision Measurement has a team of measurement and quality experts available to answer your questions and help you get the calibration services you need.