OOS in production is not only an indicator that quality might be at risk; it may also pose a hazard to the health and safety of the consumer, constitute a potential breach of legal trade requirements, and cause an economic loss for the company. When this happens, raw materials, manpower, and assets have been mobilized in a process that ends with poor results: products must be reworked or disposed of. In many cases, the detection of an error may trigger tedious and costly recall actions that impact the brand negatively.
Food safety standards such as BRC, IFS, SQF, or FSSC 22000 require instruments to be checked or calibrated periodically. For example, the BRC Global Standard for Food Safety, Issue 6, stipulates in Chapter 6.3:
“The company shall identify and control measuring equipment used to monitor CCPs…. All identified measuring devices, including new equipment, shall be checked and where necessary adjusted at a predetermined frequency, based on risk assessment…. Reference measuring equipment shall be calibrated and traceable to a recognized national or international standard and records maintained.”
While the standard calls for instruments to be adjusted when necessary, it remains silent on how accuracy requirements should be defined and how accurate results should be verified. The principles applied are consequently diverse throughout the industry. In many cases, the principle of “what you see is what you get” is applied.
In this environment of misconception, scales are the last part of the production chain to be suspected when OOS results occur. OOS then becomes a necessary evil, when it need not be.
Measurement Uncertainty and Minimum Weight
State-of-the-art strategies for consistently accurate and reliable weighing consist of scientific methodologies for instrument selection and testing.1 Despite these methodologies, misconceptions about weighing remain widespread in the industry, including “what you see is what you get.” What do we mean by that? Here’s an example: A user weighs a product on an industrial floor scale and gets a reading of 120 kg, which he believes is the true amount of material. However, this reading might not exactly reflect the amount weighed; in other words, the amount weighed might differ slightly from the instrument reading. This is due to the so-called measurement uncertainty, which applies to every measuring instrument you might think of.
Measurement uncertainty is determined during calibration, and the results are reported in the corresponding calibration certificates. In general, the measurement uncertainty of weighing systems can be approximated by a positively sloped straight line—the higher the load on the balance, the larger the (absolute) measurement uncertainty (Figure 1). Looking at the relative measurement uncertainty, which is the absolute measurement uncertainty divided by the load, expressed as a percentage, we see that the smaller the load, the larger the relative measurement uncertainty. If you weigh at the very low end of the instrument’s measurement range, the relative uncertainty can become so high that the weighing result can no longer be trusted.
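To make the load dependence concrete, here is a minimal sketch in Python. It assumes a hypothetical linear uncertainty model U(m) = u0 + c·m; the coefficients u0 (intercept) and c (slope) are invented for illustration and do not come from any particular balance or calibration certificate:

```python
def absolute_uncertainty(load_g, u0=0.02, c=1e-5):
    """Absolute measurement uncertainty in grams for a given load,
    under the hypothetical linear model U(m) = u0 + c*m."""
    return u0 + c * load_g

def relative_uncertainty_pct(load_g, u0=0.02, c=1e-5):
    """Relative measurement uncertainty as a percentage of the load."""
    return 100.0 * absolute_uncertainty(load_g, u0, c) / load_g

# Absolute uncertainty grows with the load, while relative
# uncertainty shrinks -- small loads are the problematic ones.
for load in (1, 10, 100, 1000, 10000):
    print(f"{load:>6} g: abs = {absolute_uncertainty(load):.4f} g, "
          f"rel = {relative_uncertainty_pct(load):.4f} %")
```

Running the loop shows the behavior described above: the absolute uncertainty column increases with load, while the relative uncertainty column falls by orders of magnitude.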
It is good practice to define accuracy (tolerance) requirements for every weighing process. Weighing in the red area, as indicated in Figure 1, will result in inaccurate measurements, because here the measurement uncertainty of the instrument is larger than the required accuracy of the weighing process. Consequently, there is a specific accuracy limit for every weighing instrument: the so-called minimum sample weight, better known as the minimum weight. This is the smallest amount of material that will satisfy the specific weighing accuracy requirement.
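As an illustration, the minimum weight can be derived analytically for a hypothetical linear uncertainty model U(m) = u0 + c·m (coefficients invented for the example): the smallest load m satisfying U(m)/m ≤ tol is m = u0 / (tol − c).

```python
def minimum_weight(u0, c, tol_pct):
    """Smallest load (in g) whose relative measurement uncertainty stays
    within tol_pct, for a hypothetical linear model U(m) = u0 + c*m.
    Solving (u0 + c*m)/m <= tol gives m >= u0 / (tol - c)."""
    tol = tol_pct / 100.0
    if tol <= c:
        raise ValueError("tolerance tighter than the model's slope; unreachable")
    return u0 / (tol - c)

# Example: intercept 0.02 g, slope 1e-5, required accuracy 0.1 %.
# Loads below m_min fall into the 'red area' of Figure 1.
m_min = minimum_weight(u0=0.02, c=1e-5, tol_pct=0.1)
print(f"minimum weight: {m_min:.1f} g")
```

At exactly m_min the relative uncertainty equals the required tolerance; any smaller sample violates it, which is precisely the “red area” of Figure 1.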
While measurement uncertainty is described in great detail in the literature,2,3 we want to emphasize that when weighing small loads on analytical and microbalances, the dominant contribution to measurement uncertainty stems from repeatability (expressed as the standard deviation of a series of replicate weighings). Samples and standards typically weighed on these balances are small loads in comparison with the capacity of the balance.
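Because repeatability dominates for small loads, a widely used convention (found, for example, in pharmacopeial weighing requirements) estimates the minimum weight directly from the standard deviation of replicate weighings: m_min = k·s / required relative tolerance, with a coverage factor k, often 2. A sketch with illustrative numbers; the default tolerance and coverage factor here are assumptions, not values prescribed by the standards cited above:

```python
def min_weight_from_repeatability(s, k=2.0, tol_pct=0.1):
    """Estimate minimum weight from repeatability s (standard deviation of
    replicate weighings; result has the same unit as s).
    With k = 2 and a 0.10 % requirement this reduces to 2000 * s."""
    return k * s / (tol_pct / 100.0)

# Example: an analytical balance with repeatability s = 0.1 mg
print(min_weight_from_repeatability(0.1), "mg")
```

Note that the result scales linearly with s: a balance in a drafty or vibrating environment, with a larger standard deviation, has a proportionally larger minimum weight.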