Global regulators continue to focus on food adulteration, whether deliberate (food fraud) or accidental. The food industry has risen to the challenge by finding innovative new tools to monitor food and ingredients along the supply chain. Its next step should be to bring the same level of care to the laboratory, where food samples are tested for quality.
Supply Chain Traceability to Mitigate Adulteration
Before discussing the laboratory, let’s look at the example of the supply chain, where food manufacturers have been tremendously successful in using traceability to improve the safety of their products.
In the global food system, food supply chains have become complicated. The integrity of the supply chain is only as strong as its weakest link, so food manufacturers are identifying the places on the chain where adulteration is most likely—and then targeting them for special scrutiny.
Two factors make it more likely that a food will be adulterated. The first is ease: foods like fruit, vegetables, and whole fish are much harder to adulterate than highly processed foods. The second is the incentive of financial gain. When crop failures or product shortages drive up food prices, sellers are more likely to substitute a substandard ingredient. That’s why the food industry has an adage about sourcing products: “If the price is too good to be true, it probably is.”
To guard against adulteration, suppliers rely on detailed supply chain management that includes supplier history, audits, and product traceability. Traceability today still relies on paper documentation to some extent, but technologies such as RFID (radio frequency identification) tags and simple barcodes have helped curb the falsification of records as food passes from one producer to another. Food producers are even using blockchain to ensure secure, traceable records. The industry has shown it is ready and willing to adopt new technology to keep food safer.
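To make the traceability idea concrete, here is a minimal Python sketch of hash-chaining, the core idea behind blockchain-style record keeping: each custody record stores a hash computed over its contents and the previous record’s hash, so any retroactive edit breaks every later link. The lot number, company names, and field names are hypothetical, and a real deployment would use a distributed ledger rather than a single in-memory list.

    import hashlib
    import json

    def record_hash(record: dict, prev_hash: str) -> str:
        """Hash a custody record together with the hash of the previous link."""
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    # A toy chain of custody: each handoff is sealed against the one before it.
    chain = []
    prev = "0" * 64  # genesis value for the first link
    for record in [
        {"lot": "A-1027", "holder": "Grower Co.", "event": "harvested"},
        {"lot": "A-1027", "holder": "Processor Inc.", "event": "received"},
        {"lot": "A-1027", "holder": "Distributor LLC", "event": "shipped"},
    ]:
        prev = record_hash(record, prev)
        chain.append({"record": record, "hash": prev})

    def verify(chain) -> bool:
        """Recompute every link; a retroactive edit invalidates all later hashes."""
        prev = "0" * 64
        for link in chain:
            if record_hash(link["record"], prev) != link["hash"]:
                return False
            prev = link["hash"]
        return True

    print(verify(chain))                           # True
    chain[1]["record"]["holder"] = "Someone Else"  # falsify history
    print(verify(chain))                           # False: tampering detected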
Analytical Science to Identify Adulteration
But the quality of food products rests not only on the quality of the documentation from the supply chain, but also on the quality of the data from analyzing the product in the lab.
In the laboratory, the scientific community is very good at quickly developing analytical tests for food fraud—but only once a specific threat or vulnerability is identified. Scientists also create analytical tests for unintentional contamination—from poor-quality ingredients, the breakdown of legitimate ingredients, or the manufacturing process. And with heightened concerns about allergens, contamination that may once have been considered “harmless” now needs to be treated seriously. Witness the February 2018 recall of almonds found to contain traces of wheat and soy.
But the volume and reach of the global food chain make it impossible to conduct complex testing on every ingredient or product. As a result, manufacturers often put their faith in certificate of analysis reporting, but that has its own vulnerabilities, as demonstrated by the pet food melamine contamination case.
A more practical approach combines non-targeted screening with statistical analysis of trends and database-matching to look for anomalies. This level of screening usually takes place in governmental or institutional oversight laboratories because it requires sophisticated, expensive instruments like high-resolution mass spectrometers.
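As a rough illustration of the statistical side of non-targeted screening, the Python sketch below flags features of a test sample that deviate sharply from a reference database built from authentic samples. The feature names, intensity values, and z-score cutoff are all invented for illustration; real screening operates on far richer spectral data.

    import statistics

    # Hypothetical reference database: feature intensities measured on
    # known-authentic samples of the same ingredient (illustrative numbers).
    reference = {
        "feature_212.1": [0.95, 1.02, 0.98, 1.05, 1.00],
        "feature_347.2": [0.40, 0.38, 0.42, 0.41, 0.39],
    }

    def flag_anomalies(sample: dict, reference: dict, z_cutoff: float = 3.0):
        """Flag features whose intensity deviates strongly from the reference set."""
        flags = []
        for feature, intensity in sample.items():
            ref = reference.get(feature)
            if ref is None:
                flags.append((feature, "unknown peak: not in reference database"))
                continue
            mean, sd = statistics.mean(ref), statistics.stdev(ref)
            z = (intensity - mean) / sd if sd else 0.0
            if abs(z) > z_cutoff:
                flags.append((feature, f"z-score {z:.1f}"))
        return flags

    # A suspect sample with one inflated feature and one unexpected peak.
    suspect = {"feature_212.1": 1.01, "feature_347.2": 0.95, "feature_126.0": 0.30}
    print(flag_anomalies(suspect, reference))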
The question then becomes, can we rely on the data from these central testing laboratories? Or should we extend the scrutiny we bring to supply chain distribution records to the laboratory test data that supports food integrity?
Concerns About Laboratory Data
The good news is that the specificity and detection limits of analytical science tools continue to advance. But even with the best tests in the world, laboratories still rely on analysts and laboratory staff to perform those tests accurately and reliably. And the human element is not infallible.
In extreme cases, staff can be motivated to commit fraud for economic gain. A more insidious problem is when individual analysts feel pressure to “polish” the data, perhaps driven by a desire to meet performance metrics or deadlines, earn recognition, or reduce stress.
It is important to note, though, that the reasons and motivation for adjusting or excluding test results do not automatically indicate fraud. Laboratory procedures must allow for the correction of errors and for the investigation of incorrect results. Unusable, unreported, or orphan data may be caused by overly simplistic or lax documentation practices, staff inexperience, or particularly challenging analytical techniques. Waters Corp. is partnering with government agencies and universities to combat these problems by creating training centers that teach analysts how to properly prepare samples, run the instrumentation, and interpret test results, among other skills.
Still, it’s known that fraud and data polishing are happening in multiple fields. Analytical test fraud has been uncovered in forensic drug laboratories in the U.S. In the academic world, laboratory results have intermittently been falsified, driven by the pressure to “publish or perish.” In the pharmaceutical field, the FDA and other global regulatory agencies are increasingly looking for signs that laboratory analysts may have corrected or hidden results that indicate a study or quality test failure. They are increasing scrutiny of analytical records created by testing laboratories, both those supporting new drug development (GLP and GCP) and those supporting manufacturing quality (GMP) monitoring.
These examples show why it’s crucial to bring a new level of attention to the accuracy and trustworthiness of data supporting product or test quality, a concept usually referred to as “data integrity.”
Regulators have lost trust in paper records. Evidence found in “compliance ready” electronic applications (specifically in the area of laboratory automation) has shown that the paper records relied on for quality decisions, criminal prosecution, or academic publication do not always constitute a complete and transparent record of the sample tested.
Computerized systems can help by making it much more difficult to tamper with data. Unique login requirements, privileges, and permissions can technically control what users are allowed to create, delete, or change, and comprehensive audit trails can record any activity attributed to those users. Regulators recently acknowledged the value of computerized systems in data integrity. The November 2017 release of ISO/IEC 17025:2017, in sections 7.5 and 7.11, describes the technical expectations for either computerized or non-computerized information management systems that are designed to ensure the “integrity of data and information.”
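The sketch below shows, in miniature, what such technical controls amount to: a result record whose value can only be changed through an attributed, time-stamped, reason-given amendment, and whose audit trail offers no interface for deleting or rewriting history. It is a toy model under those assumptions, not a description of any particular laboratory information system.

    import datetime

    class AuditedResult:
        """Toy test-result record: values change only through attributed
        amendments, and the audit trail exposes no delete or edit method."""

        def __init__(self, sample_id: str, value: float, user: str):
            self.sample_id = sample_id
            self._value = value
            self._trail = []
            self._log(user, "created", value, reason="initial entry")

        def _log(self, user, action, value, reason):
            self._trail.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "action": action,
                "value": value,
                "reason": reason,
            })

        def amend(self, new_value: float, user: str, reason: str):
            """Change the value without erasing history; a reason is mandatory."""
            self._log(user, "amended", new_value, reason)
            self._value = new_value

        @property
        def audit_trail(self):
            return tuple(self._trail)  # read-only view for reviewers

    result = AuditedResult("LOT-4481", 9.87, user="analyst_01")
    result.amend(10.02, user="analyst_01", reason="corrected dilution factor entry")
    for entry in result.audit_trail:
        print(entry)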
As far back as 1997, the FDA outlined requirements for technical controls very similar to those now described in ISO 17025. The FDA’s 21 CFR Part 11 (known as the Electronic Records and Signatures Rule) also includes administrative and procedural controls for ensuring that electronic data are trustworthy. It’s worth noting that the European Union (EU) has a similar regulation, Annex 11. But while the EU regulation covers only data supporting pharmaceutical manufacturing, the FDA regulation applies to data from all predicate recordkeeping requirements across all good practices, including human food manufacturing, packing, and holding (Part 110), cosmetics, and GLPs for Protection of the Environment (40 CFR Part 160).
All three of the frameworks discussed above (ISO/IEC 17025, 21 CFR Part 11, and Annex 11) support commonly applied good documentation practices, which map closely to the more recent ALCOA principles of data integrity: Attributable, Legible, Contemporaneous, Original, and Accurate.
These principles were established by Stan W. Woollen, former senior compliance advisor at the FDA. In 2010, a European Medicines Agency reflection paper on electronic data in clinical trials added four complementary terms: Complete, Consistent, Enduring, and Available. All of these terms, like the good documentation practice principles, should apply equally to both paper and electronic records, and they are cited in almost every data integrity guidance or training.
But there is evidence to suggest laboratory personnel may not be following the practices outlined in these regulations. Regulators have turned up clearly unacceptable practices when comparing manually recorded paper results with the complete results logged digitally by the measurement instruments. The electronic records have revealed cases of testing a sample multiple times to obtain the “right answer,” or of adjusting the metadata (sample weight, dilution factor, volume) in a calculation to ensure that a specification is met.
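A simple recomputation check illustrates how the electronic raw data expose such adjustments. In the hypothetical Python sketch below, each reported result is recalculated from the raw peak area and the recorded metadata, and both mismatches and repeat injections are flagged for review; the calibration factor, tolerance, and sample values are invented for illustration.

    from collections import Counter

    RESPONSE_FACTOR = 1.25e-5  # illustrative calibration factor
    TOLERANCE = 0.02           # allowed gap between reported and recomputed

    injections = [
        {"sample": "S-101", "area": 8200, "weight_g": 1.00, "dilution": 10, "reported": 1.03},
        {"sample": "S-102", "area": 9100, "weight_g": 1.00, "dilution": 10, "reported": 0.98},  # mismatch
        {"sample": "S-102", "area": 9100, "weight_g": 1.16, "dilution": 10, "reported": 0.98},  # re-run, weight edited
    ]

    def recompute(inj: dict) -> float:
        """The result as the raw data and metadata say it should be."""
        return RESPONSE_FACTOR * inj["area"] * inj["dilution"] / inj["weight_g"]

    # Flag reported values that do not follow from the raw data and metadata.
    for inj in injections:
        calc = recompute(inj)
        if abs(calc - inj["reported"]) > TOLERANCE:
            print(f"{inj['sample']}: reported {inj['reported']}, recomputed {calc:.3f}")

    # Flag samples injected more than once: candidates for testing into compliance.
    counts = Counter(inj["sample"] for inj in injections)
    print("repeat injections:", [s for s, n in counts.items() if n > 1])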
Regulatory Oversight and Enforcement
As with any new regulation, including the Food Safety Modernization Act of 2011, agencies tend to focus their regulatory attention on the most urgent risks. The FDA ramped up its focus on data integrity in the pharmaceutical industry only after some high-profile cases in which testing laboratories, such as New Jersey’s Able Laboratories, were found to be deliberately falsifying records supporting pharmaceutical products.
Today, global pharmaceutical regulators are inspecting both the quality systems and laboratory records. They’re comparing paper records to the raw, electronic data to search for suspicious or anomalous test results that may not have been reported in official documentation.
Data integrity in the food industry is complicated by overlapping areas of oversight between the FDA, which regulates most processed food, and the USDA, which regulates meat, poultry, and egg production. In January 2018, the two agencies announced an agreement to work together to “increase clarity, efficiency, and potentially reduce the number of establishments subject to the dual regulatory requirements of the USDA and the FDA.” This closer coordination between the agencies will sharpen the focus on data integrity in the laboratory. One area that deserves further exploration is how to securely share the original electronic data from testing food products and ingredients. This would boost confidence in the authenticity of quality data shared during “business-to-business food ingredient transactions.”
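One plausible building block for that kind of secure sharing is a cryptographic tag computed over the original results file, letting the receiving business verify that the data were not altered in transit. The Python sketch below uses an HMAC with a shared key purely for brevity; a real exchange would more likely rely on public-key signatures and an agreed data standard, and every name and value here is hypothetical.

    import hashlib
    import hmac

    # Hypothetical shared secret, exchanged out of band between trading partners.
    SHARED_KEY = b"replace-with-a-real-key"

    def sign(results: bytes) -> str:
        """Tag the raw results so the buyer can detect alteration in transit."""
        return hmac.new(SHARED_KEY, results, hashlib.sha256).hexdigest()

    def verify(results: bytes, tag: str) -> bool:
        return hmac.compare_digest(sign(results), tag)

    raw = b"sample=LOT-4481;assay=aflatoxin;result=2.1 ppb"
    tag = sign(raw)
    print(verify(raw, tag))                          # True: data intact
    print(verify(raw.replace(b"2.1", b"0.1"), tag))  # False: alteration detected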
When humans create the data, calculate the results, and then transcribe the “final results” into the record, there is always an opportunity for errors to occur; seamless, automated data creation and transfer can minimize those accidental errors. And it is wise to remember: when the analytical data look too good to be true, they probably are.
Longden is the senior marketing manager for Informatics Regulatory Compliance at Waters Corp. Reach her at firstname.lastname@example.org.