Data gluttony also hinders process improvement efforts, as food and beverage manufacturers can easily feel overwhelmed by the sheer volume of data at their fingertips. Querying millions of data values from a database can prove challenging, if not impossible. And even if massive amounts of data could be retrieved, what analysis tools could conveniently handle them? Just imagine copying a few million data values into a spreadsheet. How would you analyze it all? By themselves, massive datasets make it difficult to figure out what is truly driving quality and where to make improvements, such as how to cut waste and giveaway. It’s like trying to find a needle in a haystack.
Breaking Through the Noise
To overcome data gluttony and find clarity in the noise, food and beverage manufacturers need to recognize that not everything needs to be measured. They have to stop collecting as much as they can and instead take the time to identify what data really matter. What purpose do these data serve? Why do we need to gather these data? How will these data show us how to improve our quality and operations?
Data sampling is equally important. Some organizations think that if they do not capture every possible measurement in production, they will somehow “miss out.” For instance, a food producer might say, “We want to collect cooking temperatures every few milliseconds for food safety monitoring.” But cooking temperatures will not change meaningfully over a few seconds, let alone a few milliseconds. The producer would just end up with a mountain of numbers that add nothing to what is already known. It is better to form rational sampling plans, with reasonable data collection frequencies, focused on how much data are needed, not how much are wanted.
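To make the idea concrete, here is a minimal Python sketch of the kind of downsampling a rational plan implies. It assumes readings arrive as time-ordered (timestamp, temperature) pairs; the 30-second interval and the reading format are illustrative assumptions, not a prescribed standard, and a real sampling plan would set the frequency deliberately based on how fast the process actually changes.

```python
from datetime import datetime, timedelta

# Illustrative sampling interval (an assumption for this sketch):
# one retained reading per 30 seconds.
SAMPLE_INTERVAL = timedelta(seconds=30)

def downsample(readings, interval=SAMPLE_INTERVAL):
    """Keep one (timestamp, temperature) pair per sampling interval.

    Assumes readings are sorted by timestamp.
    """
    sampled = []
    next_due = None
    for ts, temp in readings:
        if next_due is None or ts >= next_due:
            sampled.append((ts, temp))
            next_due = ts + interval
    return sampled

# Example: 10,000 raw readings taken every 5 ms (a 50-second run)
start = datetime(2024, 1, 1, 12, 0, 0)
raw = [(start + timedelta(milliseconds=5 * i), 165.0 + (i % 7) * 0.01)
       for i in range(10_000)]

print(len(downsample(raw)))  # 2 -- the 50-second run spans two 30 s intervals
```

Ten thousand millisecond-level readings collapse to two values that carry essentially the same food safety information, which is the point of sampling at a rational frequency rather than at the fastest rate the sensor allows.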