Machine learning unlocked
MAY 12, 2022
Chemical engineers usually understand their production processes well. But when anomalies occur, they sometimes struggle to find the root cause, and are ill-equipped to bring statistics and data analytics to bear.
Pure data scientists, meanwhile, are scarce and often don't know enough about specific processes to interpret their own analyses. Rather than spending time explaining the ins and outs of production processes to a data scientist, more chemical engineers and chemists, including those at Clariant, are finding ways to run these analyses themselves. They've seen that self-service data analysis tools have far-reaching benefits, such as greater sustainability and improved operational performance.
Clariant, a maker of specialty chemicals, has one central department of professionals responsible for data analytics, but it doesn't follow a classic data analysis approach. Instead, almost everyone in the data science department is a chemist or chemical engineer.
When Clariant first started exploring self-service data analytics, the company’s data science team knew it needed to choose a partner that would help guide its digitalization journey. It settled on TrendMiner and its web-based, self-service solution, which is designed to allow non-statisticians to analyze time-series data without the help of a data scientist.
Clariant started using the data analytics solution at its German plants several years ago. Since then, it's rolled out the software at production sites worldwide. The company has found the solution breaks down silos at individual sites and lets engineers compare process behavior across the entire organization. Getting a full picture of Clariant's processes wasn't previously possible.
Originally, Clariant invested in the solution to accomplish two things:
The solution also helped bring people and cultures together to work on global projects. Engineers could see the results of using advanced, self-service data analytics to solve daily problems they were unable to figure out on their own.
Clariant’s digital maturity is advanced. The company has evolved its plants into highly efficient, data-driven operations. Before it began its advanced analytics journey, Clariant defined five phases of digital maturity. While the first two steps incorporate monitoring and reporting, the last three encompass advanced, or cognitive, analytics. Clariant’s five analytics phases are:
The company has reached the predictive phase of digital maturity. It's beginning to look at what it will need to reach the prescriptive phase, and—in an ideal world—the cognitive phase in the future.
Clariant established a cloud-based, data-lake model that takes information from manufacturing execution systems and edge devices, as well as systems for laboratory information and production management. The raw data from these sources goes to two places: a historian, where its data analytics platform is connected, and an organized and trusted database, where data filters to a sandbox, and eventually is made available to everyone in the company.
Clariant recently decided to intensify its data analytics capabilities. To accomplish this, it determined it needed to identify site needs and create a user community; provide learning packages in conjunction with the solution provider, but also from its own use cases; track and realize the benefits of the solution; and maintain momentum throughout the journey.
Each site gets a custom training package to address specific needs identified in the first step of Clariant's data analytics model. This includes analysis, adaptation and rollout, site-specific coaching, tracking use cases, and building a community to share experiences. Each site also has a dedicated "core" team member assigned to provide tailored support.
The data analytics software helps engineers identify ideal process parameters, known as the "golden fingerprint." It can also detect anomalies in process behavior.
Now, Clariant uses a recently introduced application feature to evaluate its toughest cases: Python notebooks. The notebook feature uses the popular programming language to help chemical engineers apply machine learning (ML) models for an even deeper dive into process behavior.
Figure 1: Clariant's chemical engineers applied machine learning with the help of TrendMiner's Python notebooks feature to identify an acceptable range for a particular process temperature. This allows experts to create a stronger fingerprint of acceptable recipe parameters, and monitor and alert for temperatures that fall outside of this range.
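The idea behind Figure 1 can be sketched in a few lines of Python. This is a minimal stand-in, not Clariant's actual model: it assumes the temperatures of known-good batches are available as a plain list, and it derives the acceptable band with a simple mean plus or minus three standard deviations rule rather than a trained ML model. All values are hypothetical.

```python
import statistics

def acceptable_range(good_batch_temps, k=3.0):
    """Derive an acceptable temperature band from known-good batches.

    A simple mean +/- k*sigma rule stands in here for a fitted model;
    in practice the limits would come from the trained ML model.
    """
    mean = statistics.fmean(good_batch_temps)
    sigma = statistics.stdev(good_batch_temps)
    return mean - k * sigma, mean + k * sigma

def in_fingerprint(temp, band):
    """Check whether a reading falls inside the golden fingerprint band."""
    low, high = band
    return low <= temp <= high

# Hypothetical reactor temperatures (degrees C) from known-good batches.
good = [181.2, 180.7, 182.1, 181.5, 180.9, 181.8, 181.0, 181.4]
band = acceptable_range(good)
print(in_fingerprint(181.3, band))  # inside the band
print(in_fingerprint(195.0, band))  # outside the band, would raise an alert
```

Once such a band exists, any new temperature reading can be checked against it, which is the basis for the monitoring and alerting described below.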
Clariant engineers learned that determining the root cause of abnormal process behavior was more difficult in some situations than others. Straightforward statistical analysis could find the root cause of process anomalies about 85% of the time. But to learn the root cause of the remaining 15% of its process issues, Clariant used the integrated Python notebook feature to apply ML.
Chemical engineers use the self-service solution to gather time-series data from the company's historian. They then apply their own algorithms on the company's data science platform. Finally, they build analytics on top of the resulting model, discuss the results, and agree on the next steps to take to correct the problem.
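The root-cause workflow above can be illustrated with a toy ranking: assuming an anomaly indicator and several candidate process tags are available as time-aligned lists (the tag names and values here are hypothetical), a plain Pearson correlation stands in for the engineers' own algorithms to surface which tag tracks the anomaly most closely.

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_candidates(anomaly, tags):
    """Rank process tags by |correlation| with the anomaly indicator."""
    scores = {name: abs(pearson(series, anomaly)) for name, series in tags.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical, time-aligned series: an anomaly flag and three process tags.
anomaly = [0, 0, 1, 1, 0, 0, 1, 1]
tags = {
    "reactor_temp": [1.0, 1.1, 2.0, 2.1, 1.0, 0.9, 2.2, 2.0],  # tracks the anomaly
    "feed_rate":    [5.0, 5.1, 5.0, 4.9, 5.1, 5.0, 5.0, 5.1],  # nearly flat
    "pressure":     [3.0, 3.2, 3.1, 3.0, 3.3, 3.1, 3.0, 3.2],  # unrelated noise
}
print(rank_candidates(anomaly, tags)[0])  # → reactor_temp
```

A correlation ranking like this only suggests candidates for discussion; confirming the actual root cause still requires the process knowledge the article emphasizes.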
Chemical engineers typically don't write computer programs as part of their jobs. Programming languages often have a steep learning curve that's outside the scope of an engineer's training. Clariant engineers discovered, however, that learning to write ML code wasn't going to require another advanced degree.
Python is different. The programming language has been around since the late 1980s. It was invented by a Dutch programmer who wanted to create a language that was powerful but easy enough for anyone to learn and use. Data scientists use Python because of how well it handles large datasets with short snippets of code. But the language can be used for a variety of tasks, including applying ML techniques.
Figure 2: This time-series data with a lot of noise hid a batch anomaly. Once these patterns were smoothed out, process experts could visualize the anomaly in the trend lines. They then used machine learning tags, represented by the golden line at the bottom, to see how temperature affected the process during the same timeframe. Process experts can use this information to see that a rise in temperature will have an adverse effect on the product.
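The smoothing step described in Figure 2 can be sketched with a simple trailing moving average. This is an illustrative stand-in, not the actual technique behind Clariant's ML tags: the noisy signal is simulated, and the step anomaly is injected at a known point to show how averaging makes it visible.

```python
import random

def moving_average(series, window=10):
    """Smooth a noisy time series with a trailing moving average."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

random.seed(0)
# Hypothetical noisy batch signal with a step anomaly starting at index 50.
raw = [10 + random.gauss(0, 1.0) + (3 if i >= 50 else 0) for i in range(100)]
smooth = moving_average(raw, window=10)

# After smoothing, the anomalous region sits well above the baseline.
baseline = sum(smooth[:40]) / 40
anomalous = sum(smooth[70:]) / 30
print(anomalous - baseline > 2)  # the step is now clearly separable
```

In the smoothed series the step change stands out from the noise, which is what lets process experts see the anomaly in the trend lines and then derive an ML tag from it.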
Clariant took advantage of the Python integration to create new dashboard features in the data analytics solution. By creating ML tags in Python, engineers gained access to visualization types that weren't originally available.
Next, they began to supercharge the company’s digitalization program with ML capabilities that allow process experts to establish an even stronger golden fingerprint. From the stronger fingerprint, Clariant’s chemical engineers can set up better monitors and alarms for key stakeholders, who can intervene in time to correct anomalies.
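The monitoring-and-alarm step can be sketched as a simple scan of new readings against the fingerprint band. The band limits and readings below are hypothetical; a production monitor would run continuously against live historian data rather than over a fixed list.

```python
def monitor(values, low, high):
    """Yield (index, value) alerts whenever a reading leaves the golden band."""
    for i, v in enumerate(values):
        if not (low <= v <= high):
            yield i, v

# Hypothetical fingerprint band for a temperature tag, in degrees C.
readings = [181.0, 181.4, 186.2, 181.1, 175.3]
alerts = list(monitor(readings, low=179.0, high=183.0))
print(alerts)  # → [(2, 186.2), (4, 175.3)]
```

Each alert identifies when and how far a reading left the band, which is the information a stakeholder needs to intervene in time.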
As Clariant's chemical engineers increased their skills by adding ML capabilities, they also gained efficiency across the company's operations. Clariant engineers see Python notebooks and ML techniques as powerful and useful additions to the company's data analytics program.
The company has combined the advantages of a classic engineering model and a data-driven, analytics model. Each Clariant site has its own needs and pain points. In most cases, however, the goals are the same: improve production and profitability by increasing throughput.
With self-service analytics and the power of machine learning, Clariant is prepared to accomplish both goals during the next phases of its digital journey.