Detecting anomalies in high-dimensional IoT data using hierarchical decomposition and one-class learning
Automated health monitoring, including anomaly and fault detection, is an essential attribute of any modern industrial system. Problems of this sort are usually solved by algorithmically processing data from the many physical sensors installed in the equipment, using a broad range of statistical and ML-based techniques. An important parameter that determines the practical complexity and tractability of the problem is the dimensionality of the feature vector generated from the sensor signals.
While the ML and statistical literature describes a great variety of methods and techniques, it is easy to go in the wrong direction when solving problems for industrial systems with a large number of IoT sensors: the seemingly "obvious", stereotypical solutions often lead to dead ends or unnecessary complications when applied to such systems. Here we generalize our experience and delineate some pitfalls of the stereotypical approaches. We also outline a fairly general methodology that helps to avoid such traps when dealing with high-dimensional IoT data. The methodology rests on two major pillars: hierarchical decomposition and one-class learning. That is, we start health monitoring from the most elementary parts of the system, and we learn mainly from its healthy state.
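The two pillars can be sketched together in a few lines. The sketch below is illustrative only (the class names, sensor names, and the simple per-sensor z-score model are our own assumptions, not the authors' method): a separate one-class detector is fitted on healthy data for each elementary subsystem, and the system-level verdict is rolled up from the subsystem verdicts.

```python
import math
import random

class OneClassDetector:
    """Per-subsystem one-class model: fit on healthy data only,
    flag readings that deviate too far from the learned profile."""

    def __init__(self, threshold=4.0):
        self.threshold = threshold
        self.mean = None
        self.std = None

    def fit(self, healthy_rows):
        n = len(healthy_rows)
        dims = len(healthy_rows[0])
        self.mean = [sum(r[d] for r in healthy_rows) / n for d in range(dims)]
        self.std = [
            math.sqrt(sum((r[d] - self.mean[d]) ** 2 for r in healthy_rows) / n) or 1.0
            for d in range(dims)
        ]
        return self

    def score(self, row):
        # Maximum absolute z-score across this subsystem's sensors.
        return max(abs(x - m) / s for x, m, s in zip(row, self.mean, self.std))

    def is_anomalous(self, row):
        return self.score(row) > self.threshold

def system_anomalous(detectors, readings):
    """Hierarchical roll-up: the system is healthy only if every
    elementary subsystem is healthy."""
    return any(det.is_anomalous(readings[name]) for name, det in detectors.items())

# Synthetic "healthy" history for two hypothetical subsystems.
random.seed(0)
healthy = {
    "pump": [[random.gauss(60, 2), random.gauss(1.5, 0.1)] for _ in range(500)],
    "motor": [[random.gauss(80, 3)] for _ in range(500)],
}
detectors = {name: OneClassDetector().fit(rows) for name, rows in healthy.items()}

print(system_anomalous(detectors, {"pump": [60.5, 1.48], "motor": [79.0]}))   # prints False
print(system_anomalous(detectors, {"pump": [60.5, 1.48], "motor": [120.0]}))  # prints True
```

Because each detector sees only its own subsystem's few sensors, the high-dimensional system-level problem is decomposed into many low-dimensional ones, and no labeled fault examples are needed.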
Anomaly detection in industrial IoT data using Google Vertex AI: A reference notebook
Modern manufacturing, transportation, and energy companies routinely operate thousands of machines and perform hundreds of quality checks at different stages of their production and distribution processes. Industrial sensors and IoT devices enable these companies to collect comprehensive real-time metrics across equipment, vehicles, and produced parts, but the analysis of such data streams is a challenging task.
We start with a discussion of how the health monitoring problem can be converted into standard machine learning tasks and what pitfalls one should be aware of, and then implement a reference Vertex AI pipeline for anomaly detection. This pipeline can be viewed as a starter kit for quick prototyping of IoT anomaly detection solutions that can be further customized and extended to create production-grade platforms.
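A typical first step in converting health monitoring into a standard ML task is to slice each raw telemetry stream into fixed windows and summarize each window with simple statistics, yielding the feature vectors a model can consume. A minimal illustrative sketch (the function name, window size, and feature set are our own choices, not part of the referenced pipeline):

```python
import math

def window_features(stream, window=6):
    """Slice one sensor stream into non-overlapping fixed-size windows and
    summarize each window as [mean, std, min, max, crude linear trend]."""
    rows = []
    for start in range(0, len(stream) - window + 1, window):
        w = stream[start:start + window]
        mean = sum(w) / window
        std = math.sqrt(sum((x - mean) ** 2 for x in w) / window)
        slope = (w[-1] - w[0]) / (window - 1)  # crude trend estimate
        rows.append([mean, std, min(w), max(w), slope])
    return rows

# 24 temperature readings -> 4 windows of 6 samples each;
# the third window contains a simulated excursion.
temps = [21.0, 21.2, 21.1, 21.3, 21.2, 21.4,
         21.5, 21.6, 21.4, 21.7, 21.8, 21.6,
         25.0, 26.1, 27.3, 28.0, 28.4, 28.9,
         22.0, 21.8, 21.7, 21.6, 21.5, 21.4]
features = window_features(temps)
print(len(features), len(features[0]))  # prints: 4 5
```

The resulting rows can be fed to any standard anomaly detector, which is what makes a managed pipeline such as Vertex AI applicable to raw sensor streams.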
U.S. Navy Takes Falkonry AI to the High Seas for Increased Equipment Reliability and Performance
Falkonry today announced a big leap for Falkonry AI with the Office of Naval Research deploying its AI applications to advance equipment reliability on the high seas. This AI deployment is carried out with a Falkonry-designed reference architecture using NVIDIA accelerated computing and Oracle Cloud Infrastructure’s (OCI’s) distributed cloud. It enables better performance and reliability awareness using electrical and mechanical time series data from thousands of sensors at ultra-high speed.
Falkonry has designed its automated anomaly detection application, Falkonry Insight, to take advantage of edge computing capabilities now available for high-security, edge-to-cloud connectivity. Falkonry Insight includes a patent-pending, high-throughput time series AI engine that inspects every sensor data point to identify reliability and performance anomalies along with their contributing factors. Falkonry Insight organizes the information operations teams need to determine root causes and automatically notifies them so they can take rapid action. By inserting into the US Navy's operational environment an edge device that can process data continuously, increasingly sophisticated naval platforms can maintain high reliability and performance out at sea.
Build an Anomaly Detection Model using SME expertise
Achieving World-Class Predictive Maintenance with Normal Behavior Modeling
Central to the normal behavior modeling (NBM) concept is an algorithm known as an autoencoder, shown in Figure 1. Over time, the autoencoder’s input layer ingests a continuous stream of quantitative data from equipment sensors (temperature, pressure, etc.). This data is then passed through one or more hidden layers, where it is compressed into a lower-dimensional representation. Numerical weights on the connections between nodes are adjusted during training, with the goal of eventually reproducing the input values at the output layer.
The principal purpose of NBM is to define the normal state of a complex system and then proactively identify instances where the system is operating outside of normal, with enough advance warning for maintenance or repair to take place before the revenue loss, repair costs, and safety compromises that typically accompany such failures.
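The mechanism behind this is reconstruction error: a model trained only on healthy data reconstructs healthy inputs well and unhealthy inputs poorly. As a minimal sketch, we use a one-component linear "autoencoder" (a linear autoencoder is mathematically equivalent to PCA, so the principal direction can be fitted in closed form); the synthetic pressure–temperature data and function names are our own assumptions, not Figure 1's actual network.

```python
import math
import random

def fit_normal_model(data):
    """Fit a one-component linear 'autoencoder' (equivalent to PCA) on
    healthy 2-D sensor data: learn the mean and the principal direction."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / n
    syy = sum((p[1] - my) ** 2 for p in data) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # leading eigenvector angle
    return (mx, my), (math.cos(theta), math.sin(theta))

def reconstruction_error(model, point):
    """Encode (project onto the principal direction), decode, and return
    the reconstruction error -- a large error means 'outside normal'."""
    (mx, my), (ux, uy) = model
    dx, dy = point[0] - mx, point[1] - my
    t = dx * ux + dy * uy            # encode: 1-D latent code
    rx, ry = t * ux, t * uy          # decode back to 2-D
    return math.hypot(dx - rx, dy - ry)

# Healthy operation: pressure rises roughly linearly with temperature.
random.seed(1)
healthy = [(t, 2.0 * t + random.gauss(0, 1))
           for t in [random.uniform(40, 80) for _ in range(400)]]
model = fit_normal_model(healthy)

e_normal = reconstruction_error(model, (60.0, 120.0))   # on the normal trend: small
e_fault = reconstruction_error(model, (60.0, 160.0))    # off the trend: large
print(e_normal < e_fault)  # prints True
```

A real NBM autoencoder adds nonlinearity and depth, but the anomaly criterion is the same: threshold the reconstruction error.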
Predicting Defrost in Refrigeration Cases at Walmart using Fourier Transform
As the largest grocer in the United States, Walmart has a massive assembly of supermarket refrigeration systems in its stores across the country. Food quality is an essential part of our customer experience and Walmart spends a considerable amount annually on maintenance of its vast portfolio of refrigeration systems. In an effort to improve the overall maintenance practices, we use preventative and proactive maintenance strategies. We at Walmart Global Tech use IoT data and build algorithms to study and proactively detect anomalous events in refrigeration systems at Walmart.
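The Fourier transform in the title is well suited to this problem because defrost is periodic: a DFT of the case-temperature signal reveals the dominant cycle length. A minimal sketch on synthetic data (the sampling interval, 8-hour cycle, and function name are illustrative assumptions, not Walmart's actual pipeline):

```python
import math

def dominant_period(signal, sample_minutes=10):
    """Estimate the dominant cycle length (in minutes) of a telemetry
    series via a discrete Fourier transform, ignoring the DC component."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):           # skip DC, stop below Nyquist
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n * sample_minutes / best_k   # period of the strongest component

# Two days of 10-minute case-temperature samples: the temperature
# spikes briefly every 48 samples (an 8-hour defrost cycle).
temps = [(-18.0 + (6.0 if t % 48 < 3 else 0.0)) for t in range(288)]
print(dominant_period(temps))  # prints 480.0 (minutes, i.e. 8 hours)
```

Once the expected defrost period is known, readings that break the periodic pattern can be surfaced as candidate faults.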
Condition monitoring in steel mills: 3 fault detections
Forecast Anomalies in Refrigeration with PySpark & Sensor-data
A refrigeration system has four important components: the compressor, condenser fan, evaporator fan, and expansion valve. Loosely speaking, together they keep the pressure at a level that maintains the temperature within range (remember, PV = nRT). At Walmart, we collect sensor data for all of these components (e.g., pressure, fan speed, temperature) at 10-minute intervals, along with status flags such as whether the system is in defrost or whether the compressor is locked out. We also capture the outside air temperature, as it affects the condenser fan speed and, in turn, the temperature.
The objective is to minimize the number of malfunctions and suggest probable resolutions in order to save time. We therefore leveraged this telemetry to forecast anomalies in temperature, which helps prioritize issues and be proactive rather than reactive.
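At Walmart's scale this runs on PySpark, but the underlying residual idea fits in a few lines of plain Python: forecast each reading from a rolling window of recent history and flag it when the residual exceeds a few rolling standard deviations. The function name, window size, and synthetic series below are our own illustrative assumptions.

```python
import math

def flag_anomalies(series, window=12, k=3.0):
    """Forecast each reading as the mean of the previous `window` samples
    and flag it when the residual exceeds k rolling standard deviations."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = sum(hist) / window
        std = math.sqrt(sum((x - mean) ** 2 for x in hist) / window)
        std = max(std, 1e-6)  # guard against a perfectly flat window
        residual = abs(series[i] - mean)
        flags.append((i, residual > k * std))
    return flags

# Stable case temperature (10-minute samples) with one sudden excursion.
temps = [2.0 + 0.05 * (i % 3) for i in range(40)]
temps[30] = 9.0
anomalous = [i for i, bad in flag_anomalies(temps) if bad]
print(anomalous)  # prints [30]: only the excursion is flagged
```

A production version would replace the rolling mean with a proper forecast model and distribute the per-case computation across a Spark cluster, but the flagging criterion stays the same.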
Intelligent edge management: why AI and ML are key players
What will the future of network edge management look like? We explain how artificial intelligence and machine learning technologies are crucial for intelligent edge computing and the management of future-proof networks. What’s required, and what are the building blocks needed to make it happen?
Using Machine Learning to identify operational modes in rotating equipment
Vibration monitoring is key to performing condition monitoring-based maintenance in rotating equipment such as engines, compressors, turbines, pumps, generators, blowers, and gearboxes. However, periodic route-based vibration monitoring programs are not enough to prevent breakdowns, as they normally offer a narrower view of the machines’ conditions.
Adding machine learning algorithms to this process makes it scalable, as it allows the analysis of historical data from equipment. One of the benefits is being able to identify operational modes and help maintenance teams understand whether the machine is operating in normal or abnormal conditions.
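Operational-mode identification is essentially clustering of vibration features. The sketch below is a minimal hand-rolled k-means on synthetic RPM/vibration data; the feature choice, mode definitions, and lack of feature standardization are simplifying assumptions for illustration, not the article's actual method.

```python
import math
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal k-means: cluster feature vectors (here, [RPM, overall
    vibration level]) into k operational modes."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster
        # (keep the old center if a cluster goes empty).
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated synthetic modes: idle (low RPM, low vibration)
# and full load (high RPM, higher vibration). Real features should be
# standardized first so no single scale dominates the distance.
random.seed(2)
idle = [[random.gauss(600, 20), random.gauss(0.5, 0.05)] for _ in range(100)]
load = [[random.gauss(1800, 30), random.gauss(2.5, 0.2)] for _ in range(100)]
centers, clusters = kmeans(idle + load, k=2)
print(sorted(len(c) for c in clusters))  # prints [100, 100]: both modes recovered
```

With the modes labeled, new vibration readings far from every mode center can be surfaced as abnormal operating conditions.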
Application of AI to Oil Refineries and Petrochemical Plants
Artificial intelligence (AI), machine learning, data science, and other advanced technologies have progressed remarkably, enabling computers to handle labor- and time-intensive tasks that used to be done manually. As big data has become available, AI is expected to automatically identify and solve problems in the manufacturing industry. This paper describes how AI can be used in oil refineries and petrochemical plants to solve issues regarding assets and quality.
A Case of Applying AI to an Ethylene Plant
Unexpected equipment failures or maintenance may result in unscheduled shutdowns of continuously operating petrochemical plants such as ethylene plants. To avoid this, the operating status needs to be continuously monitored. However, since plant troubles have various causes, it is difficult for human workers to precisely grasp the plant status and notice the signs of unexpected failures or needed maintenance. To solve this problem, we worked with a customer at an ethylene plant and developed a solution based on AI analysis. Incorporating customer feedback into the analysis, we identified several key factors among the numerous sensor parameters and created an AI model that can grasp the plant status and detect signs of abnormalities. This paper introduces a case study of AI analysis carried out at an ethylene plant and the new value that AI technology can offer customers, and then describes how to extend the solution business with AI analysis.