Long short-term memory (LSTM)
Assembly Line
Time series prediction model using LSTM-Transformer neural network for mine water inflow
Mine flooding accidents have occurred frequently in recent years, and predicting mine water inflow is one of the most crucial flood-warning tasks. Mine water inflow is non-linear and unstable, however, which makes it difficult to predict. Accordingly, we propose a time series prediction model that fuses the Transformer algorithm, which relies on self-attention, with the LSTM algorithm, which captures long-term dependencies. In this paper, Baotailong mine water inflow in Heilongjiang Province is used as sample data, and the data is split into training and test sets at different ratios to find the best prediction results; the LSTM-Transformer model achieves its highest training accuracy at a 7:3 split. To make the hyperparameter search more efficient, a combination of random search and Bayesian optimization is used to determine the network and regularization parameters. Finally, to verify its accuracy, the LSTM-Transformer model is compared with LSTM, CNN, Transformer and CNN–LSTM models. The results show that LSTM-Transformer achieves the highest prediction accuracy and improves on all evaluation metrics.
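A minimal sketch of one way such a fusion could look, assuming a PyTorch implementation: an LSTM front-end captures long-term temporal dependencies and a Transformer encoder adds self-attention over the LSTM outputs. The paper does not publish its exact architecture, so layer sizes, pooling, and the forecasting head below are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LSTMTransformer(nn.Module):
    """Hypothetical LSTM + Transformer-encoder hybrid for one-step forecasting."""
    def __init__(self, n_features=1, hidden=64, nhead=4, num_layers=2):
        super().__init__()
        # LSTM captures long-term sequential dependencies in the inflow series
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Transformer encoder applies self-attention over the LSTM outputs
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(hidden, 1)  # predicted water-inflow value

    def forward(self, x):               # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)             # (batch, seq_len, hidden)
        z = self.encoder(h)             # self-attention across time steps
        return self.head(z[:, -1, :])   # forecast from the last time step

# Usage sketch: windows of 30 past observations; a 7:3 train/test split
# would mirror the ratio the paper reports as giving the best accuracy.
x = torch.randn(32, 30, 1)
model = LSTMTransformer()
y_hat = model(x)                        # (32, 1) predicted inflow
```

The abstract's hyperparameter tuning (random search followed by Bayesian optimization over network and regularization parameters) would sit outside this module, wrapping the training loop.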
Long short-term memory encoder–decoder with regularized hidden dynamics for fault detection in industrial processes
The ability of recurrent neural networks (RNN) to model nonlinear dynamics of high dimensional process data has enabled data-driven RNN-based fault detection algorithms. Previous studies have focused on detecting faults by identifying the discrepancies in data distribution between the faulty and normal data, as reflected in prediction errors generated by RNN models. However, in industrial processes, variations in data distribution can also result from changes in normal control setpoints and compensatory control adjustments in response to disturbances, making it hard to differentiate between normal and faulty conditions. This paper proposes a fault detection method utilizing a long short-term memory (LSTM) encoder–decoder structure with regularized hidden dynamics and reversible instance normalization (RevIN) to compactly represent high-dimensional measurements for effective monitoring. During training, the hidden states of the model are regularized to form a low-dimensional latent space representation of the original multivariate time series data. As a result, the prediction errors of the latent states can be used to monitor the abnormal dynamic variations, while the reconstruction errors of the measured variables are used to monitor the abnormal static variations. Furthermore, the proposed indices can reflect operating conditions, even when the distribution of test data changes, which helps distinguish faults from normal adjustments and disturbances that controllers can settle. Data from numerical simulation and the Tennessee Eastman process are used to illustrate the effectiveness of the proposed fault detection method.
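A minimal sketch of the monitoring idea, assuming a PyTorch implementation: an LSTM encoder compresses the measurements into a low-dimensional latent trajectory, a simple one-step latent predictor stands in for the regularized hidden dynamics, and a decoder reconstructs the measured variables. Dimensions, the linear latent-dynamics model, and the omission of RevIN are all illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn

class LSTMEncoderDecoder(nn.Module):
    """Hypothetical LSTM encoder-decoder with a low-dimensional latent state."""
    def __init__(self, n_vars=52, latent_dim=5, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_vars, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)        # compact latent state
        self.latent_dyn = nn.Linear(latent_dim, latent_dim)   # one-step latent predictor
        self.decoder = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.to_obs = nn.Linear(hidden, n_vars)               # reconstruct measurements

    def forward(self, x):                        # x: (batch, seq_len, n_vars)
        h, _ = self.encoder(x)
        z = self.to_latent(h)                    # latent trajectory
        z_pred = self.latent_dyn(z[:, :-1, :])   # predicted next latent states
        d, _ = self.decoder(z)
        x_hat = self.to_obs(d)                   # reconstructed measurements
        return x_hat, z, z_pred

def monitoring_indices(model, x):
    """Latent prediction error tracks abnormal dynamic variations;
    measurement reconstruction error tracks abnormal static variations."""
    x_hat, z, z_pred = model(x)
    dyn_index = ((z[:, 1:, :] - z_pred) ** 2).mean(dim=(1, 2))
    static_index = ((x - x_hat) ** 2).mean(dim=(1, 2))
    return dyn_index, static_index

# Usage sketch on windows of 52 process variables (e.g. Tennessee Eastman scale):
x = torch.randn(8, 50, 52)
dyn, stat = monitoring_indices(LSTMEncoderDecoder(), x)
```

In practice the two indices would be thresholded against statistics estimated on normal-operation data, flagging a fault when either index stays above its control limit.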