🛢️🧠 ENEOS and PFN Begin Continuous Operation of AI-Based Autonomous Petrochemical Plant System
ENEOS Corporation (ENEOS) and Preferred Networks, Inc. (PFN) announced today that their artificial intelligence (AI) system, which has been continuously operating a butadiene extraction unit in the ENEOS Kawasaki Refinery’s petrochemical plant since January 2023, has achieved greater economy and efficiency than manual operation.
Jointly developed by ENEOS and PFN, the AI system is designed to automate the large-scale, complex operations of oil refineries and petrochemical plants that currently require operators with years of experience. According to PFN’s research, the new AI system is one of the world’s largest for petrochemical plant operation, with a total of 363 sensors for prediction and 13 controlled elements. The companies co-developed the system to improve the safety and stability of plant operations by reducing dependence on technicians’ varying skill levels.
In a World First, Yokogawa’s Autonomous Control AI Is Officially Adopted for Use at an ENEOS Materials Chemical Plant
ENEOS Materials Corporation (formerly the elastomers business unit of JSR Corporation) and Yokogawa Electric Corporation (TOKYO: 6841) announce they have reached an agreement that Factorial Kernel Dynamic Policy Programming (FKDPP), a reinforcement learning-based AI algorithm, will be officially adopted for use at an ENEOS Materials chemical plant. This agreement follows a successful field test in which this autonomous control AI demonstrated a high level of performance while controlling a distillation column at this plant for almost an entire year. This is the first example in the world of reinforcement learning AI being formally adopted for direct control of a plant.
Over a consecutive 35-day (840-hour) period, from January 17 to February 21, 2022, this field test initially confirmed that the AI solution could control distillation operations that were beyond the capabilities of existing control methods (PID control/APC) and had necessitated manual control of valves based on the judgements of experienced plant personnel. Following a scheduled plant shutdown for maintenance and repairs, the field test resumed and has continued to the present date. It has been conclusively shown that this solution is capable of controlling the complex conditions that are needed to maintain product quality and ensure that liquids in the distillation column remain at an appropriate level, while making maximum possible use of waste heat as a heat source. In so doing, it has stabilized quality, achieved a high yield, and saved energy.
A maturity model for the autonomy of manufacturing systems
Modern manufacturing has to cope with dynamic and changing circumstances. Market fluctuations, unpredictable material shortages, highly variable product demand, and fluctuating worker availability all require system robustness, flexibility, and resilience. To adapt to these new requirements, manufacturers should consider investigating, investing in, and implementing system autonomy. Autonomy is being adopted in multiple industrial contexts, but divergences arise when formalizing the concept of autonomous systems. To develop an implementation of autonomous manufacturing systems, it is essential to specify what autonomy means, how autonomous manufacturing systems differ from other autonomous systems, and how autonomous manufacturing systems are identified and achieved through their main features and enabling technologies. Through a comprehensive literature review, this paper provides a definition of autonomy in the manufacturing context, infers the features of autonomy from different engineering domains, and presents a five-level model of autonomy — associated with maturity levels for the features — to ensure the complete identification and evaluation of autonomous manufacturing systems. The paper also presents the evaluation of a real autonomous system that serves as a use case and a validation of the model.
Yokogawa Launches Autonomous Control AI Service for Use with Edge Controllers
Yokogawa Electric Corporation (TOKYO: 6841) announces the launch of a reinforcement learning service for edge controllers. This autonomous control service for OpreX™ Realtime OS-based Machine Controllers (e-RT3 Plus) utilizes the Factorial Kernel Dynamic Policy Programming (FKDPP) reinforcement learning AI algorithm, and consists of packaged software and an optional consulting service and/or a training program, depending on end user requirements. This software is being released globally, while consulting and the training program will be provided first in Japan, then in other markets.
AI and the chocolate factory
“After about 72 hours of training with the digital twin (on a standard computer; about 24 hours on computer clusters in the cloud), the AI is ready to control the real machine. That’s definitely much faster than humans developing these control algorithms,” Bischoff says. Using reinforcement learning, the AI developed a solution strategy in which all the chocolate bars on the front conveyor belts are transported onward as quickly as possible and the exact speed is controlled only on the last conveyor belt, which, interestingly, is quite different from the strategy of a conventional control system.
The researchers led by Martin Bischoff were able to make their approach even more practical by compressing and compiling the trained control models so that they run cycle-synchronously on the Siemens Simatic controllers in real time. Thomas Menzel, who heads the Digital Machines and Innovation department within the Production Machines business segment, sees great potential in the methodology of letting AI learn complex control tasks independently on the digital twin: “Under the name AI Motion Trainer, this method is now helping several co-creation partners to develop application-specific optimized controls in a much shorter time. Production machines are no longer limited to tasks for which a PLC control program has already been developed, but can realize all tasks that can be learned by AI. The integration with our SIMATIC portfolio makes the use of this technology particularly industry-grade.”
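The train-on-a-digital-twin, deploy-to-the-controller pattern described above can be sketched as follows. This is a generic illustration, not Siemens’ AI Motion Trainer: the twin dynamics, the reward, and the random-search “training” loop are hypothetical stand-ins for the real simulator and reinforcement learning algorithm.

```python
import random

# Sketch of the train-on-twin, deploy-to-controller pattern (hypothetical
# dynamics and training method, chosen only to illustrate the workflow).
# A scalar policy gain is tuned entirely in simulation, then frozen into a
# plain function suitable for fixed-cycle execution on a controller.

def twin_rollout(gain, steps=50):
    """Digital twin: drive a belt speed toward a target; return total reward."""
    speed, target, cost = 0.0, 1.0, 0.0
    for _ in range(steps):
        speed += gain * (target - speed)   # proportional speed correction
        cost += abs(target - speed)        # accumulated tracking error
    return -cost                           # higher reward = tighter tracking

def train(candidates=200, seed=0):
    """Toy 'learning' loop: random search over the gain, on the twin only."""
    rng = random.Random(seed)
    best_gain, best_reward = 0.0, twin_rollout(0.0)
    for _ in range(candidates):
        g = rng.uniform(0.0, 1.0)
        r = twin_rollout(g)
        if r > best_reward:
            best_gain, best_reward = g, r
    return best_gain

gain = train()

def deployed_controller(spacing_error, gain=gain):
    """Frozen policy: a fixed arithmetic step, exportable to the real machine."""
    return gain * spacing_error
```

The point of the pattern is that all trial and error happens against `twin_rollout`; only the frozen `deployed_controller`, a deterministic arithmetic step, would ever run on the real machine’s control cycle.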
Smart Digital Reality for autonomous industrial facilities
Hexagon addresses the barriers to digital transformation with its Smart Digital Reality for autonomous industrial facilities, elevating the digital twin and digital thread by infusing them with intelligence to automate processes and analytics, progressively removing human intervention on the journey to a fully autonomous future.
Yokogawa and DOCOMO Successfully Conduct Test of Remote Control Technology Using 5G, Cloud, and AI
Yokogawa Electric Corporation and NTT DOCOMO, INC. announced today that they have conducted a proof-of-concept test (PoC) of a remote control technology for industrial processing. The PoC test involved the use in a cloud environment of an autonomous control AI, the Factorial Kernel Dynamic Policy Programming (FKDPP) algorithm developed by Yokogawa and the Nara Institute of Science and Technology, and a fifth-generation (5G) mobile communications network provided by DOCOMO. The test, which successfully controlled a simulated plant processing operation, demonstrated that 5G is suitable for the remote control of actual plant processes.
Bridge the gap between Process Control and Reinforcement Learning with QuarticGym
Modern process control algorithms are key to the success of industrial automation. The increased efficiency and quality create value that benefits everyone from producers to consumers. The question, then, is: could we improve it further?
From AlphaGo to robot-arm control, deep reinforcement learning (DRL) has tackled a variety of tasks that traditional control algorithms cannot solve. However, it requires a large, densely sampled dataset or many interactions with the environment to succeed. In many cases, we need to verify and test the reinforcement learning agent in a simulator before putting it into production. However, few simulations of industrial-scale production processes are publicly available. To give back to the research community and encourage future work on applying DRL to process control problems, we built and published a simulation playground with data, so that any interested researcher can experiment with it and benchmark their own controllers. The simulators are all written in the easy-to-use OpenAI Gym format. Each simulation also has a corresponding data sampler, a pre-sampled d4rl-style dataset for training offline controllers, and a set of preconfigured online and offline deep learning algorithms.
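To make the Gym convention concrete, here is a minimal sketch of what a Gym-format process simulator and a classical baseline controller look like. This is not QuarticGym’s actual API or any of its simulators: the tank dynamics, observation layout, and reward are invented for illustration; only the reset/step calling convention follows the OpenAI Gym format the text refers to.

```python
# Minimal Gym-style process-control environment (illustrative only, not a
# QuarticGym simulator): a first-order tank whose level is regulated by a
# valve action, plus a PI loop as a classical baseline controller.

class TankEnv:
    """First-order level process in the reset/step Gym convention."""
    def __init__(self, setpoint=0.5, dt=1.0, horizon=100):
        self.setpoint, self.dt, self.horizon = setpoint, dt, horizon

    def reset(self):
        self.level, self.t = 0.0, 0
        return self._obs()

    def _obs(self):
        return (self.level, self.setpoint - self.level)

    def step(self, action):
        action = min(max(action, 0.0), 1.0)        # valve opening in [0, 1]
        inflow, outflow = action, 0.1 * self.level  # simple linear dynamics
        self.level += (inflow - outflow) * self.dt * 0.1
        self.t += 1
        reward = -abs(self.setpoint - self.level)   # penalize tracking error
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}


def run_pid(env, kp=2.0, ki=0.1):
    """Drive the env with a PI controller; return the final tank level."""
    obs, done, integral = env.reset(), False, 0.0
    while not done:
        error = obs[1]
        integral += error * env.dt
        obs, reward, done, _ = env.step(kp * error + ki * integral)
    return obs[0]

final_level = run_pid(TankEnv())
```

A DRL agent would plug into the same reset/step loop in place of `run_pid`, which is what makes Gym-format simulators convenient benchmarking targets for both classical and learned controllers.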
The Autonomous Factory: Innovation through Personalized Production at Scale
Personalized products are in high demand these days. Meeting this demand is leading companies to increasingly automate their production processes and even make parts of them autonomous. However, this approach presents a trade-off: with increasing personalization comes increasing complexity. Companies therefore need to decide on the appropriate extent and levels of automation to implement in their factories. Two strategies may help along the way: 1. limited implementation in selected areas, and 2. co-creation with trusted partners.
What is Autonomous Manufacturing?
Autonomous Manufacturing encompasses the concept that connected devices enable not just efficiency improvements across facilities, but also process improvements that maximize flexibility through novel technology approaches – making almost any development possible. No longer will firms be constrained by skills and labor pools. They can turn any geography into a production center based on its proximity to raw materials, parts, inputs and the final market for their goods – a boon for workers, for profitability and for the environment.
Scalable reinforcement learning for plant-wide control of vinyl acetate monomer process
This paper explores a reinforcement learning (RL) approach that designs automatic control strategies in a large-scale chemical process control scenario, as a first step toward leveraging RL to intelligently control real-world chemical plants. The huge number of units for chemical reactions, as well as for feeding and recycling materials, in a typical chemical process induces a vast number of samples and prohibitive computational complexity when deriving a suitable control policy with RL, due to the high-dimensional state and action spaces. To tackle this problem, a novel RL algorithm, Factorial Fast-food Dynamic Policy Programming (FFDPP), is proposed. It introduces a factorial framework that efficiently factorizes the action space, and a Fast-food kernel approximation that alleviates the curse of dimensionality caused by the high-dimensional state space, into Dynamic Policy Programming (DPP), which achieves stable learning even with insufficient samples. FFDPP is evaluated in a commercial chemical plant simulator for the Vinyl Acetate Monomer (VAM) process. Experimental results demonstrate that, without any knowledge of the model, the proposed method successfully learned a stable policy with reasonable computational resources, producing a larger amount of VAM product with performance comparable to a state-of-the-art model-based controller.
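The benefit of a factorial framework can be illustrated with a back-of-the-envelope sketch. This is not the paper’s FFDPP implementation: the per-element value components below are random stand-ins for learned kernel-based values, the additive decomposition is an assumption made for illustration, and the element/setting counts are chosen arbitrarily; the sketch only shows why factorizing the action space turns an exponential argmax into a linear one.

```python
import numpy as np

# Illustrative sketch of the factorial idea (not the paper's FFDPP): with
# N controlled elements that each take K discrete settings, the joint
# action space has K**N actions, but an additive per-element decomposition
# Q(s, a) ~ sum_i Q_i(s, a_i) lets the greedy action be chosen
# element-by-element in only N*K evaluations.

rng = np.random.default_rng(0)
N, K = 13, 5                     # hypothetical: 13 elements, 5 settings each

# Hypothetical per-element action values for one state (random stand-ins
# for learned value components).
q_components = rng.normal(size=(N, K))

joint_actions = K ** N           # exhaustive enumeration: 5**13 joint actions
factored_evals = N * K           # factorized argmax: 65 evaluations

greedy_action = q_components.argmax(axis=1)   # per-element argmax
```

Under the additive assumption, the per-element argmax is exactly the greedy joint action: with 13 elements and 5 settings each, exhaustive enumeration would score 5^13 ≈ 1.2 billion joint actions, while the factored argmax needs only 65 per-element evaluations.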