Factorial Kernel Dynamic Policy Programming

Assembly Line

In a World First, Yokogawa’s Autonomous Control AI Is Officially Adopted for Use at an ENEOS Materials Chemical Plant

πŸ“… Date:

πŸ”– Topics: AI, Autonomous Production, Factorial Kernel Dynamic Policy Programming, Industrial Control System

🏭 Vertical: Chemical

🏒 Organizations: Yokogawa, ENEOS Materials


ENEOS Materials Corporation (formerly the elastomers business unit of JSR Corporation) and Yokogawa Electric Corporation (TOKYO: 6841) announce that they have reached an agreement under which Factorial Kernel Dynamic Policy Programming (FKDPP), a reinforcement learning-based AI algorithm, will be officially adopted for use at an ENEOS Materials chemical plant. The agreement follows a successful field test in which this autonomous control AI demonstrated a high level of performance while controlling a distillation column at the plant for almost an entire year. This is the world's first example of reinforcement learning AI being formally adopted for the direct control of a plant.

Over an initial consecutive 35-day (840-hour) period, from January 17 to February 21, 2022, the field test confirmed that the AI solution could control distillation operations that were beyond the capabilities of existing control methods (PID control/APC) and had previously required manual operation of valves based on the judgment of experienced plant personnel. Following a scheduled plant shutdown for maintenance and repairs, the field test resumed and has continued to the present date. The solution has been conclusively shown to be capable of handling the complex conditions required to maintain product quality and keep liquid in the distillation column at an appropriate level, while making maximum use of waste heat as a heat source. In so doing, it has stabilized quality, achieved high yield, and saved energy.
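
For intuition only, the sketch below contrasts a single-loop PID update with how a learned controller of this kind might choose a valve position by scoring candidate openings against the full observed plant state. The valve discretization, the sensor vector, and the `score_valve_positions` model are hypothetical stand-ins for illustration, not details from the press release.

```python
import numpy as np

# Illustration only: hypothetical valve discretization and scoring model,
# not details of the FKDPP deployment described in the press release.

VALVE_POSITIONS = np.linspace(0.0, 1.0, 21)   # candidate valve openings (0-100%)

def pid_step(error, integral, prev_error, kp=1.2, ki=0.05, kd=0.3, dt=1.0):
    """Classical PID: reacts to the setpoint error of a single tracked variable."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, integral

def rl_step(plant_state, score_valve_positions):
    """Learned control: scores every candidate valve position against the whole
    observed state (column level, temperatures, waste-heat availability, quality
    proxy) and applies the highest-scoring one."""
    scores = score_valve_positions(plant_state)   # learned preference/value model
    return VALVE_POSITIONS[int(np.argmax(scores))]
```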

Read more at Yokogawa Press Release

Yokogawa Launches Autonomous Control AI Service for Use with Edge Controllers

πŸ“… Date:

πŸ”– Topics: Autonomous Production, Factorial Kernel Dynamic Policy Programming

🏒 Organizations: Yokogawa


Yokogawa Electric Corporation (TOKYO: 6841) announces the launch of a reinforcement learning service for edge controllers. This autonomous control service for OpreX™ Realtime OS-based Machine Controllers (e-RT3 Plus) utilizes the Factorial Kernel Dynamic Policy Programming (FKDPP) reinforcement learning AI algorithm and consists of packaged software plus an optional consulting service and/or training program, depending on end-user requirements. The software is being released globally, while the consulting service and training program will be provided first in Japan and subsequently in other markets.

Read more at Yokogawa Press Release

Scalable reinforcement learning for plant-wide control of vinyl acetate monomer process

πŸ“… Date:

✍️ Authors: Lingwei Zhu, Yunduan Cui, Go Takami, Hiroaki Kanokogi, Takamitsu Matsubara

πŸ”– Topics: Reinforcement Learning, Autonomous Production, Factorial Kernel Dynamic Policy Programming

🏭 Vertical: Chemical

🏒 Organizations: Nara Institute of Science and Technology, Yokogawa


This paper explores a reinforcement learning (RL) approach to designing automatic control strategies for a large-scale chemical process, as a first step toward using RL to intelligently control real-world chemical plants. The huge number of units in a typical chemical process, for chemical reactions as well as for feeding and recycling materials, leads to high-dimensional state and action spaces, and hence to a vast number of required samples and prohibitive computational complexity when deriving a suitable control policy with RL. To tackle this problem, a novel RL algorithm, Factorial Fast-food Dynamic Policy Programming (FFDPP), is proposed. FFDPP introduces a factorial framework that efficiently factorizes the action space, together with Fast-food kernel approximation that alleviates the curse of dimensionality arising from the high-dimensional state space, into Dynamic Policy Programming (DPP), which achieves stable learning even with insufficient samples. FFDPP is evaluated in a commercial chemical plant simulator for a Vinyl Acetate Monomer (VAM) process. Experimental results demonstrate that, without any knowledge of the model, the proposed method learned a stable policy with reasonable computational resources, producing a larger amount of VAM product with performance comparable to a state-of-the-art model-based controller.
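
For a concrete feel for how these three ingredients could fit together, here is a minimal Python sketch combining a per-actuator (factorial) action decomposition, a random-feature approximation of the state kernel (plain random Fourier features standing in for the Fast-food construction), and a DPP-style softmax backup. The dimensions, learning rates, and linear parameterization are illustrative assumptions, not details of the paper's implementation.

```python
import numpy as np

# Hedged sketch only: dimensions, step sizes, and the linear parameterization
# below are illustrative assumptions, not the FFDPP implementation.
rng = np.random.default_rng(0)

# (1) Kernel approximation: random Fourier features as a simple stand-in for the
#     Fast-food construction (which builds an equivalent feature map more cheaply).
STATE_DIM, N_FEATURES = 30, 256                  # raw plant sensors -> features
W = rng.normal(size=(N_FEATURES, STATE_DIM))
b = rng.uniform(0.0, 2.0 * np.pi, size=N_FEATURES)

def phi(state):
    """Approximate RBF-kernel feature map of a plant state vector."""
    return np.sqrt(2.0 / N_FEATURES) * np.cos(W @ state + b)

# (2) Factorial action space: one small preference head per actuator (e.g. per
#     valve), instead of one table over every joint combination of actuators.
N_ACTUATORS, LEVELS = 5, 7                       # 7^5 joint actions -> 5 x 7 heads
theta = np.zeros((N_ACTUATORS, LEVELS, N_FEATURES))   # linear preference weights

# (3) DPP-style update: preferences move toward reward plus a discounted
#     Boltzmann-softmax backup, which keeps successive policies close together.
ETA, GAMMA, ALPHA = 1.0, 0.95, 0.05

def boltzmann_backup(pref_row):
    w = np.exp(ETA * (pref_row - pref_row.max()))
    w /= w.sum()
    return w @ pref_row                          # softmax-weighted mean preference

def dpp_update(state, chosen_levels, reward, next_state):
    """One sampled update; chosen_levels holds the selected level per actuator."""
    f = phi(state)
    pref, pref_next = theta @ f, theta @ phi(next_state)   # (N_ACTUATORS, LEVELS)
    for i, a in enumerate(chosen_levels):
        td = reward + GAMMA * boltzmann_backup(pref_next[i]) - boltzmann_backup(pref[i])
        theta[i, a] += ALPHA * td * f            # every head shares the plant reward
```

With this factorization the joint action space of 7^5 combinations is never enumerated; each actuator head is updated independently against the shared plant-level reward, which is the cost-saving idea the abstract attributes to the factorial framework.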

Read more at Control Engineering Practice