Reinforcement Learning (RL)

Recent Posts

Big Tech eyes Industrial AI and Robotics

Date:

An overview of Big Tech’s inroads into manufacturing and industrial AI. From bin picking to robotic wire arc additive manufacturing (WAAM), the pace of industrial technology advancement continues to pick up as digital transformation takes hold.

Assembly Line

Could Reinforcement Learning play a part in the future of wafer fab scheduling?

Date:

Author: Marcus Vitelli

Topics: Reinforcement Learning

Vertical: Semiconductor

Organizations: Flexciton

However, as the use of RL for job shop scheduling (JSS) problems is still a novelty, it is not yet at the level of sophistication that the semiconductor industry would require. So far, the approaches can handle standard small problem scenarios but not flexible problems or batching decisions. Wafer fabs impose many constraints (e.g., timelinks and reticle availability), and it is not easy to guarantee that the agent will adhere to them. The objective set for the agent must be defined ahead of training, which means that any change made afterwards requires retraining before new decisions can be obtained. This is less problematic for solving the instance proposed by Tassel et al., although their approach relies on a specifically modelled reward function that would not easily adapt to changing objectives.
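
To make the last point concrete, here is a minimal, hypothetical sketch (not Flexciton's system) of tabular Q-learning on a toy single-machine dispatching problem. The jobs, processing times, and learning constants are all invented; the point is that the objective, here total completion time, is hard-coded into the reward before training, so switching to, say, a tardiness objective would mean rewriting the reward and retraining.

```python
import random
from collections import defaultdict

# Invented toy instance: three jobs on one machine. The objective (minimize
# total completion time) is baked into the reward *before* training.
PROC_TIMES = {"A": 3, "B": 5, "C": 2}

def run_episode(q, eps=0.1, alpha=0.5):
    remaining, t = frozenset(PROC_TIMES), 0
    while remaining:
        state = remaining
        if random.random() < eps:                         # explore
            job = random.choice(sorted(remaining))
        else:                                             # exploit
            job = max(sorted(remaining), key=lambda j: q[(state, j)])
        t += PROC_TIMES[job]
        reward = -t                                       # completion-time cost
        nxt = remaining - {job}
        best_next = max((q[(nxt, j)] for j in nxt), default=0.0)
        q[(state, job)] += alpha * (reward + best_next - q[(state, job)])
        remaining = nxt

q = defaultdict(float)
for _ in range(2000):
    run_episode(q)
# The greedy policy recovers shortest-processing-time order: C, A, B.
```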

Read more at Flexciton Blog

Yokogawa and DOCOMO Successfully Conduct Test of Remote Control Technology Using 5G, Cloud, and AI

Date:

Topics: Autonomous Production, 5G, Reinforcement Learning, AI

Organizations: Yokogawa, DOCOMO, Nara Institute of Science and Technology

Yokogawa Electric Corporation and NTT DOCOMO, INC. announced today that they have conducted a proof-of-concept (PoC) test of remote control technology for industrial processing. The PoC ran an autonomous control AI, the Factorial Kernel Dynamic Policy Programming (FKDPP) algorithm developed by Yokogawa and the Nara Institute of Science and Technology, in a cloud environment, connected over a fifth-generation (5G) mobile communications network provided by DOCOMO. The test, which successfully controlled a simulated plant processing operation, demonstrated that 5G is suitable for the remote control of actual plant processes.

Read more at Yokogawa Press Releases

In a World First, Yokogawa and JSR Use AI to Autonomously Control a Chemical Plant for 35 Consecutive Days

Date:

Topics: Autonomous Factory, Reinforcement Learning, Artificial Intelligence

Vertical: Chemical

Organizations: Yokogawa, JSR, Nara Institute of Science and Technology

Yokogawa Electric Corporation (TOKYO: 6841) and JSR Corporation (JSR, TOKYO: 4185) announce the successful conclusion of a field test in which AI was used to autonomously run a chemical plant for 35 days, a world first. This test confirmed that reinforcement learning AI can be safely applied in an actual plant, and demonstrated that this technology can control operations that have been beyond the capabilities of existing control methods (PID control/APC) and have up to now necessitated the manual operation of control valves based on the judgements of plant personnel. The initiative described here was selected for the 2020 Projects for the Promotion of Advanced Industrial Safety subsidy program of the Japanese Ministry of Economy, Trade and Industry.

The AI used in this control experiment, the Factorial Kernel Dynamic Policy Programming (FKDPP) algorithm, was jointly developed by Yokogawa and the Nara Institute of Science and Technology (NAIST) in 2018, and was recognized at an IEEE International Conference on Automation Science and Engineering as the first reinforcement learning-based AI in the world that can be utilized in plant management.

Given the numerous complex physical and chemical phenomena that impact operations in actual plants, there are still many situations where veteran operators must step in and exercise control. Even when operations are automated using PID control and APC, highly experienced operators have to halt automated control and change configuration and output values when, for example, a sudden change occurs in atmospheric temperature due to rainfall or some other weather event. This is a common issue at many companies’ plants. In the transition to industrial autonomy, a very significant challenge has been instituting autonomous control in situations where manual intervention has until now been essential, doing so with as little effort as possible while also ensuring a high level of safety. The results of this test suggest that the collaboration between Yokogawa and JSR has opened a path towards resolving this longstanding issue.
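
For readers unfamiliar with the baseline the excerpt mentions, below is a minimal textbook PID loop, purely illustrative and unrelated to Yokogawa's FKDPP. The gains, setpoint, and toy first-order plant model are invented. A PID controller like this tracks a fixed setpoint well under nominal conditions, but its fixed gains are exactly why operators traditionally intervene when conditions shift, which is the gap the RL approach targets.

```python
# Minimal textbook PID controller (illustrative only; not Yokogawa's FKDPP).
def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

setpoint, temp = 50.0, 20.0                  # target vs. measured temperature
state = {"integral": 0.0, "prev_error": 0.0}
for _ in range(30):
    u = pid_step(setpoint - temp, state)     # controller output
    temp += 0.05 * u - 0.1 * (temp - 20.0)   # toy first-order plant + heat loss
print(round(temp, 1))
```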

Read more at Yokogawa News

Action-limited, multimodal deep Q learning for AGV fleet route planning

Date:

Author: Hang Liu

Topics: Automated Guided Vehicle, Reinforcement Learning

Organizations: Hitachi

In traditional operating models, a navigation system completes all calculations, i.e., shortest-path planning in a static environment, before the AGVs start moving. However, due to constantly incoming orders, changes in vehicle availability, and so on, this creates a huge and intractable optimization problem. Meanwhile, an optimal navigation strategy for an AGV fleet cannot be achieved if it fails to consider the fleet and delivery situation in real time. Such dynamic route planning is more realistic, but it requires the ability to autonomously learn complex environments. A deep Q network (DQN), which inherits the capabilities of deep learning and reinforcement learning, provides a framework well suited to making decisions in discrete motion-sequence problems.
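
As a hypothetical illustration of the "action-limited" idea in the title, the sketch below masks infeasible moves out of the argmax instead of merely penalizing them, using tabular Q-learning on an invented toy grid as a stand-in for the deep Q network the post describes.

```python
import random
from collections import defaultdict

MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
SIZE, GOAL, BLOCKED = 5, (4, 4), {(2, 2), (3, 1)}   # invented toy layout

def feasible(pos):
    # Action limiting: only moves that stay on the grid and avoid
    # blocked cells are ever offered to the agent.
    acts = []
    for a, (dx, dy) in MOVES.items():
        nxt = (pos[0] + dx, pos[1] + dy)
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in BLOCKED:
            acts.append(a)
    return acts

q = defaultdict(float)
for _ in range(2000):
    pos = (0, 0)
    while pos != GOAL:
        acts = feasible(pos)                          # masked action set
        if random.random() < 0.1:
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda x: q[(pos, x)])
        dx, dy = MOVES[a]
        nxt = (pos[0] + dx, pos[1] + dy)
        reward = 0.0 if nxt == GOAL else -1.0         # shortest-path objective
        best = 0.0 if nxt == GOAL else max(q[(nxt, b)] for b in feasible(nxt))
        q[(pos, a)] += 0.5 * (reward + best - q[(pos, a)])
        pos = nxt
```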

Read more at Industrial AI Blog

Improving PPA In Complex Designs With AI

Date:

Author: John Koon

Topics: Reinforcement Learning, Generative Design

Vertical: Semiconductor

Organizations: Google, Cadence, Synopsys

The goal of chip design has always been to optimize power, performance, and area (PPA), but results can vary greatly even with the best tools and highly experienced engineering teams. AI works best in design when the problem is clearly defined in a way that AI can understand. So an IC designer must first identify a problem that can be tied to a system’s ability to adapt, learn, and generalize knowledge or rules, and then apply that knowledge and those rules to an unfamiliar scenario.
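
One hedged way to picture "clearly defining the problem": fold the three PPA metrics into a single scalar reward that a learning-based design-space explorer could maximize. The baselines, weights, and function below are assumptions for illustration, not any vendor's actual formulation.

```python
# Hypothetical scalarized PPA reward; improving on the invented baseline in
# power, delay (performance), or area yields positive reward.
def ppa_reward(power_mw, delay_ns, area_um2,
               baseline=(120.0, 1.8, 5.0e4), weights=(0.4, 0.4, 0.2)):
    p0, d0, a0 = baseline
    wp, wd, wa = weights
    return (wp * (p0 - power_mw) / p0
            + wd * (d0 - delay_ns) / d0
            + wa * (a0 - area_um2) / a0)

print(ppa_reward(power_mw=100.0, delay_ns=1.6, area_um2=4.8e4))
```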

Read more at Semiconductor Engineering

Artificial intelligence optimally controls your plant

Date:

Topics: energy consumption, reinforcement learning, machine learning, industrial control system

Organizations: Siemens

Until now, heating systems have mainly been controlled individually or via a building management system. Building management systems follow a preset temperature profile, meaning they always try to adhere to predefined target temperatures. The temperature in a conference room changes in response to environmental influences like sunlight or the number of people present. Simple (PI or PID) controllers are used to make constant adjustments so that the measured room temperature is as close to the target temperature values as possible.

We believe that the best alternative is to learn a control strategy by means of reinforcement learning (RL). Reinforcement learning is a machine learning method in which the objective is not specified as explicit target values to track. Instead, an “agent” with as complete a knowledge of the system state as possible learns the manipulated-variable changes that maximize a “reward” function defined by humans. Using algorithms from reinforcement learning, the agent, meaning the control strategy, can be trained on both current and recorded system data. This requires measurements of the manipulated-variable changes that were carried out, of the resulting changes to the system state over time, and of the variables needed to calculate the reward.
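
As a minimal sketch of the human-defined reward the post describes, assuming made-up variable names and weighting: comfort, measured as distance from the target temperature, is traded off against heating energy. Scoring recorded system data this way mirrors the training on recorded data the post mentions.

```python
# Minimal sketch of a human-defined reward for room-temperature control.
# Variable names and the weighting lam are assumptions for illustration.
def reward(room_temp_c, target_temp_c, heating_power_kw, lam=0.1):
    comfort_penalty = abs(room_temp_c - target_temp_c)   # tracking error
    return -(comfort_penalty + lam * heating_power_kw)   # comfort vs. energy

# Evaluating the reward over recorded system data:
log = [
    {"room": 21.4, "target": 21.0, "power": 3.2},
    {"room": 19.8, "target": 21.0, "power": 0.0},
]
total_return = sum(reward(s["room"], s["target"], s["power"]) for s in log)
print(total_return)
```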

Read more at Siemens Ingenuity

Getting Industrial About The Hybrid Computing And AI Revolution

Date:

Author: Jeffrey Burt

Topics: IIoT, machine learning, reinforcement learning

Vertical: Petroleum and Coal

Organizations: Beyond Limits

Beyond Limits is applying techniques such as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. The framework also draws on reservoir simulations and novel deep convolutional neural networks. The agent takes in the data and learns from the various iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from previous iterations, the system can more quickly whittle the choices down to the single best answer.
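
The sketch below is a toy stand-in for that workflow, with everything invented: wells are placed one at a time, each choice queries a fake "reservoir simulator" for incremental production, and a greedy one-step lookahead substitutes for the trained DRL policy to keep the example short.

```python
import itertools

GRID = list(itertools.product(range(5), range(5)))   # candidate well sites

def simulate(wells):
    # Fake reservoir response: fixed production per well, reduced by
    # interference between nearby wells. Purely illustrative.
    prod = 10.0 * len(wells)
    for a, b in itertools.combinations(wells, 2):
        dist = abs(a[0] - b[0]) + abs(a[1] - b[1])
        prod -= max(0.0, 4.0 - dist)
    return prod

def place_sequentially(n=3):
    wells = []
    for _ in range(n):
        # Sequential decision: query the simulator for each remaining cell
        # and commit to the placement with the best incremental production.
        best = max((c for c in GRID if c not in wells),
                   key=lambda c: simulate(wells + [c]))
        wells.append(best)
    return wells, simulate(wells)

print(place_sequentially())
```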

Read more at The Next Platform

Toward Generalized Sim-to-Real Transfer for Robot Learning

Date:

Authors: Daniel Ho, Kanishka Rao

Topics: reinforcement learning, AI, robotics, imitation learning, generative adversarial networks

Organizations: Google

A limitation on their use in sim-to-real transfer, however, is that because GANs translate images at the pixel level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.

To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies — so that they do not arbitrarily modify visual features that are specifically necessary for robot task learning — and thus bridge the visual discrepancy between sim and real.
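
Schematically, and as an assumption-laden simplification rather than the papers' exact losses, the consistency idea can be written as an extra generator penalty: a frozen task model (a Q-network in RL-CycleGAN, an object detector in RetinaGAN) should respond the same way to a simulated image and to its translation. The toy networks, shapes, and weight below are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the real networks; shapes chosen only for illustration.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                              nn.Flatten(), nn.LazyLinear(1))
task_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))
task_model.requires_grad_(False)             # frozen task network

sim_batch = torch.rand(4, 3, 64, 64)         # fake "simulation" images
translated = generator(sim_batch)

# Usual adversarial term: fool the discriminator into scoring "real".
logits = discriminator(translated)
adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

# Robot-specific consistency term: the frozen task model should react the
# same way before and after translation, preserving task-relevant features.
consistency = F.mse_loss(task_model(translated), task_model(sim_batch))

loss = adv + 10.0 * consistency              # weight is a free hyperparameter
loss.backward()
```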

Read more at Google AI Blog

Multi-Task Robotic Reinforcement Learning at Scale

Date:

Authors: Karol Hausman, Yevgen Chebotar

Topics: reinforcement learning, robotics, AI, machine learning

Organizations: Google

For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance, and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method in which the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.
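
As a minimal, hypothetical sketch of the offline setting described, the loop below never acts in the world during training; it only replays a fixed dataset of (state, action, reward, next state) transitions collected beforehand. A tabular Q-table stands in for the deep networks used at robot scale, and the three transitions are invented.

```python
import random
from collections import defaultdict

# Fixed, invented dataset of (state, action, reward, next_state) transitions;
# in offline RL the agent never interacts with the world while training.
dataset = [
    ("s0", "grasp", 0.0, "s1"),
    ("s1", "lift", 1.0, "done"),
    ("s0", "lift", -1.0, "s0"),
]
ACTIONS = ["grasp", "lift"]

q = defaultdict(float)
for _ in range(500):                        # replay the fixed dataset
    s, a, r, s2 = random.choice(dataset)
    target = r if s2 == "done" else r + max(q[(s2, b)] for b in ACTIONS)
    q[(s, a)] += 0.1 * (target - q[(s, a)])

print(max(ACTIONS, key=lambda a: q[("s0", a)]))   # learned first action
```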

Read more at Google AI Blog

Using tactile-based reinforcement learning for insertion tasks

Date:

Authors: Alan Sullivan, Diego Romeres, Radu Corcodel

Topics: AI, cobot, reinforcement learning, robotics

Organizations: MIT, Mitsubishi Electric

In a paper entitled “Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry”, submitted to the IEEE International Conference on Robotics and Automation (ICRA), MERL and MIT researchers used reinforcement learning to enable a robot arm, equipped with a parallel-jaw gripper with tactile sensing arrays on both fingers, to insert differently shaped novel objects into a corresponding hole with an overall average success rate of 85% within 3 to 4 tries.

Read more at The Robot Report

Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all

Date:

Author: Tiernan Ray (@TiernanRayTech)

Topics: AI, machine learning, robotics, reinforcement learning

Organizations: Google

With no well-specified rewards, and with state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.

Read more at ZDNet