In a World First, Yokogawa and JSR Use AI to Autonomously Control a Chemical Plant for 35 Consecutive Days
Yokogawa Electric Corporation (TOKYO: 6841) and JSR Corporation (JSR, TOKYO: 4185) announce the successful conclusion of a field test in which AI was used to autonomously run a chemical plant for 35 days, a world first. This test confirmed that reinforcement learning AI can be safely applied in an actual plant, and demonstrated that this technology can control operations that have been beyond the capabilities of existing control methods (PID control/APC) and have up to now necessitated the manual operation of control valves based on the judgements of plant personnel. The initiative described here was selected for the 2020 Projects for the Promotion of Advanced Industrial Safety subsidy program of the Japanese Ministry of Economy, Trade and Industry.
The AI used in this control experiment, the Factorial Kernel Dynamic Policy Programming (FKDPP) protocol, was jointly developed by Yokogawa and the Nara Institute of Science and Technology (NAIST) in 2018, and was recognized at an IEEE International Conference on Automation Science and Engineering as being the first reinforcement learning-based AI in the world that can be utilized in plant management.
Given the numerous complex physical and chemical phenomena that impact operations in actual plants, there are still many situations where veteran operators must step in and exercise control. Even when operations are automated using PID control and APC, highly-experienced operators have to halt automated control and change configuration and output values when, for example, a sudden change occurs in atmospheric temperature due to rainfall or some other weather event. This is a common issue at many companies’ plants. Regarding the transition to industrial autonomy, a very significant challenge has been instituting autonomous control in situations where until now manual intervention has been essential, and doing so with as little effort as possible while also ensuring a high level of safety. The results of this test suggest that this collaboration between Yokogawa and JSR has opened a path forward in resolving this longstanding issue.
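For context on the conventional technique the release says reaches its limits, here is a minimal discrete PID loop driving a toy first-order process. All gains, setpoints, and the process model are illustrative, not taken from the Yokogawa/JSR plant:

```python
# Minimal discrete PID controller: the conventional method the article
# says requires manual override when conditions change suddenly.
# Gains and the toy process below are illustrative only.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}

    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt
        derivative = 0.0 if state["prev_error"] is None else (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Drive a crude first-order process toward a setpoint of 50.0
pid = make_pid(kp=0.8, ki=0.2, kd=0.05, dt=1.0)
temperature = 20.0
for _ in range(100):
    valve = pid(50.0, temperature)
    temperature += 0.1 * valve  # simplistic process response to valve output

print(round(temperature, 1))  # settles near the setpoint
```

The fixed gains are exactly what makes PID brittle in the scenario the article describes: a sudden disturbance (such as rain cooling the plant) changes the process response, and the tuned gains no longer fit.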
Neuro-symbolic AI could provide machines with common sense
Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems.
“We’re trying to bring together the power of symbolic languages for knowledge representation and reasoning as well as neural networks and the things that they’re good at, but also with the idea of probabilistic inference, especially Bayesian inference or inverse inference in a causal model for reasoning backwards from the things we can observe to the things we want to infer, like the underlying physics of the world, or the mental states of agents,” Tenenbaum says.
There are several attempts to use pure deep learning for object position and pose detection, but their accuracy is low. In a joint project, MIT and IBM created “3D Scene Perception via Probabilistic Programming” (3DP3), a system that resolves many of the errors that pure deep learning systems fall into.
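The "inverse inference" Tenenbaum describes can be illustrated with a toy Bayesian update: reasoning backwards from a noisy observation to its latent cause. The scenario below (a sensor estimating an object's distance) is a hypothetical illustration, not part of 3DP3:

```python
import math

# Toy inverse inference: infer a latent quantity (true distance in
# metres) from one noisy observation via Bayes' rule.

hypotheses = [1.0, 2.0, 3.0, 4.0]
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def likelihood(observation, h, noise=0.5):
    # Gaussian sensor model: readings cluster around the true distance
    return math.exp(-((observation - h) ** 2) / (2 * noise ** 2))

observation = 2.2
unnorm = {h: prior[h] * likelihood(observation, h) for h in hypotheses}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

best = max(posterior, key=posterior.get)
print(best)  # posterior mass concentrates on the hypothesis nearest 2.2
```

Probabilistic programming systems like 3DP3 scale this same pattern up to rich structured hypotheses (full 3D scene graphs) instead of a handful of scalar candidates.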
Enhancing Datasets For Artificial Intelligence Through Model-Based Methods
In industrial processes, time series data play a particularly important role (e.g., sensor data, process parameters, log files, communication protocols). They are available at very different temporal resolutions: a temperature sensor might deliver one value per minute, while a spectral analysis of a wireless network requires over 100 million samples per second.
The objective is to reflect all relevant process states, together with the uncertainties arising from stochastic effects, within the augmented time series. Adding values to a measured time series of an industrial process benefits from insight into the process itself; such a representation of the physical background can be called a model.
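A minimal sketch of the idea: a simple first-order thermal model (the "representation of the physical background") generates plausible synthetic sensor traces around one measured operating point, with stochastic noise supplying the uncertainty. All parameters and the process itself are invented for illustration:

```python
import random

# Model-based augmentation sketch: synthesize temperature traces from
# a first-order thermal model plus Gaussian sensor noise. Varying the
# seed (or the model parameters) yields many plausible variants of a
# single measured operating condition.

def simulate_temperature(ambient, heater_power, n_steps,
                         tau=10.0, noise_std=0.2, seed=0):
    rng = random.Random(seed)
    temp = ambient
    series = []
    for _ in range(n_steps):
        target = ambient + 5.0 * heater_power  # steady state for this power
        temp += (target - temp) / tau + rng.gauss(0.0, noise_std)
        series.append(temp)
    return series

# Five augmented variants of one 60-step measurement window
augmented = [simulate_temperature(20.0, 4.0, 60, seed=s) for s in range(5)]
print(len(augmented), len(augmented[0]))
```

Each variant respects the physics encoded in the model while differing in its noise realization, which is exactly the property an augmented training set needs.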
"In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In #manufacturing, you might have 10,000 manufacturers building 10,000 custom #AI models." — David Rogers (@doclrogers), February 10, 2022 (https://t.co/z7BvgDeJsE)
DeepSig Achieves Industry’s First AI-Native 5G Call & Why You Should Care
While AI is already used today to help manage wireless networks, using AI to directly learn the signal-processing algorithms that transmit and receive wireless signals is unprecedented. Proving AI's advantages in 5G radio access components has now begun with DeepSig's software, demonstrated in the industry's first AI-native end-to-end 5G call. DeepSig applies deep learning, uniquely implemented inside the physical layer of a 5G NR radio access network. The AI-enhanced 5G network performs live over-the-air data connections between smartphones and the internet. This not only proved that a deep neural network can be implemented in a working 5G radio access network; more importantly, it demonstrated reduced processing load and power consumption, reduced latency, and improved signal quality and coverage.
John Deere’s self-driving tractor lets farmers leave the cab — and the field
The technology to support autonomous farming has been developing rapidly in recent years, but John Deere claims this is a significant step forward. With this technology, farmers will not only be able to take their hands off the wheel of their tractor or leave the cab — they’ll be able to leave the field altogether, letting the equipment do the work without them while monitoring things remotely using their smartphone.
The big difference with this new technology is that drivers will now be able to set and forget some aspects of their self-driving tractors. The company's autonomy kit includes six pairs of stereo cameras that capture a 360-degree view around the tractor. This input is then analyzed by machine vision algorithms, which spot unexpected obstacles.
Transfer learning with artificial neural networks between injection molding processes and different polymer materials
Finding appropriate machine setting parameters in injection molding remains a difficult task due to the highly nonlinear process behavior. Artificial neural networks are a well-suited machine learning method for modelling injection molding processes; however, it is costly, and therefore industrially unattractive, to generate a sufficient number of process samples for model training. Transfer learning is therefore proposed as an approach to reuse already collected data from different processes to supplement a small training data set. Process simulations for the same part and 60 different materials from 6 polymer classes are generated by design of experiments. After feature selection and hyperparameter optimization, fine-tuning is proposed as the transfer learning technique to adapt from one or more polymer classes to an unknown one. The results show higher model quality for small datasets and selectively higher asymptotes for the transfer learning approach compared with the base approach.
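The fine-tuning idea in the abstract can be sketched with a deliberately tiny stand-in: a linear model pre-trained on a data-rich "source process" and then adapted to a related "target process" with only a few samples. All data is synthetic and the linear model replaces the paper's neural networks purely for illustration:

```python
# Transfer-learning sketch: pre-train on a data-rich source task, then
# fine-tune on a small target set versus training from scratch with the
# same small update budget. Linear model and data are illustrative.

def gd_fit(xs, ys, w, b, lr=0.01, steps=50):
    # Plain gradient descent on mean squared error for y = w*x + b
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * gw, b - lr * gb
    return w, b

def mse(xs, ys, w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Source process: plenty of data, relation y = 2x + 1
src_x = [i / 10 for i in range(100)]
src_y = [2 * x + 1 for x in src_x]
w_pre, b_pre = gd_fit(src_x, src_y, w=0.0, b=0.0, steps=2000)

# Target process: only 5 samples of a slightly different relation
tgt_x = [0.5, 1.0, 1.5, 2.0, 2.5]
tgt_y = [2.2 * x + 0.8 for x in tgt_x]

# Same small budget of 20 updates: fine-tune vs. train from scratch
w_ft, b_ft = gd_fit(tgt_x, tgt_y, w_pre, b_pre, steps=20)
w_sc, b_sc = gd_fit(tgt_x, tgt_y, 0.0, 0.0, steps=20)

print(mse(tgt_x, tgt_y, w_ft, b_ft) < mse(tgt_x, tgt_y, w_sc, b_sc))
```

Because the source and target relations are similar, starting from the pre-trained parameters reaches low error on the tiny target set far faster than starting from zero, mirroring the "higher model quality for small datasets" finding.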
How AI (Artificial Intelligence) Will Impact T-shirt Printing Industry?
Besides printing, the t-shirt manufacturing industry uses CAD software for pattern-making, digitization, grading, and marker planning. The t-shirt printing industry uses ANNs for defect detection during fabric inspections. Other tools like PPC help coordinate the various production departments to meet delivery dates and get orders to buyers on time. Beyond manufacturing, AI also assists consumers in choosing the right product for their purposes.
T-shirt design software, for example, offers a wide range of design and customization options. It is easy to design a shirt for your eCommerce store using these AI-driven online tools, and even a beginner can do it. The software enables your customers to add shadows, create distressed looks, and manipulate artwork on their t-shirts. Print-ready template designs can be applied to your t-shirt according to your preferences using AI-based design tools. Additionally, the software offers design areas for expressing creative ideas. Furthermore, you can see how your t-shirt will look prior to printing, which saves you time and money.
How DeepMind is Reinventing the Robot
To train a robot, though, such huge data sets are unavailable. “This is a problem,” notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What’s more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you’ll have a badly dented robot, if not worse.
There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones. “One of our classic examples was training an agent to play Pong,” says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, “then the performance will—boop!—go off a cliff.” Suddenly it will lose 20 to zero every time.
There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its network’s weights to its data storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.
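The silo strategy described above reduces to a lookup: one set of trained weights per task, swapped in once the task is recognized. A minimal sketch, with placeholder weights and a placeholder "policy":

```python
# Per-task weight siloing: store a separate weight set for each task
# and load the matching set at inference time. Weights and the toy
# policy are placeholders for a real trained network.

class SiloedAgent:
    def __init__(self):
        self.task_weights = {}  # task name -> trained weights

    def store(self, task, weights):
        self.task_weights[task] = weights

    def act(self, task, observation):
        weights = self.task_weights[task]  # recognize task, load its weights
        # placeholder "policy": weighted sum of the observation
        return sum(w * o for w, o in zip(weights, observation))

agent = SiloedAgent()
agent.store("pong", [0.5, -1.0])
agent.store("breakout", [2.0, 0.1])
print(agent.act("pong", [1.0, 1.0]))      # -0.5
print(agent.act("breakout", [1.0, 1.0]))  # 2.1
```

The structure makes the limitations in the next paragraph concrete: the dictionary grows linearly with the number of tasks, and nothing learned under one key ever benefits another.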
But that strategy is limited. For one thing, it’s not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you’d have to train it on every single one of them. And if the environment is unstructured, you won’t even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn’t let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.
Hadsell’s preferred approach is something called “elastic weight consolidation.” The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights.
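The core of elastic weight consolidation is a penalty term added to the new task's loss: each weight gets an importance estimate from the old task, and moving an important weight away from its old value costs quadratically more than moving an unimportant one. A sketch with illustrative numbers (the importance values stand in for the Fisher-information estimates used in practice):

```python
# Elastic weight consolidation penalty:
#   lam/2 * sum_i F_i * (w_i - w*_i)^2
# where w* are the weights after task A and F_i is how important
# weight i was to task A. Values below are illustrative.

def ewc_penalty(weights, anchor_weights, importance, lam=1.0):
    return 0.5 * lam * sum(
        f * (w - wa) ** 2
        for f, w, wa in zip(importance, weights, anchor_weights)
    )

anchor = [1.0, -2.0, 0.5]        # weights after learning task A
importance = [10.0, 0.1, 5.0]    # high value = critical for task A

# Moving an unimportant weight by 2.0 is cheap...
cheap = ewc_penalty([1.0, 0.0, 0.5], anchor, importance)
# ...moving an important weight by the same 2.0 is expensive
costly = ewc_penalty([3.0, -2.0, 0.5], anchor, importance)
print(cheap, costly)  # 0.2 vs 20.0
```

During training on task B, this penalty is simply added to task B's loss, so gradient descent is free to reuse unimportant weights while the "partially frozen" important ones stay near their task-A values.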
AI in Manufacturing: How It's Used and Why It's Important for Future Factories
The fully autonomous factory has always been a provocative vision, much used in speculative fiction. It’s a place that’s nearly unmanned and run entirely by artificial intelligence (AI) systems directing robotic production lines. But this is unlikely to be the way AI will be employed in manufacturing within the practical planning horizon.
The realistic conception of AI in manufacturing looks more like a collection of applications for compact, discrete systems that manage specific manufacturing processes. They will operate more or less autonomously and respond to external events in increasingly intelligent and even humanlike ways, whether the event is a tool wearing out, a system outage, or a fire or natural disaster.