AI

Recent Posts

Big Tech eyes Industrial AI and Robotics

Date:

An overview of Big Tech’s inroads into manufacturing and industrial AI. From bin picking to robotic wire arc additive manufacturing (WAAM), the pace of industrial technology advances continues to pick up as digital transformation takes hold.

Assembly Line

Smart Devices, Smart Manufacturing: Pegatron Taps AI, Digital Twins

Date:

Author: Rick Merritt

Topics: AI, Defect Detection, Visual Inspection

Organizations: NVIDIA, Pegatron

Today, Pegatron uses Cambrian, an AI platform it built for automated inspection, deployed in most of its factories. It maintains hundreds of AI models, trained and running in production on NVIDIA GPUs. Pegatron’s system uses NVIDIA A100 Tensor Core GPUs to deploy AI models up to 50x faster than when it trained them on workstations, cutting weeks of work down to a few hours. Pegatron uses NVIDIA Triton Inference Server, open-source software that helps deploy, run and scale AI models across all types of processors and frameworks.

Taking another step in smarter manufacturing, Pegatron is piloting NVIDIA Omniverse, a platform for developing digital twins. “In my opinion, the greatest impact will come from building a full virtual factory so we can try out things like new ways to route products through the plant,” he said. “When you just build it out without a simulation first, your mistakes are very costly.”

Read more at NVIDIA Blog

Using artificial intelligence to control digital manufacturing

Date:

Topics: Additive Manufacturing, Computer Vision, AI

Organizations: MIT

MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real-time. They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.

The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.
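The closed-loop idea described above can be sketched as a simple proportional controller: a vision system measures the deposited line width, and the controller nudges a printing parameter toward the target. This is an illustrative toy, not MIT's trained controller, and the linear `printed_width` process model is an assumption.

```python
def control_step(target_width, measured_width, flow_rate, gain=0.3):
    """One proportional correction: nudge the flow rate toward the target line width."""
    error = target_width - measured_width
    return flow_rate + gain * error

# Toy process model (an assumption, not MIT's): deposited width scales with flow rate.
def printed_width(flow_rate):
    return 2.0 * flow_rate

flow = 0.3  # initial parameter guess; the target line width is 1.0
for _ in range(20):
    flow = control_step(1.0, printed_width(flow), flow)
final_width = printed_width(flow)
```

MIT's contribution is precisely that the correction policy was learned in simulation by a neural network rather than hand-tuned like the gain here.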

Read more at MIT News

Visual Anomaly Detection: Opportunities and Challenges

Date:

Author: Yuchen Fama

Topics: Defect Detection, AI, Visual Inspection

Organizations: Clarifai

Clarifai is pleased to announce the pre-GA product offering of its PatchCore-based visual anomaly detection model, part of our visual inspection solution package for manufacturing, which also consists of various purpose-built visual detection and segmentation models, custom workflows, and reference application templates.

Users need only a few hundred images of normal examples for training, plus roughly 10 anomalous examples per category for calibration and testing, particularly when the background is homogeneous and the region of interest is tightly focused.
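The core mechanism behind PatchCore-style anomaly detection can be sketched in a few lines: score a sample by its distance to a memory bank built from normal examples, and calibrate a threshold with the handful of anomalous samples. This is a toy illustration, not Clarifai's implementation; the 2-D "features" stand in for deep patch embeddings.

```python
import math

def build_memory_bank(normal_features):
    # Store features of normal samples; PatchCore keeps a coreset of patch features.
    return list(normal_features)

def anomaly_score(feature, memory_bank):
    # Score = Euclidean distance to the nearest normal feature in the bank.
    return min(math.dist(feature, m) for m in memory_bank)

def calibrate_threshold(scores_normal, scores_anomalous):
    # Place the threshold between the worst normal score and the best anomalous score.
    return (max(scores_normal) + min(scores_anomalous)) / 2.0

# Toy 2-D "features": normal samples cluster near the origin.
bank = build_memory_bank([(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)])
normal_scores = [anomaly_score(f, bank) for f in [(0.02, 0.08), (0.09, 0.01)]]
anomalous_scores = [anomaly_score(f, bank) for f in [(1.0, 1.2), (0.9, 1.1)]]
threshold = calibrate_threshold(normal_scores, anomalous_scores)
is_defect = anomaly_score((1.1, 1.0), bank) > threshold
```

Note how the anomalous images are used only for calibration and testing, exactly as the excerpt describes; training needs only normal examples.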

Read more at Assembly Magazine

Yokogawa and DOCOMO Successfully Conduct Test of Remote Control Technology Using 5G, Cloud, and AI

Date:

Topics: Autonomous Production, 5G, Reinforcement Learning, AI

Organizations: Yokogawa, DOCOMO, Nara Institute of Science and Technology

Yokogawa Electric Corporation and NTT DOCOMO, INC. announced today that they have conducted a proof-of-concept test (PoC) of a remote control technology for industrial processing. The PoC test involved the use in a cloud environment of an autonomous control AI, the Factorial Kernel Dynamic Policy Programming (FKDPP) algorithm developed by Yokogawa and the Nara Institute of Science and Technology, and a fifth-generation (5G) mobile communications network provided by DOCOMO. The test, which successfully controlled a simulated plant processing operation, demonstrated that 5G is suitable for the remote control of actual plant processes.

Read more at Yokogawa Press Releases

Decentralized learning and intelligent automation: the key to zero-touch networks?

Date:

Authors: Selim Ickin, Hannes Larsson, Hassam Riaz, Xiaoyu Lan, Caner Kilinc

Topics: AI, Machine Learning, Federated Learning

Decentralized learning and the multi-armed bandit agent… It may sound like the sci-fi version of an old western. But could this dynamic duo hold the key to efficient distributed machine learning – a crucial factor in the realization of zero-touch automated mobile networks? Let’s find out.

Next-generation autonomous mobile networks will be complex ecosystems made up of a massive number of decentralized and intelligent network devices and nodes – network elements that may be both producing and consuming data simultaneously. If we are to realize our goal of fully automated zero-touch networks, new models of training artificial intelligence (AI) models need to be developed to accommodate these complex and diverse ecosystems.
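The multi-armed bandit agent mentioned above is a trial-and-error learner: it balances exploring network configurations against exploiting the one that currently looks best. A minimal epsilon-greedy sketch, generic textbook material rather than Ericsson's agent:

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy multi-armed bandit (illustrative only)."""
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore a random arm
        return max(range(len(self.counts)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean

random.seed(0)
bandit = EpsilonGreedyBandit(n_arms=3)
true_rewards = [0.2, 0.8, 0.5]  # hypothetical per-arm success rates; arm 1 is best
for _ in range(2000):
    arm = bandit.select_arm()
    reward = 1.0 if random.random() < true_rewards[arm] else 0.0
    bandit.update(arm, reward)
best = max(range(3), key=lambda a: bandit.values[a])
```

In the decentralized setting the post explores, many such agents run on separate network nodes and share what they learn, rather than pooling raw data centrally.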

Read more at Ericsson Blog

What’s Cognitive Manufacturing? Why Should It Matter To You?

Date:

Author: Avnish Kumar

Topics: IIoT, AI

Organizations: LivNSense

The whole complex ecosystem of industries requires integration of various data systems. It is not just the sensor data system that needs retrofitting. Because many systems are analogue, multiple interfaces exist across various proprietary and automation systems such as DCS, SCADA, Historian, and PLC systems. With multiple protocols in play, this ecosystem can be simplified through customisation: bringing data from all the heterogeneous processes into a big data platform, understanding the business processes and gaps, and applying predictive and prescriptive analytics.

Read more at Electronics B2B

Ford Taps Non-IT Professionals to Broaden Its AI Expertise

Date:

Author: John McCormick

Topics: AI

Organizations: Ford

Ford hopes that opening up AI development to a broader range of employees can significantly reduce the average time it takes to develop many applications, in some cases from months to weeks and even days.

Ford’s AI builders are working on an AI-optimization model that will help the company decide which vehicles should be shipped to which European countries so that car inventory is optimized to maximize sales, according to Ford. The model takes into account thousands of variables, including the carbon-dioxide emissions of each vehicle type, each country’s emission standards, the number of miles citizens in a particular country drive, as well as the adoption of electric vehicles and the size of vehicles preferred in each country. Ford said the number of variables being analyzed requires the use of AI, which is designed to handle large data sets.

Read more at Wall Street Journal (Paid)

Real-World ML with Coral: Manufacturing

Date:

Author: Michael Brooks

Topics: edge computing, AI, machine learning, computer vision, convolutional neural network, Tensorflow, worker safety

Organizations: Coral

For over 3 years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high-performance products. We’ve released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region.
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.
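The "bounding box collisions" step in the Worker Safety example reduces to an axis-aligned overlap test between each detected person box and the unsafe region. A minimal sketch (the coordinates are hypothetical):

```python
def boxes_collide(box_a, box_b):
    """Axis-aligned bounding-box overlap test; boxes are (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Boxes overlap iff they overlap on both the x and y axes.
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# An unsafe zone and two detected-person boxes (pixel coordinates, made up).
unsafe_zone = (100, 100, 300, 300)
person_inside = (250, 250, 400, 400)   # overlaps the zone, should alert
person_outside = (500, 500, 600, 600)  # clear of the zone

alerts = [boxes_collide(unsafe_zone, p) for p in (person_inside, person_outside)]
```

In the Coral pipeline, the person boxes would come from the SSDLite MobileDet detector; this check is just the cheap post-processing step on top.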

Read more at TensorFlow Blog

Simplify Deep Learning Systems with Optimized Machine Vision Lighting

Date:

Author: Steve Kinney

Topics: AI, machine vision

Organizations: Smart Vision Lights

Deep learning cannot compensate for or replace quality lighting. This experiment’s results would hold true over a wide variety of machine vision applications. Poor lighting configurations will result in poor feature extraction and increased defect detection confusion (false positives).

Several rigorous studies show that classification accuracy decreases with image-quality distortions such as blur and noise. In general, while deep neural networks perform better than or on par with humans on quality images, a network’s performance is much lower than a human’s when using distorted images. Lighting improves input data, which greatly increases the ability of deep neural network systems to compare and classify images for machine vision applications. Smart lighting — geometry, pattern, wavelength, filters, and more — will continue to drive and produce the best results for machine vision applications with traditional or deep learning systems.

Read more at Quality Magazine

Scientists Set to Use Social Media AI Technology to Optimize Parts for 3D Printing

Date:

Author: Kubi Sertoglu

Topics: 3D Printing, additive manufacturing, AI, genetic algorithm

Organizations: Department of Energy, Argonne National Laboratory

“My idea was that a material’s structure is no different than a 3D image,” he explains. ​“It makes sense that the 3D version of this neural network will do a good job of recognizing the structure’s properties — just like a neural network learns that an image is a cat or something else.”

To see if his idea would work, Messner designed a defined 3D geometry and used conventional physics-based simulations to create a set of two million data points. Each of the data points linked his geometry to ‘desired’ values of density and stiffness. Then, he fed the data points into a neural network and trained it to look for the desired properties.

Finally, Messner used a genetic algorithm – an iterative, optimization-based class of AI – together with the trained neural network to determine the structure that would result in the properties he sought. Impressively, his AI approach found the correct structure 2,760x faster than the conventional physics simulation.
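The speedup reported above comes from evaluating a cheap trained surrogate inside the optimization loop instead of running the physics simulation for every candidate. A toy sketch of that pattern, with a one-parameter "structure", a stand-in surrogate, and a minimal mutation-only genetic search; none of this is Messner's actual code:

```python
import random

def expensive_simulation(x):
    # Stand-in for a physics simulation: squared error of a property vs. its target.
    return (x - 0.7) ** 2

# A trained neural network would approximate the simulation from sampled data;
# here we fake a perfect surrogate to show where it plugs into the search loop.
def surrogate(x):
    return (x - 0.7) ** 2

def genetic_search(fitness, generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                # selection: lowest error first
        parents = pop[: pop_size // 2]       # elitism: keep the better half
        children = []
        for p in parents:
            child = p + rng.gauss(0, 0.05)   # mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return min(pop, key=fitness)

# Every fitness call in the inner loop hits the cheap surrogate, not the simulation.
best = genetic_search(surrogate)
```

The headline 2,760x factor is the ratio between simulation cost and surrogate cost; the search logic itself is unchanged.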

Read more at 3D Printing Industry

If AI Is So Awesome, Why Aren’t You Using It?

Date:

Author: Jeff Winter

Topics: AI

Organizations: Grantek

With all these universal applications and clearly understood benefits, the writing appears to be on the wall: AI is the wave of the future, and if you are not using or planning on using AI soon, you will be history! Software, platforms, and technologies are already out there, yet adoption appears to be slow. Financial justification and benefits analysis seem to be no-brainers, yet no one is rushing to make improvements. Why is that?

Read more at Grantek

Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects

Date:

Author: Scott Martin

Topics: AI, edge computing, computer vision, recycling

Vertical: Plastics and Rubber

Organizations: NVIDIA, AMP Robotics

People worldwide produce 2 billion tons of waste a year, with 37 percent going to landfill, according to the World Bank.

“Sorting by hand on conveyor belts is dirty and dangerous, and the whole place smells like rotting food. People in the recycling industry told me that robots were absolutely needed,” said Horowitz, the company’s CEO.

His startup, AMP Robotics, can double sorting output and increase purity for bales of materials. It can also sort municipal waste, electronic waste, and construction and demolition materials.

Read more at NVIDIA Blog

Tilling AI: Startup Digs into Autonomous Electric Tractors for Organics

Date:

Author: Scott Martin

Topics: AI, machine vision

Vertical: Agriculture

Organizations: Ztractor, NVIDIA

Ztractor offers tractors that can be configured to work on 135 different types of crops. They rely on the NVIDIA Jetson edge AI platform for computer vision tasks to help farms improve plant conditions, increase crop yields and achieve higher efficiency.

Read more at NVIDIA Blog

Toward Generalized Sim-to-Real Transfer for Robot Learning

Date:

Authors: Daniel Ho, Kanishka Rao

Topics: reinforcement learning, AI, robotics, imitation learning, generative adversarial networks

Organizations: Google

A limitation for their use in sim-to-real transfer, however, is that because GANs translate images at the pixel-level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.

To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies — so that they do not arbitrarily modify visual features that are specifically necessary for robot task learning — and thus bridge the visual discrepancy between sim and real.

Read more at Google AI Blog

The realities of developing embedded neural networks

Date:

Author: Tony King-Smith

Topics: edge computing, machine learning, AI

Organizations: AImotive

With any embedded software destined for deployment in volume production, an enormous amount of effort goes into the code once the implementation of its core functionality has been completed and verified. This optimization phase is all about minimizing memory, CPU and other resources needed so that as much as possible of the software functionality is preserved, while the resources needed to execute it are reduced to the absolute minimum possible.

This process of creating embedded software from lab-based algorithms enables production engineers to cost-engineer software functionality into a mass-production ready form, requiring far cheaper, less capable chips and hardware than the massive compute datacenter used to develop it. However, it usually requires the functionality to be frozen from the beginning, with code modifications only done to improve the way the algorithms themselves are executed. For most software, that is fine: indeed, it enables a rigorous verification methodology to be used to ensure the embedding process retains all the functionality needed.

However, when embedding NN-based AI algorithms, that can be a major problem. Why? Because by freezing the functionality from the beginning, you are removing one of the main ways in which the execution can be optimized.

Read more at Embedded

AI Vision for Monitoring Applications in Manufacturing and Industrial Environments

Date:

Topics: AI, quality assurance, machine vision, worker safety

Organizations: ADLINK

In traditional industrial and manufacturing environments, monitoring worker safety, enhancing operator efficiency, and improving quality assurance were physical tasks. Today, AI-enabled machine vision technologies replace many of these inefficient, labor-intensive operations for greater reliability, safety, and efficiency. This article explores how, by deploying AI smart cameras, further performance improvements are possible since the data used to empower AI machine vision comes from the camera itself.

Read more at Electronics Media

FPGA comes back into its own as edge computing and AI catch fire

Date:

Topics: field-programmable gate array, edge computing, AI

Organizations: Efinix

The niche of edge computing burdens devices with the need for extremely low power operation, tight form factors, agility in the face of changing data sets, and the ability to evolve with changing AI capabilities via remote upgradeability — all at a reasonable price point. This is, in fact, the natural domain of the FPGA, with an inherent excellence in accelerating compute-intensive tasks in a flexible, hardware-customizable platform. However, many of the available off-the-shelf FPGAs are geared toward data center applications, in which power and cost profiles justify the bloat in FPGA technologies.

Read more at EE Times

Tools Move up the Value Chain to Take the Mystery Out of Vision AI

Date:

Author: Nitin Dahad

Topics: AI, machine vision, OpenVINO

Organizations: Intel, Xilinx

Intel DevCloud for the Edge and Edge Impulse offer cloud-based platforms that take away most of the pain points with easy access to the latest tools and software. Meanwhile, Xilinx and others have started offering complete systems-on-module with production-ready applications that can be deployed with tools at a higher level of abstraction, removing the need for some of the more specialist skills.

Read more at Embedded

How the USPS Is Finding Lost Packages More Quickly Using AI Technology from Nvidia

Date:

Author: Todd R. Weiss

Topics: AI, machine vision

Organizations: USPS, NVIDIA, Accenture

In one of its latest technology innovations, the USPS got AI help from Nvidia to fix a problem that has long confounded existing processes – how to better track packages that get lost within the USPS system so they can be found in hours instead of in several days. In the past, it took eight to 10 people several days to locate and recover lost packages within USPS facilities. Now it is done by one or two people in a couple hours using AI.

Read more at EnterpriseAI

Influence estimation for generative adversarial networks

Date:

Author: Naoyuki Terashita

Topics: AI, generative adversarial networks

Organizations: Hitachi

The expanding applications [1, 2] of generative adversarial networks (GANs) make improving the generative performance of models increasingly crucial. An effective approach to improve machine learning models is to identify training instances that “harm” the model’s performance. Recent studies [3, 4] replaced traditional manual screening of a dataset with “influence estimation.” They evaluated the harmfulness of a training instance based on how the performance is expected to change when the instance is removed from the dataset. An example of a harmful instance is a wrongly labeled instance (e.g., a “dog” image labeled as a “cat”). Influence estimation judges this “cat labeled dog image” as a harmful instance when the removal of the “cat labeled dog image” is predicted to improve the performance (Figure 1).
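Leave-one-out influence can be made concrete with a tiny classifier: retrain without each training instance and record how validation accuracy changes. The mislabeled point is the one whose removal improves accuracy. This is a toy sketch of the underlying definition, not Hitachi's estimator, which approximates the effect of removal without actually retraining:

```python
def centroid_classifier(train):
    # Mean feature per class; predict the class with the nearer centroid.
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(model, val):
    return sum(model(x) == y for x, y in val) / len(val)

# 1-D toy data; the last training point is a "dog-like" feature mislabeled "cat".
train = [(0.0, "cat"), (0.2, "cat"), (1.0, "dog"), (1.2, "dog"), (1.1, "cat")]
val = [(0.1, "cat"), (0.7, "dog"), (1.05, "dog"), (0.15, "cat")]

base = accuracy(centroid_classifier(train), val)
influences = []
for i in range(len(train)):
    loo = train[:i] + train[i + 1:]  # leave instance i out and retrain
    influences.append(accuracy(centroid_classifier(loo), val) - base)

# The most harmful instance is the one whose removal helps validation accuracy most.
most_harmful = max(range(len(train)), key=lambda i: influences[i])
```

Here removing the mislabeled point lifts validation accuracy, so it gets the largest (positive) influence, mirroring the "cat labeled dog image" example in the excerpt.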

Read more at Hitachi Industrial AI Blog

John Deere and Audi Apply Intel’s AI Technology

Date:

Author: David Greenfield

Topics: AI, quality assurance, robot welding, machine vision

Vertical: Agriculture, Automotive

Organizations: John Deere, Audi, Intel

Identifying defects in welds is a common quality control process in manufacturing. To make these inspections more accurate, John Deere is applying computer vision, coupled with Intel’s AI technology, to automatically spot common defects in the automated welding process used in its manufacturing facilities.

At Audi, automated welding applications range from spot welding to riveting. The widespread automation in Audi factories is part of the company’s goal of creating Industrie 4.0-level smart factories. A key aspect of this goal involves Audi’s recognition that creating customized hardware and software to handle individual use cases is not preferable. Instead, the company focuses on developing scalable and flexible platforms that allow them to more broadly apply advanced digital capabilities such as data analytics, machine learning, and edge computing.

Read more at AutomationWorld

Robotic Flexibility: How Today’s Autonomous Systems Can Be Adapted to Support Changing Operational Needs

Date:

Author: Sara Pearson Specter

Topics: robotics, AI

Vertical: Machinery

Organizations: Obeta, Covariant, KNAPP

While robots are ideally suited to repetitive tasks, until now they lacked the intelligence to identify and handle tens of thousands of constantly changing products in a typical dynamic warehouse operation. That made applying robots to picking applications somewhat limited. Therefore, when German electrical supply wholesaler Obeta sought to install a new automated storage system from MHI member KNAPP in its new Berlin warehouse as a means to address a regional labor shortage made worse by COVID-19, the company specified a robotic picking system powered by onboard artificial intelligence (AI).

“The Covariant Brain is a universal AI that allows robots to see, reason and act in the world around them, completing tasks too complex and varied for traditional programmed robots. Covariant’s software enables Obeta’s Pick-It-Easy Robot to adapt to new tasks on its own through trial and error, so it can handle almost any object,” explained Peter Chen, co-founder and CEO of MHI member Covariant.ai.

Read more at MHI Solutions Magazine

Ford's Ever-Smarter Robots Are Speeding Up the Assembly Line

Date:

Author: Will Knight

Topics: AI, machine learning, robotics

Vertical: Automotive

Organizations: Ford, Symbio Robotics

At a Ford Transmission Plant in Livonia, Michigan, the station where robots help assemble torque converters now includes a system that uses AI to learn from previous attempts how to wiggle the pieces into place most efficiently. Inside a large safety cage, robot arms wheel around grasping circular pieces of metal, each about the diameter of a dinner plate, from a conveyor and slot them together.

The technology allows this part of the assembly line to run 15 percent faster, a significant improvement in automotive manufacturing where thin profit margins depend heavily on manufacturing efficiencies.

Read more at WIRED

Machine learning optimizes real-time inspection of instant noodle packaging

Date:

Topics: AI, machine vision, quality assurance

Vertical: Food

Organizations: Beckhoff Automation

During the production process there are various factors that can potentially lead to the seasoning sachets slipping between two noodle blocks and being cut open by the cutting machine, or being packed separately in two packets side by side. Such defective products would result in consumer complaints and damage to the company’s reputation, so delivery of such products to dealers should be reduced as far as possible. Since the machine type upgraded by Tianjin FengYu already ran at a very low error rate, another aspect of quality control becomes critical: the system must reliably sort out only the defective products, and never the defect-free ones.

Read more at Beckhoff Blog

Multi-Task Robotic Reinforcement Learning at Scale

Date:

Authors: Karol Hausman, Yevgen Chebotar

Topics: reinforcement learning, robotics, AI, machine learning

Organizations: Google

For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method in which the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.

Read more at Google AI Blog

Intelligent edge management: why AI and ML are key players

Date:

Authors: Fetahi Wuhib, Mbarka Soualhia, Carla Mouradian, Wubin Li

Topics: AI, machine learning, edge computing, anomaly detection

Organizations: Ericsson

What will the future of network edge management look like? We explain how artificial intelligence and machine learning technologies are crucial for intelligent edge computing and the management of future-proof networks. What’s required, and what are the building blocks needed to make it happen?

Read more at Ericsson

Intel Accelerates AI for Industrial Applications

Date:

Author: Brian McCarson

Topics: AI, IIoT, PyTorch, OpenVINO, edge computing

Organizations: Intel

The human eye can correct for different lighting conditions easily. However, images collected by a camera can naturally vary in intensity and contrast as background lighting varies. We’ve seen the scale challenges factories face when trying to deploy AI for defect detection with exactly the same hardware, software and algorithm on different machines on the factory floor. Sometimes it took months for factory managers and data scientists to find out why they were getting great results on one machine, with high accuracy and low false positive and false negative rates, while on the next machine over the AI application would crash.

Read more at Intel

Tractor Maker John Deere Using AI on Assembly Lines to Discover and Fix Hidden Defective Welds

Date:

Author: Todd R. Weiss

Topics: AI, quality assurance, machine vision, robot welding, arc welding

Vertical: Agriculture

Organizations: John Deere, Intel

John Deere performs gas metal arc welding at 52 factories where its machines are built around the world, and it has proven difficult to find defects in automated welds using manual inspections, according to the company.

That’s where the successful pilot program between Intel and John Deere has been making a difference, using AI and computer vision from Intel to “see” welding issues and get things back on track to keep John Deere’s pilot assembly line humming along.

Read more at EnterpriseAI

Amazon’s robot arms break ground in safety, technology

Date:

Author: Alan S. Brown

Topics: AI, machine learning, robotics, palletizer, robotic arm, worker safety

Organizations: Amazon

Robin, one of the most complex stationary robot arm systems Amazon has ever built, brings many core technologies to new levels and acts as a glimpse into the possibilities of combining vision, package manipulation and machine learning, said Will Harris, principal product manager of the Robin program.

Those technologies can be seen when Robin goes to work. As soft mailers and boxes move down the conveyor line, Robin must break the jumble down into individual items. This is called image segmentation. People do it automatically, but for a long time, robots only saw a solid blob of pixels.

Read more at Amazon Science

AI In Inspection, Metrology, And Test

Date:

Authors: Susan Rambo, Ed Sperling

Topics: AI, machine learning, quality assurance, metrology, nondestructive test

Vertical: Semiconductor

Organizations: CyberOptics, Lam Research, Hitachi, FormFactor, NuFlare, Advantest, PDF Solutions, eBeam Initiative, KLA, proteanTecs, Fraunhofer IIS

“The human eye can see things that no amount of machine learning can,” said Subodh Kulkarni, CEO of CyberOptics. “That’s where some of the sophistication is starting to happen now. Our current systems use a primitive kind of AI technology. Once you look at the image, you can see a problem. And our AI machine doesn’t see that. But then you go to the deep learning kind of algorithms, where you have very serious Ph.D.-level people programming one algorithm for a week, and they can detect all those things. But it takes them a week to program those things, which today is not practical.”

That’s beginning to change. “We’re seeing faster deep-learning algorithms that can be more easily programmed,” Kulkarni said. “But the defects also are getting harder to catch by a machine, so there is still a gap. The biggest bang for the buck is not going to come from improving cameras or projectors or any of the equipment that we use to generate optical images. It’s going to be interpreting optical images.”

Read more at Semiconductor Engineering

Harvesting AI: Startup’s Weed Recognition for Herbicides Grows Yield for Farmers

Date:

Author: Scott Brown

Topics: AI, machine vision

Vertical: Agriculture

Organizations: Bilberry, NVIDIA

In 2016, the former dorm-mates at École Nationale Supérieure d’Arts et Métiers, in Paris, founded Bilberry. The company today develops weed recognition powered by the NVIDIA Jetson edge AI platform for precision application of herbicides at corn and wheat farms, offering as much as a 92 percent reduction in herbicide usage.

Driven by advances in AI and pressures on farmers to reduce their use of herbicides, weed recognition is starting to see its day in the sun.

Read more at NVIDIA

AI tool locates and classifies defects in wind turbine blades

Date:

Topics: AI, defect detection, quality assurance

Vertical: Electrical Equipment

Organizations: Railston & Co, Loughborough University

Using image enhancement, augmentation methods and the Mask R-CNN deep learning algorithm, the system analyses images, highlights defect areas and labels them.

After developing the system, the researchers tested it by inputting 223 new images. The proposed tool is said to have achieved around 85 per cent test accuracy for the task of recognising and classifying wind turbine blade defects.

Read more at The Engineer

Adversarial training reduces safety of neural networks in robots

Date:

Author: @BenDee983

Topics: AI, robotics, machine learning

A more fundamental problem, also confirmed by Lechner and his coauthors, is the lack of causality in machine learning systems. As long as neural networks focus on learning superficial statistical patterns in data, they will remain vulnerable to different forms of adversarial attacks. Learning causal representations might be the key to protecting neural networks against adversarial attacks. But learning causal representations itself is a major challenge and scientists are still trying to figure out how to solve it.

Read more at VentureBeat

Using tactile-based reinforcement learning for insertion tasks

Date:

Authors: Alan Sullivan, Diego Romeres, Radu Corcodel

Topics: AI, cobot, reinforcement learning, robotics

Organizations: MIT, Mitsubishi Electric

A paper entitled “Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry” was submitted by MERL and MIT researchers to the IEEE International Conference on Robotics and Automation (ICRA). In it, reinforcement learning was used to enable a robot arm, equipped with a parallel-jaw gripper carrying tactile sensing arrays on both fingers, to insert differently shaped novel objects into a corresponding hole, achieving an overall average success rate of 85% within 3-4 tries.

Read more at The Robot Report

Improving advanced manufacturing practices through AI's Bayesian network

Date:

Topics: AI, bayesian network

Organizations: 4M Partners

With experience, we learn awareness of events and conditions in our plant environment. As our experience matures, we learn the possibility of a given set of events and conditions resulting in certain outcomes. Computational models can perform the same service by capturing events and conditions, then calculating the probability of certain consequences. If the probability of an anticipated outcome is unacceptable, our computers can inform us of a condition needing attention or address the situation themselves. This, along with collecting meaningful volumes of relevant data, is the core of AI.

One mathematical model employed in AI is the Bayesian network (BN), which is a graph that defines the relationships between conditions or events and their possible consequences. The conditions or events are random variables that are identified on a BN as a node.
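A two-node network makes the idea concrete: a parent condition node ("overheat") and a child outcome node ("failure") linked by a conditional probability table. The probabilities below are hypothetical:

```python
# Two-node Bayesian network: Overheat -> Failure, with hypothetical probabilities.
p_overheat = 0.1
p_failure_given = {True: 0.7, False: 0.05}  # CPT: P(failure | overheat state)

# Predictive inference: marginal P(failure), summing over the parent's states.
p_failure = (p_overheat * p_failure_given[True]
             + (1 - p_overheat) * p_failure_given[False])

# Diagnostic inference with Bayes' rule: P(overheat | failure observed).
p_overheat_given_failure = p_overheat * p_failure_given[True] / p_failure
```

This is the "probability of certain consequences" calculation the article describes: the forward pass predicts an outcome, and Bayes' rule runs the network backward to diagnose the likely condition once the outcome is observed.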

Read more at The Fabricator

Using AI to Find Essential Battery Materials

Date:

Author: @mariagallucci

Topics: AI, materials science

Vertical: Mining

Organizations: KoBold Metals, IBM, IEEE

KoBold’s AI-driven approach begins with its data platform, which stores all available forms of information about a particular area, including soil samples, satellite-based hyperspectral imaging, and century-old handwritten drilling reports. The company then applies machine learning methods to make predictions about the location of compositional anomalies—that is, unusually high concentrations of ore bodies in the Earth’s subsurface.

Read more at IEEE Spectrum

Evolutionary Algorithms: How Natural Selection Beats Human Design

Date:

Author: @OzdDerya

Topics: AI, generative design

Vertical: Aerospace

Organizations: NASA

An evolutionary algorithm, which is a subset of evolutionary computation, can be defined as a “population-based metaheuristic optimization algorithm.” These nature-inspired algorithms evolve populations of experimental solutions through numerous generations by using the basic principles of evolutionary biology such as reproduction, mutation, recombination, and selection.
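The four principles named above — reproduction, mutation, recombination, and selection — can be sketched in a few lines. The toy objective and all parameters here are illustrative:

```python
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximize -(x - 3)^2, whose optimum is x = 3
    return -(x - 3.0) ** 2

def evolve(pop_size=30, generations=60):
    # Initial population of candidate solutions
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction: recombination plus mutation yields the next generation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0            # recombination (crossover)
            child += random.gauss(0, 0.1)    # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 2))  # converges near the optimum x = 3
```

Generative-design tools apply the same loop to geometry, evaluating each candidate part against structural objectives instead of a scalar function.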

Read more at Interesting Engineering

Introducing Amazon SageMaker Reinforcement Learning Components for open-source Kubeflow pipelines

Date:

Authors: Alex Chung, Kyle Saltmarsh, Leonard O'Sullivan, Matthew Rose, Nicholas Therkelsen-Terry, Nicholas Thomson, Ragha Prasad, Sahika Genc

Topics: AI, machine learning, robotics

Organizations: AWS, Max Kelsen, Universal Robots, Woodside Energy

Woodside Energy uses AWS RoboMaker with Amazon SageMaker Kubeflow operators to train, tune, and deploy reinforcement learning agents to their robots to perform manipulation tasks that are repetitive or dangerous.

Read more at AWS Blog

Leveraging AI and Statistical Methods to Improve Flame Spray Pyrolysis

Date:

Author: Stephen J. Mraz

Topics: AI, machine learning, materials science

Vertical: Chemical

Organizations: Argonne National Laboratory

Flame spray pyrolysis has long been used to make small particles that can be used as paint pigments. Now, researchers at Argonne National Laboratory are refining the process to make smaller, nano-sized particles of various materials that can make nano-powders for low-cobalt battery cathodes, solid state electrolytes and platinum/titanium dioxide catalysts for turning biomass into fuel.

Read more at Machine Design

Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all

Date:

Author: @TiernanRayTech

Topics: AI, machine learning, robotics, reinforcement learning

Organizations: Google

With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.

Read more at ZDNet

Evolution of control systems with artificial intelligence

Date:

Authors: Kence Anderson, Winston Jenks, Prabu Parthasarathy

Topics: AI, industrial control system

Organizations: Microsoft, John Wood Group

Control systems have continuously evolved over decades, and artificial intelligence (AI) technologies are helping advance the next generation of some control systems.

The proportional-integral-derivative (PID) controller can be interpreted as a layering of capabilities: the proportional term reacts to the current error signal, the integral term homes in on the setpoint and the derivative term can minimize overshoot.
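That layering of terms can be sketched as a minimal discrete PID controller. The gains, time step, and first-order plant model below are illustrative:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # integral term removes steady-state offset
        derivative = (error - self.prev_error) / self.dt   # derivative term damps overshoot
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order process toward setpoint 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
y = 0.0
for _ in range(500):
    u = pid.update(1.0, y)
    y += (u - y) * 0.1   # first-order plant: dy/dt = u - y
print(round(y, 3))  # settles at ~1.0
```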

Although the controls ecosystem may present a complex web of interrelated technologies, it can also be simplified by viewing it as ever-evolving branches of a family tree. Each control system technology offers its own characteristics not available in prior technologies. For example, feed forward improves PID control by predicting controller output, and then uses the predictions to separate disturbance errors from noise occurrences. Model predictive control (MPC) adds further capabilities to this by layering predictions of future control action results and controlling multiple correlated inputs and outputs. The latest evolution of control strategies is the adoption of AI technologies to develop industrial controls.

Read more at Control Engineering

Rearranging the Visual World

Date:

Authors: Andy Zeng, Pete Florence

Topics: AI, machine learning, robotics

Organizations: Google

Transporter Nets use a novel approach to 3D spatial understanding that avoids reliance on object-centric representations, making them general for vision-based manipulation but far more sample-efficient than benchmarked end-to-end alternatives. As a consequence, they are fast and practical to train on real robots. We are also releasing an accompanying open-source implementation of Transporter Nets together with Ravens, our new simulated benchmark suite of ten vision-based manipulation tasks.

Read more at Google AI Blog

Artificial Intelligence: Driving Digital Innovation and Industry 4.0

Date:

Author: @ralph_ohr

Topics: AI, machine learning

Organizations: Siemens

Intelligent AI solutions can analyze high volumes of data generated by a factory to identify trends and patterns which can then be used to make manufacturing processes more efficient and reduce their energy consumption. Employing Digital Twin-enabled representations of a product and the associated process, AI is able to recognize whether the workpiece being manufactured meets quality requirements. In this way, plants constantly adapt to new circumstances and undergo optimization with no need for operator input. New technologies are emerging in this application area, such as Reinforcement Learning – a technique that has not yet been deployed at broad scale. It can be used to automatically ascertain correlations between production parameters, product quality and process performance by learning through ‘trial-and-error’ – and thereby dynamically tune the parameter values to optimize the overall process.
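The trial-and-error tuning idea can be sketched as a simple epsilon-greedy bandit over candidate parameter settings. The setting names, yield values, and the simulated plant are all hypothetical; real deployments involve far richer state and reward models:

```python
import random

random.seed(1)

# Hypothetical: three candidate machine-parameter settings; the true mean
# process yield of each is unknown to the learner.
true_yield = {"low": 0.70, "medium": 0.85, "high": 0.78}

estimates = {k: 0.0 for k in true_yield}   # learned value of each setting
counts = {k: 0 for k in true_yield}
epsilon = 0.1                               # exploration rate

def run_trial(setting):
    # Simulated plant: noisy yield measurement around the true mean
    return true_yield[setting] + random.gauss(0, 0.05)

for _ in range(2000):
    if random.random() < epsilon:
        setting = random.choice(list(true_yield))     # explore a random setting
    else:
        setting = max(estimates, key=estimates.get)   # exploit the best so far
    reward = run_trial(setting)
    counts[setting] += 1
    # Incremental average keeps a running estimate of each setting's yield
    estimates[setting] += (reward - estimates[setting]) / counts[setting]

best_setting = max(estimates, key=estimates.get)
print(best_setting)  # → "medium"
```

The learner converges on the highest-yield setting purely by interacting with the (simulated) process, without a prior model of how parameters map to quality.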

Read more at Siemens Ingenuity

Edge-Inference Architectures Proliferate

Date:

Author: Bryon Moyer

Topics: AI, machine learning, edge computing

Vertical: Semiconductor

Organizations: Cadence, Hailo, Google, Flex Logix, BrainChip, Synopsys, GrAI Matter, Deep Vision, Maxim Integrated

What makes one AI system better than another depends on a lot of different factors, including some that aren’t entirely clear.

The new offerings exhibit a wide range of structure, technology, and optimization goals. All must be gentle on power, but some target wired devices while others target battery-powered devices, giving different power/performance targets. While no single architecture is expected to solve every problem, the industry is in a phase of proliferation, not consolidation. It will be a while before the dust settles on the preferred architectures.

Read more at Semiconductor Engineering

Pushing The Frontiers Of Manufacturing AI At Seagate

Date:

Author: Tom Davenport

Topics: AI, machine learning, predictive maintenance, quality assurance

Vertical: Computer and Electronic

Organizations: Seagate

Big data, analytics and AI are widely used in industries like financial services and e-commerce, but are less likely to be found in manufacturing companies. With some exceptions like predictive maintenance, few manufacturing firms have marshaled the amounts of data and analytical talent to aggressively apply analytics and AI to key processes.

Seagate Technology, an over $10B manufacturer of data storage and management solutions, is a prominent counter-example to this trend. It has massive amounts of sensor data in its factories and has been using it extensively over the last five years to ensure and improve the quality and efficiency of its manufacturing processes.

Read more at Forbes

Stanford researchers propose AI that figures out how to use real-world objects

Date:

Author: @Kyle_L_Wiggers

Topics: AI

Organizations: Stanford

One longstanding goal of AI research is to allow robots to meaningfully interact with real-world environments. In a recent paper, researchers at Stanford and Facebook took a step toward this by extracting information related to actions like pushing or pulling objects with movable parts and using it to train an AI model. For example, given a drawer, their model can predict that applying a pulling force on the handle would open the drawer.

Read more at VentureBeat

Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey

Date:

Authors: Nikolas Zolas, Zachary Kroff, Erik Brynjolfsson, Kristina McElheran, David N. Beede, Cathy Buffington, Nathan Goldschlag, Lucia Foster, Emin Dinlersoz

Topics: AI, augmented reality, cloud computing, machine learning, Radio-frequency identification, robotics

While robots are usually singled out as a key technology in studies of automation, the overall diffusion of robotics use and testing is very low across firms in the U.S. The use rate is only 1.3% and the testing rate is 0.3%. These levels correspond relatively closely with patterns found in the robotics expenditure question in the 2018 ASM. Robots are primarily concentrated in large, manufacturing firms. The distribution of robots among firms is highly skewed, and the skewness in favor of larger firms can have a disproportionate effect on the economy that is otherwise not obvious from the relatively low overall diffusion rate of robots. The least-used technologies are RFID (1.1%), Augmented Reality (0.8%), and Automated Vehicles (0.8%). Looking at the pairwise adoption of these technologies in Table 14, we find that use of Machine Learning and Machine Vision are most coincident. We find that use of Automated Guided Vehicles is closely associated with use of Augmented Reality, RFID, and Machine Vision.

Read more at National Bureau of Economic Research

The Amazing Ways The Ford Motor Company Uses Artificial Intelligence And Machine Learning

Date:

Author: Bernard Marr

Topics: AI

Organizations: Ford

The Ford research lab has conducted research on computational intelligence for more than 20 years. About 15 years ago the company introduced an innovative misfire detection system—one of the first large-scale industrial applications of neural networks. Ford uses artificial intelligence to automate quality assurance as well; AI can detect wrinkles in car seats. In addition, neural networks help support Ford’s supply chain through inventory and resource management.

Read more at Forbes