Machine Learning (ML)

Assembly Line

Improving Yield With Machine Learning

Date:

Author: Laura Peters

Topics: Machine Learning, Convolutional Neural Network, ResNet

Vertical: Semiconductor

Organizations: KLA, Synopsys, CyberOptics, Macronix

Machine learning is becoming increasingly valuable in semiconductor manufacturing, where it is being used to improve yield and throughput.

Synopsys engineers recently found that a decision-tree deep learning method can classify 98% of defects and features with 60X faster retraining than traditional CNNs. The decision tree uses 8 CNNs and ResNet to automatically classify 12 defect types in images from SEM and optical tools.
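
As an illustration of the coarse-to-fine idea (not Synopsys's actual architecture), the sketch below routes an image through a small router CNN to a coarse defect group, then lets a per-group specialist CNN pick the final class. Group and class counts are hypothetical; one way such a structure can cut retraining time is that only the affected specialist needs retraining.

```python
# Illustrative coarse-to-fine defect classifier: a router CNN assigns an image
# to a coarse defect group, then a specialist CNN for that group picks the
# final class. Group/class counts are hypothetical, not Synopsys's design.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

router = SmallCNN(num_classes=3)                           # 3 coarse groups
specialists = [SmallCNN(num_classes=4) for _ in range(3)]  # 3 x 4 = 12 types

def classify(image):
    """Route a single-channel SEM/optical image through the tree."""
    with torch.no_grad():
        group = router(image).argmax(dim=1).item()
        fine = specialists[group](image).argmax(dim=1).item()
    return group, fine

dummy = torch.randn(1, 1, 64, 64)   # stand-in for a 64x64 grayscale image
print(classify(dummy))
```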

Macronix engineers showed how machine learning can expedite new etch process development in 3D NAND devices. Two parameters are particularly important in optimizing the deep trench slit etch — bottom CD and depth of polysilicon etch recess, also known as the etch stop.

KLA engineers, led by Cheng Hung Wu, optimized the use of a high landing energy e-beam inspection tool to capture defects buried as deep as 6µm in a 96-layer ONON stacked structure following deep trench etch. The e-beam tool can detect defects that optical inspectors cannot, but only if operated with high landing energy to penetrate deep structures. With this process, KLA was looking to develop an automated detection and classification system for deep trench defects.

Read more at Semiconductor Engineering

AI-Powered Verification

Date:

Topics: Machine Learning

Vertical: Semiconductor

Organizations: Agnisys, Cadence

“We see AI as a disruptive technology that will in the long run eliminate, and in the near term reduce, the need for verification,” says Anupam Bakshi, CEO and founder of Agnisys. “We have had some early successes in using machine learning to read user specifications in natural language and directly convert them into SystemVerilog Assertions (SVA), UVM testbench code, and C/C++ embedded code for test and verification.”

There is nothing worse than spending time and resources and not getting the desired result, or having it take longer than necessary. “In formal, we have multiple engines, different algorithms that are working on solving any given property at any given time,” says Pete Hardee, director for product management at Cadence. “In effect, there is an engine race going on. We track that race and see for each property which engine is working. We use reinforcement learning to set the engine parameters in terms of which engines I’m going to use and how long to run those to get better convergence on the properties that didn’t converge the first time I ran it.”

Read more at Semiconductor Engineering

Ericsson’s next-gen AI-driven network dimensioning solution

Date:

Authors: Marcial Gutierrez, Sleeba Paul Puthenpurakel, Shrihari Vasudevan

Topics: Machine Learning

Organizations: Ericsson

Resource requirement estimation, often referred to as dimensioning, is a crucial activity in the telecommunications industry. Network dimensioning is an integral part of the Ericsson sales process when engaging with a prospective customer, and the accuracy of its estimates is critical.

From an AI/ML perspective, the telco dimensioning problem can be framed as a regression problem. The proposed solution is Bayesian regression, which proved more robust to multi-collinearity among features. Additionally, our approach allows domain knowledge to be incorporated into the modeling (for example, as priors, bounds and constraints), avoiding the need to drop network features that are critical for the domain and for interpretability, from a model's trustworthiness perspective.
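
As a minimal sketch of this setup, scikit-learn's BayesianRidge (a stand-in for the authors' Bayesian regression, without their domain-specific priors, bounds, and constraints) fits correlated, hypothetical network features and returns a predictive uncertainty:

```python
# Hedged sketch: Bayesian linear regression on collinear network features.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
subscribers = rng.uniform(1e4, 1e6, size=200)
traffic = 0.8 * subscribers + rng.normal(0, 1e4, size=200)   # collinear feature
X = np.column_stack([subscribers, traffic])
y = 2e-4 * subscribers + 1e-4 * traffic + rng.normal(0, 5, size=200)

model = BayesianRidge()   # regularizing priors help with multicollinearity
model.fit(X, y)

# Predictive mean and standard deviation for a new network configuration
mean, std = model.predict([[5e5, 4e5]], return_std=True)
print(f"estimated resources: {mean[0]:.1f} +/- {std[0]:.1f}")
```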

Read more at Ericsson Blog

Decentralized learning and intelligent automation: the key to zero-touch networks?

Date:

Authors: Selim Ickin, Hannes Larsson, Hassam Riaz, Xiaoyu Lan, Caner Kilinc

Topics: AI, Machine Learning, Federated Learning

Decentralized learning and the multi-armed bandit agent… It may sound like the sci-fi version of an old western. But could this dynamic duo hold the key to efficient distributed machine learning – a crucial factor in the realization of zero-touch automated mobile networks? Let’s find out.

Next-generation autonomous mobile networks will be complex ecosystems made up of a massive number of decentralized and intelligent network devices and nodes – network elements that may be both producing and consuming data simultaneously. If we are to realize our goal of fully automated zero-touch networks, new ways of training artificial intelligence (AI) models need to be developed to accommodate these complex and diverse ecosystems.

Read more at Ericsson Blog

How Drishti empowers deep learning in manufacturing

Date:

Topics: Machine Learning

Organizations: Drishti

During his talk at the MLDS Conference, ‘New developments in Deep Learning for unlikely industries’, Shankar outlined Drishti’s industrial applications of AI in manufacturing. The company leverages deep learning and computer vision to automate the analysis of factory floor videos. Essentially, the company has installed cameras on assembly lines that capture videos on which the company runs object detection, anomaly detection and action recognition. Then, the data is sent to industrial engineers to improve the line.

Read more at Analytics India Magazine

Fingerprinting liquids for composites

Date:

Topics: Metrology, Machine Learning

Organizations: Collo, Kiilto

Collo uses electromagnetic sensors and edge analytics to optimize resin degassing, mixing, infusion, polymerization, and cure, as well as to monitor drift from benchmarked process parameters and enable in-situ process control.

“So, the solution we are offering is real-time, inline measurement directly from the process,” says Järveläinen. “Our system then converts that data into physical quantities that are understandable and actionable, like rheological viscosity, and it helps to ensure high-quality liquid processes and products. It also allows optimizing the processes. For example, you can shorten mixing time because you can clearly see when mixing is complete. So, you can improve productivity, save energy and reduce scrap versus less optimized processing.”

Read more at Composites World

Why AI software companies are betting on small data to spot manufacturing defects

Date:

Author: Kate Kaye

Topics: Machine Learning, Visual Inspection, Defect Detection

Organizations: Landing AI, Mariner

The deep-learning algorithms that have come to dominate many of the technologies consumers and businesspeople interact with today are trained and improved by ingesting huge quantities of data. But because product defects show up so rarely, most manufacturers don’t have millions, thousands or even hundreds of examples of a particular type of flaw they need to watch out for. In some cases, they might only have 20 or 30 photos of a windshield chip or small pipe fracture, for example.

Because labeling inconsistencies can trip up deep-learning models, Landing AI aims to alleviate the confusion. The company’s software has features that help isolate inconsistencies and assist teams of inspectors in coming to agreement on taxonomy. “The inconsistencies in labels are pervasive,” said Ng. “A lot of these problems are fundamentally ambiguous.”

Read more at Protocol

How pioneering deep learning is reducing Amazon’s packaging waste

Date:

Author: Sean O'Neill

Topics: Machine Learning, Computer Vision, Convolutional Neural Network, Sustainability, E-commerce

Organizations: Amazon

Fortunately, machine learning approaches — particularly deep learning — thrive on big data and massive scale, and a pioneering combination of natural language processing and computer vision is enabling Amazon to home in on using the right amount of packaging. These tools have helped Amazon drive change over the past six years, reducing per-shipment packaging weight by 36% and eliminating more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.

“When the model is certain of the best package type for a given product, we allow it to auto-certify it for that pack type,” says Bales. “When the model is less certain, it flags a product and its packaging for testing by a human.” The technology is currently being applied to product lines across North America and Europe, automatically reducing waste at a growing scale.

Read more at Amazon Science

Transfer learning with artificial neural networks between injection molding processes and different polymer materials

Date:

Authors: Yannik Lockner, Christian Hopmann, Weibo Zhao

Topics: artificial intelligence, machine learning

Vertical: Plastics and Rubber

Organizations: RWTH Aachen University

Finding appropriate machine setting parameters in injection molding remains a difficult task due to the highly nonlinear process behavior. Artificial neural networks are a well-suited machine learning method for modelling injection molding processes; however, it is costly and therefore industrially unattractive to generate a sufficient amount of process samples for model training. Therefore, transfer learning is proposed as an approach to reuse already collected data from different processes to supplement a small training data set. Process simulations for the same part and 60 different materials of 6 different polymer classes are generated by design of experiments. After feature selection and hyperparameter optimization, fine-tuning is applied as the transfer learning technique to adapt from one or more polymer classes to an unknown one. The results show higher model quality for small datasets and selectively higher asymptotes for the transfer learning approach in comparison with the base approach.
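
A minimal sketch of fine-tuning as described: pretrain a small network on plentiful source-process data, freeze early layers, and continue training on a handful of target-class samples. Shapes, layer sizes, and data here are placeholders, not the paper's setup.

```python
# Hedged transfer-learning sketch: base training on source polymer classes,
# then fine-tuning on a small target dataset. All data are dummies.
import torch
import torch.nn as nn

def make_model(n_inputs=8):
    # e.g. 8 machine setting parameters -> 1 quality target (hypothetical)
    return nn.Sequential(nn.Linear(n_inputs, 32), nn.ReLU(),
                         nn.Linear(32, 32), nn.ReLU(),
                         nn.Linear(32, 1))

def train(model, X, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

X_src, y_src = torch.randn(1000, 8), torch.randn(1000, 1)  # source classes
X_tgt, y_tgt = torch.randn(20, 8), torch.randn(20, 1)      # small target set

model = train(make_model(), X_src, y_src)   # base training on source data
for layer in list(model)[:2]:               # freeze the first linear layer
    for p in layer.parameters():
        p.requires_grad = False
model = train(model, X_tgt, y_tgt, lr=1e-4) # fine-tune on the target class
```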

Read more at ScienceDirect

Artificial intelligence optimally controls your plant

Date:

Topics: energy consumption, reinforcement learning, machine learning, industrial control system

Organizations: Siemens

Until now, heating systems have mainly been controlled individually or via a building management system. Building management systems follow a preset temperature profile, meaning they always try to adhere to predefined target temperatures. The temperature in a conference room changes in response to environmental influences like sunlight or the number of people present. Simple (PI or PID) controllers are used to make constant adjustments so that the measured room temperature is as close to the target temperature values as possible.

We believe that the best alternative is learning a control strategy by means of reinforcement learning (RL). Reinforcement learning is a machine learning method that has no explicit (learning) objective. Instead, an “agent” with as complete a knowledge of the system state as possible learns the manipulated variable changes that maximize a “reward” function defined by humans. Using algorithms from reinforcement learning, the agent, meaning the control strategy, can be trained from both current and recorded system data. This requires measurements for the manipulated variable changes that have been carried out, for the (resulting) changes to the system state over time, and for the variables necessary for calculating the reward.
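
A toy tabular Q-learning loop makes the ingredients above concrete: states are discretized room temperatures, actions are heating adjustments, and the reward trades comfort against energy. The dynamics and reward weights are invented for the sketch; Siemens' agent and building model are far richer.

```python
# Toy RL illustration: an agent adjusts heating to hold a room near a target
# temperature while penalizing energy use. Dynamics and rewards are invented.
import numpy as np

rng = np.random.default_rng(0)
temps = np.arange(15, 26)             # discretized room temperatures, 15..25 C
actions = np.array([-1.0, 0.0, 1.0])  # heating change: down / hold / up
Q = np.zeros((len(temps), len(actions)))
target, alpha, gamma, eps = 21.0, 0.1, 0.9, 0.1

def step(t_idx, a_idx):
    # Simplified dynamics: heating raises temperature, environment cools it.
    t = temps[t_idx] + actions[a_idx] - 0.5 + rng.normal(0, 0.3)
    t_idx2 = int(np.clip(round(t) - 15, 0, len(temps) - 1))
    # Reward: comfort (distance to target) plus an energy penalty for heating.
    reward = -abs(temps[t_idx2] - target) - 0.2 * max(actions[a_idx], 0)
    return t_idx2, reward

s = 0
for _ in range(20000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
    s = s2

print("learned action per temperature:", actions[Q.argmax(axis=1)])
```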

Read more at Siemens Ingenuity

Quality prediction of ultrasonically welded joints using a hybrid machine learning model

Date:

Authors: Patrick G. Mongan, Eoin P. Hinchy, Noel P. O'Dowd, Conor T. McCarthy

Topics: machine learning, genetic algorithm, welding

Organizations: Confirm Smart Manufacturing Research Centre, University of Limerick

Ultrasonic metal welding has advantages over other joining technologies due to its low energy consumption, rapid cycle time and the ease of process automation. The ultrasonic welding (USW) process is very sensitive to process parameters, and thus can be difficult to consistently produce strong joints. There is significant interest from the manufacturing community to understand these variable interactions. Machine learning is one such method which can be exploited to better understand the complex interactions of USW input parameters. In this paper, the lap shear strength (LSS) of USW Al 5754 joints is investigated using an off-the-shelf Branson Ultraweld L20. First, a 3³ full factorial parametric study using ANOVA is carried out to examine the effects of three USW input parameters (weld energy, vibration amplitude & clamping pressure) on LSS. Following this, a high-fidelity predictive hybrid GA-ANN model is trained using the input parameters together with process data recorded during welding (peak power).
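
A hedged sketch of the GA-ANN pairing: an MLP surrogate (standing in for the paper's ANN) predicts LSS from the three input parameters, and a simple genetic algorithm searches for high-strength settings. Data, ranges, and GA details are synthetic; the paper's hybrid model also ingests in-process peak power.

```python
# Hedged GA-ANN sketch: ANN surrogate for lap shear strength (LSS), plus a
# simple genetic algorithm over (weld energy, amplitude, clamping pressure).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lo, hi = [500, 20, 1], [2000, 60, 4]                  # invented parameter bounds
X = rng.uniform(lo, hi, size=(200, 3))
y = -((X[:, 0] - 1500) ** 2) / 1e5 + 0.1 * X[:, 1] + X[:, 2] \
    + rng.normal(0, 0.5, 200)                         # synthetic LSS response

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

pop = rng.uniform(lo, hi, size=(50, 3))               # initial population
for generation in range(40):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]          # select the fittest
    # Uniform crossover: each gene sampled from a random parent.
    children = parents[rng.integers(20, size=(50, 3)), np.arange(3)]
    pop = np.clip(children + rng.normal(0, [20, 1, 0.05], size=(50, 3)), lo, hi)

best = pop[surrogate.predict(pop).argmax()]
print("suggested parameters (energy, amplitude, pressure):", best.round(2))
```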

Read more at ScienceDirect

Machine learning predictions of superalloy microstructure

Date:

Authors: Patrick L Taylor, Gareth Conduit

Topics: machine learning, materials science

Organizations: University of Cambridge, Intellegens

Gaussian process regression machine learning with a physically-informed kernel is used to model the phase compositions of nickel-base superalloys. The model delivers good predictions for laboratory and commercial superalloys. Additionally, the model predicts the phase composition with uncertainties, unlike the traditional CALPHAD method.
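
A minimal sketch of GP regression with predictive uncertainty, using scikit-learn; a generic RBF kernel stands in for the paper's physically-informed kernel, and the composition data are invented.

```python
# Hedged GP regression sketch: predictive mean and std dev, unlike a CALPHAD
# point estimate. Compositions and the target phase fraction are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 0.2, size=(40, 3))   # e.g. Al, Ti, Ta fractions (invented)
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.01, 40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X, y)

mean, std = gp.predict(rng.uniform(0, 0.2, size=(5, 3)), return_std=True)
for m, s in zip(mean, std):
    print(f"predicted phase fraction: {m:.4f} +/- {s:.4f}")
```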

Read more at ScienceDirect

Hybrid machine learning-enabled adaptive welding speed control

Date:

Authors: Joseph Kershaw, Rui Yu, YuMing Zhang, Peng Wang

Topics: machine learning, robot welding, convolutional neural network

Organizations: University of Kentucky

This research presents a preliminary study on developing appropriate Machine Learning (ML) techniques for real-time welding quality prediction and adaptive welding speed adjustment for GTAW welding at a constant current. In order to collect the data needed to train the hybrid ML models, two cameras are applied to monitor the welding process, with one camera (available in practical robotic welding) recording the top-side weld pool dynamics and a second camera (unavailable in practical robotic welding, but applicable for training purposes) recording the back-side bead formation. Given these two data sets, correlations between them can be discovered through a convolutional neural network (CNN), which is well suited to image characterization. With the CNN, top-side weld pool images can be analyzed to predict the back-side bead width during active welding control.
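
A minimal sketch of the CNN regression idea, mapping a top-side weld pool image to a predicted back-side bead width; the architecture, image size, and data are placeholders rather than the paper's network.

```python
# Hedged sketch: CNN regression from weld pool images to bead width (mm).
import torch
import torch.nn as nn

class BeadWidthCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),           # regression head: bead width in mm
        )

    def forward(self, x):
        return self.net(x)

model = BeadWidthCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on dummy data standing in for labeled image pairs:
images = torch.randn(8, 1, 128, 128)    # top-side weld pool frames
widths = torch.rand(8, 1) * 5           # measured back-side bead widths (mm)
opt.zero_grad()
loss = nn.functional.mse_loss(model(images), widths)
loss.backward()
opt.step()
print("training loss:", loss.item())
```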

Read more at Science Direct

Fabs Drive Deeper Into Machine Learning

Date:

Author: Anne Meixner

Topics: machine learning, machine vision, defect detection, convolutional neural network

Vertical: Semiconductor

Organizations: GlobalFoundries, KLA, SkyWater Technology, Onto Innovation, CyberOptics, Hitachi, Synopsys

For the past couple of decades, semiconductor manufacturers have relied on computer vision, which is one of the earliest applications of machine learning in semiconductor manufacturing. Referred to as Automated Optical Inspection (AOI), these systems use signal processing algorithms to identify macro and micro physical deformations.

Defect detection provides a feedback loop for fab processing steps. Wafer test results produce bin maps (good or bad die), which also can be analyzed as images. Their data granularity is significantly coarser than the pixelated data from an optical inspection tool. Yet wafer test maps can reveal patterns, such as splatters generated during lithography and scratches produced by handling, that AOI systems can miss. Thus, wafer test maps give useful feedback to the fab.

Read more at Semiconductor Engineering

Adoption of machine learning technology for failure prediction in industrial maintenance: A systematic review

Date:

Authors: Joerg Leukel, Julian Gonzalez, Martin Riekert

Topics: machine learning, predictive maintenance

Organizations: University of Hohenheim

Failure prediction is the task of forecasting whether a material system of interest will fail at a specific point of time in the future. This task attains significance for strategies of industrial maintenance, such as predictive maintenance. For solving the prediction task, machine learning (ML) technology is increasingly being used, and the literature provides evidence for the effectiveness of ML-based prediction models. However, the state of recent research and the lessons learned are not well documented. Therefore, the objective of this review is to assess the adoption of ML technology for failure prediction in industrial maintenance and synthesize the reported results. We conducted a systematic search for experimental studies in peer-reviewed outlets published from 2012 to 2020. We screened a total of 1,024 articles, of which 34 met the inclusion criteria.

Read more at ScienceDirect

Accelerating the Design of Automotive Catalyst Products Using Machine Learning

Date:

Authors: Tom Whitehead, Flora Chen, Christopher Daly, Gareth Conduit

Topics: generative design, machine learning

Vertical: Automotive

Organizations: Intellegens, Johnson Matthey

The design of catalyst products to reduce harmful emissions is currently an intensive process of expert-driven discovery, taking several years to develop a product. Machine learning can accelerate this timescale, leveraging historic experimental data from related products to guide which new formulations and experiments will enable a project to most directly reach its targets. We used machine learning to accurately model 16 key performance targets for catalyst products, enabling detailed understanding of the factors governing catalyst performance and realistic suggestions of future experiments to rapidly develop more effective products. The proposed formulations are currently undergoing experimental validation.

Read more at Ingenta Connect

Getting Industrial About The Hybrid Computing And AI Revolution

Date:

Author: Jeffrey Burt

Topics: IIoT, machine learning, reinforcement learning

Vertical: Petroleum and Coal

Organizations: Beyond Limits

Beyond Limits is applying such techniques as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. It also uses reservoir simulations and novel deep convolutional neural networks. The agent takes in the data and learns from successive iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from previous iterations, the system can more quickly whittle the choices down to the single best answer.

Read more at The Next Platform

Real-World ML with Coral: Manufacturing

Date:

Author: Michael Brooks

Topics: edge computing, AI, machine learning, computer vision, convolutional neural network, Tensorflow, worker safety

Organizations: Coral

For over three years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high-performance products. We’ve released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region (a minimal version of this check is sketched after this list).
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.
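
A minimal sketch of the bounding-box collision idea from the Worker Safety demo, assuming axis-aligned boxes in (x_min, y_min, x_max, y_max) form; the danger-zone coordinates and detector output below are invented.

```python
# Hedged sketch: flag person detections that overlap a configured danger zone.
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x_min, y_min, x_max, y_max)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def unsafe_people(person_boxes, danger_zone):
    """Return the detected person boxes that intersect the danger zone."""
    return [box for box in person_boxes if boxes_overlap(box, danger_zone)]

danger_zone = (300, 100, 500, 400)                        # region around a machine
detections = [(120, 80, 200, 300), (350, 150, 420, 380)]  # person detector output
print(unsafe_people(detections, danger_zone))             # -> [(350, 150, 420, 380)]
```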

Read more at TensorFlow Blog

The Machine Economy is Here: Powering a Connected World

Date:

Author: Megan Doyle

Topics: IIoT, machine learning, blockchain

Organizations: Flexon Technology, Allied Vision

In combination with the real-time data produced by IoT, blockchain and ML applications are disrupting B2B companies across various industries, from healthcare to manufacturing. Together, these three fundamental technologies create an intelligent system where connected devices can “talk” to one another. However, machines are still unable to conduct transactions with each other.

This is where distributed ledger technology (DLT) and blockchain come into play. Cryptocurrencies and smart contracts (self-executing contracts between buyers and sellers on a decentralized network) make it possible for autonomous machines to transact with one another on a blockchain.

Devices participating in M2M transactions can be programmed to make purchases based on individual or business needs. Human error was a cause for concern in the past; machine learning algorithms provide reliable, trusted data and continue to learn and improve, becoming smarter each day.

Read more at IoT For All

How to integrate AI into engineering

Date:

Author: Jos Martin

Topics: machine learning

Organizations: MathWorks

Most of the focus on AI is on the AI model, which drives engineers to dive quickly into the modelling aspect of AI. After a few starter projects, engineers learn that AI is not just modelling, but rather a complete set of steps that includes data preparation, modelling, simulation and test, and deployment.

Read more at The Engineer

Visual Inspection AI: a purpose-built solution for faster, more accurate quality control

Date:

Authors: Mandeep Wariach, Thomas Reinbacher

Topics: cloud computing, computer vision, machine learning, quality assurance

Organizations: Google

The Google Cloud Visual Inspection AI solution automates visual inspection tasks using a set of AI and computer vision technologies that enable manufacturers to transform quality control processes by automatically detecting product defects.

We built Visual Inspection AI to meet the needs of quality, test, manufacturing, and process engineers who are experts in their domain, but not in AI. Because it combines ease of use with a focus on priority use cases, customers are realizing significant benefits compared to general-purpose machine learning (ML) approaches.

Read more at Google Cloud Blog

Machine Learning Keeps Rolling Bearings on the Move

Date:

Author: Rehana Begg

Topics: machine learning, vibration analysis, predictive maintenance, bearing

Organizations: Osaka University

Rolling bearings are essential components in automated machinery with rotating elements. They come in many shapes and sizes, but are essentially designed to carry a load while minimizing friction. In general, the design consists of two rings separated by rolling elements (balls or rollers). The rings can rotate relative to each other with very little friction.

The ability to accurately predict the remaining useful life of the bearings under defect progression could reduce unnecessary maintenance procedures and prematurely discarded parts without risking breakdown, reported scientists from the Institute of Scientific and Industrial Research and NTN Next Generation Research Alliance Laboratories at Osaka University.

The scientists have developed a machine learning method that combines convolutional neural networks and Bayesian hierarchical modeling to predict the remaining useful life of rolling bearings. Their approach is based on the measured vibration spectrum.
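
As a loose, simplified analogue of that combination, the sketch below extracts features from a vibration spectrum with a small 1D CNN and fits a Bayesian linear regressor on them, so the remaining-useful-life estimate carries an uncertainty. The paper's actual Bayesian hierarchical model is more sophisticated; all data and shapes here are dummies.

```python
# Hedged sketch: CNN features from vibration spectra + Bayesian regression
# for remaining useful life (RUL) with uncertainty. Everything is synthetic.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import BayesianRidge

extractor = nn.Sequential(            # 1D CNN feature extractor (untrained here)
    nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, 9, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)

spectra = torch.randn(100, 1, 1024)   # measured vibration spectra (dummy)
rul = np.random.default_rng(0).uniform(0, 1000, 100)  # labeled RUL in hours

with torch.no_grad():
    features = extractor(spectra).numpy()   # 32-dim feature vector per spectrum

reg = BayesianRidge().fit(features, rul)
mean, std = reg.predict(features[:3], return_std=True)
print("predicted RUL (h):", [f"{m:.0f} +/- {s:.0f}" for m, s in zip(mean, std)])
```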

Read more at Machine Design

Tree Model Quantization for Embedded Machine Learning Applications

Date:

Author: Leslie J. Schradin

Topics: edge computing, machine learning

Organizations: Qeexo

Compressed tree-based models are useful to consider for embedded machine learning applications, particularly with the compression technique known as quantization. Quantization can compress models by significant amounts at the cost of a slight loss in model fidelity, leaving more room on the device for other programs.
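
A small sketch of the idea: quantize a trained decision tree's split thresholds and leaf values to 8-bit storage with an affine scale, run inference on the dequantized copies, and compare against the full-precision tree. This only illustrates the size/fidelity trade-off; Qeexo's implementation details differ.

```python
# Hedged tree-quantization sketch: store thresholds/leaf values as uint8.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)
t = tree.tree_

def quantize(arr):
    lo, hi = arr.min(), arr.max()
    scale = (hi - lo) / 255 or 1.0
    q = np.round((arr - lo) / scale).astype(np.uint8)   # 8-bit storage
    return q * scale + lo                               # dequantized view

thr = quantize(t.threshold)       # quantized split thresholds
val = quantize(t.value.ravel())   # quantized leaf predictions

def predict_one(x):
    node = 0
    while t.children_left[node] != -1:                  # -1 marks a leaf
        node = (t.children_left[node] if x[t.feature[node]] <= thr[node]
                else t.children_right[node])
    return val[node]

preds = np.array([predict_one(x) for x in X[:100]])
full = tree.predict(X[:100])
print("mean abs error vs full-precision tree:", np.abs(preds - full).mean())
```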

Read more at Qeexo

The realities of developing embedded neural networks

Date:

Author: Tony King-Smith

Topics: edge computing, machine learning, AI

Organizations: AImotive

With any embedded software destined for deployment in volume production, an enormous amount of effort goes into the code once the implementation of its core functionality has been completed and verified. This optimization phase is all about minimizing memory, CPU and other resources needed so that as much as possible of the software functionality is preserved, while the resources needed to execute it are reduced to the absolute minimum possible.

This process of creating embedded software from lab-based algorithms enables production engineers to cost-engineer software functionality into a mass-production ready form, requiring far cheaper, less capable chips and hardware than the massive compute datacenter used to develop it. However, it usually requires the functionality to be frozen from the beginning, with code modifications only done to improve the way the algorithms themselves are executed. For most software, that is fine: indeed, it enables a rigorous verification methodology to be used to ensure the embedding process retains all the functionality needed.

However, when embedding NN-based AI algorithms, that can be a major problem. Why? Because by freezing the functionality from the beginning, you are removing one of the main ways in which the execution can be optimized.

Read more at Embedded

Google Cloud and Seagate: Transforming hard-disk drive maintenance with predictive ML

Date:

Authors: Nitin Aggarwal, Rostam Dinyari

Topics: machine learning, predictive maintenance

Vertical: Computer and Electronic

Organizations: Google, Seagate

At Google Cloud, we know first-hand how critical it is to manage HDDs in operations and preemptively identify potential failures. We are responsible for running some of the largest data centers in the world—any misses in identifying these failures at the right time can potentially cause serious outages across our many products and services. In the past, when a disk was flagged for a problem, the main option was to repair the problem on site using software. But this procedure was expensive and time-consuming. It required draining the data from the drive, isolating the drive, running diagnostics, and then re-introducing it to traffic.

That’s why we teamed up with Seagate, our HDD original equipment manufacturer (OEM) partner for Google’s data centers, to find a way to predict frequent HDD problems. Together, we developed a machine learning (ML) system, built on top of Google Cloud, to forecast the probability of a recurring failing disk—a disk that fails or has experienced three or more problems in 30 days.
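
The label definition quoted above (a disk that fails or experiences three or more problems in 30 days) can be made concrete with a short pandas sketch; the log schema below is hypothetical, not Google's or Seagate's.

```python
# Hedged sketch: label "recurring failing disks" from a per-drive event log.
import pandas as pd

events = pd.DataFrame({            # per-drive repair/failure log (dummy data)
    "disk_id": ["d1", "d1", "d1", "d2", "d2", "d3"],
    "date": pd.to_datetime(["2021-01-02", "2021-01-10", "2021-01-25",
                            "2021-01-05", "2021-03-20", "2021-02-01"]),
    "event": ["problem", "problem", "problem", "problem", "problem", "failure"],
})

def label_disk(g):
    if (g["event"] == "failure").any():                 # any outright failure
        return True
    counts = g.assign(n=1).rolling("30D", on="date")["n"].sum()
    return bool((counts >= 3).any())                    # >= 3 problems in 30 days

labels = events.sort_values("date").groupby("disk_id").apply(label_disk)
print(labels)   # d1: True (3 problems in 30 days), d2: False, d3: True (failure)
```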

Read more at Google Cloud Blog

Ford's Ever-Smarter Robots Are Speeding Up the Assembly Line

Date:

Author: Will Knight

Topics: AI, machine learning, robotics

Vertical: Automotive

Organizations: Ford, Symbio Robotics

At a Ford Transmission Plant in Livonia, Michigan, the station where robots help assemble torque converters now includes a system that uses AI to learn from previous attempts how to wiggle the pieces into place most efficiently. Inside a large safety cage, robot arms wheel around grasping circular pieces of metal, each about the diameter of a dinner plate, from a conveyor and slot them together.

The technology allows this part of the assembly line to run 15 percent faster, a significant improvement in automotive manufacturing where thin profit margins depend heavily on manufacturing efficiencies.

Read more at WIRED

Start-ups Powering New Era of Industrial Robotics

Date:

Author: James Falkoff

Topics: robotics, automated guided vehicle, machine learning

Vertical: Machinery

Organizations: Ready Robotics, ArtiMinds, Realtime Robotics, RIOS, Vicarious

Much of the bottleneck to achieving automation in manufacturing relates to limitations in the current programming model of industrial robotics. Programming is done in languages proprietary to each robotic hardware OEM – languages “straight from the 80s” as one industry executive put it.

There are a limited number of specialists who are proficient in these languages. Given the rarity of the expertise involved, as well as the time it takes to program a robot, robotics application development typically costs three times as much as the hardware for a given installation.

Read more at Robotics Business Review

Multi-Task Robotic Reinforcement Learning at Scale

Date:

Authors: Karol Hausman, Yevgen Chebotar

Topics: reinforcement learning, robotics, AI, machine learning

Organizations: Google

For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method in which the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.

Read more at Google AI Blog

Intelligent edge management: why AI and ML are key players

Date:

Authors: Fetahi Wuhib, Mbarka Soualhia, Carla Mouradian, Wubin Li

Topics: AI, machine learning, edge computing, anomaly detection

Organizations: Ericsson

What will the future of network edge management look like? We explain how artificial intelligence and machine learning technologies are crucial for intelligent edge computing and the management of future-proof networks. What’s required, and what are the building blocks needed to make it happen?

Read more at Ericsson

Using Machine Learning to identify operational modes in rotating equipment

Date:

Author: Frederik Wartenberg

Topics: anomaly detection, vibration analysis, machine learning

Organizations: Viking Analytics

Vibration monitoring is key to performing condition monitoring-based maintenance in rotating equipment such as engines, compressors, turbines, pumps, generators, blowers, and gearboxes. However, periodic route-based vibration monitoring programs are not enough to prevent breakdowns, as they normally offer a narrower view of the machines’ conditions.

Adding Machine Learning algorithms to this process makes it scalable, as it allows the analysis of historic data from equipment. One of the benefits is being able to identify operational modes and help maintenance teams to understand if the machine is operating in normal or abnormal conditions.
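
A minimal sketch of the operational-mode idea: cluster simple features of vibration snapshots (here, invented RMS amplitude and dominant frequency) with k-means, then flag snapshots far from every cluster center as potentially abnormal.

```python
# Hedged sketch: discover operational modes from vibration features and flag
# outliers. Features, data, and the distance threshold are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Feature vectors per snapshot: (RMS amplitude, dominant frequency in Hz)
idle = rng.normal([0.2, 10], [0.05, 1], size=(100, 2))
load = rng.normal([1.5, 50], [0.2, 3], size=(100, 2))
X = np.vstack([idle, load])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def check(snapshot, threshold=3.0):
    dists = np.linalg.norm(km.cluster_centers_ - snapshot, axis=1)
    mode = int(dists.argmin())
    status = "normal" if dists.min() < threshold else "abnormal"
    return mode, status

print(check(np.array([1.4, 52])))    # near the "load" mode -> normal
print(check(np.array([3.0, 120])))   # far from both modes -> abnormal
```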

Read more at Viking Analytics Blog

Amazon’s robot arms break ground in safety, technology

Date:

Author: Alan S. Brown

Topics: AI, machine learning, robotics, palletizer, robotic arm, worker safety

Organizations: Amazon

Robin, one of the most complex stationary robot arm systems Amazon has ever built, brings many core technologies to new levels and acts as a glimpse into the possibilities of combining vision, package manipulation and machine learning, said Will Harris, principal product manager of the Robin program.

Those technologies can be seen when Robin goes to work. As soft mailers and boxes move down the conveyor line, Robin must break the jumble down into individual items. This is called image segmentation. People do it automatically, but for a long time, robots only saw a solid blob of pixels.

Read more at Amazon Science

AI In Inspection, Metrology, And Test

Date:

Authors: Susan Rambo, Ed Sperling

Topics: AI, machine learning, quality assurance, metrology, nondestructive test

Vertical: Semiconductor

Organizations: CyberOptics, Lam Research, Hitachi, FormFactor, NuFlare, Advantest, PDF Solutions, eBeam Initiative, KLA, proteanTecs, Fraunhofer IIS

“The human eye can see things that no amount of machine learning can,” said Subodh Kulkarni, CEO of CyberOptics. “That’s where some of the sophistication is starting to happen now. Our current systems use a primitive kind of AI technology. Once you look at the image, you can see a problem. And our AI machine doesn’t see that. But then you go to the deep learning kind of algorithms, where you have very serious Ph.D.-level people programming one algorithm for a week, and they can detect all those things. But it takes them a week to program those things, which today is not practical.”

That’s beginning to change. “We’re seeing faster deep-learning algorithms that can be more easily programmed,” Kulkarni said. “But the defects also are getting harder to catch by a machine, so there is still a gap. The biggest bang for the buck is not going to come from improving cameras or projectors or any of the equipment that we use to generate optical images. It’s going to be interpreting optical images.”

Read more at Semiconductor Engineering

How To Measure ML Model Accuracy

Date:

Author: Bryon Moyer

Topics: machine learning

Organizations: Ansys, Brainome, Cadence, Flex Logix, Synopsys, Xilinx

Machine learning (ML) is about making predictions about new data based on old data. The quality of any machine-learning algorithm is ultimately determined by the quality of those predictions.

However, there is no one universal way to measure that quality across all ML applications, and that has broad implications for the value and usefulness of machine learning.
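
A tiny example of why no single metric suffices: on an imbalanced defect dataset, accuracy can look excellent while precision and recall reveal that the model finds no defects at all. The sketch below uses scikit-learn metrics on toy labels.

```python
# On imbalanced data, a model that never flags a defect still scores high
# accuracy, while precision, recall, and F1 expose the failure.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0] * 95 + [1] * 5   # 5% of parts are defective
y_pred = [0] * 100            # model predicts "no defect" for everything

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.95, looks great
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred))     # 0.0, misses every defect
print("f1       :", f1_score(y_true, y_pred))         # 0.0
```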

Read more at Semiconductor Engineering

Go beyond machine learning to optimize manufacturing operations

Date:

Author: Andrew Silberfarb

Topics: machine learning

Organizations: SRI International

Machine learning depends on vast amounts of data to make inferences. However, sometimes the amount of data needed by machine-learning algorithms is simply not available. SRI International has developed a system called Deep Adaptive Semantic Logic (DASL) that uses adaptive semantic reasoning to fill in the data gaps. DASL integrates bottom-up data-driven modeling with top-down theoretical reasoning in a symbiotic union of innovative machine learning and knowledge guided inference. The system brings experts and data together to make better, more informed decisions.

Read more at Automation Alley

Adversarial training reduces safety of neural networks in robots

Date:

Author: @BenDee983

Topics: AI, robotics, machine learning

A more fundamental problem, also confirmed by Lechner and his coauthors, is the lack of causality in machine learning systems. As long as neural networks focus on learning superficial statistical patterns in data, they will remain vulnerable to different forms of adversarial attacks. Learning causal representations might be the key to protecting neural networks against adversarial attacks. But learning causal representations itself is a major challenge and scientists are still trying to figure out how to solve it.

Read more at VentureBeat

What Walmart learned from its machine learning deployment

Date:

Author: Katie Malone

Topics: cloud computing, machine learning

Organizations: Walmart

As more businesses turn to automation to realize business value, retail’s wide variety of ML use cases can provide insights into how to overcome challenges associated with the technology. The goal should be trying to solve a problem by using ML as a tool to get there, Kamdar said.

For example, Walmart uses an ML model to optimize the timing and pricing of markdowns, and to examine real estate data to find places to cut costs, according to executives on an earnings call in February.

Read more at Supply Chain Dive

AI project to 'pandemic-proof' NHS supply chain

Date:

Topics: natural language processing, machine learning

Organizations: Vamstar

With the ability to analyse NHS and global procurement data from previous supply contracts, the platform will aim to allow NHS buyers to evaluate credibility and capability of suppliers to fulfil their order. Each supplier would have a real-time ‘risk rating’ with information on the goods and services they supply.

Researchers at Sheffield University’s Information School are said to be developing Natural Language Processing (NLP) methods for the automated reading and extraction of data from large amounts of contract tender data held by the NHS and other European healthcare providers.

Read more at The Engineer

How Machine Learning Techniques Can Help Engineers Design Better Products

Date:

Topics: machine learning, generative design

Organizations: Altair

By leveraging field-predictive ML models, engineers can explore more options without the use of a solver when designing different components and parts, saving time and resources. This ultimately produces higher-quality results that can then be used to make more informed decisions throughout the design process.

Read more at Altair Engineering

Introducing Amazon SageMaker Reinforcement Learning Components for open-source Kubeflow pipelines

Date:

Authors: Alex Chung, Kyle Saltmarsh, Leonard O'Sullivan, Matthew Rose, Nicholas Therkelsen-Terry, Nicholas Thomson, Ragha Prasad, Sahika Genc

Topics: AI, machine learning, robotics

Organizations: AWS, Max Kelsen, Universal Robots, Woodside Energy

Woodside Energy uses AWS RoboMaker with Amazon SageMaker Kubeflow operators to train, tune, and deploy reinforcement learning agents to their robots to perform manipulation tasks that are repetitive or dangerous.

Read more at AWS Blog

Leveraging AI and Statistical Methods to Improve Flame Spray Pyrolysis

Date:

Author: Stephen J. Mraz

Topics: AI, machine learning, materials science

Vertical: Chemical

Organizations: Argonne National Laboratory

Flame spray pyrolysis has long been used to make small particles that can be used as paint pigments. Now, researchers at Argonne National Laboratory are refining the process to make smaller, nano-sized particles of various materials that can make nano-powders for low-cobalt battery cathodes, solid state electrolytes and platinum/titanium dioxide catalysts for turning biomass into fuel.

Read more at Machine Design

Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all

Date:

Author: @TiernanRayTech

Topics: AI, machine learning, robotics, reinforcement learning

Organizations: Google

With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.

Read more at ZDNet

AWS Announces General Availability of Amazon Lookout for Vision

Date:

Topics: cloud computing, computer vision, machine learning, quality assurance

Organizations: AWS, Basler, Dafgards, General Electric

AWS announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called “few-shot learning,” Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers.

Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service.

Read more at Business Wire

Rearranging the Visual World

Date:

Authors: Andy Zeng, Pete Florence

Topics: AI, machine learning, robotics

Organizations: Google

Transporter Nets use a novel approach to 3D spatial understanding that avoids reliance on object-centric representations, making them general for vision-based manipulation but far more sample efficient than benchmarked end-to-end alternatives. As a consequence, they are fast and practical to train on real robots. We are also releasing an accompanying open-source implementation of Transporter Nets together with Ravens, our new simulated benchmark suite of ten vision-based manipulation tasks.

Read more at Google AI Blog

Artificial Intelligence: Driving Digital Innovation and Industry 4.0

Date:

Author: @ralph_ohr

Topics: AI, machine learning

Organizations: Siemens

Intelligent AI solutions can analyze high volumes of data generated by a factory to identify trends and patterns, which can then be used to make manufacturing processes more efficient and reduce their energy consumption. Employing Digital Twin-enabled representations of a product and the associated process, AI is able to recognize whether the workpiece being manufactured meets quality requirements. In this way, plants constantly adapt to new circumstances and undergo optimization with no need for operator input. New technologies are emerging in this application area, such as Reinforcement Learning – a topic that has not been deployed on a broad scale up to now. It can be used to automatically ascertain correlations between production parameters, product quality and process performance by learning through ‘trial-and-error’ – and thereby dynamically tuning the parameter values to optimize the overall process.

Read more at Siemens Ingenuity

Edge-Inference Architectures Proliferate

Date:

Author: Bryon Moyer

Topics: AI, machine learning, edge computing

Vertical: Semiconductor

Organizations: Cadence, Hailo, Google, Flex Logix, BrainChip, Synopsys, GrAI Matter, Deep Vision, Maxim Integrated

What makes one AI system better than another depends on a lot of different factors, including some that aren’t entirely clear.

The new offerings exhibit a wide range of structure, technology, and optimization goals. All must be gentle on power, but some target wired devices while others target battery-powered devices, giving different power/performance targets. While no single architecture is expected to solve every problem, the industry is in a phase of proliferation, not consolidation. It will be a while before the dust settles on the preferred architectures.

Read more at Semiconductor Engineering

Pushing The Frontiers Of Manufacturing AI At Seagate

Date:

Author: Tom Davenport

Topics: AI, machine learning, predictive maintenance, quality assurance

Vertical: Computer and Electronic

Organizations: Seagate

Big data, analytics and AI are widely used in industries like financial services and e-commerce, but are less likely to be found in manufacturing companies. With some exceptions like predictive maintenance, few manufacturing firms have marshaled the amounts of data and analytical talent to aggressively apply analytics and AI to key processes.

Seagate Technology, an over $10B manufacturer of data storage and management solutions, is a prominent counter-example to this trend. It has massive amounts of sensor data in its factories and has been using it extensively over the last five years to ensure and improve the quality and efficiency of its manufacturing processes.

Read more at Forbes

Building effective IoT applications with tinyML and automated machine learning

Date:

Authors: Rajen Bhatt, Tina Shyuan

Topics: IIoT, machine learning

Organizations: Qeexo

The convergence of IoT devices and ML algorithms enables a wide range of smart applications and enhanced user experiences, which are made possible by low-power, low-latency, and lightweight machine learning inference, i.e., tinyML.

Read more at Embedded

Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey

Date:

Authors: Nikolas Zolas, Zachary Kroff, Erik Brynjolfsson, Kristina McElheran, David N. Beede, Cathy Buffington, Nathan Goldschlag, Lucia Foster, Emin Dinlersoz

Topics: AI, augmented reality, cloud computing, machine learning, Radio-frequency identification, robotics

While robots are usually singled out as a key technology in studies of automation, the overall diffusion of robotics use and testing is very low across firms in the U.S. The use rate is only 1.3% and the testing rate is 0.3%. These levels correspond relatively closely with patterns found in the robotics expenditure question in the 2018 ASM. Robots are primarily concentrated in large, manufacturing firms. The distribution of robots among firms is highly skewed, and the skewness in favor of larger firms can have a disproportionate effect on the economy that is otherwise not obvious from the relatively low overall diffusion rate of robots. The least-used technologies are RFID (1.1%), Augmented Reality (0.8%), and Automated Vehicles (0.8%). Looking at the pairwise adoption of these technologies in Table 14, we find that use of Machine Learning and Machine Vision are most coincident. We find that use of Automated Guided Vehicles is closely associated with use of Augmented Reality, RFID, and Machine Vision.

Read more at National Bureau of Economic Research

How Instacart fixed its A.I. and keeps up with the coronavirus pandemic

Date:

Author: @JonathanVanian

Topics: COVID-19, demand planning, machine learning

Organizations: Instacart

Like many companies, online grocery delivery service Instacart has spent the past few months overhauling its machine-learning models because the coronavirus pandemic has drastically changed how customers behave.

Starting in mid-March, Instacart’s all-important technology for predicting whether certain products would be available at specific stores became increasingly inaccurate. The accuracy of a metric used to evaluate how many items are found at a store dropped to 61% from 93%, tipping off the Instacart engineers that they needed to re-train their machine learning model that predicts an item’s availability at a store. After all, customers could get annoyed being told one thing—the item that they wanted was available—when in fact it wasn’t, resulting in products never being delivered. ‘A shock to the system’ is how Instacart’s machine learning director Sharath Rao described the problem to Fortune.

Read more at Fortune (Paid)