Machine Learning (ML)

Assembly Line

Closed-loop fully-automated frameworks for accelerating materials discovery

📅 Date:

🔖 Topics: Machine Learning, Materials Science

🏢 Organizations: Citrine Informatics, Carnegie Mellon, MIT


Our work shows that a fully-automated closed-loop framework driven by sequential learning can accelerate the discovery of materials by up to 10-25x (or a reduction in design time by 90-95%) when compared to traditional approaches. We show that such closed-loop frameworks can lead to enormous improvement in researcher productivity in addition to reducing overall project costs. Overall, these findings present a clear value proposition for investing in closed-loop frameworks and sequential learning in materials discovery and design enterprises.

Read more at Citrine Informatics Blog

UVA Research Team Detects Additive Manufacturing Defects in Real-Time

📅 Date:

✍️ Author: Tao Sun

🔖 Topics: Additive Manufacturing, Machine Learning, Laser Powder Bed Fusion

🏢 Organizations: University of Virginia, Carnegie Mellon, University of Wisconsin


Introduced in the 1990s, laser powder bed fusion (LPBF) uses metal powder and lasers to 3D-print metal parts. But porosity defects remain a challenge for fatigue-sensitive applications like aircraft wings. Some porosity is associated with deep, narrow vapor depressions known as keyholes.

"By integrating operando synchrotron x-ray imaging, near-infrared imaging, and machine learning, our approach can capture the unique thermal signature associated with keyhole pore generation with sub-millisecond temporal resolution and 100% prediction rate," Sun said. In developing their real-time keyhole detection method, the researchers also advanced the way a state-of-the-art tool, operando synchrotron x-ray imaging, can be used. Utilizing machine learning, they additionally discovered two modes of keyhole oscillation.

Read more at UVA Engineering News

AI farming tool from BASF finds fertile ground in Japan's rice country

📅 Date:

✍️ Author: Taito Kurose

🔖 Topics: Machine Learning

🏭 Vertical: Agriculture

🏢 Organizations: BASF, Yamazaki Rice


Yamazaki Rice, based near Tokyo in Saitama prefecture, began using BASF's Xarvio Field Manager system this year with five workers on about 100 hectares of land.

Xarvio provides real-time analysis informed by satellite and weather data. Automated maps customize the amount of fertilizer recommended for each section of the farm. The data is fed to GPS-equipped farm equipment. The AI gives daily suggestions that Yamazaki Rice's president said helped improve yields by up to 25% in some fields. Xarvio's machine learning covers more than 10 years of crop data as well as scientific papers worldwide.

Read more at Nikkei Asia

How a universal model is helping one generation of Amazon robots train the next

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Robot Arm, Machine Learning, Warehouse Automation

🏢 Organizations: Amazon


In short, building a dataset big enough to train a demanding machine learning model requires time and resources, with no guarantee that the novel robotic process you are working toward will prove successful. This became a recurring issue for Amazon Robotics AI. So this year, work began in earnest to address the data scarcity problem. The solution: a "universal model" able to generalize to virtually any package segmentation task.

To develop the model, Meeker and her colleagues first used publicly available datasets to give their model basic classification skills, such as being able to distinguish boxes or packages from other things. Next, they honed the model, teaching it to distinguish between many types of packaging in warehouse settings, from plastic bags to padded mailers to cardboard boxes of varying appearance, using a trove of training data compiled by the Robin program and half a dozen other Amazon teams over the last few years. This dataset comprised almost half a million annotated images.

The universal model now includes images of unpackaged items, too, allowing it to perform segmentation across a greater diversity of warehouse processes. Initiatives such as multimodal identification, which aims to visually identify items without needing to see a barcode, and the automated damage detection program are accruing product-specific data that could be fed into the universal model, as well as images taken on the fulfillment center floor by the autonomous robots that carry crates of products.

Read more at Amazon Science

Automated Optical Inspection

Machine-Learning-Enhanced Simulation Could Reduce Energy Costs in Materials Production

📅 Date:

🔖 Topics: Sustainability, Machine Learning

🏢 Organizations: Argonne National Laboratory, 3M


Thanks to a new computational effort being pioneered by the U.S. Department of Energy's (DOE) Argonne National Laboratory in conjunction with 3M and supported by the DOE's High Performance Computing for Energy Innovation (HPC4EI) program, researchers are finding new ways to dramatically reduce the amount of energy required for melt blowing the materials needed in N95 masks and other applications.

Currently, the process used to create a nozzle to spin nonwoven materials produces a very high-quality product, but it is quite energy intensive. Approximately 300,000 tons of melt-blown materials are produced annually worldwide, requiring roughly 245 gigawatt-hours per year of energy, approximately the amount generated by a large solar farm. By using Argonne supercomputing resources to pair computational fluid dynamics simulations and machine-learning techniques, the Argonne and 3M collaboration sought to reduce energy consumption by 20% without compromising material quality.

Because the process of making a new nozzle is very expensive, the information gained from the machine-learning model can equip material manufacturers with a way to narrow down to a set of optimal designs. "Machine-learning-enhanced simulation is the best way of cheaply getting at the right combination of parameters like temperatures, material composition, and pressures for creating these materials at high quality with less energy," Blaiszik said.
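The workflow described above, running a handful of expensive simulations, fitting a cheap surrogate model, and then searching the surrogate densely, can be sketched in a few lines. This is a hedged toy illustration, not Argonne's actual pipeline: the quadratic "simulator", its optimum near (300 °C, 2 bar), and the inverse-distance surrogate are all invented for demonstration.

```python
import random

# Toy stand-in for an expensive CFD simulation of a melt-blowing nozzle:
# returns a specific energy figure for a (temperature, pressure) setting.
# The quadratic form and the optimum at (300, 2.0) are illustrative assumptions.
def expensive_simulation(temp_c, pressure_bar):
    return 0.01 * (temp_c - 300) ** 2 + 5.0 * (pressure_bar - 2.0) ** 2 + 40.0

def surrogate_search(n_samples=200, seed=0):
    """Sample the simulator sparsely, then query a cheap surrogate densely."""
    rng = random.Random(seed)
    # 1. Run a limited budget of expensive simulations.
    samples = [(rng.uniform(250, 350), rng.uniform(1.0, 3.0)) for _ in range(n_samples)]
    data = [(t, p, expensive_simulation(t, p)) for t, p in samples]

    # 2. Cheap surrogate: inverse-distance-weighted interpolation of the samples.
    def surrogate(t, p):
        num = den = 0.0
        for ti, pi, e in data:
            d2 = (t - ti) ** 2 + 25.0 * (p - pi) ** 2 + 1e-9
            num += e / d2
            den += 1.0 / d2
        return num / den

    # 3. Dense grid search on the surrogate: thousands of cheap evaluations.
    return min(
        ((t, p) for t in range(250, 351) for p in [1.0 + 0.05 * k for k in range(41)]),
        key=lambda tp: surrogate(*tp),
    )

best_temp, best_pressure = surrogate_search()
```

The design point: the optimizer never calls the expensive simulator during the search, only the surrogate, which is where the energy and cost savings of surrogate-assisted design come from.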

Read more at AZO Materials

Machine learning-aided engineering of hydrolases for PET depolymerization

📅 Date:

✍️ Authors: Hongyuan Lu, Daniel J. Diaz, Natalie J. Czarnecki, Congzhi Zhu, Wantae Kim, Raghav Shroff, Daniel J. Acosta, Bradley R. Alexander, Hannah O. Cole, Yan Zhang, Nathaniel A. Lynd, Andrew D. Ellington, Hal S. Alper

🔖 Topics: Sustainability, Machine Learning


Plastic waste poses an ecological challenge, and enzymatic degradation offers one potentially green and scalable route for recycling polyester waste. Poly(ethylene terephthalate) (PET) accounts for 12% of global solid waste, and a circular carbon economy for PET is theoretically attainable through rapid enzymatic depolymerization followed by repolymerization or conversion/valorization into other products. Application of PET hydrolases, however, has been hampered by their lack of robustness to pH and temperature ranges, slow reaction rates and inability to directly use untreated postconsumer plastics. Here, we use a structure-based machine learning algorithm to engineer a robust and active PET hydrolase. Our mutant and scaffold combination (FAST-PETase: functional, active, stable and tolerant PETase) contains five mutations compared to wild-type PETase (N233K/R224Q/S121E from prediction and D186H/R280A from scaffold) and shows superior PET-hydrolytic activity relative to both wild-type and engineered alternatives between 30 and 50 °C and a range of pH levels. We demonstrate that untreated, postconsumer PET from 51 different thermoformed products can all be almost completely degraded by FAST-PETase in one week. FAST-PETase can also depolymerize untreated, amorphous portions of a commercial water bottle and an entire thermally pretreated water bottle at 50 °C. Finally, we demonstrate a closed-loop PET recycling process by using FAST-PETase and resynthesizing PET from the recovered monomers. Collectively, our results demonstrate a viable route for enzymatic plastic recycling at the industrial scale.

Read more at Nature

CircularNet: Reducing waste with Machine Learning

📅 Date:

✍️ Authors: Robert Little, Umair Sabir

🔖 Topics: Sustainability, Machine Learning, Convolutional Neural Network

🏢 Organizations: Google


The facilities where our waste and recyclables are processed are called "Material Recovery Facilities" (MRFs). Each MRF processes tens of thousands of pounds of our societal "waste" every day, separating valuable recyclable materials like metals and plastics from non-recyclable materials. A key inefficiency within the current waste capture and sorting process is the inability to identify and segregate waste into high-quality material streams. The accuracy of the sorting directly determines the quality of the recycled material; for high-quality, commercially viable recycling, contamination levels need to be low. Even though MRFs use various technologies alongside manual labor to separate materials into distinct and clean streams, the exceptionally cluttered and contaminated nature of the waste stream makes automated waste detection challenging, and recycling rates and profit margins stay at undesirably low levels.

Enter what we call "CircularNet", a set of models that lowers barriers to AI/ML tech for waste identification and all the benefits this new level of transparency can offer. Our goal with CircularNet is to develop a robust and data-efficient model for waste/recyclables detection, which can support the way we identify, sort, manage, and recycle materials across the waste management ecosystem.

Read more at TensorFlow Blog

Lufthansa increases on-time flights by wind forecasting with Google Cloud ML

📅 Date:

✍️ Author: Anant Nawalgaria

🔖 Topics: Machine Learning, Forecasting

🏢 Organizations: Lufthansa, Google


The magnitude and direction of wind significantly impacts airport operations, and Lufthansa Group Airlines are no exception. A particularly troublesome kind is called BISE: it is a cold, dry wind that blows from the northeast to southwest in Switzerland, through the Swiss Plateau. Its effects on flight schedules can be severe, such as forcing planes to change runways, which can create a chain reaction of flight delays and possible cancellations. In Zurich Airport, in particular, BISE can potentially reduce capacity by up to 30%, leading to further flight delays and cancellations, and to millions in lost revenue for Lufthansa (as well as dissatisfaction among their passengers).

Machine learning (ML) can help airports and airlines to better anticipate and manage these types of disruptive weather events. In this blog post, we'll explore an experiment Lufthansa did together with Google Cloud and its Vertex AI Forecast service, accurately predicting BISE hours in advance, with more than 40% relative improvement in accuracy over internal heuristics, all within days instead of the months it often takes to do ML projects of this magnitude and performance.

Read more at Google Cloud Blog

Improving Yield With Machine Learning

📅 Date:

✍️ Author: Laura Peters

🔖 Topics: Machine Learning, Convolutional Neural Network, ResNet

🏭 Vertical: Semiconductor

🏢 Organizations: KLA, Synopsys, CyberOptics, Macronix


Machine learning is becoming increasingly valuable in semiconductor manufacturing, where it is being used to improve yield and throughput.

Synopsys engineers recently found that a decision tree deep learning method can classify 98% of defects and features at 60X faster retraining time than traditional CNNs. The decision tree utilizes 8 CNNs and ResNet to automatically classify 12 defect types with images from SEM and optical tools.

Macronix engineers showed how machine learning can expedite new etch process development in 3D NAND devices. Two parameters are particularly important in optimizing the deep trench slit etch: bottom CD and the depth of the polysilicon etch recess, also known as the etch stop.

KLA engineers, led by Cheng Hung Wu, optimized the use of a high landing energy e-beam inspection tool to capture defects buried as deep as 6 µm in a 96-layer ONON stacked structure following deep trench etch. The e-beam tool can detect defects that optical inspectors cannot, but only if operated with high landing energy to penetrate deep structures. With this process, KLA was looking to develop an automated detection and classification system for deep trench defects.

Read more at Semiconductor Engineering

AI-Powered Verification

📅 Date:

🔖 Topics: Machine Learning

🏭 Vertical: Semiconductor

🏢 Organizations: Agnisys, Cadence


"We see AI as a disruptive technology that will in the long run eliminate, and in the near term reduce, the need for verification," says Anupam Bakshi, CEO and founder of Agnisys. "We have had some early successes in using machine learning to read user specifications in natural language and directly convert them into SystemVerilog Assertions (SVA), UVM testbench code, and C/C++ embedded code for test and verification."

There is nothing worse than spending time and resources to not get the desired result, or for it to take longer than necessary. "In formal, we have multiple engines, different algorithms that are working on solving any given property at any given time," says Pete Hardee, director for product management at Cadence. "In effect, there is an engine race going on. We track that race and see for each property which engine is working. We use reinforcement learning to set the engine parameters in terms of which engines I'm going to use and how long to run those to get better convergence on the properties that didn't converge the first time I ran it."
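The engine-race idea above can be framed as a multi-armed bandit: try each engine, track its empirical convergence rate, and increasingly favor the winner while still exploring. The sketch below is a generic epsilon-greedy bandit, not Cadence's scheme; the engine names and success probabilities are invented for illustration.

```python
import random

# Invented per-engine convergence probabilities for the simulation.
ENGINE_SUCCESS_RATE = {"bdd": 0.2, "sat": 0.5, "k_induction": 0.8}

def run_engine(engine, rng):
    """Stand-in for launching an engine on one property: 1 = converged, 0 = not."""
    return 1 if rng.random() < ENGINE_SUCCESS_RATE[engine] else 0

def bandit_schedule(n_properties=2000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    counts = {e: 0 for e in ENGINE_SUCCESS_RATE}
    values = {e: 0.0 for e in ENGINE_SUCCESS_RATE}   # running mean reward per engine
    for _ in range(n_properties):
        if rng.random() < epsilon:                   # explore: random engine
            engine = rng.choice(list(ENGINE_SUCCESS_RATE))
        else:                                        # exploit: best estimate so far
            engine = max(values, key=values.get)
        reward = run_engine(engine, rng)
        counts[engine] += 1
        # Incremental update of the running mean.
        values[engine] += (reward - values[engine]) / counts[engine]
    return values, counts

values, counts = bandit_schedule()
```

After a few hundred properties the schedule concentrates effort on the engine with the best observed convergence rate, which is the behavior the quote describes.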

Read more at Semiconductor Engineering

Batch Optimization using Quartic.ai

Ericssonโ€™s next-gen AI-driven network dimensioning solution

📅 Date:

✍️ Authors: Marcial Gutierrez, Sleeba Paul Puthenpurakel, Shrihari Vasudevan

🔖 Topics: Machine Learning

🏢 Organizations: Ericsson


Resource requirement estimation, often referred to as dimensioning, is a crucial activity in the telecommunications industry. Network dimensioning is an integral part of the Ericsson Sales Process when engaging with a prospective customer; find out more about our approach to network dimensioning and the critical importance of accuracy.

The telco dimensioning problem can be conceived as a regression problem from an AI/ML perspective. The proposed solution is Bayesian regression, which proved more robust to multi-collinearity of features. Additionally, our approach allows the incorporation of domain knowledge into the modeling (for example, in the form of priors, bounds, and constraints) to avoid dropping network features that are critical for the domain and interpretability requirements, from a model's trustworthiness perspective.
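The core mechanism described above, a prior encoding domain knowledge that regularizes the fit, can be shown with the closed-form posterior for a one-feature Bayesian regression. This is a minimal sketch of the idea, not Ericsson's model; the traffic/demand numbers and the prior of roughly 2.0 units of demand per unit of traffic are invented.

```python
def bayesian_regression_1d(xs, ys, prior_mean, prior_precision, noise_precision):
    """Closed-form posterior for y = w*x + noise with a Gaussian prior on w."""
    # Posterior precision = prior precision + noise precision * sum(x^2).
    post_precision = prior_precision + noise_precision * sum(x * x for x in xs)
    # Posterior mean blends the prior mean with the data's least-squares pull.
    post_mean = (prior_precision * prior_mean
                 + noise_precision * sum(x * y for x, y in zip(xs, ys))) / post_precision
    return post_mean, 1.0 / post_precision  # posterior mean and variance of w

# Tiny invented example: resource demand vs. traffic volume, with a domain
# prior that per-unit demand is around 2.0.
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]
w_mean, w_var = bayesian_regression_1d(xs, ys, prior_mean=2.0,
                                       prior_precision=1.0, noise_precision=10.0)
```

With few or collinear samples the posterior stays close to the prior; with more data the likelihood dominates, which is exactly the robustness property the summary highlights.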

Read more at Ericsson Blog

Decentralized learning and intelligent automation: the key to zero-touch networks?

📅 Date:

✍️ Authors: Selim Ickin, Hannes Larsson, Hassam Riaz, Xiaoyu Lan, Caner Kilinc

🔖 Topics: AI, Machine Learning, Federated Learning


Decentralized learning and the multi-armed bandit agent… It may sound like the sci-fi version of an old western. But could this dynamic duo hold the key to efficient distributed machine learning, a crucial factor in the realization of zero-touch automated mobile networks? Let's find out.

Next-generation autonomous mobile networks will be complex ecosystems made up of a massive number of decentralized and intelligent network devices and nodes, network elements that may be both producing and consuming data simultaneously. If we are to realize the goal of fully automated zero-touch networks, new ways of training artificial intelligence (AI) models need to be developed to accommodate these complex and diverse ecosystems.
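One common decentralized-learning scheme matching the description above is federated averaging (FedAvg): each node trains on its own data and only model weights, never raw data, are aggregated. The sketch below shows just the aggregation step; the three "base stations", their weights, and dataset sizes are invented for illustration, and this is not Ericsson's specific algorithm.

```python
def federated_average(node_weights, node_sizes):
    """Average per-parameter weights, weighted by each node's local dataset size."""
    total = sum(node_sizes)
    n_params = len(node_weights[0])
    return [
        sum(w[i] * size for w, size in zip(node_weights, node_sizes)) / total
        for i in range(n_params)
    ]

# Three nodes with different amounts of local training data: the node with the
# most data pulls the global model hardest.
weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 300, 600]
global_model = federated_average(weights, sizes)
```

In a full FedAvg loop this aggregate would be broadcast back to the nodes, each would train locally for a few steps, and the cycle would repeat.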

Read more at Ericsson Blog

How Drishti empowers deep learning in manufacturing

📅 Date:

🔖 Topics: Machine Learning

🏢 Organizations: Drishti


During his talk at the MLDS Conference, 'New developments in Deep Learning for unlikely industries', Shankar outlined Drishti's industrial applications of AI in manufacturing. The company leverages deep learning and computer vision to automate the analysis of factory floor videos. Essentially, the company has installed cameras on assembly lines that capture videos on which the company runs object detection, anomaly detection and action recognition. Then, the data is sent to industrial engineers to improve the line.

Read more at Analytics India Magazine

Fingerprinting liquids for composites

📅 Date:

🔖 Topics: Metrology, Machine Learning

🏢 Organizations: Collo, Kiilto


Collo uses electromagnetic sensors and edge analytics to optimize resin degassing, mixing, infusion, polymerization and cure as well as monitoring drift from benchmarked process parameters and enabling in-situ process control.

"So, the solution we are offering is real-time, inline measurement directly from the process," says Järveläinen. "Our system then converts that data into physical quantities that are understandable and actionable, like rheological viscosity, and it helps to ensure high-quality liquid processes and products. It also allows optimizing the processes. For example, you can shorten mixing time because you can clearly see when mixing is complete. So, you can improve productivity, save energy and reduce scrap versus less optimized processing."

Read more at Composites World

Why AI software companies are betting on small data to spot manufacturing defects

📅 Date:

✍️ Author: Kate Kaye

🔖 Topics: Machine Learning, Visual Inspection, Defect Detection

🏢 Organizations: Landing AI, Mariner


The deep-learning algorithms that have come to dominate many of the technologies consumers and businesspeople interact with today are trained and improved by ingesting huge quantities of data. But because product defects show up so rarely, most manufacturers don't have millions, thousands or even hundreds of examples of a particular type of flaw they need to watch out for. In some cases, they might only have 20 or 30 photos of a windshield chip or small pipe fracture, for example.

Because labeling inconsistencies can trip up deep-learning models, Landing AI aims to alleviate the confusion. The company's software has features that help isolate inconsistencies and assist teams of inspectors in coming to agreement on taxonomy. "The inconsistencies in labels are pervasive," said Ng. "A lot of these problems are fundamentally ambiguous."

Read more at Protocol

How pioneering deep learning is reducing Amazonโ€™s packaging waste

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Machine Learning, Computer Vision, Convolutional Neural Network, Sustainability, E-commerce

🏢 Organizations: Amazon


Fortunately, machine learning approaches, particularly deep learning, thrive on big data and massive scale, and a pioneering combination of natural language processing and computer vision is enabling Amazon to home in on using the right amount of packaging. These tools have helped Amazon drive change over the past six years, reducing per-shipment packaging weight by 36% and eliminating more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.

"When the model is certain of the best package type for a given product, we allow it to auto-certify it for that pack type," says Bales. "When the model is less certain, it flags a product and its packaging for testing by a human." The technology is currently being applied to product lines across North America and Europe, automatically reducing waste at a growing scale.

Read more at Amazon Science

Transfer learning with artificial neural networks between injection molding processes and different polymer materials

📅 Date:

✍️ Authors: Yannik Lockner, Christian Hopmann, Weibo Zhao

🔖 Topics: Artificial Intelligence, Machine Learning

🏭 Vertical: Plastics and Rubber

🏢 Organizations: RWTH Aachen University


Finding appropriate machine setting parameters in injection molding remains a difficult task due to the highly nonlinear process behavior. Artificial neural networks are a well-suited machine learning method for modelling injection molding processes; however, it is costly and therefore industrially unattractive to generate a sufficient amount of process samples for model training. Therefore, transfer learning is proposed as an approach to reuse already collected data from different processes to supplement a small training data set. Process simulations for the same part and 60 different materials from 6 different polymer classes are generated by design of experiments. After feature selection and hyperparameter optimization, finetuning is proposed as the transfer learning technique to adapt from one or more polymer classes to an unknown one. The results show higher model quality for small datasets and selectively higher asymptotes for the transfer learning approach in comparison with the base approach.

Read more at ScienceDirect

Artificial intelligence optimally controls your plant

📅 Date:

🔖 Topics: Energy Consumption, Reinforcement Learning, Machine Learning, Industrial Control System

🏢 Organizations: Siemens


Until now, heating systems have mainly been controlled individually or via a building management system. Building management systems follow a preset temperature profile, meaning they always try to adhere to predefined target temperatures. The temperature in a conference room changes in response to environmental influences like sunlight or the number of people present. Simple (PI or PID) controllers are used to make constant adjustments so that the measured room temperature is as close to the target temperature values as possible.

We believe that the best alternative is learning a control strategy by means of reinforcement learning (RL). Reinforcement learning is a machine learning method that has no explicit (learning) objective. Instead, an "agent" with as complete a knowledge of the system state as possible learns the manipulated variable changes that maximize a "reward" function defined by humans. Using algorithms from reinforcement learning, the agent, meaning the control strategy, can be trained from both current and recorded system data. This requires measurements for the manipulated variable changes that have been carried out, for the (resulting) changes to the system state over time, and for the variables necessary for calculating the reward.
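The agent/reward loop described above can be sketched with tabular Q-learning on a toy room model. Everything here is an illustrative assumption, not Siemens' controller: the one-degree-per-step dynamics, the 20 to 22 °C comfort band, and the reward of +1 inside the band and -1 outside it.

```python
import random

ACTIONS = [0, 1]            # 0 = heating off, 1 = heating on
STATES = range(15, 26)      # room temperature in °C, discretized to integers

def step(temp, action, rng):
    """Toy dynamics: heating warms the room by 1 °C, otherwise it cools toward 15 °C."""
    if action == 1:
        temp = min(25, temp + 1)
    elif rng.random() < 0.7:
        temp = max(15, temp - 1)
    reward = 1.0 if 20 <= temp <= 22 else -1.0   # comfort band around 21 °C
    return temp, reward

def train(episodes=500, alpha=0.2, gamma=0.9, epsilon=0.1, seed=3):
    rng = random.Random(3 if seed is None else seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        temp = rng.choice(list(STATES))
        for _ in range(50):
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(temp, x)])
            nxt, r = step(temp, a, rng)
            # Standard Q-learning temporal-difference update.
            q[(temp, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS)
                                     - q[(temp, a)])
            temp = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

After training, the greedy policy heats when the room is cold and idles when it is warm, learned purely from rewards rather than from a preset temperature profile, which is the contrast the article draws with PI/PID control.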

Read more at Siemens Ingenuity

Quality prediction of ultrasonically welded joints using a hybrid machine learning model

📅 Date:

✍️ Authors: Patrick G. Mongan, Eoin P. Hinchy, Noel P. O'Dowd, Conor T. McCarthy

🔖 Topics: Machine Learning, Genetic Algorithm, Welding

🏢 Organizations: Confirm Smart Manufacturing Research Centre, University of Limerick


Ultrasonic metal welding has advantages over other joining technologies due to its low energy consumption, rapid cycle time and the ease of process automation. The ultrasonic welding (USW) process is very sensitive to process parameters, and thus can be difficult to consistently produce strong joints. There is significant interest from the manufacturing community to understand these variable interactions. Machine learning is one such method which can be exploited to better understand the complex interactions of USW input parameters. In this paper, the lap shear strength (LSS) of USW Al 5754 joints is investigated using an off-the-shelf Branson Ultraweld L20. Firstly, a 3³ full factorial parametric study using ANOVA is carried out to examine the effects of three USW input parameters (weld energy, vibration amplitude and clamping pressure) on LSS. Following this, a high-fidelity predictive hybrid GA-ANN model is trained using the input parameters and the addition of process data recorded during welding (peak power).
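The genetic-algorithm half of such a hybrid GA-ANN workflow can be sketched as follows: a GA searches weld settings against a fitness function which, in the paper, would be the trained ANN's strength prediction. Here an invented quadratic surrogate with an assumed optimum at (1500 J, 45 µm, 3 bar) stands in for that ANN; none of these numbers come from the study.

```python
import random

# Invented stand-in for a trained ANN predicting lap shear strength.
def predicted_strength(energy_j, amplitude_um, pressure_bar):
    return 5000 - 0.002 * (energy_j - 1500) ** 2 \
                - 2.0 * (amplitude_um - 45) ** 2 \
                - 100.0 * (pressure_bar - 3.0) ** 2

BOUNDS = [(500, 2500), (20, 70), (1.0, 5.0)]  # energy, amplitude, pressure

def genetic_search(pop_size=40, generations=60, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: predicted_strength(*ind), reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # averaging crossover
            for i, (lo, hi) in enumerate(BOUNDS):            # Gaussian mutation
                if rng.random() < 0.2:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = parents + children                             # elitist replacement
    return max(pop, key=lambda ind: predicted_strength(*ind))

best_energy, best_amplitude, best_pressure = genetic_search()
```

The appeal of this pairing is that every fitness evaluation is a cheap model query, so the GA can explore thousands of parameter combinations without making a single physical weld.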

Read more at ScienceDirect

Machine learning predictions of superalloy microstructure

📅 Date:

✍️ Authors: Patrick L Taylor, Gareth Conduit

🔖 Topics: Machine Learning, Materials Science

🏢 Organizations: University of Cambridge, Intellegens


Gaussian process regression machine learning with a physically informed kernel is used to model the phase compositions of nickel-base superalloys. The model delivers good predictions for laboratory and commercial superalloys. Additionally, the model predicts the phase composition with uncertainties, unlike the traditional CALPHAD method.
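The uncertainty-aware prediction highlighted above is the defining feature of Gaussian process regression: the posterior variance grows away from the training data. The toy sketch below uses a plain RBF kernel rather than the paper's physically informed one, and the composition/phase-fraction points are invented for illustration.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(mat, vec):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(vec)
    m = [row[:] + [v] for row, v in zip(mat, vec)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gp_predict(train_x, train_y, x, noise=1e-6):
    """Posterior mean and standard deviation of a GP at query point x."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
    alpha = solve(K, train_y)
    k_star = [rbf(x, a) for a in train_x]
    mean = sum(w * k for w, k in zip(alpha, k_star))
    v = solve(K, k_star)
    var = rbf(x, x) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, math.sqrt(max(var, 0.0))

# Invented data: gamma-prime phase fraction vs. alloying content (arbitrary units).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.10, 0.30, 0.55, 0.60]
mean_near, std_near = gp_predict(xs, ys, 1.5)   # inside the data range
mean_far, std_far = gp_predict(xs, ys, 8.0)     # far from any training point
```

A CALPHAD calculation returns a single number; the GP additionally returns `std`, so a designer can see that the extrapolated prediction at 8.0 is far less trustworthy than the interpolated one at 1.5.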

Read more at ScienceDirect

Hybrid machine learning-enabled adaptive welding speed control

📅 Date:

✍️ Authors: Joseph Kershaw, Rui Yu, YuMing Zhang, Peng Wang

🔖 Topics: Machine Learning, Robot Welding, Convolutional Neural Network

🏢 Organizations: University of Kentucky


This research presents a preliminary study on developing appropriate Machine Learning (ML) techniques for real-time welding quality prediction and adaptive welding speed adjustment for GTAW welding at a constant current. In order to collect the data needed to train the hybrid ML models, two cameras are applied to monitor the welding process, with one camera (available in practical robotic welding) recording the top-side weld pool dynamics and a second camera (unavailable in practical robotic welding, but applicable for training purpose) recording the back-side bead formation. Given these two data sets, correlations can be discovered through a convolutional neural network (CNN) that is good at image characterization. With the CNN, top-side weld pool images can be analyzed to predict the back-side bead width during active welding control.

Read more at ScienceDirect

Fabs Drive Deeper Into Machine Learning

📅 Date:

✍️ Author: Anne Meixner

🔖 Topics: Machine Learning, Machine Vision, Defect Detection, Convolutional Neural Network

🏭 Vertical: Semiconductor

🏢 Organizations: GlobalFoundries, KLA, SkyWater Technology, Onto Innovation, CyberOptics, Hitachi, Synopsys


For the past couple of decades, semiconductor manufacturers have relied on computer vision, one of the earliest applications of machine learning in semiconductor manufacturing. Referred to as automated optical inspection (AOI), these systems use signal processing algorithms to identify macro and micro physical deformations.

Defect detection provides a feedback loop for fab processing steps. Wafer test results produce bin maps (good or bad die), which also can be analyzed as images. Their data granularity is significantly coarser than the pixelated data from an optical inspection tool. Yet wafer test maps can reveal patterns, such as the splatters generated during lithography and the scratches produced by handling, that AOI systems can miss. Thus, wafer test maps give useful feedback to the fab.

Read more at Semiconductor Engineering

Adoption of machine learning technology for failure prediction in industrial maintenance: A systematic review

📅 Date:

✍️ Authors: Joerg Leukel, Julian Gonzalez, Martin Riekert

🔖 Topics: Machine Learning, Predictive Maintenance

🏢 Organizations: University of Hohenheim


Failure prediction is the task of forecasting whether a material system of interest will fail at a specific point of time in the future. This task attains significance for strategies of industrial maintenance, such as predictive maintenance. For solving the prediction task, machine learning (ML) technology is increasingly being used, and the literature provides evidence for the effectiveness of ML-based prediction models. However, the state of recent research and the lessons learned are not well documented. Therefore, the objective of this review is to assess the adoption of ML technology for failure prediction in industrial maintenance and synthesize the reported results. We conducted a systematic search for experimental studies in peer-reviewed outlets published from 2012 to 2020. We screened a total of 1,024 articles, of which 34 met the inclusion criteria.

Read more at ScienceDirect

Accelerating the Design of Automotive Catalyst Products Using Machine Learning

📅 Date:

✍️ Authors: Tom Whitehead, Flora Chen, Christopher Daly, Gareth Conduit

🔖 Topics: Generative Design, Machine Learning

🏭 Vertical: Automotive

🏢 Organizations: Intellegens, Johnson Matthey


The design of catalyst products to reduce harmful emissions is currently an intensive process of expert-driven discovery, taking several years to develop a product. Machine learning can accelerate this timescale, leveraging historic experimental data from related products to guide which new formulations and experiments will enable a project to most directly reach its targets. We used machine learning to accurately model 16 key performance targets for catalyst products, enabling detailed understanding of the factors governing catalyst performance and realistic suggestions of future experiments to rapidly develop more effective products. The proposed formulations are currently undergoing experimental validation.

Read more at Ingenta Connect

Getting Industrial About The Hybrid Computing And AI Revolution

📅 Date:

✍️ Author: Jeffrey Burt

🔖 Topics: IIoT, Machine Learning, Reinforcement Learning

🏭 Vertical: Petroleum and Coal

🏢 Organizations: Beyond Limits


Beyond Limits is applying such techniques as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. It also uses reservoir simulations and novel deep convolutional neural networks. The agent takes in the data and learns from the various iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from previous iterations, the system can more quickly whittle the choices down to the one best answer.

Read more at The Next Platform

Real-World ML with Coral: Manufacturing

📅 Date:

✍️ Author: Michael Brooks

🔖 Topics: Edge Computing, AI, Machine Learning, Computer Vision, Convolutional Neural Network, TensorFlow, Worker Safety

🏢 Organizations: Coral


For over 3 years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high-performance products. We've released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region.
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.
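The "bounding box collisions" step in the Worker Safety demo reduces to an axis-aligned overlap test between the detector's person box and a configured unsafe region. The sketch below is a generic illustration of that check; the box format and the example coordinates are assumptions, not values from the Coral code.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x_min, y_min, x_max, y_max) tuples.
    Two boxes are disjoint iff one lies entirely to the left of, right of,
    above, or below the other."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

# Hypothetical unsafe region in pixel coordinates, e.g. the area around a press.
UNSAFE_REGION = (200, 0, 400, 300)

person_in_danger = boxes_overlap((150, 50, 250, 280), UNSAFE_REGION)  # overlaps
person_safe = boxes_overlap((10, 50, 120, 280), UNSAFE_REGION)        # clear
```

In the demo this check would run per frame on every person box the SSDLite MobileDet detector emits, raising an alert whenever it returns true.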

Read more at TensorFlow Blog

The Journey of Additive Manufacturing and Artificial Intelligence

The Machine Economy is Here: Powering a Connected World

📅 Date:

✍️ Author: Megan Doyle

🔖 Topics: IIoT, Machine Learning, Blockchain

🏢 Organizations: Flexon Technology, Allied Vision


In combination with the real-time data produced by IoT, blockchain and ML applications are disrupting B2B companies across various industries, from healthcare to manufacturing. Together, these three fundamental technologies create an intelligent system where connected devices can "talk" to one another. However, machines are still unable to conduct transactions with each other.

This is where distributed ledger technology (DLT) and blockchain come into play. Cryptocurrencies and smart contracts (self-executing contracts between buyers and sellers on a decentralized network) make it possible for autonomous machines to transact with one another on a blockchain.

Devices participating in M2M transactions can be programmed to make purchases based on individual or business needs. Human error was a cause for concern in the past; machine learning algorithms provide reliable and trusted data that continue to learn and improve โ€” becoming smarter each day.

Read more at IoT For All

How to integrate AI into engineering

๐Ÿ“… Date:

โœ๏ธ Author: Jos Martin

๐Ÿ”– Topics: machine learning

๐Ÿข Organizations: MathWorks


Most of the focus on AI is on the model, which drives engineers to dive quickly into the modelling aspect of AI. After a few starter projects, engineers learn that AI is not just modelling, but rather a complete set of steps that includes data preparation, modelling, simulation and test, and deployment.

Read more at The Engineer

Visual Inspection AI: a purpose-built solution for faster, more accurate quality control

๐Ÿ“… Date:

โœ๏ธ Authors: Mandeep Wariach, Thomas Reinbacher

๐Ÿ”– Topics: cloud computing, computer vision, machine learning, quality assurance

๐Ÿข Organizations: Google


The Google Cloud Visual Inspection AI solution automates visual inspection tasks using a set of AI and computer vision technologies that enable manufacturers to transform quality control processes by automatically detecting product defects.

We built Visual Inspection AI to meet the needs of quality, test, manufacturing, and process engineers who are experts in their domain, but not in AI. By combining ease of use with a focus on priority use cases, customers are realizing significant benefits compared to general-purpose machine learning (ML) approaches.

Read more at Google Cloud Blog

Machine Learning Keeps Rolling Bearings on the Move

๐Ÿ“… Date:

โœ๏ธ Author: Rehana Begg

๐Ÿ”– Topics: machine learning, vibration analysis, predictive maintenance, bearing

๐Ÿข Organizations: Osaka University


Rolling bearings are essential components in automated machinery with rotating elements. They come in many shapes and sizes, but are essentially designed to carry a load while minimizing friction. In general, the design consists of two rings separated by rolling elements (balls or rollers). The rings can rotate relative to each other with very little friction.

The ability to accurately predict the remaining useful life of the bearings under defect progression could reduce unnecessary maintenance procedures and prematurely discarded parts without risking breakdown, reported scientists from the Institute of Scientific and Industrial Research and NTN Next Generation Research Alliance Laboratories at Osaka University.

The scientists have developed a machine learning method that combines convolutional neural networks and Bayesian hierarchical modeling to predict the remaining useful life of rolling bearings. Their approach is based on the measured vibration spectrum.

Read more at Machine Design

Tree Model Quantization for Embedded Machine Learning Applications

๐Ÿ“… Date:

โœ๏ธ Author: Leslie J. Schradin

๐Ÿ”– Topics: edge computing, machine learning

๐Ÿข Organizations: Qeexo


Compressed tree-based models are useful models to consider for embedded machine learning applications, in particular with the compression technique: quantization. Quantization can compress models by significant amounts with a trade-off of slight loss in model fidelity, allowing more room on the device for other programs.
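Uniform quantization of a model's float parameters (for tree models, the split thresholds and leaf values) can be sketched in a few lines. The 8-bit affine scheme and the sample leaf values below are illustrative assumptions, not Qeexo's exact method:

```python
def quantize(values, bits=8):
    """Map floats onto integers 0..2^bits - 1 with a shared scale and offset."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid a zero scale for constants
    return [round((v - lo) / scale) for v in values], scale, lo

def dequantize(q, scale, lo):
    return [x * scale + lo for x in q]

leaf_values = [0.125, -0.7, 0.33, 1.05, -0.02]   # example tree leaf outputs
q, scale, lo = quantize(leaf_values)
recovered = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(leaf_values, recovered))
print(max_err <= scale / 2)  # True: error bounded by half a quantization step
```

Stored as single bytes instead of 8-byte doubles, the parameters shrink roughly 8x, which is where the extra room on the device comes from.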

Read more at Qeexo

The realities of developing embedded neural networks

๐Ÿ“… Date:

โœ๏ธ Author: Tony King-Smith

๐Ÿ”– Topics: edge computing, machine learning, AI

๐Ÿข Organizations: AImotive


With any embedded software destined for deployment in volume production, an enormous amount of effort goes into the code once the implementation of its core functionality has been completed and verified. This optimization phase is all about minimizing memory, CPU and other resources needed so that as much as possible of the software functionality is preserved, while the resources needed to execute it are reduced to the absolute minimum possible.

This process of creating embedded software from lab-based algorithms enables production engineers to cost-engineer software functionality into a mass-production ready form, requiring far cheaper, less capable chips and hardware than the massive compute datacenter used to develop it. However, it usually requires the functionality to be frozen from the beginning, with code modifications only done to improve the way the algorithms themselves are executed. For most software, that is fine: indeed, it enables a rigorous verification methodology to be used to ensure the embedding process retains all the functionality needed.

However, when embedding NN-based AI algorithms, that can be a major problem. Why? Because by freezing the functionality from the beginning, you are removing one of the main ways in which the execution can be optimized.

Read more at Embedded

Google Cloud and Seagate: Transforming hard-disk drive maintenance with predictive ML

๐Ÿ“… Date:

โœ๏ธ Authors: Nitin Aggarwal, Rostam Dinyari

๐Ÿ”– Topics: machine learning, predictive maintenance

๐Ÿญ Vertical: Computer and Electronic

๐Ÿข Organizations: Google, Seagate


At Google Cloud, we know first-hand how critical it is to manage HDDs in operations and preemptively identify potential failures. We are responsible for running some of the largest data centers in the world; any misses in identifying these failures at the right time can potentially cause serious outages across our many products and services. In the past, when a disk was flagged for a problem, the main option was to repair the problem on site using software. But this procedure was expensive and time-consuming. It required draining the data from the drive, isolating the drive, running diagnostics, and then re-introducing it to traffic.

That's why we teamed up with Seagate, our HDD original equipment manufacturer (OEM) partner for Google's data centers, to find a way to predict frequent HDD problems. Together, we developed a machine learning (ML) system, built on top of Google Cloud, to forecast the probability of a recurring failing disk: a disk that fails or has experienced three or more problems in 30 days.
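The target the article describes (a drive that fails or logs three or more problems in 30 days) translates naturally into a labeling function for training data. The rolling-window reading and the sample drives below are our own assumptions:

```python
from datetime import date, timedelta

def recurring_failure(problem_dates, window_days=30, threshold=3):
    """True if any rolling window of `window_days` holds `threshold`+ problems."""
    events = sorted(problem_dates)
    for i, start in enumerate(events):
        window_end = start + timedelta(days=window_days)
        if sum(1 for d in events[i:] if d < window_end) >= threshold:
            return True
    return False

drive_a = [date(2021, 5, 1), date(2021, 5, 10), date(2021, 5, 25)]  # 3 in 25 days
drive_b = [date(2021, 1, 1), date(2021, 4, 1), date(2021, 8, 1)]    # spread out
print(recurring_failure(drive_a), recurring_failure(drive_b))  # True False
```

A classifier trained on drive telemetry would then learn to predict this label ahead of time.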

Read more at Google Cloud Blog

Ford's Ever-Smarter Robots Are Speeding Up the Assembly Line

๐Ÿ“… Date:

โœ๏ธ Author: Will Knight

๐Ÿ”– Topics: AI, machine learning, robotics

๐Ÿญ Vertical: Automotive

๐Ÿข Organizations: Ford, Symbio Robotics


At a Ford Transmission Plant in Livonia, Michigan, the station where robots help assemble torque converters now includes a system that uses AI to learn from previous attempts how to wiggle the pieces into place most efficiently. Inside a large safety cage, robot arms wheel around grasping circular pieces of metal, each about the diameter of a dinner plate, from a conveyor and slot them together.

The technology allows this part of the assembly line to run 15 percent faster, a significant improvement in automotive manufacturing where thin profit margins depend heavily on manufacturing efficiencies.

Read more at WIRED

Start-ups Powering New Era of Industrial Robotics

๐Ÿ“… Date:

โœ๏ธ Author: James Falkoff

๐Ÿ”– Topics: robotics, automated guided vehicle, machine learning

๐Ÿญ Vertical: Machinery

๐Ÿข Organizations: Ready Robotics, ArtiMinds, Realtime Robotics, RIOS, Vicarious


Much of the bottleneck to achieving automation in manufacturing relates to limitations in the current programming model of industrial robotics. Programming is done in languages proprietary to each robotic hardware OEM โ€“ languages โ€œstraight from the 80sโ€ as one industry executive put it.

There are a limited number of specialists who are proficient in these languages. Given the rarity of the expertise involved, as well as the time it takes to program a robot, robotics application development typically costs three times as much as the hardware for a given installation.

Read more at Robotics Business Review

Multi-Task Robotic Reinforcement Learning at Scale

๐Ÿ“… Date:

โœ๏ธ Authors: Karol Hausman, Yevgen Chebotar

๐Ÿ”– Topics: reinforcement learning, robotics, AI, machine learning

๐Ÿข Organizations: Google


For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method where the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.

Read more at Google AI Blog

Intelligent edge management: why AI and ML are key players

๐Ÿ“… Date:

โœ๏ธ Authors: Fetahi Wuhib, Mbarka Soualhia, Carla Mouradian, Wubin Li

๐Ÿ”– Topics: AI, machine learning, edge computing, anomaly detection

๐Ÿข Organizations: Ericsson


What will the future of network edge management look like? We explain how artificial intelligence and machine learning technologies are crucial for intelligent edge computing and the management of future-proof networks. What's required, and what are the building blocks needed to make it happen?

Read more at Ericsson

Using Machine Learning to identify operational modes in rotating equipment

๐Ÿ“… Date:

โœ๏ธ Author: Frederik Wartenberg

๐Ÿ”– Topics: anomaly detection, vibration analysis, machine learning

๐Ÿข Organizations: Viking Analytics


Vibration monitoring is key to performing condition-monitoring-based maintenance in rotating equipment such as engines, compressors, turbines, pumps, generators, blowers, and gearboxes. However, periodic route-based vibration monitoring programs are not enough to prevent breakdowns, as they normally offer a narrower view of the machines' conditions.

Adding Machine Learning algorithms to this process makes it scalable, as it allows the analysis of historic data from equipment. One of the benefits is being able to identify operational modes and help maintenance teams to understand if the machine is operating in normal or abnormal conditions.
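Identifying operational modes typically comes down to clustering features extracted from the vibration signal. A minimal k-means sketch on hypothetical (RMS amplitude, dominant frequency) features; the feature values, the deterministic initialization, and the two-mode setup are all illustrative assumptions, not Viking Analytics' method:

```python
def kmeans(points, k=2, iters=20):
    pts = sorted(points)
    # Deterministic init: spread the starting centers across the sorted data.
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(v) / len(v) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Hypothetical (RMS amplitude, dominant frequency in Hz) per vibration snapshot:
idle = [(0.10 + 0.01 * i, 10 + i) for i in range(10)]
loaded = [(2.00 + 0.01 * i, 50 + i) for i in range(10)]
centers = sorted(kmeans(idle + loaded))
print(centers[0][0] < 1.0 < centers[1][0])  # True: two distinct operational modes
```

Once the cluster centers are learned from historic data, new vibration snapshots far from every known mode can be flagged as abnormal conditions.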

Read more at Viking Analytics Blog

Amazon's robot arms break ground in safety, technology

๐Ÿ“… Date:

โœ๏ธ Author: Alan S. Brown

๐Ÿ”– Topics: AI, machine learning, robotics, palletizer, robotic arm, worker safety

๐Ÿข Organizations: Amazon


Robin, one of the most complex stationary robot arm systems Amazon has ever built, brings many core technologies to new levels and acts as a glimpse into the possibilities of combining vision, package manipulation and machine learning, said Will Harris, principal product manager of the Robin program.

Those technologies can be seen when Robin goes to work. As soft mailers and boxes move down the conveyor line, Robin must break the jumble down into individual items. This is called image segmentation. People do it automatically, but for a long time, robots only saw a solid blob of pixels.

Read more at Amazon Science

AI In Inspection, Metrology, And Test

๐Ÿ“… Date:

โœ๏ธ Authors: Susan Rambo, Ed Sperling

๐Ÿ”– Topics: AI, machine learning, quality assurance, metrology, nondestructive test

๐Ÿญ Vertical: Semiconductor

๐Ÿข Organizations: CyberOptics, Lam Research, Hitachi, FormFactor, NuFlare, Advantest, PDF Solutions, eBeam Initiative, KLA, proteanTecs, Fraunhofer IIS


"The human eye can see things that no amount of machine learning can," said Subodh Kulkarni, CEO of CyberOptics. "That's where some of the sophistication is starting to happen now. Our current systems use a primitive kind of AI technology. Once you look at the image, you can see a problem. And our AI machine doesn't see that. But then you go to the deep learning kind of algorithms, where you have very serious Ph.D.-level people programming one algorithm for a week, and they can detect all those things. But it takes them a week to program those things, which today is not practical."

That's beginning to change. "We're seeing faster deep-learning algorithms that can be more easily programmed," Kulkarni said. "But the defects also are getting harder to catch by a machine, so there is still a gap. The biggest bang for the buck is not going to come from improving cameras or projectors or any of the equipment that we use to generate optical images. It's going to be interpreting optical images."

Read more at Semiconductor Engineering

How To Measure ML Model Accuracy

๐Ÿ“… Date:

โœ๏ธ Author: Bryon Moyer

๐Ÿ”– Topics: machine learning

๐Ÿข Organizations: Ansys, Brainome, Cadence, Flex Logix, Synopsys, Xilinx


Machine learning (ML) is about making predictions about new data based on old data. The quality of any machine-learning algorithm is ultimately determined by the quality of those predictions.

However, there is no one universal way to measure that quality across all ML applications, and that has broad implications for the value and usefulness of machine learning.

Read more at Semiconductor Engineering

Go beyond machine learning to optimize manufacturing operations

๐Ÿ“… Date:

โœ๏ธ Author: Andrew Silberfarb

๐Ÿ”– Topics: machine learning

๐Ÿข Organizations: SRI International


Machine learning depends on vast amounts of data to make inferences. However, sometimes the amount of data needed by machine-learning algorithms is simply not available. SRI International has developed a system called Deep Adaptive Semantic Logic (DASL) that uses adaptive semantic reasoning to fill in the data gaps. DASL integrates bottom-up data-driven modeling with top-down theoretical reasoning in a symbiotic union of innovative machine learning and knowledge guided inference. The system brings experts and data together to make better, more informed decisions.

Read more at Automation Alley

Adversarial training reduces safety of neural networks in robots

๐Ÿ“… Date:

โœ๏ธ Author: @BenDee983

๐Ÿ”– Topics: AI, robotics, machine learning


A more fundamental problem, also confirmed by Lechner and his coauthors, is the lack of causality in machine learning systems. As long as neural networks focus on learning superficial statistical patterns in data, they will remain vulnerable to different forms of adversarial attacks. Learning causal representations might be the key to protecting neural networks against adversarial attacks. But learning causal representations itself is a major challenge and scientists are still trying to figure out how to solve it.

Read more at VentureBeat

What Walmart learned from its machine learning deployment

๐Ÿ“… Date:

โœ๏ธ Author: Katie Malone

๐Ÿ”– Topics: cloud computing, machine learning

๐Ÿข Organizations: Walmart


As more businesses turn to automation to realize business value, retailโ€™s wide variety of ML use cases can provide insights into how to overcome challenges associated with the technology. The goal should be trying to solve a problem by using ML as a tool to get there, Kamdar said.

For example, Walmart uses a ML model to optimize the timing and pricing of markdowns, and to examine real estate data to find places to cut costs, according to executives on an earnings call in February.

Read more at Supply Chain Dive

AI project to 'pandemic-proof' NHS supply chain

๐Ÿ“… Date:

๐Ÿ”– Topics: natural language processing, machine learning

๐Ÿข Organizations: Vamstar


With the ability to analyse NHS and global procurement data from previous supply contracts, the platform will aim to allow NHS buyers to evaluate credibility and capability of suppliers to fulfil their order. Each supplier would have a real-time โ€˜risk ratingโ€™ with information on the goods and services they supply.

Researchers at Sheffield University's Information School are said to be developing Natural Language Processing (NLP) methods for the automated reading and extraction of data from large amounts of contract tender data held by the NHS and other European healthcare providers.

Read more at The Engineer

How Machine Learning Techniques Can Help Engineers Design Better Products

๐Ÿ“… Date:

๐Ÿ”– Topics: machine learning, generative design

๐Ÿข Organizations: Altair


By leveraging field predictive ML models engineers can explore more options without the use of a solver when designing different components and parts, saving time and resources. This ultimately produces higher quality results that can then be used to make more informed decisions throughout the design process.

Read more at Altair Engineering

Introducing Amazon SageMaker Reinforcement Learning Components for open-source Kubeflow pipelines

๐Ÿ“… Date:

โœ๏ธ Authors: Alex Chung, Kyle Saltmarsh, Leonard O'Sullivan, Matthew Rose, Nicholas Therkelsen-Terry, Nicholas Thomson, Ragha Prasad, Sahika Genc,

๐Ÿ”– Topics: AI, machine learning, robotics

๐Ÿข Organizations: AWS, Max Kelsen, Universal Robots, Woodside Energy


Woodside Energy uses AWS RoboMaker with Amazon SageMaker Kubeflow operators to train, tune, and deploy reinforcement learning agents to their robots to perform manipulation tasks that are repetitive or dangerous.

Read more at AWS Blog

Leveraging AI and Statistical Methods to Improve Flame Spray Pyrolysis

๐Ÿ“… Date:

โœ๏ธ Author: Stephen J. Mraz

๐Ÿ”– Topics: AI, machine learning, materials science

๐Ÿญ Vertical: Chemical

๐Ÿข Organizations: Argonne National Laboratory


Flame spray pyrolysis has long been used to make small particles that can be used as paint pigments. Now, researchers at Argonne National Laboratory are refining the process to make smaller, nano-sized particles of various materials that can make nano-powders for low-cobalt battery cathodes, solid state electrolytes and platinum/titanium dioxide catalysts for turning biomass into fuel.

Read more at Machine Design

Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all

๐Ÿ“… Date:

โœ๏ธ Author: @TiernanRayTech

๐Ÿ”– Topics: AI, machine learning, robotics, reinforcement learning

๐Ÿข Organizations: Google


With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.

Read more at ZDNet

AWS Announces General Availability of Amazon Lookout for Vision

๐Ÿ“… Date:

๐Ÿ”– Topics: cloud computing, computer vision, machine learning, quality assurance

๐Ÿข Organizations: AWS, Basler, Dafgards, General Electric


AWS announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called "few-shot learning," Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers. Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service.
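One simple way to get anomaly detection out of a handful of defect-free examples, in the spirit of (though far cruder than) Lookout for Vision's few-shot approach, is to score each new image by its distance to the nearest "normal" feature vectors. The embeddings and part names below are invented for illustration:

```python
def anomaly_score(sample, baseline, k=3):
    """Mean distance to the k nearest normal examples; larger = more anomalous."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(sample, ref)) ** 0.5
        for ref in baseline
    )
    return sum(dists[:k]) / k

# Hypothetical feature vectors for 30 defect-free baseline images:
baseline = [(1.0 + 0.01 * i, 2.0 - 0.01 * i) for i in range(30)]

ok_part = (1.1, 1.9)      # close to the baseline distribution
dented_part = (5.0, 5.0)  # unlike anything seen before
print(anomaly_score(ok_part, baseline) < anomaly_score(dented_part, baseline))
```

Thresholding such a score separates in-distribution parts from defects without ever needing labeled examples of the defects themselves.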

Read more at Business Wire

Rearranging the Visual World

๐Ÿ“… Date:

โœ๏ธ Authors: Andy Zeng, Pete Florence

๐Ÿ”– Topics: AI, machine learning, robotics

๐Ÿข Organizations: Google


Transporter Nets use a novel approach to 3D spatial understanding that avoids reliance on object-centric representations, making them general for vision-based manipulation but far more sample efficient than benchmarked end-to-end alternatives. As a consequence, they are fast and practical to train on real robots. We are also releasing an accompanying open-source implementation of Transporter Nets together with Ravens, our new simulated benchmark suite of ten vision-based manipulation tasks.

Read more at Google AI Blog

Artificial Intelligence: Driving Digital Innovation and Industry 4.0

๐Ÿ“… Date:

โœ๏ธ Author: @ralph_ohr

๐Ÿ”– Topics: AI, machine learning

๐Ÿข Organizations: Siemens


Intelligent AI solutions can analyze high volumes of data generated by a factory to identify trends and patterns which can then be used to make manufacturing processes more efficient and reduce their energy consumption. Employing Digital Twin-enabled representations of a product and the associated process, AI is able to recognize whether the workpiece being manufactured meets quality requirements. This is how plants are constantly adapting to new circumstances and undergoing optimization with no need for operator input. New technologies are emerging in this application area, such as Reinforcement Learning, a topic that has not been deployed on a broad scale up to now. It can be used to automatically ascertain correlations between production parameters, product quality and process performance by learning through 'trial-and-error', and thereby dynamically tuning the parameter values to optimize the overall process.

Read more at Siemens Ingenuity

Edge-Inference Architectures Proliferate

๐Ÿ“… Date:

โœ๏ธ Author: Bryon Moyer

๐Ÿ”– Topics: AI, machine learning, edge computing

๐Ÿญ Vertical: Semiconductor

๐Ÿข Organizations: Cadence, Hailo, Google, Flex Logix, BrainChip, Synopsys, GrAI Matter, Deep Vision, Maxim Integrated


What makes one AI system better than another depends on a lot of different factors, including some that aren't entirely clear.

The new offerings exhibit a wide range of structure, technology, and optimization goals. All must be gentle on power, but some target wired devices while others target battery-powered devices, giving different power/performance targets. While no single architecture is expected to solve every problem, the industry is in a phase of proliferation, not consolidation. It will be a while before the dust settles on the preferred architectures.

Read more at Semiconductor Engineering

Pushing The Frontiers Of Manufacturing AI At Seagate

๐Ÿ“… Date:

โœ๏ธ Author: Tom Davenport

๐Ÿ”– Topics: AI, machine learning, predictive maintenance, quality assurance

๐Ÿญ Vertical: Computer and Electronic

๐Ÿข Organizations: Seagate


Big data, analytics and AI are widely used in industries like financial services and e-commerce, but are less likely to be found in manufacturing companies. With some exceptions like predictive maintenance, few manufacturing firms have marshaled the amounts of data and analytical talent to aggressively apply analytics and AI to key processes.

Seagate Technology, an over $10B manufacturer of data storage and management solutions, is a prominent counter-example to this trend. It has massive amounts of sensor data in its factories and has been using it extensively over the last five years to ensure and improve the quality and efficiency of its manufacturing processes.

Read more at Forbes

Building effective IoT applications with tinyML and automated machine learning

๐Ÿ“… Date:

โœ๏ธ Authors: Rajen Bhatt, Tina Shyuan

๐Ÿ”– Topics: IIoT, machine learning

๐Ÿข Organizations: Qeexo


The convergence of IoT devices and ML algorithms enables a wide range of smart applications and enhanced user experiences, which are made possible by low-power, low-latency, and lightweight machine learning inference, i.e., tinyML.

Read more at Embedded

Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey

๐Ÿ“… Date:

โœ๏ธ Authors: Nikolas Zolas, Zachary Kroff, Erik Brynjolfsson, Kristina McElheran, David N. Beede, Cathy Buffington, Nathan Goldschlag, Lucia Foster, Emin Dinlersoz

๐Ÿ”– Topics: AI, augmented reality, cloud computing, machine learning, Radio-frequency identification, robotics


While robots are usually singled out as a key technology in studies of automation, the overall diffusion of robotics use and testing is very low across firms in the U.S. The use rate is only 1.3% and the testing rate is 0.3%. These levels correspond relatively closely with patterns found in the robotics expenditure question in the 2018 ASM. Robots are primarily concentrated in large, manufacturing firms. The distribution of robots among firms is highly skewed, and the skewness in favor of larger firms can have a disproportionate effect on the economy that is otherwise not obvious from the relatively low overall diffusion rate of robots. The least-used technologies are RFID (1.1%), Augmented Reality (0.8%), and Automated Vehicles (0.8%). Looking at the pairwise adoption of these technologies in Table 14, we find that use of Machine Learning and Machine Vision are most coincident. We find that use of Automated Guided Vehicles is closely associated with use of Augmented Reality, RFID, and Machine Vision.

Read more at National Bureau of Economic Research

How Instacart fixed its A.I. and keeps up with the coronavirus pandemic

๐Ÿ“… Date:

โœ๏ธ Author: @JonathanVanian

๐Ÿ”– Topics: COVID-19, demand planning, machine learning

๐Ÿข Organizations: Instacart


Like many companies, online grocery delivery service Instacart has spent the past few months overhauling its machine-learning models because the coronavirus pandemic has drastically changed how customers behave.

Starting in mid-March, Instacart's all-important technology for predicting whether certain products would be available at specific stores became increasingly inaccurate. The accuracy of a metric used to evaluate how many items are found at a store dropped to 61% from 93%, tipping off the Instacart engineers that they needed to re-train their machine learning model that predicts an item's availability at a store. After all, customers could get annoyed being told one thing (that the item they wanted was available) when in fact it wasn't, resulting in products never being delivered. 'A shock to the system' is how Instacart's machine learning director Sharath Rao described the problem to Fortune.

Read more at Fortune (Paid)