Convolutional Neural Network (CNN)

Assembly Line

CircularNet: Reducing waste with Machine Learning

📅 Date:

✍️ Authors: Robert Little, Umair Sabir

🔖 Topics: Sustainability, Machine Learning, Convolutional Neural Network

🏢 Organizations: Google


The facilities where our waste and recyclables are processed are called "Material Recovery Facilities" (MRFs). Each MRF processes tens of thousands of pounds of our societal "waste" every day, separating valuable recyclable materials like metals and plastics from non-recyclable materials. A key inefficiency within the current waste capture and sorting process is the inability to identify and segregate waste into high-quality material streams. The accuracy of the sorting directly determines the quality of the recycled material; for high-quality, commercially viable recycling, the contamination levels need to be low. Even though the MRFs use various technologies alongside manual labor to separate materials into distinct and clean streams, the exceptionally cluttered and contaminated nature of the waste stream makes automated waste detection challenging to achieve, and the recycling rates and the profit margins stay at undesirably low levels.

Enter what we call "CircularNet", a set of models that lowers barriers to AI/ML tech for waste identification and all the benefits this new level of transparency can offer. Our goal with CircularNet is to develop a robust and data-efficient model for waste/recyclables detection, which can support the way we identify, sort, manage, and recycle materials across the waste management ecosystem.

Read more at TensorFlow Blog

Improving Yield With Machine Learning

📅 Date:

✍️ Author: Laura Peters

🔖 Topics: Machine Learning, Convolutional Neural Network, ResNet

🏭 Vertical: Semiconductor

🏢 Organizations: KLA, Synopsys, CyberOptics, Macronix


Machine learning is becoming increasingly valuable in semiconductor manufacturing, where it is being used to improve yield and throughput.

Synopsys engineers recently found that a decision tree deep learning method can classify 98% of defects and features with 60X faster retraining time than traditional CNNs. The decision tree utilizes 8 CNNs and ResNet to automatically classify 12 defect types using images from SEM and optical tools.
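
The article does not detail Synopsys' implementation, but the general pattern of routing an image through a coarse classifier and then a family-specific CNN can be sketched as below. The model sizes, the three defect families, and the 3 x 4 class split are illustrative assumptions, not Synopsys' actual configuration.

```python
# Hedged sketch (not the Synopsys implementation): route each SEM image
# through a coarse classifier, then hand it to a specialised CNN for the
# fine-grained defect label. Class groupings are hypothetical.
import numpy as np
import tensorflow as tf

def build_small_cnn(num_classes, input_shape=(128, 128, 1)):
    """Generic small CNN used for both the coarse router and the specialists."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Coarse router picks a defect family; one specialist per family resolves
# the final defect type (3 families x 4 types = 12 classes, as an example).
coarse_model = build_small_cnn(num_classes=3)
specialist_models = {i: build_small_cnn(num_classes=4) for i in range(3)}

def classify_defect(image):
    x = image[np.newaxis, ...]  # add batch dimension
    family = int(np.argmax(coarse_model.predict(x, verbose=0)))
    defect = int(np.argmax(specialist_models[family].predict(x, verbose=0)))
    return family, defect
```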

Macronix engineers showed how machine learning can expedite new etch process development in 3D NAND devices. Two parameters are particularly important in optimizing the deep trench slit etch: bottom CD and the depth of the polysilicon etch recess, also known as the etch stop.

KLA engineers, led by Cheng Hung Wu, optimized the use of a high landing energy e-beam inspection tool to capture defects buried as deep as 6 µm in a 96-layer ONON stacked structure following deep trench etch. The e-beam tool can detect defects that optical inspectors cannot, but only if operated with high landing energy to penetrate deep structures. With this process, KLA was looking to develop an automated detection and classification system for deep trench defects.

Read more at Semiconductor Engineering

Application of deep learning methods for more efficient water demand forecasting

📅 Date:

✍️ Author: Anjana G. Rajakumar

🔖 Topics: Convolutional Neural Network

🏢 Organizations: Hitachi


In recent years, water demand predictions have also found wide application in near-optimal control of water networks. Water demand prediction is an active field where a range of methods and techniques have been applied, including conventional statistical methods and machine learning methods. Due to advancements in sensing and IoT, an increasing amount of data is becoming available for water distribution systems, including water demand data. As a result, deep learning methods are increasingly being used to build water demand forecasting models, since they can handle seasonality as well as random patterns in the data and provide more accurate results than traditional methods.

We observed that the frequency, amount, and quality of the data have an impact on deep learning model accuracy. In CNN-LSTM, the CNN effectively extracts inherent characteristics of historical water consumption data, such as seasonality, while the LSTM captures the long-term historical process and future trend. Hence, water demand forecasts produced with CNN-LSTM were better than those from single models such as GRU, MLP, CNN, and LSTM.
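
As a rough illustration of the CNN-LSTM idea described above, here is a minimal Keras sketch for univariate demand forecasting. The 24-step window, layer sizes, and loss are assumptions chosen for illustration; they are not taken from Hitachi's model.

```python
# Minimal CNN-LSTM sketch for univariate water demand forecasting, assuming
# hourly demand arranged into sliding windows of 24 past values.
import tensorflow as tf

WINDOW = 24  # hours of history used to predict the next hour

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    # 1D convolutions extract local patterns such as daily seasonality
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
    # LSTM captures the longer-term trend across the window
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),  # demand for the next time step
])
model.compile(optimizer="adam", loss="mae")
# model.fit(X_train, y_train, ...) with X_train shaped (samples, WINDOW, 1)
```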

Read more at Industrial AI Blog

Operation planning method using convolutional neural network for combined heat and power system

📅 Date:

✍️ Author: Tetsushi Ono

🔖 Topics: Convolutional Neural Network

🏢 Organizations: Hitachi


The energy efficiency of a combined heat and power (CHP) system can reach about 85%, whereas conventional thermal power plants operate at only 45% efficiency or lower. CHPs perform better mainly because the heat from the generators can be used as an energy source to meet heat demands or to power refrigerators that generate cold water; in other words, the "waste" heat is used rather than wasted. Therefore, a growing number of factories and commercial buildings are installing CHP systems that include various energy storage devices. To reduce the energy cost of CHPs, optimal operation plans that satisfy time-varying energy demands at minimum energy cost are required. However, conventional operation planning methods based on optimization calculations suffer from long computing times. These days in particular, operation plans need to be generated within a few minutes or even seconds to compensate for fluctuations in the output of renewable energy sources.
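
One way such a fast planner can be built, sketched below under stated assumptions, is to train a small 1D CNN offline on plans produced by the conventional optimizer so that a day-ahead plan can be inferred in milliseconds online. The input/output shapes and layer sizes are hypothetical; the article does not disclose Hitachi's architecture.

```python
# Hedged sketch of the general idea (not Hitachi's model): a 1D CNN trained
# on (demand forecast, optimal plan) pairs computed offline by an optimizer,
# so a plan can be inferred quickly at run time. Shapes are illustrative.
import tensorflow as tf

HOURS = 24
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(HOURS, 2)),            # forecast heat and power demand
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(1, 1, activation="sigmoid"),  # CHP load ratio per hour (0..1)
])
model.compile(optimizer="adam", loss="mse")
# Training pairs come from the conventional optimization method run offline;
# the CNN then imitates the optimizer online.
```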

Read more at Hitachi Industrial AI Blog

Deep learning-based automatic optical inspection system empowered by online multivariate autocorrelated process control

📅 Date:

🔖 Topics: Automated Optical Inspection, Convolutional Neural Network

🏢 Organizations: National Taiwan University of Science and Technology


Defect identification for tiny electronics components at high-speed throughput remains an issue in quality inspection technology. Convolutional neural networks (CNNs) deployed in automatic optical inspection (AOI) systems are powerful for detecting defects; however, they focus on individual samples and offer poor process control, with no monitoring of the online status of the production process. Integrating CNN and statistical process control models empowers high-speed production lines to achieve proactive quality inspection.

In terms of average run length over a range of shifts, the proposed control chart has high detection performance for small mean shifts in quality, and it is successfully applied to an electronic conductor manufacturing process. The proposed model facilitates systematic quality inspection of tiny electronics components on a high-speed production line: the CNN-based AOI model empowered by the proposed control chart enables quality checking at the individual product level and process monitoring at the system level simultaneously. The contribution of the present study is a process control framework that integrates the CNN-based AOI model with a residual-based mixed multivariate cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control chart for monitoring online multivariate autocorrelated processes and efficiently detecting defects.
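
To make the control chart side concrete, the following is a minimal residual-based EWMA chart in NumPy. The paper proposes a mixed multivariate CUSUM/EWMA chart; this univariate EWMA on residuals is a simplified stand-in, with λ, L, and the injected shift chosen purely for illustration.

```python
# Minimal residual-based EWMA chart sketch. Residuals would come from a
# time-series model fitted to the autocorrelated quality measurements.
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0):
    """Return EWMA statistics and a boolean out-of-control flag per sample."""
    sigma = np.std(residuals, ddof=1)
    z = 0.0
    stats, alarms = [], []
    for t, r in enumerate(residuals, start=1):
        z = lam * r + (1 - lam) * z
        # control limit tightens toward its asymptote as t grows
        limit = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        stats.append(z)
        alarms.append(abs(z) > limit)
    return np.array(stats), np.array(alarms)

# Example: small mean shift injected after sample 100
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0, 1, 100), rng.normal(0.8, 1, 50)])
_, alarms = ewma_chart(res)
print("first alarm at sample", int(np.argmax(alarms)) if alarms.any() else None)
```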

Read more at The International Journal of Advanced Manufacturing Technology

An implementation of YOLO-family algorithms in classifying the product quality for the acrylonitrile butadiene styrene metallization

📅 Date:

✍️ Authors: Yuh Wen Chen, Jing Mau Shiu

🔖 Topics: Convolutional Neural Network, Visual Inspection

🏢 Organizations: Da-Yeh University


In the traditional electroplating industry for Acrylonitrile Butadiene Styrene (ABS), quality control inspection of the product surface is usually performed with the naked eye. However, defects on the surface of electroplated products are small and easily missed under reflective conditions. When the number of defects and samples is large, manual inspection becomes challenging and time-consuming. We innovatively applied additive manufacturing (AM) to design and assemble an automatic optical inspection (AOI) system that incorporates the latest progress in artificial intelligence. The system can identify defects on the reflective surface of the plated product. Based on the You Only Look Once (YOLO) deep learning framework, we deployed the neural network models on a graphics processing unit (GPU) using the YOLO family of algorithms, from v2 to v5. Our efforts achieved an average accuracy above 70% for detecting defects in real-time video data from production lines, and we also compared the classification performance of the various YOLO algorithms. This visual inspection work significantly reduces the labor cost of visual inspection in the electroplating industry and demonstrates its promise for smart manufacturing.
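
For readers who want to try a comparable setup, the sketch below runs YOLOv5 inference through PyTorch Hub. The weights file name 'abs_defects.pt' is a placeholder; the paper's trained weights and defect classes are not public.

```python
# Sketch of YOLOv5 inference via PyTorch Hub, assuming a custom weights file
# ('abs_defects.pt' is a placeholder) trained on plating-defect classes.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="abs_defects.pt")
model.conf = 0.5                       # confidence threshold for reporting a defect

results = model("plated_part.jpg")     # path, URL, PIL image, or numpy array
detections = results.pandas().xyxy[0]  # one row per detected defect
print(detections[["name", "confidence"]])
```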

Read more at The International Journal of Advanced Manufacturing Technology

YOLO V3 + VGG16-based automatic operations monitoring and analysis in a manufacturing workshop under Industry 4.0

📅 Date:

✍️ Authors: Jihong Yan, Zipeng Wang

🔖 Topics: Convolutional Neural Network

🏢 Organizations: Harbin Institute of Technology


Under the background of Industry 4.0 and smart manufacturing, operators are still the core of manufacturing production, and the standardization of their actions greatly affects production efficiency and quality. However, they have not received enough attention. To monitor and analyze operators' actions on the manufacturing floor, this paper proposes a YOLO V3 + VGG16 transfer learning network. First, region detection of key operators is realized using YOLO V3, and an action dataset is constructed. Second, transfer learning is used to realize automatic recognition, monitoring, and analysis with small sample data; the recognition accuracy of the proposed method is greater than 96%, and the average deviation of the action execution time is less than 1 s. This research is expected to provide guidance for increasing the degree of workshop automation, improving the standardization of operators' actions, optimizing action processes, and ensuring product quality.
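
A hedged sketch of the VGG16 transfer-learning stage is shown below (the upstream YOLO V3 detector that crops the operator region is assumed to exist). The number of action classes, input size, and head layers are illustrative, not the paper's exact configuration.

```python
# Sketch of the VGG16 transfer-learning stage only; cropped operator images
# from the YOLO V3 detector are assumed as inputs. Class count is hypothetical.
import tensorflow as tf

NUM_ACTIONS = 5  # hypothetical number of standard operator actions

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features; only the head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(cropped_operator_images, action_labels, ...)
```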

Read more at ScienceDirect

How pioneering deep learning is reducing Amazon's packaging waste

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Machine Learning, Computer Vision, Convolutional Neural Network, Sustainability, E-commerce

🏢 Organizations: Amazon


Fortunately, machine learning approaches, particularly deep learning, thrive on big data and massive scale, and a pioneering combination of natural language processing and computer vision is enabling Amazon to home in on the right amount of packaging. These tools have helped Amazon drive change over the past six years, reducing per-shipment packaging weight by 36% and eliminating more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.

"When the model is certain of the best package type for a given product, we allow it to auto-certify it for that pack type," says Bales. "When the model is less certain, it flags a product and its packaging for testing by a human." The technology is currently being applied to product lines across North America and Europe, automatically reducing waste at a growing scale.
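
The certain/uncertain routing described in the quote can be illustrated with a few lines of Python. The threshold, function name, and package-type labels below are hypothetical; Amazon's actual criteria are not disclosed.

```python
# Illustrative sketch of the auto-certify vs. human-review routing described
# above. Threshold and labels are made up for illustration, not Amazon's.
AUTO_CERTIFY_THRESHOLD = 0.95

def route_packaging_decision(pack_type_probs: dict) -> tuple:
    """Return (pack_type, action) given model probabilities per package type."""
    pack_type, confidence = max(pack_type_probs.items(), key=lambda kv: kv[1])
    if confidence >= AUTO_CERTIFY_THRESHOLD:
        return pack_type, "auto-certify"
    return pack_type, "flag for human testing"

print(route_packaging_decision({"box": 0.97, "padded mailer": 0.03}))
print(route_packaging_decision({"box": 0.55, "padded mailer": 0.45}))
```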

Read more at Amazon Science

A deep transfer learning method for monitoring the wear of abrasive belts with a small sample dataset

📅 Date:

✍️ Authors: Zhihang Li, Qian Tang, Sibao Wang, Penghui Zhang

🔖 Topics: Convolutional Neural Network, Predictive Maintenance

🏢 Organizations: Chongqing University


Based on analysis of displacement data, a new method for predicting abrasive belt wear states using a multiscale convolutional neural network based on transfer learning is proposed. First, first-order difference preprocessing is performed on the displacement data. Then, the network parameters of the model are obtained by pretraining on a fault dataset and are directly transferred or fine-tuned using the preprocessed displacement data. Finally, the preprocessed displacement data corresponding to different abrasive belt wear states are accurately classified. The method demonstrates the application of transfer learning across cross-domain data in industry and resolves the contradiction between the large sample size required for deep learning and the difficulty of obtaining large amounts of sample data in actual production. The experimental results show that the method can accurately predict the wear status of abrasive belts, with an average prediction accuracy of 93.1%. The method is low cost and easy to operate, and can be applied to guide the timing of abrasive belt replacement in production.
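
A minimal sketch of the preprocessing-plus-transfer idea, under stated assumptions, might look like the following. The pretrained model file, window length, and learning rate are placeholders, not values from the paper.

```python
# Sketch of the approach described above: take the first-order difference of
# the displacement signal, then fine-tune a network pretrained on a separate
# (fault) dataset. File name and window length are hypothetical placeholders.
import numpy as np
import tensorflow as tf

def preprocess(displacement: np.ndarray, window: int = 256) -> np.ndarray:
    """First-order difference, then split into fixed-length windows."""
    diff = np.diff(displacement)
    n = len(diff) // window
    return diff[: n * window].reshape(n, window, 1)

pretrained = tf.keras.models.load_model("fault_pretrained_cnn.h5")  # placeholder path
pretrained.trainable = True  # fine-tune all layers (or freeze the early ones)
pretrained.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# pretrained.fit(preprocess(displacement_signal), wear_state_labels, ...)
```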

Read more at ScienceDirect

Hybrid machine learning-enabled adaptive welding speed control

📅 Date:

✍️ Authors: Joseph Kershaw, Rui Yu, YuMing Zhang, Peng Wang

🔖 Topics: Machine Learning, Robot Welding, Convolutional Neural Network

🏢 Organizations: University of Kentucky


This research presents a preliminary study on developing appropriate Machine Learning (ML) techniques for real-time welding quality prediction and adaptive welding speed adjustment for GTAW welding at a constant current. To collect the data needed to train the hybrid ML models, two cameras are used to monitor the welding process: one camera (available in practical robotic welding) records the top-side weld pool dynamics, and a second camera (unavailable in practical robotic welding but applicable for training purposes) records the back-side bead formation. Given these two data sets, correlations can be discovered through a convolutional neural network (CNN), which is good at image characterization. With the CNN, top-side weld pool images can be analyzed to predict the back-side bead width during active welding control.
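
The prediction step can be pictured as a straightforward CNN regression from a top-side weld pool image to a bead width value, as in the hedged sketch below. The input resolution and layer sizes are assumptions; the paper's exact architecture is not reproduced here.

```python
# Minimal sketch (not the paper's architecture): a CNN regressor that maps a
# top-side weld pool image to a predicted back-side bead width in mm.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),   # grayscale weld pool image
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                     # back-side bead width (mm)
])
model.compile(optimizer="adam", loss="mae")
# Training pairs come from the dual-camera setup: top-side frames as inputs,
# bead widths measured from the back-side camera as labels.
```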

Read more at ScienceDirect

Applying deep learning to sensor data to support workers in manufacturing

📅 Date:

✍️ Author: Yuichi Sakurai

🔖 Topics: Cyber-Physical Systems, Convolutional Neural Network

🏢 Organizations: Hitachi


To achieve next-generation production systems and Multiverse Mediation with CPSs, 4M (huMan, Machine, Material, and Method) work transitions need to be clarified and used more accurately. However, traditional systems cannot detect deviations in manual procedures. To resolve these issues, we are developing a highly accurate detection technology for "human work", evaluated on the assembly cells considered in this study.

Compared to conventional approaches, we achieved a 15% reduction in product assembly time and reduced missed detections of deviations to almost zero (more than 95% work identification accuracy). These results demonstrate the potential of our system to efficiently and effectively support manufacturing workers and contribute to greater efficiency and quality management in the assembly of complex equipment.

Read more at Hitachi Industrial AI Blog

Fabs Drive Deeper Into Machine Learning

📅 Date:

✍️ Author: Anne Meixner

🔖 Topics: Machine Learning, Machine Vision, Defect Detection, Convolutional Neural Network

🏭 Vertical: Semiconductor

🏢 Organizations: GlobalFoundries, KLA, SkyWater Technology, Onto Innovation, CyberOptics, Hitachi, Synopsys


For the past couple of decades, semiconductor manufacturers have relied on computer vision, which is one of the earliest applications of machine learning in semiconductor manufacturing. Referred to as Automated Optical Inspection (AOI), these systems use signal processing algorithms to identify macro and micro physical deformations.

Defect detection provides a feedback loop for fab processing steps. Wafer test results produce bin maps (good or bad die), which can also be analyzed as images. Their data granularity is significantly coarser than the pixelated data from an optical inspection tool. Yet patterns in wafer test maps can point to splatters generated during lithography and scratches produced by handling that AOI systems can miss. Thus, wafer test maps give useful feedback to the fab.
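
As an illustration of treating wafer bin maps as images, the sketch below builds a small CNN that classifies a gridded pass/fail map by its spatial signature. The map size and pattern classes are assumptions for illustration and are not taken from the article.

```python
# Illustrative sketch: treat a wafer bin map (pass/fail per die) as a small
# binary image and classify its spatial signature. Classes are hypothetical.
import tensorflow as tf

MAP_SIZE = 64    # dies per wafer-map edge after gridding
N_PATTERNS = 4   # e.g. none / scratch / ring / edge cluster

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAP_SIZE, MAP_SIZE, 1)),  # 1 marks a failing die
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_PATTERNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```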

Read more at Semiconductor Engineering

3D Vision Technology Advances to Keep Pace With Bin Picking Challenges

📅 Date:

✍️ Author: Jimmy Carroll

🔖 Topics: Machine Vision, Convolutional Neural Network

🏢 Organizations: Zivid, CapSen Robotics, IDS Imaging Development Systems, Photoneo, Universal Robots, Allied Moulded


When a bin contains one type of object with a fixed shape, bin picking is straightforward, as CAD-model-based matching can easily recognize and localize individual items. But randomly positioned objects can overlap or become entangled, presenting one of the greatest challenges in bin picking. Identifying objects with varying shapes, sizes, colors, and materials poses an even larger challenge, but by deploying deep learning algorithms, it is possible to find and match objects that do not conform to a single geometrical description but belong to a general class defined by examples, according to Andrea Pufflerova, Public Relations Specialist at Photoneo.

"A well-trained convolutional neural network (CNN) can recognize and classify mixed and new types of objects that it has never come across before," says Pufflerova.

Read more at A3

Real-World ML with Coral: Manufacturing

📅 Date:

✍️ Author: Michael Brooks

🔖 Topics: Edge Computing, AI, Machine Learning, Computer Vision, Convolutional Neural Network, TensorFlow, Worker Safety

🏢 Organizations: Coral


For over 3 years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high-performance products. We've released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region.
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.

Read more at TensorFlow Blog