Computer Vision

Assembly Line

Edge Learning Classify Tutorial - In-Sight 3800 Vision System

Automating 3D Printing with AI Vision: PrintPal’s PrintWatch System

📅 Date:

✍️ Author: Kerry Stevenson

🔖 Topics: 3D Printing, Computer Vision, Defect Detection

🏢 Organizations: PrintPal


PrintPal is a startup launched in 2021 that has been developing a machine learning-based vision system for 3D printer monitoring. The PrintWatch concept is that a camera feed of the print surface is relayed to an AI analysis system that classifies print artifacts in each image with 93% accuracy in only 5 ms. Defects are detected automatically, allowing the operator to stop failing jobs before they waste additional material or, worse, damage the equipment. PrintWatch runs 24/7, removing any need for a human operator to monitor print progress, and it does so more reliably than humans, who often step away to work on other things.
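
The article includes no code, but the monitoring loop it describes — grab a frame, score it for defects, pause the job if the failure probability is high — can be sketched roughly as below. The endpoint URL, response fields, and `pause_print` helper are hypothetical placeholders, not PrintWatch's actual API.

```python
import time
import cv2          # OpenCV, for grabbing frames from the printer camera
import requests

DETECT_URL = "http://localhost:8080/api/detect"   # hypothetical inference endpoint
FAILURE_THRESHOLD = 0.90                          # illustrative cutoff, not PrintWatch's

def pause_print():
    """Placeholder: in practice this would call the printer or print-server job API."""
    print("Defect detected - pausing print job")

camera = cv2.VideoCapture(0)                      # camera pointed at the print bed
while True:
    ok, frame = camera.read()
    if not ok:
        break
    _, jpeg = cv2.imencode(".jpg", frame)
    # Send the frame to the defect classifier; assume it returns {"defect_probability": float}.
    resp = requests.post(DETECT_URL, files={"image": jpeg.tobytes()}).json()
    if resp["defect_probability"] > FAILURE_THRESHOLD:
        pause_print()
        break
    time.sleep(10)    # poll every few seconds; a 5 ms-per-image model easily keeps up
```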

Read more at Fabbaloo

Industrial defect detection at the edge

Building a Visual Quality Control solution in Google Cloud using Vertex AI

📅 Date:

✍️ Authors: Oleg Smirnov, Marko Nikolic, Ilya Katsov

🔖 Topics: Computer Vision, Quality Assurance

🏢 Organizations: Grid Dynamics, Google


In this blog post, we consider the problem of defect detection in packages on assembly and sorting lines. More specifically, we present a real-time visual quality control solution that is capable of tracking multiple objects (packages) on a line, analyzing each object, and evaluating the probability of a defect or damaged parcel. The solution was implemented using the Google Cloud Platform (GCP) Vertex AI and AutoML services, and we have made the reference implementation available in our Git repository. This implementation can be used as a starting point for developing custom visual quality control pipelines.
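
As a rough illustration of the scoring step only, the snippet below sends one cropped package image to an AutoML image model deployed on a Vertex AI endpoint and reads back the label confidences. The project, region, endpoint ID, and file name are placeholders; the reference implementation in the Grid Dynamics repository covers the full pipeline (tracking, cropping, thresholds).

```python
import base64
from google.cloud import aiplatform

# Placeholders - substitute your own project, region, and deployed endpoint ID.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/1234567890")

with open("package_crop.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

# AutoML image classification endpoints accept base64-encoded image content.
prediction = endpoint.predict(
    instances=[{"content": content}],
    parameters={"confidenceThreshold": 0.0, "maxPredictions": 5},
)

# Each prediction carries parallel lists of labels and confidences.
for label, score in zip(prediction.predictions[0]["displayNames"],
                        prediction.predictions[0]["confidences"]):
    print(label, score)
```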

Read more at Grid Dynamics Blog

Building a Predictive Maintenance Solution Using AWS AutoML and No-Code Tools

📅 Date:

✍️ Authors: Volodymyr Koliadin, Andriy Drebot, Kavita Mahajan

🔖 Topics: Computer Vision, AWS

🏢 Organizations: Grid Dynamics, AWS


In this post, we describe how equipment operators can build a predictive maintenance solution using AutoML and no-code tools powered by Amazon Web Services (AWS). This type of solution delivers significant gains to large-scale industrial systems and mission-critical applications where costs associated with machine failure or unplanned downtime can be high.

To implement a prototype of the remaining useful life (RUL) model, we use a publicly available dataset known as the NASA Turbofan Jet Engine Data Set. This dataset is often used for research and ML competitions. It includes degradation trajectories of 100 turbofan engines obtained from a simulator. Here, we explore only one of the four included sub-datasets, namely the training part of FD001.
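
A common first step with FD001 is deriving the RUL target from the run-to-failure trajectories: each engine's last recorded cycle is taken as the failure point, and RUL at any earlier cycle is the number of cycles remaining until then. A minimal sketch with pandas, using the column layout documented for the C-MAPSS files:

```python
import pandas as pd

# train_FD001.txt is space-separated: unit id, cycle, 3 operating settings, 21 sensor readings.
cols = ["unit", "cycle", "op1", "op2", "op3"] + [f"s{i}" for i in range(1, 22)]
df = pd.read_csv("train_FD001.txt", sep=r"\s+", header=None, names=cols)

# Each trajectory runs to failure, so RUL = (last cycle of that unit) - (current cycle).
df["rul"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]

# Optional: cap RUL at a constant, a common practice for turbofan degradation modeling.
df["rul"] = df["rul"].clip(upper=125)

print(df[["unit", "cycle", "rul"]].head())
```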

Read more at AWS Blogs

AI Driven Vision Inspection Automation for Forged Connecting Rods

Using artificial intelligence to control digital manufacturing

📅 Date:

🔖 Topics: Additive Manufacturing, Computer Vision, AI

🏢 Organizations: MIT


MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real time. They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
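
The paper itself is not reproduced here, but the control idea — a vision observation feeds a learned policy that outputs a correction to the printing parameters — can be shown schematically. The network shape, observation size, and the two output parameters are invented for illustration; the real controller was trained in simulation as described above.

```python
import torch
import torch.nn as nn

class ParameterController(nn.Module):
    """Toy policy: maps a downsampled camera view of the deposited material to a correction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # tiny CNN over a 64x64 grayscale patch
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 * 13 * 13, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, image):
        # Output: corrections to two illustrative parameters (e.g. flow rate, print speed).
        return self.head(self.encoder(image))

controller = ParameterController()
observation = torch.rand(1, 1, 64, 64)        # stand-in for a camera view of the print
delta_flow, delta_speed = controller(observation)[0]
print(float(delta_flow), float(delta_speed))
```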

The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.

Read more at MIT News

Andrew Ng’s Landing AI aims to help manufacturers deploy AI vision systems

📅 Date:

✍️ Author: Jean Thilmany

🔖 Topics: Computer Vision, PLC

🏢 Organizations: Landing AI


Today, the company announced LandingEdge, which customers can use to deploy deep learning-based visual inspection on their production floor. The company’s first product, LandingLens, enables teams, who don’t have to be trained software engineers, to develop deep learning models. LandingEdge extends that capability into deployment, Yang says. “Strategically, manufacturers start AI with inspection,” Yang said. “They use cameras to repurpose the human looking at the product, which makes inspection more precise.”

LandingEdge attempts to simplify platform deployment for manufacturers. Typically, users set up a method to “train” their vision system by plugging the LandingEdge app into a programmable logic controller (PLC) and cameras. The PLC continuously monitors the state of the cameras and the vision system itself.
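
The PLC-to-inspection handshake described above is typically just a polled coil or register: the line controller raises a "part in position" signal, the edge app grabs a frame and runs the model, then writes the pass/fail result back. A rough sketch with the pymodbus library follows; the coil addresses, PLC IP, and `run_inspection` stub are placeholders, not LandingEdge's actual interface.

```python
import time
from pymodbus.client import ModbusTcpClient

TRIGGER_COIL, RESULT_COIL = 0, 1           # placeholder coil addresses on the line PLC

def run_inspection() -> bool:
    """Placeholder: capture a frame and run the deployed deep learning model."""
    return True                             # True = part passed

client = ModbusTcpClient("192.168.0.10")    # placeholder PLC address
client.connect()
try:
    while True:
        rr = client.read_coils(TRIGGER_COIL, count=1)
        if not rr.isError() and rr.bits[0]:             # PLC says a part is in position
            client.write_coil(RESULT_COIL, run_inspection())
        time.sleep(0.05)
finally:
    client.close()
```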

Read more at VentureBeat

Visual search: how to find manufacturing parts in a cinch

📅 Date:

✍️ Authors: Anton Katanaev, Yaroslav Ukhmylov, Alfiya Chekmareva, Roman Khalili

🔖 Topics: Computer Vision, Optical Character Recognition, Visual Search

🏢 Organizations: Grid Dynamics, AWS


In the modern world, advanced recognition technologies play an increasingly important role in various areas of human life. Recognizing the characteristics of vehicle tires is one such area where deep learning is making a valuable difference. Solving the problem of recognizing tire parameters can help to simplify the process of selecting tire replacements when you don’t know which tires will fit. This recognition can be useful both for customer-facing ecommerce and in-store apps used by associates to quickly read necessary tire specs.

During the research process, we decided that online stores and bulletin boards would be the main data sources, since there were thousands of images and, most importantly, almost all of them had structured descriptions. Images from search engines could only be used for training segmentation, because they did not contain the necessary structured features.

In this blog post we have described the complete process of creating a tire lettering recognition system from start to finish. Despite the large number of existing methods, approaches and functions in the field of image recognition and processing, there remains a huge gap in available research and implementation for very complex and accurate visual search systems.
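
Once the sidewall lettering has been recognized, the remaining step is parsing the size marking into structured tire parameters. A small, hedged sketch of that last step is shown below; the regex covers only the common metric format (e.g. 205/55 R16 91V), whereas the system described in the post handles many more variations.

```python
import re

# Common metric tire size marking: width / aspect ratio, construction, rim diameter,
# optionally followed by a load index and speed rating (e.g. "205/55 R16 91V").
TIRE_RE = re.compile(
    r"(?P<width>\d{3})\s*/\s*(?P<aspect>\d{2})\s*(?P<construction>[RDB])\s*"
    r"(?P<diameter>\d{2})(?:\s+(?P<load>\d{2,3})(?P<speed>[A-Z]))?"
)

def parse_tire_marking(text: str):
    """Return the size parameters found in recognized sidewall text, or None."""
    match = TIRE_RE.search(text.upper())
    return match.groupdict() if match else None

print(parse_tire_marking("Recognized text: 205/55R16 91V"))
# {'width': '205', 'aspect': '55', 'construction': 'R', 'diameter': '16', 'load': '91', 'speed': 'V'}
```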

Read more at Grid Dynamics Blog

How pioneering deep learning is reducing Amazon’s packaging waste

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Machine Learning, Computer Vision, Convolutional Neural Network, Sustainability, E-commerce

🏢 Organizations: Amazon


Fortunately, machine learning approaches — particularly deep learning — thrive on big data and massive scale, and a pioneering combination of natural language processing and computer vision is enabling Amazon to home in on using the right amount of packaging. These tools have helped Amazon drive change over the past six years, reducing per-shipment packaging weight by 36% and eliminating more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.

“When the model is certain of the best package type for a given product, we allow it to auto-certify it for that pack type,” says Bales. “When the model is less certain, it flags a product and its packaging for testing by a human.” The technology is currently being applied to product lines across North America and Europe, automatically reducing waste at a growing scale.
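
The decision rule Bales describes is essentially confidence gating: above a threshold the model's pack-type recommendation is auto-certified, below it the product is queued for human testing. A minimal illustration follows; the threshold value and field names are invented, not Amazon's.

```python
from dataclasses import dataclass

AUTO_CERTIFY_THRESHOLD = 0.95   # illustrative value, not Amazon's actual cutoff

@dataclass
class PackTypePrediction:
    product_id: str
    pack_type: str       # e.g. "padded mailer", "ships in own container"
    confidence: float

def route(prediction: PackTypePrediction) -> str:
    """Auto-certify high-confidence recommendations; flag the rest for human testing."""
    if prediction.confidence >= AUTO_CERTIFY_THRESHOLD:
        return f"auto-certify {prediction.product_id} as '{prediction.pack_type}'"
    return f"send {prediction.product_id} to human packaging test"

print(route(PackTypePrediction("B00X", "padded mailer", 0.98)))
print(route(PackTypePrediction("B00Y", "padded mailer", 0.62)))
```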

Read more at Amazon Science

Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection

📅 Date:

✍️ Author: Angie Lee

🔖 Topics: Computer Vision, Defect Detection

🏢 Organizations: Mariner, NVIDIA


Traditional machine vision systems installed in factories have difficulty discerning between true defects — like a stain in fabric or a chip in glass — and false positives, like lint or a water droplet that can be easily wiped away.

Spyglass Visual Inspection, or SVI, helps manufacturers detect the defects they couldn’t see before. SVI uses AI software and NVIDIA hardware connected to camera systems that provide real-time inspection of pieces on production lines, identify potential issues and determine whether they are true material defects — in just a millisecond.

Read more at NVIDIA Blog

Visual search: how to find manufacturing parts in a cinch

📅 Date:

✍️ Authors: Artem Ivashchenko, Sergey Parakhin, Aleksey Romanov

🔖 Topics: Convolutional Neural Network, Computer Vision, Optical Character Recognition, Visual Search

🏢 Organizations: Grid Dynamics


The process of engineering a robust mechanical product, whether it’s an escalator or a car engine, requires many small parts. We accept that these parts wear out over time and require replacement to avoid breakdowns and to keep the mechanics of the product running smoothly.

During our analysis of the data that the client shared with us, we found a mix of photos of the parts themselves, photos of packages, and photos of product labels only. Serial numbers or easily distinguishable characters were clearly visible in some photographs, but not in all of them. One of the primary challenges we faced, therefore, was dealing with the differences between the photos the engineers were submitting and the images in the search catalog. For example, there were visually indistinguishable images where only the model number differentiated the part, photos of a sticker with a serial number instead of the object itself, rulers alongside objects in photos to indicate scale, and drawings of the part in the catalog instead of photos.

For this use case we implemented a CNN model based on the ResNeXt architecture (ResNeXt-50 (32×4d)), pre-trained on the ImageNet dataset. However, the manufacturing parts we were dealing with were not adequately represented in the pre-training data, which meant we had to augment the training dataset with about 10,000 independently sourced manufacturing part images in addition to the client-supplied labeled dataset.
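
A minimal sketch of that setup with torchvision: load ResNeXt-50 (32×4d) with ImageNet weights and replace the classification head before fine-tuning on the combined parts dataset. The class count, batch, and optimizer settings are placeholders; the post does not publish the exact training recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_PART_CLASSES = 500   # placeholder: number of distinct parts in the catalog

# ResNeXt-50 (32x4d) pre-trained on ImageNet, as named in the post.
model = models.resnext50_32x4d(weights=models.ResNeXt50_32X4D_Weights.IMAGENET1K_V1)

# Replace the ImageNet classifier head with one sized for the parts catalog.
model.fc = nn.Linear(model.fc.in_features, NUM_PART_CLASSES)

# Fine-tune end to end (one could also freeze early blocks and train only the head first).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)               # stand-in for a batch of part photos
labels = torch.randint(0, NUM_PART_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```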

Read more at Grid Dynamics Blog

How DeepMind is Reinventing the Robot

📅 Date:

✍️ Author: Tom Chivers

🔖 Topics: Robotics, Artificial Intelligence, Robotic Arm, Computer Vision

🏢 Organizations: DeepMind


To train a robot, though, such huge data sets are unavailable. “This is a problem,” notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What’s more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you’ll have a badly dented robot, if not worse.

There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones. “One of our classic examples was training an agent to play Pong,” says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, “then the performance will—boop!—go off a cliff.” Suddenly it will lose 20 to zero every time.

There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its network’s weights to its data storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.

But that strategy is limited. For one thing, it’s not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you’d have to train it on every single one of them. And if the environment is unstructured, you won’t even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn’t let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.

Hadsell’s preferred approach is something called “elastic weight consolidation.” The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights.
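
In code, elastic weight consolidation amounts to adding a quadratic penalty that anchors each parameter to its post-task-A value, weighted by an importance estimate (commonly a diagonal Fisher information approximation). A condensed PyTorch sketch of that penalty follows; the toy model, single-batch Fisher estimate, and penalty strength are illustrative, not DeepMind's implementation.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                              # toy stand-in for the network

# --- after training task A: snapshot weights and estimate per-parameter importance ---
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

def fisher_diagonal(model, inputs, targets, loss_fn):
    """Crude diagonal Fisher estimate: squared gradients of the task-A loss."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()
    for n, p in model.named_parameters():
        fisher[n] += p.grad.detach() ** 2
    return fisher

xa, ya = torch.rand(16, 4), torch.randint(0, 2, (16,))
importance = fisher_diagonal(model, xa, ya, nn.CrossEntropyLoss())

# --- while training task B: add the EWC penalty to the new task's loss ---
def ewc_penalty(model, anchor, importance, lam=1000.0):
    penalty = sum((importance[n] * (p - anchor[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty

model.zero_grad()
xb, yb = torch.rand(16, 4), torch.randint(0, 2, (16,))
loss = nn.CrossEntropyLoss()(model(xb), yb) + ewc_penalty(model, anchor, importance)
loss.backward()   # important weights are "partially frozen": moving them is now costly
```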

Read more at IEEE Spectrum

Real-World ML with Coral: Manufacturing

📅 Date:

✍️ Author: Michael Brooks

🔖 Topics: Edge Computing, AI, Machine Learning, Computer Vision, Convolutional Neural Network, TensorFlow, Worker Safety

🏢 Organizations: Coral


For over 3 years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high-performance products. We’ve released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region (a minimal version of that check is sketched after this list).
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.
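
The "bounding box collision" test in the Worker Safety demo is just an axis-aligned overlap check between each detected person box and a configured unsafe region. A minimal sketch of that check, with illustrative coordinates:

```python
from typing import NamedTuple

class Box(NamedTuple):
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def boxes_overlap(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box intersection test."""
    return a.xmin < b.xmax and b.xmin < a.xmax and a.ymin < b.ymax and b.ymin < a.ymax

UNSAFE_REGION = Box(0.60, 0.00, 1.00, 1.00)        # e.g. right side of the normalized frame

# Stand-ins for person detections from the SSDLite MobileDet model (normalized coordinates).
detections = [Box(0.10, 0.20, 0.30, 0.90), Box(0.65, 0.30, 0.85, 0.95)]

for i, person in enumerate(detections):
    if boxes_overlap(person, UNSAFE_REGION):
        print(f"ALERT: person {i} inside unsafe region")
```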

Read more at TensorFlow Blog

Visual Inspection AI: a purpose-built solution for faster, more accurate quality control

📅 Date:

✍️ Authors: Mandeep Wariach, Thomas Reinbacher

🔖 Topics: Cloud Computing, Computer Vision, Machine Learning, Quality Assurance

🏢 Organizations: Google


The Google Cloud Visual Inspection AI solution automates visual inspection tasks using a set of AI and computer vision technologies that enable manufacturers to transform quality control processes by automatically detecting product defects.

We built Visual Inspection AI to meet the needs of quality, test, manufacturing, and process engineers who are experts in their domain, but not in AI. By combining ease of use with a focus on priority use cases, customers are realizing significant benefits compared to general purpose machine learning (ML) approaches.

Read more at Google Cloud Blog

Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects

📅 Date:

✍️ Author: Scott Martin

🔖 Topics: AI, Edge Computing, Computer Vision, Recycling

🏭 Vertical: Plastics and Rubber

🏢 Organizations: NVIDIA, AMP Robotics


People worldwide produce 2 billion tons of waste a year, with 37 percent going to landfill, according to the World Bank.

“Sorting by hand on conveyor belts is dirty and dangerous, and the whole place smells like rotting food. People in the recycling industry told me that robots were absolutely needed,” said Horowitz, the company’s CEO.

His startup, AMP Robotics, can double sorting output and increase purity for bales of materials. It can also sort municipal waste, electronic waste, and construction and demolition materials.

Read more at NVIDIA Blog

AWS Announces General Availability of Amazon Lookout for Vision

📅 Date:

🔖 Topics: Cloud Computing, Computer Vision, Machine Learning, Quality Assurance

🏢 Organizations: AWS, Basler, Dafgards, General Electric


AWS announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called “few-shot learning,” Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers. Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service.
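
Once a model is trained and hosted, scoring an image is a single API call. A hedged sketch with boto3 is shown below; the project name, model version, and file name are placeholders, and the Lookout for Vision documentation should be consulted for the exact response fields.

```python
import boto3

client = boto3.client("lookoutvision", region_name="us-east-1")

with open("line_camera_frame.jpg", "rb") as f:
    image_bytes = f.read()

# Score one image against a hosted model version (names are placeholders).
response = client.detect_anomalies(
    ProjectName="packaging-line-qc",
    ModelVersion="1",
    Body=image_bytes,
    ContentType="image/jpeg",
)

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"], "confidence:", result["Confidence"])
```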

Read more at Business Wire

Computer Vision Advances Zero-Defect Manufacturing

📅 Date:

🔖 Topics: Computer Vision, Quality Assurance, Zero Defect Manufacturing

🏢 Organizations: Relimetrics, Hewlett Packard Enterprise


A key part of the process Hewlett Packard Enterprise wanted to automate is server assembly quality assurance, which was being done manually by quality operators. This labor-intensive process is prone to error due to human eye fatigue and the inability of quality operators to catch critical defects.

This situation is hardly unusual. According to Kemal Levi, Founder and CEO of Relimetrics, there is “a strong demand for computer vision to replace manual visual inspections. Yet, due to a high production variability, particularly in the case of discrete manufacturing, computer vision systems today are not able to keep up with the rate of change in configurations.”

Read more at Insight Tech