Computer Vision

Assembly Line

Using artificial intelligence to control digital manufacturing

Date:

Topics: Additive Manufacturing, Computer Vision, AI

Organizations: MIT

MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real time. They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
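
To make the closed-loop idea concrete, here is a minimal sketch: a vision observation of each pass feeds a correction policy that nudges a printing parameter. The functions and the proportional rule below are illustrative stand-ins, not MIT's simulation-trained controller.

```python
import random

def observe_layer():
    """Stand-in for the vision system: return a scalar over/under-extrusion
    error for the most recent deposition pass (hypothetical signal)."""
    return random.uniform(-1.0, 1.0)

def correction_policy(error):
    """Stand-in for the simulation-trained controller: map the observed error
    to a small flow-rate adjustment (a hand-written proportional rule here)."""
    return -0.1 * error

flow_rate = 1.0
for _ in range(5):  # a few correction cycles during a print
    flow_rate += correction_policy(observe_layer())
    print(f"adjusted flow rate: {flow_rate:.3f}")
```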

The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.

Read more at MIT News

Andrew Ng’s Landing AI aims to help manufacturers deploy AI vision systems

Date:

Author: Jean Thilmany

Topics: Computer Vision, PLC

Organizations: Landing AI

Today, the company announced its LandingEdge, which customers can use to deploy deep-learning-based vision inspection to their production floor. The company’s first product, Landing Lens, enables teams, who don’t have to be trained software engineers, to develop deep learning models. LandingEdge extends that capability into deployment, Yang says. “Strategically, manufacturers start AI with inspection,” Yang said. “They use cameras to repurpose the human looking at the product, which makes inspection more precise.”

LandingEdge aims to simplify deployment of the platform for manufacturers. Typically, users set up a method to “train” their vision system by plugging the LandingEdge app into a programmable logic controller (PLC) and cameras. The PLC continuously monitors the state of the cameras and the vision system itself.

Read more at VentureBeat

How pioneering deep learning is reducing Amazon’s packaging waste

Date:

Author: Sean O'Neill

Topics: Machine Learning, Computer Vision, Convolutional Neural Network, Sustainability, E-commerce

Organizations: Amazon

Fortunately, machine learning approaches — particularly deep learning — thrive on big data and massive scale, and a pioneering combination of natural language processing and computer vision is enabling Amazon to home in on using the right amount of packaging. These tools have helped Amazon drive change over the past six years, reducing per-shipment packaging weight by 36% and eliminating more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.

“When the model is certain of the best package type for a given product, we allow it to auto-certify it for that pack type,” says Bales. “When the model is less certain, it flags a product and its packaging for testing by a human.” The technology is currently being applied to product lines across North America and Europe, automatically reducing waste at a growing scale.
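
A rough sketch of the certify-or-flag routing Bales describes might look like the following; the threshold value and function names are assumptions for illustration, not details from Amazon's pipeline.

```python
AUTO_CERTIFY_THRESHOLD = 0.95  # assumed cut-off, not taken from the article

def route_packaging_decision(pack_type_probs):
    """Given per-package-type probabilities from the model, auto-certify the
    top choice when confidence is high, otherwise flag for human testing."""
    best_type, confidence = max(pack_type_probs.items(), key=lambda kv: kv[1])
    if confidence >= AUTO_CERTIFY_THRESHOLD:
        return {"action": "auto_certify", "pack_type": best_type}
    return {"action": "flag_for_human_review",
            "pack_type": best_type,
            "confidence": confidence}

# Example: a confident prediction is certified, an uncertain one is flagged.
print(route_packaging_decision({"padded_mailer": 0.97, "box": 0.03}))
print(route_packaging_decision({"padded_mailer": 0.55, "box": 0.45}))
```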

Read more at Amazon Science

Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection

Date:

Author: Angie Lee

Topics: computer vision, defect detection

Organizations: Mariner, NVIDIA

Traditional machine vision systems installed in factories have difficulty discerning between true defects — like a stain in fabric or a chip in glass — and false positives, like lint or a water droplet that can be easily wiped away.

Spyglass Visual Inspection, or SVI, helps manufacturers detect the defects they couldn’t see before. SVI uses AI software and NVIDIA hardware connected to camera systems to provide real-time inspection of pieces on production lines, identify potential issues, and determine whether they are true material defects — in just a millisecond.
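
The two-stage idea, conventional vision proposing candidate regions and a learned classifier separating true defects from benign artifacts like lint or droplets, can be sketched roughly as below. The region proposal and scoring functions are placeholder stand-ins, not Mariner's implementation.

```python
import numpy as np

def propose_candidate_regions(frame):
    """Stand-in for a traditional machine-vision step: flag unusually bright
    pixels and return one crude (x, y, w, h) candidate box (illustrative)."""
    ys, xs = np.where(frame > frame.mean() + 2 * frame.std())
    if len(xs) == 0:
        return []
    return [(int(xs.min()), int(ys.min()),
             int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))]

def classify_region(crop):
    """Stand-in for a CNN that scores how likely a crop shows a real defect
    (stain, chip) rather than lint or a water droplet."""
    return float(crop.mean()) / 255.0  # placeholder score

def inspect(frame, defect_threshold=0.5):
    """Run the two-stage check on one grayscale frame."""
    defects = []
    for x, y, w, h in propose_candidate_regions(frame):
        score = classify_region(frame[y:y + h, x:x + w])
        if score >= defect_threshold:
            defects.append(((x, y, w, h), score))
    return defects
```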

Read more at NVIDIA Blog

How DeepMind is Reinventing the Robot

Date:

Author: Tom Chivers

Topics: robotics, artificial intelligence, robotic arm, computer vision

Organizations: DeepMind

To train a robot, though, such huge data sets are unavailable. “This is a problem,” notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What’s more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you’ll have a badly dented robot, if not worse.

There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones. “One of our classic examples was training an agent to play Pong,” says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, “then the performance will—boop!—go off a cliff.” Suddenly it will lose 20 to zero every time.

There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its weights to data storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.
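
In code, that siloing strategy amounts to little more than keeping one weight snapshot per task and swapping them in once the task is recognized; a minimal sketch, assuming a PyTorch-style model with state_dict/load_state_dict:

```python
import copy

task_weights = {}  # task name -> saved parameter snapshot

def finish_task(model, task_name):
    """After training on a task, store a copy of the model's weights."""
    task_weights[task_name] = copy.deepcopy(model.state_dict())

def switch_to_task(model, task_name):
    """At run time, once the task is recognized, load its weights back in."""
    model.load_state_dict(task_weights[task_name])
```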

But that strategy is limited. For one thing, it’s not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you’d have to train it on every single one of them. And if the environment is unstructured, you won’t even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn’t let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.

Hadsell’s preferred approach is something called “elastic weight consolidation.” The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights.
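
In practice, elastic weight consolidation adds a quadratic penalty to the new task's loss, weighted by an estimate of how important each parameter was to the old task (typically a diagonal Fisher information estimate). A minimal PyTorch-style sketch with hypothetical names, not DeepMind's code:

```python
import torch

def estimate_fisher(model, loss_fn, data_loader):
    """Approximate the diagonal Fisher information for each parameter by
    averaging squared gradients over data from the old task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty keeping weights important to the old task close to
    their previously learned values while training on the new task."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# During new-task training: total_loss = new_task_loss + ewc_penalty(...)
```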

Read more at IEEE Spectrum

Real-World ML with Coral: Manufacturing

Date:

Author: Michael Brooks

Topics: edge computing, AI, machine learning, computer vision, convolutional neural network, Tensorflow, worker safety

Organizations: Coral

For over 3 years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high-performance products. We’ve released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region (a minimal version of this overlap check is sketched after this list).
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.
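
For the Worker Safety example, the collision test is a standard axis-aligned bounding-box overlap check; a minimal sketch follows, where the coordinates and the unsafe region are made up and the published Coral demo code may differ.

```python
def boxes_overlap(a, b):
    """a and b are (xmin, ymin, xmax, ymax) boxes; True if they intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

UNSAFE_REGION = (0.6, 0.0, 1.0, 1.0)  # assumed, in normalized image coordinates

def person_in_unsafe_region(person_boxes):
    """person_boxes: detector output boxes for the 'person' class."""
    return any(boxes_overlap(box, UNSAFE_REGION) for box in person_boxes)
```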

Read more at TensorFlow Blog

Visual Inspection AI: a purpose-built solution for faster, more accurate quality control

Date:

Authors: Mandeep Wariach, Thomas Reinbacher

Topics: cloud computing, computer vision, machine learning, quality assurance

Organizations: Google

The Google Cloud Visual Inspection AI solution automates visual inspection tasks using a set of AI and computer vision technologies that enable manufacturers to transform quality control processes by automatically detecting product defects.

We built Visual Inspection AI to meet the needs of quality, test, manufacturing, and process engineers who are experts in their domain, but not in AI. By combining ease of use with a focus on priority use cases, customers are realizing significant benefits compared to general-purpose machine learning (ML) approaches.

Read more at Google Cloud Blog

Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects

Date:

Author: Scott Martin

Topics: AI, edge computing, computer vision, recycling

Vertical: Plastics and Rubber

Organizations: NVIDIA, AMP Robotics

People worldwide produce 2 billion tons of waste a year, with 37 percent going to landfill, according to the World Bank.

“Sorting by hand on conveyor belts is dirty and dangerous, and the whole place smells like rotting food. People in the recycling industry told me that robots were absolutely needed,” said Horowitz, the company’s CEO.

His startup, AMP Robotics, can double sorting output and increase purity for bales of materials. It can also sort municipal waste, electronic waste, and construction and demolition materials.

Read more at NVIDIA Blog

AWS Announces General Availability of Amazon Lookout for Vision

Date:

Topics: cloud computing, computer vision, machine learning, quality assurance

Organizations: AWS, Basler, Dafgards, General Electric

AWS announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called “few-shot learning,” Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers. Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service.
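
Once a model is trained and running, scoring an image is a single API call. A minimal boto3 sketch follows; the project name, model version, and image path are placeholders, and the parameter names should be checked against the current Amazon Lookout for Vision documentation.

```python
import boto3

client = boto3.client("lookoutvision", region_name="us-east-1")

with open("sample_part.jpg", "rb") as image:  # placeholder image of a part
    response = client.detect_anomalies(
        ProjectName="my-defect-project",  # placeholder project name
        ModelVersion="1",                 # placeholder model version
        Body=image.read(),
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"], "Confidence:", result["Confidence"])
```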

Read more at Business Wire