Canvas Category: Software : Cloud Computing : General
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
Industrial Automation Software Management on AWS—Best Practices for Operational Excellence
Operational and maintenance tasks can become complex, and change control becomes challenging as the number of PLCs, robots, and other automation systems increases. Problems arise when the right version and the right configuration of the code cannot be found. While code and configuration management is a standard DevOps practice in software development, it is far less common in the world of industrial automation, primarily due to a lack of good tooling. These challenges can now be solved through systematic, secure, and easily accessible solutions in the AWS cloud.
One such solution is Copia Automation’s Git-based source control (Git is an open-source DevOps tool for source code management). Copia Automation brings the power of a modern source control system (specifically, Git) to industrial automation. The Copia solution is deployed in Copia’s own AWS account. In this type of deployment model, Copia is responsible for managing and configuring the infrastructure needed to run its software.
☁️🧠 Automated Cloud-to-Edge Deployment of Industrial AI Models with Siemens Industrial Edge
Due to the sensitive nature of OT systems, a cloud-to-edge deployment can become a challenge. Specialized hardware devices are required, strict network protection is applied, and security policies are in place. Data can only be pulled by an intermediate factory IT system from where it can be deployed to the OT systems through highly controlled processes.
The following solution describes the “pull” deployment mechanism using AWS services and the Siemens Industrial AI software portfolio. The deployment process is enabled by three main components, the first of which is the Siemens AI Software Development Kit (AI SDK). After a model is created by a data scientist on Amazon SageMaker and stored in the SageMaker model registry, this SDK allows users to package the model in a format suitable for edge deployment using Siemens Industrial Edge. The second component, and the central connection between cloud and edge, is the Siemens AI Model Manager (AI MM). The third component is the Siemens AI Inference Server (AIIS), a specialized and hardened AI runtime environment running as a container on Siemens IEDs deployed on the shopfloor. The AIIS receives the packaged model from AI MM and is responsible for loading, executing, and monitoring ML models close to the production lines.
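A rough sketch of the packaging step described above, under the assumption (this is not the Siemens AI SDK's actual API) that a registered model is bundled with integrity metadata so the edge runtime can verify what it pulls:

```python
# Illustrative sketch only: package model-registry metadata into an edge
# deployment descriptor that a pull-based edge runtime could fetch and verify.
# All names (model name, runtime id) are hypothetical.
import hashlib
import json

def build_edge_package(model_name: str, model_version: str,
                       artifact_bytes: bytes, target_runtime: str) -> dict:
    """Bundle model metadata with an artifact checksum for pull-based deployment."""
    return {
        "model": model_name,
        "version": model_version,
        "runtime": target_runtime,
        # The checksum lets the edge device verify the pulled artifact's integrity.
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "size_bytes": len(artifact_bytes),
    }

package = build_edge_package("vision-qc", "3", b"\x00fake-model-weights",
                             "ai-inference-server")
print(json.dumps(package, indent=2))
```

The checksum-before-load step reflects the controlled, verifiable hand-off that pull-based OT deployments require; the real AI SDK/AI MM exchange is of course richer than this.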
Transforming Semiconductor Yield Management with AWS and Deloitte
Together, AWS and Deloitte have developed a reference architecture to enable the aforementioned yield management capabilities. The architecture, shown in Figure 1, depicts how to collect, store, analyze, and act on yield-related data throughout the supply chain. The following describes how the modernized yield management architecture enables the six capabilities discussed earlier.
IBM and AWS partnering to transform industrial welding with AI and machine learning
IBM Smart Edge for Welding on AWS utilizes audio and visual capturing technology developed in collaboration with IBM Research. Using visual and audio recordings taken at the time of the weld, state-of-the-art artificial intelligence and machine learning models analyze the quality of the weld. If the quality does not meet standards, alerts are sent, and remediation action can take place without delay.
The solution substantially reduces the time between detection and remediation of defects, as well as the number of defects on the manufacturing line. By leveraging a combination of optical, thermal, and acoustic insights during the weld inspection process, two key manufacturing personas, the weld technician and the process engineer, can better determine whether a welding discontinuity may result in a defect that will cost time and money.
Predictive Maintenance for Semiconductor Manufacturers with SEEQ powered by AWS
There are challenges in creating predictive maintenance models, such as siloed data, the offline nature of data processing and analytics, and having the necessary domain knowledge to build, implement, and scale models. In this blog, we will explore how using Seeq software on Amazon Web Services can help overcome these challenges.
The combination of AWS and Seeq pairs a secure cloud services platform with advanced analytics innovation. Seeq on AWS can access time series and relational data stored in AWS data services including Amazon Redshift, Amazon DynamoDB, Amazon Simple Storage Service (S3), and Amazon Athena. Once connected, engineers and other technical staff have direct access to all the data in those databases in a live streaming environment, enabling exploration and data analytics without needing to extract data and align timestamps whenever more data is required. As a result, monitoring dashboards and recurring reports can be set to auto-generate and are easily shared among groups or sites. This enables balancing machine downtimes and planning ahead for maintenance without disrupting schedules or compromising yields.
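The timestamp-alignment chore the article alludes to can be sketched in plain Python. The sensor series and the last-known-value (step) interpolation policy below are illustrative assumptions, not Seeq's implementation:

```python
# Joining two sensor series sampled at different times onto one time base,
# holding the last observed value of each series (step interpolation).
# This is the manual work a live connection spares engineers from repeating.
import bisect

def align(series_a, series_b):
    """series_*: sorted lists of (timestamp, value).
    Returns rows (t, a_value, b_value) on the union of timestamps."""
    def last_value(series, t):
        times = [ts for ts, _ in series]
        i = bisect.bisect_right(times, t) - 1
        return series[i][1] if i >= 0 else None

    all_t = sorted({ts for ts, _ in series_a} | {ts for ts, _ in series_b})
    return [(t, last_value(series_a, t), last_value(series_b, t)) for t in all_t]

temp = [(0, 21.5), (10, 22.0), (20, 22.4)]   # e.g. temperature readings
vib  = [(5, 0.11), (15, 0.13)]               # e.g. vibration readings
for row in align(temp, vib):
    print(row)
```

Every new data pull would repeat this kind of merge by hand, which is why a live, pre-aligned streaming view is the selling point here.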
Kibsi Launches AI Platform to Help Customers Gain Business Insights from Cameras After Securing $9.3 Million in Funding
The world of computer vision is transformed with the launch of Kibsi, a platform designed to redefine the way businesses build and deploy computer vision applications. Kibsi offers an intuitive, low-code, drag-and-drop environment that makes it incredibly easy for anyone to leverage the power of AI to breathe new life into their existing cameras.
Kibsi has already attracted some of the world’s most exciting companies as early customers, including Owens Corning, Whisker, and Woodland Park Zoo. These pioneers recognize the transformative impact AI brings to their businesses and the future possibilities that computer vision creates.
The company is also excited to announce its partnership with Amazon Web Services (AWS), recently joining the prestigious ISV Accelerate Program, demonstrating a mutual commitment to provide exceptional outcomes for joint customers. Further, to ensure a smooth purchasing and integration process, Kibsi is now available in the AWS Marketplace.
GE Aerospace's cloud journey with AWS
🔏🚗 In-Depth Analysis of Cyber Threats to Automotive Factories
We found that Ransomware-as-a-Service (RaaS) operations, such as Conti and LockBit, are active in the automotive industry. These are characterized by stealing confidential data from within the target organization before encrypting its systems, forcing automakers to face threats of halted factory operations and public exposure of intellectual property (IP). For example, Continental (a major automotive parts manufacturer) was attacked in August, with some IT systems accessed. The company immediately took response measures, restoring normal operations and cooperating with external cybersecurity experts to investigate the incident. However, in November, LockBit took to its data leak website and claimed to have 40TB of Continental’s data, offering to return the data for a ransom of $40 million.
Previous studies on automotive factories mainly focus on the general issues in the OT/ICS environment, such as difficulty in executing security updates, knowledge gaps among OT personnel regarding security, and weak vulnerability management. In light of this, TXOne Networks has conducted a detailed analysis of common automotive factory digital transformation applications to explain how attackers can gain initial access and link different threats together into a multi-pronged attack to cause significant damage to automotive factories.
In the study of industrial robots, controllers sometimes enable universal remote connection services (such as FTP or web interfaces) or manufacturer-defined APIs to give operators convenient control of the robot through the Control Station. However, we found that most robot controllers do not enable any authentication mechanism by default, and some cannot enable one at all. This allows attackers lurking in the factory to directly execute any operation on robots through tools released by the robot manufacturers. In the case of Digital Twin applications, attackers lurking in the factory can also exploit vulnerabilities in simulation devices to execute malicious code against their models. When a Digital Twin’s model is attacked, the generated simulation environment can no longer maintain congruency with the physical environment. Worse, a tampered model may exhibit no obviously malicious behavior, so the damage can go unchecked and unfixed for a long time. Engineers may unknowingly continue using the damaged Digital Twin, leading to inaccurate research and development, or to incorrect factory decisions based on false information, which can result in greater financial losses than ransomware attacks.
Element and HighByte Announce Partnership, Launch Solution Based on AWS’s Industrial Data Fabric Architecture
Element and HighByte, leading data management providers to global industrial companies, announced the launch of an integrated solution based on AWS’s Industrial Data Fabric offerings. The solution, powered by Amazon Web Services (AWS), allows information technology (IT) and operational technology (OT) users to contextualize and normalize data into rich information for analytics and other business systems. The solution is designed to be maintained and scaled across the enterprise as the number of use cases that rely on industrial data grows exponentially.
HAYAT HOLDING uses Amazon SageMaker to increase product quality and optimize manufacturing output, saving $300,000 annually
In this post, we share how HAYAT HOLDING—a global player with 41 companies operating in different industries, including HAYAT, the world’s fourth-largest branded diaper manufacturer, and KEAS, the world’s fifth-largest wood-based panel manufacturer—collaborated with AWS to build a solution that uses Amazon SageMaker Model Training, Amazon SageMaker Automatic Model Tuning, and Amazon SageMaker Model Deployment to continuously improve operational performance, increase product quality, and optimize manufacturing output of medium-density fiberboard (MDF) wood panels.
Quality prediction using ML is powerful but requires effort and skill to design, integrate with the manufacturing process, and maintain. With the support of AWS Prototyping specialists, and AWS Partner Deloitte, HAYAT HOLDING built an end-to-end pipeline. Product quality prediction and adhesive consumption recommendation results can be observed by field experts through dashboards in near-real time, resulting in a faster feedback loop. Laboratory results indicate a significant impact equating to savings of $300,000 annually, reducing their carbon footprint in production by preventing unnecessary chemical waste.
📦 How AWS used ML to help Amazon fulfillment centers reduce downtime by 70%
The retail leader has announced it uses Amazon Monitron, an end-to-end machine learning (ML) system launched in December 2020 that detects abnormal behavior in industrial machinery, to provide predictive maintenance. As a result, Amazon has reduced unplanned downtime at its fulfillment centers by nearly 70%, which helps deliver more customer orders on time.
Monitron receives automatic temperature and vibration measurements every hour and can detect potential failures within hours, compared with the four weeks required by the previous manual techniques. In the year and a half since the fulfillment centers began using it, it has helped avoid about 7,300 confirmed issues across 88 fulfillment center sites worldwide.
Boehringer Ingelheim: Healthy data creates a better world
How Corning Built End-to-end ML on Databricks Lakehouse Platform
Specifically for quality inspection, we take high-resolution images to look for irregularities in the cells, which can be predictive of leaks and defective parts. The challenge, however, is the prevalence of false positives due to the debris in the manufacturing environment showing up in pictures.
To address this, we manually brush and blow the filters before imaging. We discovered that by notifying operators of which specific parts to clean, we could significantly reduce the total time required for the process, and machine learning came in handy. We used ML to predict whether a filter is clean or dirty based on low-resolution images taken while the operator is setting up the filter inside the imaging device. Based on the prediction, the operator would get the signal to clean the part or not, thus reducing false positives on the final high-res images, helping us move faster through the production process and providing high-quality filters.
Building a Predictive Maintenance Solution Using AWS AutoML and No-Code Tools
In this post, we describe how equipment operators can build a predictive maintenance solution using AutoML and no-code tools powered by Amazon Web Services (AWS). This type of solution delivers significant gains to large-scale industrial systems and mission-critical applications where costs associated with machine failure or unplanned downtime can be high.
To implement a prototype of the RUL model, we use a publicly available dataset known as NASA Turbofan Jet Engine Data Set. This dataset is often used for research and ML competitions. The dataset includes degradation trajectories of 100 turbofan engines obtained from a simulator. Here, we explore only one of the four sub-datasets included, namely the training part of the dataset: FD001.
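As a sketch of how training labels are typically derived from FD001 (where every engine in the training split is run to failure), the remaining useful life (RUL) at any cycle is that unit's final cycle minus the current cycle. The parsing of the raw file into (unit, cycle) rows is assumed here:

```python
# Computing RUL labels for run-to-failure trajectories such as FD001's
# training split. Each record is (unit_id, cycle); the last cycle observed
# for a unit is its failure point, so RUL = max_cycle[unit] - cycle.
from collections import defaultdict

def rul_labels(records):
    """records: list of (unit_id, cycle) rows. Returns a parallel list of RULs."""
    max_cycle = defaultdict(int)
    for unit, cycle in records:
        max_cycle[unit] = max(max_cycle[unit], cycle)
    return [max_cycle[unit] - cycle for unit, cycle in records]

# Two toy engines: unit 1 fails at cycle 3, unit 2 at cycle 2.
rows = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]
print(rul_labels(rows))  # [2, 1, 0, 1, 0]
```

This labeling only works for the training part of the dataset; the test split is truncated before failure and ships with separate ground-truth RUL values.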
Building Industrial Digital Twins on AWS Using MQTT Sparkplug
Even better, a Sparkplug solution is built around an event-based, publish-subscribe architectural model that uses report-by-exception for communication: your Digital Twin instances are updated only when a change in the dynamic properties is detected. Firstly, this saves computational and network resources such as CPU, memory, power, and bandwidth. Secondly, it results in a highly responsive system in which anomalies picked up by the analytics system can be acted on in real time.
Further, due to the underlying MQTT infrastructure, a Sparkplug-based Digital Twin solution can scale to support millions of physical assets, which means that you can keep adding assets without disruption. What’s more, MQTT Sparkplug’s definition of MQTT Session State Management ensures that your Digital Twin solution is always aware of the status of all your physical assets at any given time.
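The report-by-exception pattern described above can be sketched independently of any MQTT library. The deadband, metric name, and publish callback below are illustrative choices, not part of the Sparkplug specification:

```python
# Report-by-exception in miniature: publish a metric only when its value
# changes beyond a deadband, instead of on every sample. The callback stands
# in for an MQTT client's publish; this is not the Sparkplug payload format.
class RbePublisher:
    def __init__(self, publish, deadband=0.0):
        self._publish = publish      # e.g. an MQTT client's publish method
        self._deadband = deadband    # ignore changes smaller than this
        self._last = {}              # metric name -> last reported value

    def report(self, metric: str, value: float) -> bool:
        """Publish only if the value changed; return True when published."""
        last = self._last.get(metric)
        if last is not None and abs(value - last) <= self._deadband:
            return False             # suppressed: no meaningful change
        self._last[metric] = value
        self._publish(metric, value)
        return True

sent = []
pub = RbePublisher(lambda m, v: sent.append((m, v)), deadband=0.5)
for v in [20.0, 20.2, 21.0, 21.1]:
    pub.report("motor/temperature", v)
print(sent)  # [('motor/temperature', 20.0), ('motor/temperature', 21.0)]
```

Four samples produce only two publishes, which is exactly the bandwidth and CPU saving the paragraph above describes.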
How KAMAX connected their industrial machines to AWS in hours instead of weeks
Every manufacturing customer these days is talking about Industry 4.0, digital transformation, or AI/ML, but these can be daunting topics for manufacturers. Historically, connecting industrial assets to the cloud has been a large and complicated undertaking. Older assets increase the complexity, leaving many manufacturers with legacy equipment stalled at the starting gate. KAMAX, a manufacturer of cold-formed parts in the steel-processing sector, shows that transformation is not only possible but can be easy when working with the right partners. KAMAX wanted a fully managed shop-floor solution to acquire data from industrial equipment, process the data, and make it available quickly, in order to improve operational efficiency. KAMAX employed its subsidiary and digital incubator, nexineer digital, along with Amazon Web Services (AWS) and CloudRail, to help. This Industrial IoT collaboration increased manufacturing efficiency and effectiveness within its plants by automating and optimizing traditionally manual tasks, increasing production capacity, and optimizing tool changeover times (planned downtimes) of machines. The solution helped KAMAX realize quantifiable time savings of 2.5%–3.5%.
California’s AI-Powered Wildfire Prevention Efforts Contend With Data Challenge
Southern California Edison, San Diego Gas & Electric Co. and PG&E Corp. say they see promise in AI algorithms that use images captured by drones and other means to detect anomalies in infrastructure that could lead to wildfires. However, they say it will likely take years to gather enough data to deploy the algorithms at scale across their infrastructure, where they would augment ongoing manual inspections.
San Diego Gas & Electric said it has 75 working models designed to detect specific conditions or damages on company assets or third-party equipment. Gabe Mika, senior group product manager, said each is trained on anywhere from 100 to 5,000 images. SDG&E has leveraged several of Amazon Web Services’ machine-learning and computer vision tools to help build the models, the company said.
Visual search: how to find manufacturing parts in a cinch
In the modern world, advanced recognition technologies play an increasingly important role in various areas of human life. Recognizing the characteristics of vehicle tires is one such area where deep learning is making a valuable difference. Solving the problem of recognizing tire parameters can help to simplify the process of selecting tire replacements when you don’t know which tires will fit. This recognition can be useful both for customer-facing ecommerce and in-store apps used by associates to quickly read necessary tire specs.
During the research process, we decided that online stores and bulletin boards would be the main data sources, since there were thousands of images and, most importantly, almost all of them had structured descriptions. Images from search engines could only be used for training segmentation, because they did not contain the necessary structured features.
In this blog post we have described the complete process of creating a tire lettering recognition system from start to finish. Despite the large number of existing methods, approaches and functions in the field of image recognition and processing, there remains a huge gap in available research and implementation for very complex and accurate visual search systems.
Connecting an Industrial Universal Namespace to AWS IoT SiteWise using HighByte Intelligence Hub
Merging industrial and enterprise data across multiple on-premises deployments and industrial verticals can be challenging. This data comes from a complex ecosystem of industrial-focused products, hardware, and networks from various companies and service providers, which drives the creation of data silos and isolated systems that perpetuate a one-to-one integration strategy.
HighByte Intelligence Hub addresses this challenge. It is a middleware solution for a universal namespace that helps you build scalable, modern industrial data pipelines in AWS. It also allows users to collect data from various sources, add context to the data being collected, and transform it into a format that other systems can understand.
Koch Ag & Energy High Value Digitalization Deployments Leverages AWS
This application uses existing plant sensors, Monitron sensors, Amazon Lookout for Equipment, and Seeq software to implement predictive maintenance on more complex equipment. Successfully implementing predictive maintenance requires data from thousands of sensors to gain a clear understanding of unique operating conditions, plus machine learning models applied to that data to achieve highly accurate predictions. In the past, modeling equipment behavior and diagnosing issues required significant investment in time and money, inhibiting the scaling of this capability across all assets.
AWS Announces AWS IoT TwinMaker
Industrial companies collect and process vast troves of data about their equipment and facilities from sources like equipment sensors, video cameras, and business applications (e.g. enterprise resource planning systems or project management systems). Many customers want to combine these data sources to create a virtual representation of their physical systems (called a digital twin) to help them simulate and optimize operational performance. But building and managing digital twins is hard even for the most technically advanced organizations.

To build digital twins, customers must manually connect different types of data from diverse sources (e.g. time-series sensor data from equipment, video feeds from cameras, maintenance records from business applications, etc.). Then customers have to create a knowledge graph that provides common access to all the connected data and maps the relationships between the data sources to the physical environment. To complete the digital twin, customers have to build a 3D virtual representation of their physical systems (e.g. buildings, factories, equipment, production lines, etc.) and overlay the real-world data on to the 3D visualization.

Once they have a virtual representation of their real-world systems with real-time data, customers can build applications for plant operators and maintenance engineers that can leverage machine learning and analytics to extract business insights about the real-time operational performance of their physical systems. Because of the work required, the vast majority of organizations are unable to use digital twins to improve their operations.
Apollo Tyres Moves to AWS to Build Smart, Connected Factories
Apollo Tyres needed to upgrade its infrastructure to develop new ways of engaging with fleet operators, tyre dealers, and consumers, while delivering tyres and services efficiently at competitive prices. The company’s first step was to create a data lake on AWS, which centrally stores Apollo Tyres’ structured and unstructured data at scale. This data lake provides the foundation for an integrated data platform, which enables Apollo Tyres’ engineers around the world to collaborate in developing cloud-native applications and improve enterprise-wide decision making. The integrated data platform enables Apollo Tyres to innovate new products and services, including energy-efficient tires and remote warranty fulfillment.
AWS, Google, Microsoft apply expertise in data, software to manufacturing
As manufacturing becomes digitized, Google’s methodologies that were developed for the consumer market are becoming relevant for industry, said Wee, who previously worked in the semiconductor industry as an industrial engineer. “We believe we’re at a point in time where these technologies—primarily the analytics and AI area—that have been very difficult to use for the typical industrial engineer are becoming so easy to use on the shop floor,” he said. “That’s where we believe our competitive differentiation lies.”
Meanwhile, Ford is also selectively favoring human brain power over software to analyze data and turning more and more to in-house coders than applications vendors. “The solution will be dependent upon the application,” Mikula said. “Sometimes it will be software, and sometimes it’ll be a data analyst who crunches the data sources. We would like to move to solutions that are more autonomous and driven by machine learning and artificial intelligence. The goal is to be less reliant on purchased SaaS.”
AWS IoT SiteWise Edge Is Now Generally Available for Processing Industrial Equipment Data on Premises
With AWS IoT SiteWise Edge, you can organize and process your equipment data in the on-premises SiteWise gateway using AWS IoT SiteWise asset models. You can then read the equipment data locally from the gateway using the same application programming interfaces (APIs) that you use with AWS IoT SiteWise in the cloud. For example, you can compute metrics such as Overall Equipment Effectiveness (OEE) locally for use in a production-line monitoring dashboard on the factory floor.
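As an illustration of the OEE example above, the standard formula is Availability × Performance × Quality. The shift numbers below are made up, and the parameter names are generic rather than actual SiteWise asset-model properties:

```python
# Overall Equipment Effectiveness (OEE), the metric computed locally in the
# SiteWise Edge example: Availability x Performance x Quality.
def oee(run_time_min, planned_time_min, ideal_cycle_min, total_count, good_count):
    availability = run_time_min / planned_time_min      # uptime vs. plan
    performance = (ideal_cycle_min * total_count) / run_time_min  # speed vs. ideal
    quality = good_count / total_count                  # yield of good parts
    return availability * performance * quality

# One shift: 420 planned minutes, 378 minutes actually running,
# 1.0 min ideal cycle time, 340 parts produced, 323 of them good.
print(round(oee(378, 420, 1.0, 340, 323), 3))  # 0.769
```

Note the factors telescope: with an ideal cycle time of 1.0 minute, OEE reduces to good parts divided by the parts the planned time could ideally have produced (323/420), a useful sanity check on a dashboard.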
Seeq Accelerates Chemical Industry Success with AWS
Seeq Corporation, a leader in manufacturing and Industrial Internet of Things (IIoT) advanced analytics software, today announced agreements with two of the world’s premier chemical companies: Covestro and allnex. These companies have selected Seeq on Amazon Web Services (AWS) as their corporate solution, empowering their employees to improve production and business outcomes.
Amazon Lookout For Equipment – Predictive Maintenance Is Now Mature
Amazon Lookout for Equipment is designed for maintainers, not data scientists, and it comes from a place of knowledge. Incorporating expertise and insight gleaned from maintaining its own assets, Amazon aims to make it as easy as possible for users to get started and begin seeing value, addressing potential issues around usability and configurability.
In terms of technical abilities, it currently only covers simple assets like motors, conveyors, and servos – essentially, the kind of assets Amazon itself uses. It doesn’t yet monitor more sophisticated assets like robots or CNC machinery, although, in time, I do not doubt that these will be covered too. As it stands, though, it will be competent for a lot of standard factory equipment.
How to build a predictive maintenance solution using Amazon SageMaker
Run Semiconductor Design Workflows on AWS
This implementation guide provides you with information and guidance to run production semiconductor workflows on AWS, from customer specification, to front-end design and verification, back-end fabrication, packaging, and assembly. Additionally, this guide shows you how to build secure chambers to quickly enable third-party collaboration, as well as leverage an analytics pipeline and artificial intelligence/machine learning (AI/ML) services to decrease time-to-market and increase return on investment (ROI). Customers that run semiconductor design workloads on AWS have designed everything from simple ASICs to large SOCs with tens of billions of transistors, at the most advanced process geometries. This guide describes the numerous AWS services involved with these workloads, including compute, storage, networking, and security. Finally, this paper provides guidance on hybrid flows and data transfer methods to enable a seamless hybrid environment between on-premises data centers and AWS.
Introducing Amazon SageMaker Reinforcement Learning Components for open-source Kubeflow pipelines
Woodside Energy uses AWS RoboMaker with Amazon SageMaker Kubeflow operators to train, tune, and deploy reinforcement learning agents to their robots to perform manipulation tasks that are repetitive or dangerous.
AWS Announces General Availability of Amazon Lookout for Vision
AWS announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called “few-shot learning,” Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers. Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service.
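To illustrate how per-image anomaly results might feed a production-line decision, here is a hedged sketch. The result dict shape, the threshold, and the dispositions are assumptions for illustration, not the Amazon Lookout for Vision API:

```python
# Consuming anomaly-detection results on a line: reject a part when the model
# flags an anomaly with high confidence, route low-confidence flags to a
# person. The result format below is illustrative, not the service's API.
def triage(result: dict, threshold: float = 0.8) -> str:
    """result: {'is_anomalous': bool, 'confidence': float} -> disposition."""
    if result["is_anomalous"] and result["confidence"] >= threshold:
        return "reject"
    if result["is_anomalous"]:
        return "manual-review"   # low-confidence flags go to an inspector
    return "pass"

print(triage({"is_anomalous": True, "confidence": 0.93}))   # reject
print(triage({"is_anomalous": True, "confidence": 0.55}))   # manual-review
print(triage({"is_anomalous": False, "confidence": 0.97}))  # pass
```

Keeping the gating logic outside the model makes the reject threshold a plant-level tuning knob, which matters when the cost of a missed defect differs from the cost of a false rejection.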
AWS Predictive Quality Industrial Demo
Facilitating IoT provisioning at scale
Whether you’re looking to design a new device or to retrofit an existing device for the IoT, you will need to consider IoT provisioning, which brings IoT devices online to cloud services. Provisioning design requires decisions that impact user experience and security for both network commissioning and credential provisioning: the mechanisms that configure digital identities, cloud endpoints, and network credentials so that devices can securely connect to the cloud.
AI Solution for Operational Excellence
Falkonry Clue is a plug-and-play solution for predictive production operations that identifies and addresses operational inefficiencies from operational data. It is designed to be used directly by operational practitioners, such as production engineers, equipment engineers or manufacturing engineers, without requiring the assistance of data scientists or software engineers.
Unchain the ShopFloor through Software-Defined Automation
But what happens as soon as insight is generated and the status of the physical process needs to be changed to a better state? In manufacturing for discrete and process industries, the process is defined by fixed code routines and programmable parameters. It has its own world of control code languages and standards to define the behavior of controllers, robot arms, sensors, and actuators of all kinds. This world has remained remarkably stable over the past 40-plus years. Control code resides on a controller, and special tools, along with highly skilled automation engineers, define the behavior of a specific production system. Changing the state of an existing, running production system means changing its programs and parameters, which requires physical access to the automation equipment: OT equipment needs to be re-programmed, often locally on every single component. To give a concrete example, let’s assume we can determine from field data, using applied machine learning (also referenced as Industrial IoT), that the behavior of a robotic handling process needs to be adapted. In the existing world, production needs to stop. A skilled engineer needs to physically re-teach or flash the robot controller. The new movement needs to be tested individually and in the context of the adjacent production components. Finally, production can start again. This process can take minutes to hours depending on the complexity of the production system.
Production systems will optimize themselves based on simulated and real experiments. Improvements will rapidly be propagated around the globe. Labor will optimize the learning, not the system. The optimization target could also differ over time or with external influences: in times when renewable energy is cheap, output could be one of the core drivers for optimization, while the minimization of input factors could be paramount in other circumstances.