Amazon Web Services (AWS)
Canvas Category Software : Cloud Computing : General
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
Assembly Line
PTC Announces Strategic Collaboration Agreement with AWS to Help Companies Design Products Faster and Easier with Onshape
PTC (NASDAQ: PTC) announced entry into a Strategic Collaboration Agreement (SCA) with Amazon Web Services (AWS) to accelerate the growth of its Onshape® cloud-native computer-aided design (CAD) and product data management (PDM) solution. This collaboration will focus on advancing Onshape product enhancements, customer adoption programs, and artificial intelligence (AI) initiatives, all of which are aimed at helping product designers and engineers create new, high-quality products faster and more efficiently.
Litmus Edge Unlocks Industrial Data Potential With New AWS IoT SiteWise Integration
Litmus, a leading industrial DataOps company, announced a new integration with Amazon Web Services, Inc. (AWS) that enables industrial customers to unlock a wider range of equipment data and drive actionable business insights. By integrating Litmus Edge with AWS IoT SiteWise, customers can more easily collect, organize, process, and monitor equipment data on premises. The offering fits into customers' current infrastructure, providing advanced data analytics and enabling real-time industrial data operations and management at scale. This collaboration helps manufacturers streamline data handling, boost operational efficiency, reduce downtime, and enhance decision-making. The integration also eases deployment, providing a scalable, robust solution that drives significant value and innovation.
Powering Intelligent Factory Operations with Cognizant’s APEx Factory Whisperer and AWS
In this blog post, we expand on a prior blog post on Cognizant’s APEx solution and demonstrate how AWS IoT SiteWise, AWS IoT TwinMaker, and Amazon Bedrock are used to provide expert guidance to mitigate critical issues in the manufacturing plant. The solution parses operational data from the manufacturing environment, alarms, historical trends, troubleshooting results, workshop manuals, standard operating procedures (SOPs), and Piping & Instrumentation (P&ID) diagrams of a factory and associated assets. We will describe how customers use “Factory Whisperer”, Cognizant’s APEx generative AI assistant solution, to increase uptime, improve quality, and reduce operating costs for manufacturing organizations.
Factory Whisperer uses several different AWS services to provide expert-level guidance to plant managers and maintenance teams. When a user asks Factory Whisperer about a problem, the solution uses natural language processing to understand the question. It then retrieves relevant information from a corporate knowledge base containing manuals, procedures, and past troubleshooting data, as well as real-time and historical sensor recordings. Using a Retrieval-Augmented Generation (RAG) technique, the solution passes this contextual information to a large language model, which produces a tailored, expert-like response suggesting steps to diagnose and fix the issue.
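The retrieve-then-generate flow can be sketched in a few lines. This is a minimal illustration of the RAG pattern, not Cognizant's actual implementation: the knowledge-base entries, telemetry values, and ranking logic are all invented stand-ins, and in the real solution the assembled prompt would be sent to a model on Amazon Bedrock.

```python
# Minimal RAG sketch: retrieve context relevant to a user's question,
# then assemble a prompt for a large language model.
# All documents and readings below are hypothetical examples.

KNOWLEDGE_BASE = [
    "SOP-12: If extruder pressure exceeds 80 bar, check the die for blockage.",
    "Manual 4.2: Vibration above 7 mm/s on pump P-101 indicates bearing wear.",
    "Troubleshooting log: Conveyor C-3 stoppages were traced to a misaligned sensor.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question (a real
    system would use vector embeddings instead)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[str], telemetry: dict) -> str:
    """Combine retrieved documents and live sensor readings into one prompt."""
    ctx = "\n".join(f"- {doc}" for doc in context)
    readings = ", ".join(f"{k}={v}" for k, v in telemetry.items())
    return (f"Context:\n{ctx}\n"
            f"Current readings: {readings}\n"
            f"Question: {question}\n"
            f"Suggest diagnostic steps.")

question = "Why is pump P-101 showing high vibration?"
context = retrieve(question, KNOWLEDGE_BASE)
prompt = build_prompt(question, context, {"vibration_mm_s": 8.4})
# The prompt now grounds the model's answer in plant-specific documents.
```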
APA Group delivers new ERP and cloud-based data strategy
According to Butler, the adoption of Delta Live Tables assisted APA when the company needed to develop integrations for the same source system for two separate projects, one of which was its ERP.
Responding to a question from iTnews, Butler elaborated: “We had ERP and we had another large-scale, similar project somewhere else in the business and they both had their own reporting requirements.
“We didn’t want two different methods for doing data warehousing and reporting. So, we created the third project, which was to stand up Databricks, to facilitate those two [projects].
Building a generative AI reservoir simulation assistant with Stone Ridge Technology
In the field of reservoir simulation, accurate modeling is paramount for understanding and predicting the behavior of subsurface flow through geological formations. However, the complexities involved in creating, implementing, and optimizing these models often pose significant challenges, even for experienced professionals. Fortunately, the integration of artificial intelligence (AI) and large language models (LLMs) offers a transformative solution to streamline and enhance the reservoir simulation workflow. This post describes our efforts in developing an intelligent simulation assistant powered by Amazon Bedrock, Anthropic’s Claude, and Amazon Titan LLMs, aiming to revolutionize the way reservoir engineers approach simulation tasks.
Although not covered in this architecture, two key elements enhance this workflow significantly and are the topic of future exploration: 1) simulation execution using natural language by orchestration through a generative AI agent, and 2) multimodal generative AI (vision and text) analysis and interpretation of reservoir simulation results such as well production logs and 3D depth slices for pressure and saturation evolution. As future work, automating aspects of our current architecture is being explored using an agentic workflow framework as described in this AWS HPC post.
Amazon Web Services launches its most powerful chip yet
AI is touching your food—maybe most of it—by solving the food industry’s unique supply-chain challenges
Using AI to get your products from point A to point B is a growing solution to logistical hurdles, but in no other industry does it feel as nuanced as in the food supply chain. That supply chain includes everything from natural agricultural and weather-related challenges to grow ingredients to inventory management and product shelf life: The end consumer needs that item to stay fresh long enough to cook it and eat it, be it at home or at a food service establishment.
Erik Nieves, cofounder and CEO of Plus One Robotics, explains that he has seen AI greatly reduce time-to-shelf for several products. Part of that comes from robots, like his company's, that automate packaging systems in warehouses with the help of machine learning and 3D computer vision. The robots can stay longer in a cold freezer to package temperature-controlled goods and handle more manual labor than a human, even one with a forklift. They are getting pretty good, he says, at detecting different types of fruit and adjusting their gripper strength to avoid bruising a ripe pear.
By analyzing historical sales data, AI is giving food distributors more insight into what is selling when—and informing their purchase orders accordingly. A 2022 study by the World Wildlife Fund found that AI software offered a 14.8% reduction in food waste per grocery store.
Improve your industrial operations with cloud-based SCADA systems
There are several advantages of cloud-based SCADA systems, such as reducing the need for installing and maintaining expensive server hardware and software on premises and making your industrial data available wherever and whenever you need it. Cloud-based SCADA systems are increasingly important in IIoT and Industry 4.0 because they provide the automation, data collection, analysis, analytics, machine learning, and connectivity necessary to improve processes and operations. With cloud-based SCADA systems, customers have easier access to the data and can use cloud services to manage and analyze the data at scale.
Ignition, by AWS Partner Inductive Automation, is an integrated software platform for SCADA systems. The Inductive Automation partner solution deploys Ignition to the AWS Cloud, enhancing the availability, performance, observability, and resilience of SCADA applications. It provides both standalone and cluster deployment options for Ignition on Amazon EC2 Linux instances. Both options are designed to be secure and highly available, configured with best practices for security, network gateway connections, and database connectivity.
Industrial automation software management on AWS: End-to-end DevOps for factory automation coding to commissioning
An industrial DevOps solution needs to break down these barriers in traditional PLC development, coding, and commissioning. This article presents the application of end-to-end DevOps, from PLC code development to commissioning and beyond, based on a solution by Software Defined Automation (SDA), an AWS Partner. It delves into how DevOps, traditionally not synonymous with PLC or robot programming, can revolutionize these domains and how SDA’s solution built on AWS storage and compute services provides a reliable, scalable, and secure platform for automation engineers to collaborate remotely and increase their productivity. This blog post elucidates the advantages of cloud-based DevOps using a customer case, particularly focusing on agile project management aspects; collaboration tools for industrial automation SIs; and a platform for code backups, version management, and reusable automation code standards.
The core features provided by SDA are Backup, Version Control, Browser-Based Engineering, and Secure Remote Access with Role-Based Access Control. Version Control provides secure storage and traceability of PLC source code versions and changes, backed by Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve virtually any amount of data from anywhere. Version Pro is also central for collaboration, project management, the checkout/check-in process, and version comparisons. SDA Browser-Based Engineering uses AWS-hosted IDEs on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. These IDEs are streamed to web browsers using NICE DCV, a high-performance remote display protocol. SDA PLC Ops provides API-driven capabilities for vendor IDE interaction. It can be used for code-integrity checks and on-demand or scheduled backups of PLCs. This service is backed by Amazon EC2 for vendor-specific installations and Amazon DynamoDB, a serverless, NoSQL, fully managed database, for metadata storage.
Achieving robust closed-loop control in remote locations with Kelvin’s edge-cloud communication
In today’s digital landscape, optimizing the performance of distributed assets in remote locations poses unique challenges. Achieving closed-loop control, where real-time monitoring and adjustments are made based on feedback, becomes particularly difficult when reliable and consistent connectivity is not guaranteed. However, with the advent of distributed edge computing, companies like Kelvin are revolutionizing the way we approach closed-loop control in remote areas. In this blog post, we will delve into Kelvin’s innovative edge-cloud communication mechanism and explore how it enables robust closed-loop control of distributed, networked assets in remote locations.
Kelvin, a leading next-gen industrial automation software company, provides artificial intelligence (AI)–powered asset performance optimization software for industries including energy (for example, well construction and completions), upstream oil and gas production, midstream oil and gas operations, process manufacturing (for example, chemicals, food and beverage, and pulp and paper), mining and metals, and renewable energy. Multiple global enterprises that operate thousands of assets (e.g., BP, Halliburton, and Santos) have used Kelvin solutions built on Amazon Web Services (AWS) to connect, create, and scale advanced closed-loop supervisory-control applications across their operations without needing to rip and replace any of their existing infrastructure.
Tulip Signs Strategic Collaboration Agreement with AWS to Expand Capabilities Driving Resilience for Frontline Operations
Tulip Interfaces, a leader in frontline operations, has announced that it has signed a strategic collaboration agreement (SCA) with Amazon Web Services, Inc. (AWS). The new agreement is designed to advance the adoption of cloud solutions for manufacturing and fuel the development of flexible and adaptable manufacturing operations solutions. Through this multi-year SCA, Tulip and AWS will deliver further value to their customers by expanding collaboration in the areas of analytics, computer vision, and edge computing to enhance Tulip product capabilities and offerings with AWS services.
Accelerating industrialization of Machine Learning at BMW Group using the Machine Learning Operations (MLOps) solution
The BMW Group’s Cloud Data Hub (CDH) manages company-wide data and data solutions on AWS. The CDH provides BMW Analysts and Data Scientists with access to data that helps drive business value through Data Analytics and Machine Learning (ML). The BMW Group’s MLOps solution includes (1) Reference architecture, (2) Reusable Infrastructure as Code (IaC) modules that use Amazon SageMaker and Analytics services, (3) ML workflows using AWS Step Functions, and (4) Deployable MLOps template that covers the ML lifecycle from data ingestion to inference.
Seeq and AspenTech accelerate self-service industrial analytics on AWS
With Seeq powered by the wealth of data stored in IP.21 running on AWS, you can clean, perform calculations on, and analyze IP.21 data—including context from relational data sources such as MES, batch, and other applications—to diagnose and predict issues and share findings across the organization. With near-real-time expert collaboration and deeper insights, Seeq helps organizations advance toward their sustainability and operational excellence goals. By tapping into rich data from IP.21, Seeq helps substantially reduce maintenance costs and minimize downtime. You can set up advanced workflows like ML with data-driven, state-of-the-art methods already proven in critical industries using the Seeq SaaS platform in conjunction with the AWS Cloud. The Seeq SaaS solution is listed on AWS Marketplace, making it easier to procure, deploy, and manage your workload.
Accelerate Semiconductor machine learning initiatives with Amazon Bedrock
Manufacturing processes generate large amounts of sensor data that can be used for analytics and machine learning models. However, this data may contain sensitive or proprietary information that cannot be shared openly. Synthetic data allows the distribution of realistic example datasets that preserve the statistical properties and relationships in the real data, without exposing confidential information. This enables more open research and benchmarking on representative data. Additionally, synthetic data can augment real datasets to provide more training examples for machine learning algorithms to generalize better. Data augmentation with synthetic manufacturing data can help improve model accuracy and robustness. Overall, synthetic data enables sharing, enhanced research abilities, and expanded applications of AI in manufacturing while protecting data privacy and security.
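The core idea of preserving statistical properties without exposing real values can be shown in a few lines. This is an illustrative sketch only, with invented sensor values: it fits a normal distribution to one variable's mean and standard deviation, whereas production synthetic-data tools model joint distributions and relationships across many variables.

```python
# Sketch: generate synthetic sensor readings that preserve simple
# statistics (mean, standard deviation) of a proprietary dataset,
# so no real reading ever leaves the plant. Values are invented.
import random
import statistics

random.seed(42)  # reproducible example

# "Real" confidential temperature readings (hypothetical).
real = [71.2, 70.8, 72.5, 69.9, 71.7, 70.4, 72.1, 71.0]

mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Draw synthetic samples from a normal distribution fitted to the real data.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic mean tracks the real mean without exposing any real value,
# and the larger synthetic set can also augment training data.
syn_mu = statistics.mean(synthetic)
```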
Verusen Joins AWS ISV Accelerate Program
Verusen, the industry leader driving MRO (Maintenance, Repair, and Operations) optimization and collaboration, today announced that it has joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners that provides software solutions that run on or integrate with AWS.
AWS customers can access Verusen’s game-changing AI solution for optimizing MRO within AWS Marketplace. Verusen’s listing in AWS Marketplace allows customers to streamline purchasing and managing Verusen’s MRO optimization platform offerings within their respective AWS Marketplace accounts.
How Audi improved their chat experience with Generative AI on Amazon SageMaker
Audi and Reply worked with Amazon Web Services (AWS) on a project to help improve their enterprise search experience through a Generative AI chatbot. The solution is based on a technique named Retrieval Augmented Generation (RAG), which uses AWS services such as Amazon SageMaker and Amazon OpenSearch Service. Ancillary capabilities are offered by other AWS services, such as Amazon Simple Storage Service (Amazon S3), AWS Lambda, Amazon CloudFront, Amazon API Gateway, and Amazon Cognito.
In this post, we discuss how Audi improved their chat experience by using a Generative AI solution on Amazon SageMaker, and dive deeper into the essential components of their chatbot by showcasing how to deploy and consume two state-of-the-art Large Language Models (LLMs): Falcon 7B-Instruct, designed for Natural Language Processing (NLP) tasks in specific domains where the model follows user instructions and produces the desired output, and Llama-2 13B-Chat, designed for conversational contexts where the model responds to the user's messages in a natural and engaging way.
Customize large language models with oil and gas terminology using Amazon Bedrock
The Norwegian multinational energy company Equinor has made the Volve dataset, a set of drilling reports, available for research, study, and development purposes. (When using external data, be sure to abide by the license the data is offered under.) The dataset contains 1,759 daily drilling reports—each containing both hourly comments and a daily summary—from the Volve field in the North Sea. Drilling rig supervisors tend to use domain-specific terminology and grammar when describing operations in both the hourly comments and the daily summary. This terminology is standard in the industry, which is why fine-tuning a foundation model using these reports is likely to improve summarization accuracy by enhancing the LLM’s ability to understand jargon and speak like a drilling engineer.
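The report structure (hourly comments paired with a daily summary) maps naturally onto fine-tuning examples. A sketch of preparing such records as JSON Lines follows; the report text is invented, and the "prompt"/"completion" field names follow a common JSONL convention for model customization, so check the Amazon Bedrock documentation for the exact schema required by your chosen base model.

```python
# Sketch: turn daily drilling reports into JSONL fine-tuning records,
# pairing hourly comments (input) with the daily summary (target).
# Report contents and field names are illustrative assumptions.
import json

reports = [
    {
        "hourly_comments": "06:00 RIH w/ 12-1/4 bit. 09:00 drlg ahead 2340-2410 m.",
        "daily_summary": "Ran in hole and drilled 12-1/4 in. section from 2340 m to 2410 m.",
    },
]

lines = []
for r in reports:
    record = {
        "prompt": f"Summarize the following drilling comments:\n{r['hourly_comments']}",
        "completion": r["daily_summary"],
    }
    lines.append(json.dumps(record))

# One JSON object per line; this text would be written to the
# training file uploaded to Amazon S3 for the customization job.
jsonl = "\n".join(lines)
```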
Generative AI has the potential to improve efficiency by automating time-consuming tasks even in domains that require deep knowledge of industry-specific nomenclature and acronyms. Having a custom model that provides drilling engineers with a draft of daily activities has the potential to save hours of work every week. Model customization can also help energy and utilities customers in other applications that involve the generation of highly technical content, as is the case of geological analyses, maintenance reports, and shift handover reports.
Ingest and analyze equipment data in the cloud
A sugar manufacturer with multiple plants across India uses molasses as the key raw material to produce Extra Neutral Alcohol (ENA) through a four-step process: 1/ Fermentation, 2/ Distillation, 3/ Evaporation, and 4/ Purification. The company needed better visibility into its production data to make better decisions and ultimately improve overall equipment effectiveness (OEE).
AWS worked closely with the customer to build a solution that supported their Smart Manufacturing vision by providing: 1/ a mechanism to ingest data from PLC and DCS systems, 2/ support to securely ingest the data into the AWS Cloud, 3/ ability to analyze the OT data, and 4/ a dashboard for centralized real-time visibility into their production operations to aid in decision making.
Improve tire manufacturing effectiveness with a process digital twin
In the rubber-mixing stage, the recipe of raw material constituents like rubber, chemicals, carbon, oil, and other additives plays a vital role in controlling process standards and final product quality. Today, parameters like Mooney viscosity, specific gravity, and Rheo (the level of curing that can be achieved over the compound) are measured largely manually and offline. In addition, the correlation of these parameters is conducted either in a standard spreadsheet solver or a statistical package. Because of the delay in such correlation and interdependency analysis, the extent of control a process engineer has over deviations (such as drop temperature, mixing time, ram pressure, injection time, and so on) is limited.
There are four steps to operationalize the digital twin. The first is data acquisition and noise removal, a process of 3–6 weeks with the built-in and external connectors. Next is model tuning and ascertaining what is fit for purpose. Since we are considering a list of defect types, training, validating, creating test sets, and delivering a simulation environment with minimum error takes another four weeks. The third step is delivering the set points and boundary conditions for each grade of compound.
For example, the process digital twin cockpit has three desirable sub-environments:
- Carcass level—machine ID, drum width, drum diameter, module number, average weight, actual weight, and deviation results
- Tread roll level—machine number, average weight, actual weight, deviation, and SKU number
- Curing level—curing ID, handling time, estimated curing time, curing schedule, and associated deviations in curing time
The final step is ascertaining the model outcome and comparing the simulation results (bias, sum of squared errors (SSE), deviation, and so on) against business outcomes such as defect percentage, speed of work, and overall accuracy.
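The error metrics named above are standard and easy to state precisely. A short sketch, using invented Mooney-viscosity predictions and lab measurements for illustration:

```python
# Bias and sum of squared errors (SSE) for simulated vs. measured values.

def bias(predicted, actual):
    """Mean signed error: positive means the model over-predicts on average."""
    return sum(p - a for p, a in zip(predicted, actual)) / len(actual)

def sse(predicted, actual):
    """Sum of squared errors across all samples."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))

# Hypothetical Mooney viscosity values: model output vs. lab measurement.
predicted = [52.1, 49.8, 51.0, 50.5]
actual    = [51.5, 50.0, 50.8, 50.2]

b = bias(predicted, actual)   # 0.225: the model slightly over-predicts
e = sse(predicted, actual)    # 0.53
```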
Siemens and AWS join forces to democratize generative AI in software development
Siemens and Amazon Web Services (AWS) are strengthening their partnership and making it easier for businesses of all sizes and industries to build and scale generative artificial intelligence (AI) applications. Domain experts in fields such as engineering and manufacturing, as well as logistics, insurance, or banking, will be able to create new applications and upgrade existing ones with the most advanced generative AI technology. To make this possible, Siemens is integrating Amazon Bedrock, a service that offers a choice of high-performing foundation models from leading AI companies via a single API along with security, privacy, and responsible AI capabilities, with Mendix, the leading low-code platform that is part of the Siemens Xcelerator portfolio.
Siemens delivers innovations in immersive engineering and artificial intelligence to enable the industrial metaverse
Siemens and Sony Corporation (Sony) are partnering to introduce a new solution that combines the Siemens Xcelerator portfolio of industry software with Sony’s new spatial content creation system, featuring the XR head-mounted display with high-quality 4K OLED Microdisplays and controllers for intuitive interaction with 3D objects.
In addition, Siemens and Amazon Web Services (AWS) are strengthening their partnership and making it easier for businesses of all sizes and industries to build and scale generative artificial intelligence (AI) applications. Siemens is integrating Amazon Bedrock, a service that offers a choice of high-performing foundation models from leading AI companies via a single API along with security, privacy, and responsible AI capabilities, with Mendix, the leading low-code platform that is part of the Siemens Xcelerator portfolio.
Unlocking the Full Potential of Manufacturing Capabilities Through Digital Twins on AWS
In this post, we will explore the collaboration between Amazon Web Services (AWS) and Matterport to create a digital twin proof of concept (POC) for Belden Inc. at one of its major manufacturing facilities in Richmond, Indiana. The purpose of this digital twin POC was to gain insights and optimize operations in employee training, asset performance monitoring, and remote asset inspection at one of its assembly lines.
The onsite capture process required no more than an hour to capture a significant portion of the plant operation. Using the industry-leading Matterport 3D Pro3 capture camera system, we captured high-resolution imagery with high-fidelity measurement information to digitally recreate the entire plant environment.
The use of MQTT protocol to natively connect and send equipment data to AWS IoT Core further streamlined the process. MQTT, an efficient and lightweight messaging protocol designed for Internet of Things (IoT) applications, ensured seamless communication with minimal latency. This integration allowed for quick access to critical equipment data, facilitating informed decision making and enabling proactive maintenance measures.
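The shape of such a message is simple to sketch. The topic layout and field names below are hypothetical conventions rather than a required schema, and the TLS connection details (handled by a client such as the AWS IoT Device SDK or paho-mqtt) are omitted:

```python
# Sketch: build the MQTT topic and JSON payload an equipment gateway
# might publish to AWS IoT Core. Naming conventions are illustrative.
import json
import time

def build_message(site, line, machine, readings, ts=None):
    """Return a hierarchical topic string and a compact JSON payload."""
    topic = f"factory/{site}/{line}/{machine}/telemetry"
    payload = json.dumps({
        "timestamp": ts if ts is not None else int(time.time()),
        "machine": machine,
        "readings": readings,
    }, separators=(",", ":"))  # compact encoding keeps messages lightweight
    return topic, payload

topic, payload = build_message(
    "richmond", "assembly-1", "extruder-07",
    {"vibration_mm_s": 3.2, "temp_c": 64.5}, ts=1700000000)
# topic == "factory/richmond/assembly-1/extruder-07/telemetry"
```

The hierarchical topic makes it easy to subscribe to one machine, one line, or a whole site with MQTT wildcard filters.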
Throughout the plant, sensors were strategically deployed to collect essential operational data that was previously missing. These sensors were responsible for monitoring various aspects of machine performance, availability, and health status, including indicators such as vibration, temperature, current, and power. Subsequently, the gathered operational data was transmitted through Belden’s zero-trust operational technology network to Belden Horizon Data Operations (BHDO).
Automating Quality Machine Inspection Infused with Edge AI and Digital Twins for Device Monitoring
In this post, we will discuss an AI-based solution Kyndryl has built on Amazon Web Services (AWS) to detect pores on the welding process using acoustic data and a custom-built algorithm leveraging voltage data. We’ll describe how Kyndryl collaborated with AWS to design an end-to-end solution for detecting welding pores in a manufacturing plant using AWS analytics services and by enabling digital twins to monitor welding machines effectively.
Kyndryl’s solution flow consists of collecting acoustic, voltage, and current data from welding machines, then processing and running inference on the data at the edge to detect welding pores while providing actionable insights to welding operators. Additionally, data is streamed to the cloud to perform historical analysis and improve operational efficiency and product quality over time. A digital twin is enabled to monitor the welding operation in real time, with warnings created to proactively manage the asset when predefined thresholds are met.
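The threshold-based warning logic can be sketched simply. This is an illustrative stand-in, not Kyndryl's custom algorithm: the limits, signal windows, and voltage band below are invented, and the real system infers pores from far richer acoustic features.

```python
# Sketch: flag welds for possible porosity when windowed acoustic energy
# or arc voltage crosses predefined thresholds. Values are invented.

def rms(window):
    """Root-mean-square energy of one acoustic signal window."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def detect_pores(acoustic_windows, voltages,
                 acoustic_limit=0.8, volt_band=(21.0, 25.0)):
    """Return indices of windows whose acoustic RMS exceeds the limit
    or whose voltage falls outside the expected band."""
    flagged = []
    for i, (win, v) in enumerate(zip(acoustic_windows, voltages)):
        if rms(win) > acoustic_limit or not (volt_band[0] <= v <= volt_band[1]):
            flagged.append(i)
    return flagged

windows = [[0.1, -0.2, 0.15], [0.9, -1.1, 0.95], [0.05, 0.1, -0.08]]
volts = [23.1, 23.4, 26.2]
alerts = detect_pores(windows, volts)
# alerts == [1, 2]: window 1 trips the acoustic limit, window 2 the voltage band
```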
Monitor factory operations using digital twins from AWS and Matterport
AWS IoT TwinMaker is a solution from Amazon Web Services (AWS) that makes it easy for industrial companies to create digital twins of real-world systems, such as buildings, factories, industrial equipment, and production lines. Matterport is the leading spatial data company focused on digitizing and indexing the built world. The Matterport 3D data platform enables anyone to turn a space into an accurate and immersive digital twin, which is used to design, build, operate, promote, and understand any space.
With AWS IoT TwinMaker and Matterport integration, developers leverage this technology to combine the data from the manufacturing floor with the 3D models of the factory. This helps to create a fully integrated digital twin of the factory or remote facility. All of this is done in a short period of time and at a low cost, giving the customers the spatial data insights they need to monitor and manage their operations more efficiently than ever before.
IDEMIA: How a global leader in identity leverages AWS to improve productivity in Manufacturing
At IDEMIA, the flywheel started by prioritizing and grouping high-value and low-hanging use cases that could be implemented quickly and easily. The Cobot use cases were selected because they provided a clear business impact and had low technical complexity. Deploying these use cases in production generated a positive ROI in a short period of time for IDEMIA. It not only increased the profitability and efficiency of the industrial sites but also created a positive feedback loop that fostered further adoptions and investments. With the benefits generated from this initial use case, IDEMIA had the opportunity to reinvest in the IoT platform, making it more robust and scalable. This mitigated risks, lowered costs for the next use cases, and improved the performance and reliability of the existing ones. Demonstrating tangible benefits of Industrial Internet of Things (IIoT) solutions expanded adoption and engagement across IDEMIA’s organization, fostering a culture of continuous improvement and learning.
Securely sending industrial data to AWS IoT services using unidirectional gateways
Unidirectional gateways are a combination of hardware and software. Unidirectional gateway hardware is physically able to send data in only one direction, while the gateway software replicates servers and emulates devices. Since the gateway is physically able to send data in only one direction, there is no possibility of IT-based or internet-based security events pivoting into the OT networks. The gateway’s replica servers and emulated devices simplify OT/IT integration.
A typical unidirectional gateway hardware implementation consists of a network appliance containing two separate circuit boards joined by a fiber-optic cable. The “TX,” or “transmit,” board contains a fiber-optic transmitter, and the “RX,” or “receive,” board contains a fiber-optic receiver. Unlike conventional fiber-optic communication components, which are transceivers, the TX appliance does not contain a receiver and the RX appliance does not contain a transmitter. Because there is no laser in the receiver, there is no physical way for the receiving circuit board to send any information back to the transmitting board. The appliance can be used to transmit information out of the control system network into an external network, or directly to the internet, without the risk of a cyber event or another signal returning into the control system.
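A software consequence of one-way transmission is that the receiver can never acknowledge or request a resend, so senders typically add sequence numbers and checksums that let the receiving side detect loss or corruption on its own. A minimal framing sketch (the wire format here is invented for illustration, not any vendor's protocol):

```python
# Sketch: frame records for a one-way link with a sequence number and
# CRC32 checksum, since the receiver cannot NACK or request resends.
import json
import zlib

def frame(seq: int, payload: dict) -> bytes:
    """Serialize one record: 4-byte CRC32 header followed by a JSON body."""
    body = json.dumps({"seq": seq, "data": payload}).encode()
    crc = zlib.crc32(body)
    return crc.to_bytes(4, "big") + body

def unframe(raw: bytes):
    """Verify the checksum and return (seq, payload), or None if corrupted."""
    crc, body = int.from_bytes(raw[:4], "big"), raw[4:]
    if zlib.crc32(body) != crc:
        return None  # corrupted in transit; a resend cannot be requested
    msg = json.loads(body)
    return msg["seq"], msg["data"]

sent = frame(7, {"tag": "FT-101", "value": 42.5})
received = unframe(sent)               # (7, {"tag": "FT-101", "value": 42.5})
corrupted = unframe(sent[:-1] + b"X")  # None: checksum mismatch detected
```

Gaps in the received sequence numbers tell the replica server that frames were lost, which is the only feedback available on a unidirectional link.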
OT-IT Integration: AWS and Siemens break down data silos by closing the machine-to-cloud gap
AWS announced that AWS IoT SiteWise Edge, on-premises software that makes it easy to collect, organize, process, and monitor equipment data, can now be deployed directly from the Siemens Industrial Edge Marketplace to help simplify, accelerate, and reduce the cost of sending industrial equipment data to the AWS cloud. This new offering aims to help bridge the chasm between OT and IT by allowing customers to start ingesting OT data from a variety of industrial protocols into the cloud faster using Siemens Industrial Edge Devices already connected to machines, removing layers of configuration and accelerating time to value.
Customers can now jumpstart industrial data ingestion from machine to edge (Level 1 and Level 2 OT networks) by deploying AWS IoT SiteWise Edge using existing Siemens Industrial Edge infrastructure and connectivity applications such as SIMATIC S7+ Connector, Modbus TCP Connector, and more. You can then securely aggregate and process data from a large number of machines and production lines (Level 3), as well as send it to the AWS cloud for use across a wide range of use cases. This empowers process engineers, maintenance technicians, and efficiency champions to derive business value from operational data that is organized and contextualized for use in local and cloud applications, unlocking use cases such as asset monitoring, predictive maintenance, quality inspection, and energy management.
The Blueprint for Industrial Transformation: Building a Strong Data Foundation with AWS IoT SiteWise
AWS IoT SiteWise is a managed service that makes it easy to collect, organize, and analyze data from industrial equipment at scale, helping customers make better, data-driven decisions. Our customers such as Volkswagen Group, Coca-Cola İçecek, and Yara International have used AWS IoT SiteWise to build industrial data platforms that allow them to contextualize and analyze Operational Technology (OT) data generated across their plants, creating a global view of their operations and businesses. In addition, our AWS Partners such as Embassy of Things (EOT), Tata Consultancy Services (TCS), Edge2Web, TensorIoT, and Radix Engineering have made AWS IoT SiteWise the foundation for purpose-built applications that enable use cases such as predictive maintenance and asset performance monitoring. Through these engagements with customers and partners, we have learned that the main obstacles in scaling digital transformation initiatives include project complexity, infrastructure costs, and time to value.
With newly added APIs, AWS IoT SiteWise now allows you to bulk import, export, and update industrial asset model metadata at scale from diverse systems such as data historians, other AWS accounts, or – in the case of AWS Independent Software Vendors (ISV) Partners – their own industrial data modeling tools.
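As an illustration of the metadata these APIs operate on, the sketch below assembles a minimal asset model definition of the sort accepted by the SiteWise CreateAssetModel API; the machine and property names are hypothetical:

```python
# Minimal AWS IoT SiteWise asset model definition (hypothetical names).
# With credentials, this dict could be passed to
# boto3.client("iotsitewise").create_asset_model(**asset_model).

def build_asset_model(name: str) -> dict:
    return {
        "assetModelName": name,
        "assetModelDescription": "CNC milling machine",
        "assetModelProperties": [
            {
                "name": "Spindle Temperature",
                "dataType": "DOUBLE",
                "unit": "Celsius",
                "type": {"measurement": {}},  # raw sensor stream
            },
            {
                "name": "Serial Number",
                "dataType": "STRING",
                "type": {"attribute": {"defaultValue": "unknown"}},  # static metadata
            },
        ],
    }

model = build_asset_model("CNC-Mill")
print(model["assetModelName"])  # CNC-Mill
```

Bulk import and export then move many such definitions between accounts and tools at once, rather than one API call at a time.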
To collect real-time data from equipment, AWS IoT SiteWise provides AWS IoT SiteWise Edge, software created by AWS and deployed on premises to make it easy to collect, organize, process, and monitor equipment at the edge. With SiteWise Edge, customers can securely connect to and read data from equipment using industrial protocols and standards such as OPC-UA. In collaboration with AWS Partner Domatica, we recently added support for an additional 10 industrial protocols including MQTT, Modbus, and SIMATIC S7, diversifying the type of data that can be ingested into AWS IoT SiteWise from equipment, machines, and legacy systems for processing at the edge or enriching your industrial data lake. By ingesting data to the cloud with sub-second latency, customers can use AWS IoT SiteWise to monitor hundreds of thousands of high-value assets across their industrial operations in near real time.
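To make the ingestion path concrete, here is a minimal sketch of shaping equipment readings into a BatchPutAssetPropertyValue request; the property alias and readings are hypothetical, and with AWS credentials the entries would be sent via boto3's iotsitewise client:

```python
# Shape sensor readings into an AWS IoT SiteWise BatchPutAssetPropertyValue
# request (the property alias and values are hypothetical). With credentials,
# this would be sent via
# boto3.client("iotsitewise").batch_put_asset_property_value(entries=entries).

def to_entries(alias: str, readings: list[tuple[int, float]]) -> list[dict]:
    return [
        {
            "entryId": str(i),
            "propertyAlias": alias,
            "propertyValues": [
                {
                    "value": {"doubleValue": value},
                    "timestamp": {"timeInSeconds": ts, "offsetInNanos": 0},
                    "quality": "GOOD",
                }
            ],
        }
        for i, (ts, value) in enumerate(readings)
    ]

entries = to_entries("/line1/press/temperature", [(1700000000, 72.4), (1700000060, 73.1)])
print(len(entries))  # 2
```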
Industrial Automation Software Management on AWS—Best Practices for Operational Excellence
Operational and maintenance tasks can become complex, and change control becomes challenging, as the number of PLCs, robots, and other automation systems increases. Problems arise when the right version and configuration of the code cannot be found. While code and configuration management is a standard DevOps practice in software development, it is far less common in the world of industrial automation, primarily due to a lack of good tooling. These challenges can now be solved through systematic, secure, and easily accessible solutions in the AWS cloud.
One such solution is Copia Automation’s Git-based source control (Git is an open-source DevOps tool for source code management). Copia Automation brings the power of a modern source control system, specifically Git, to industrial automation. The Copia solution is deployed in the customer’s own AWS account. In this type of deployment model, the customer is responsible for managing and configuring the infrastructure needed to run Copia’s software.
ANYbotics uses AWS to deploy a global robot workforce for industrial inspections
ANYbotics, a pioneering company at the forefront of autonomous mobile robots, is using AWS to deploy its global robot workforce. The company revolutionizes the operation of large industrial facilities by providing intelligent inspection solutions that improve safety, efficiency, and sustainability. By connecting physical and digital assets, ANYbotics provides companies with cutting-edge robotics technology that creates an environment where robots and humans work seamlessly together to achieve better results.
Robot-as-a-Service (RaaS) is a business model that offers robots and robotic services to customers on a subscription or pay-as-you-go basis, rather than selling robots as a one-time product. RaaS is ANYbotics’ preferred model to scale the ANYmal fleet with a fully serviced offering for hardware and software. It’s a flexible business model without the need for upfront investments for their customers.
By using AWS services, ANYbotics can scale their applications up and down depending on the current workload. They can add compute resources on demand within minutes and use the pay-as-you-go pricing model to operate cost-efficiently. This is crucial for ANYbotics, as they can easily adapt to fluctuations in the number of robots or the complexity of tasks without investing in on-premises hardware that might sit underutilized during periods of lower demand. Scaling up is essential to ensure the future readiness of operating a growing fleet of ANYmal robots and to meet the demand for more complex task-solving applications.
☁️🧠 Automated Cloud-to-Edge Deployment of Industrial AI Models with Siemens Industrial Edge
Due to the sensitive nature of OT systems, a cloud-to-edge deployment can become a challenge. Specialized hardware devices are required, strict network protection is applied, and security policies are in place. Data can only be pulled by an intermediate factory IT system from where it can be deployed to the OT systems through highly controlled processes.
The following solution describes the “pull” deployment mechanism using AWS services and the Siemens Industrial AI software portfolio. The deployment process is enabled by three main components, the first of which is the Siemens AI Software Development Kit (AI SDK). After a model is created by a data scientist on Amazon SageMaker and stored in the SageMaker model registry, this SDK allows users to package the model in a format suitable for edge deployment with Siemens Industrial Edge. The second component, and the central connection between cloud and edge, is the Siemens AI Model Manager (AI MM). The third component is the Siemens AI Inference Server (AIIS), a specialized and hardened AI runtime environment running as a container on Siemens IEDs deployed on the shop floor. The AIIS receives the packaged model from the AI MM and is responsible for loading, executing, and monitoring ML models close to the production lines.
Transforming Semiconductor Yield Management with AWS and Deloitte
Together, AWS and Deloitte have developed a reference architecture to enable the aforementioned yield management capabilities. The architecture, shown in Figure 1, depicts how to collect, store, analyze, and act on yield-related data throughout the supply chain. The following describes how the modernized yield management architecture enables the six capabilities discussed earlier.
IBM and AWS partnering to transform industrial welding with AI and machine learning
IBM Smart Edge for Welding on AWS utilizes audio and visual capturing technology developed in collaboration with IBM Research. Using visual and audio recordings taken at the time of the weld, state-of-the-art artificial intelligence and machine learning models analyze the quality of the weld. If the quality does not meet standards, alerts are sent, and remediation action can take place without delay.
The solution substantially reduces the time between detection and remediation of defects, as well as the number of defects on the manufacturing line. By leveraging a combination of optical, thermal, and acoustic insights during the weld inspection process, two key manufacturing personas, the weld technician and the process engineer, can better determine whether a welding discontinuity may result in a defect that will cost time and money.
Predictive Maintenance for Semiconductor Manufacturers with SEEQ powered by AWS
There are challenges in creating predictive maintenance models, such as siloed data, the offline nature of data processing and analytics, and having the necessary domain knowledge to build, implement, and scale models. In this blog, we will explore how using Seeq software on Amazon Web Services can help overcome these challenges.
The combination of AWS and Seeq pairs a secure cloud services platform with advanced analytics innovation. Seeq on AWS can access time series and relational data stored in AWS data services including Amazon Redshift, Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), and Amazon Athena. Once connected, engineers and other technical staff have direct access to all the data in those databases in a live streaming environment, enabling exploration and data analytics without needing to extract data and align timestamps whenever more data is required. As a result, monitoring dashboards and recurring reports can be set to auto-generate and are easily shared among groups or sites. This enables balancing machine downtimes and planning maintenance ahead of time without disrupting schedules or compromising yields.
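As a sketch of this query-in-place pattern, the snippet below builds the parameters for an Amazon Athena StartQueryExecution call; the database, table, and results bucket are hypothetical:

```python
# Build parameters for querying time-series data in place with Amazon Athena
# instead of extracting it first. Database, table, and bucket names are
# hypothetical; with credentials the dict would be passed to
# boto3.client("athena").start_query_execution(**params).

def athena_query(sql: str, database: str, output_s3: str) -> dict:
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

params = athena_query(
    "SELECT sensor_id, avg(value) AS avg_value "
    "FROM readings WHERE day = date '2024-01-15' GROUP BY sensor_id",
    database="fab_telemetry",
    output_s3="s3://example-athena-results/",
)
print(params["QueryExecutionContext"]["Database"])  # fab_telemetry
```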
Kibsi Launches AI Platform to Help Customers Gain Business Insights from Cameras After Securing $9.3 Million in Funding
The world of computer vision is transformed with the launch of Kibsi, a platform designed to redefine the way businesses build and deploy computer vision applications. Kibsi offers an intuitive, low-code, drag-and-drop environment that makes it incredibly easy for anyone to leverage the power of AI to breathe new life into their existing cameras.
Kibsi has already attracted some of the world’s most exciting companies as early customers, including Owens Corning, Whisker, and Woodland Park Zoo. These pioneers recognize the transformative impact AI brings to their businesses and the future possibilities that computer vision creates.
The company is also excited to announce its partnership with Amazon Web Services (AWS), recently joining the prestigious ISV Accelerate Program, demonstrating a mutual commitment to provide exceptional outcomes for joint customers. Further, to ensure a smooth purchasing and integration process, Kibsi is now available in the AWS Marketplace.
GE Aerospace's cloud journey with AWS
🔏🚗 In-Depth Analysis of Cyber Threats to Automotive Factories
We found that Ransomware-as-a-Service (RaaS) operations, such as Conti and LockBit, are active in the automotive industry. These are characterized by stealing confidential data from within the target organization before encrypting their systems, forcing automakers to face threats of halted factory operations and public exposure of intellectual property (IP). For example, Continental (a major automotive parts manufacturer) was attacked in August, with some IT systems accessed. They immediately took response measures, restoring normal operations and cooperating with external cybersecurity experts to investigate the incident. However, in November, LockBit took to its data leak website and claimed to have 40TB of Continental’s data, offering to return the data for a ransom of $40 million.
Previous studies on automotive factories mainly focus on the general issues in the OT/ICS environment, such as difficulty in executing security updates, knowledge gaps among OT personnel regarding security, and weak vulnerability management. In light of this, TXOne Networks has conducted a detailed analysis of common automotive factory digital transformation applications to explain how attackers can gain initial access and link different threats together into a multi-pronged attack to cause significant damage to automotive factories.
In the study of industrial robots, controllers sometimes enable universal remote connection services (such as FTP or the Web) or manufacturer-defined APIs to give operators convenient control of the robot through the Control Station. However, we found that most robot controllers do not enable any authentication mechanism by default, and some cannot enable one at all. This allows attackers lurking in the factory to execute arbitrary operations on robots directly, using tools released by the robot manufacturers. In the case of Digital Twin applications, attackers lurking in the factory can also exploit vulnerabilities in simulation devices to execute malicious code against their models. When a Digital Twin’s model is attacked, the generated simulation environment can no longer maintain congruency with the physical environment. After the model is tampered with, there may be no obvious malicious behavior, which is a serious problem because the tampering can go unchecked and unfixed for a long time. Engineers may therefore continue using the damaged Digital Twin without knowing it, leading to inaccurate research and development or incorrect decisions made by the factory based on false information, which can result in greater financial losses than ransomware attacks.
Element and HighByte Announce Partnership, Launch Solution Based on AWS’s Industrial Data Fabric Architecture
Element and HighByte, leading data management providers to global industrial companies, announced the launch of an integrated solution based on AWS’s Industrial Data Fabric offerings. The solution, powered by Amazon Web Services (AWS), allows information technology (IT) and operational technology (OT) users to contextualize and normalize data into rich information for analytics and other business systems. The solution is designed to be maintained and scaled across the enterprise as the number of use cases that rely on industrial data grows exponentially.
HAYAT HOLDING uses Amazon SageMaker to increase product quality and optimize manufacturing output, saving $300,000 annually
In this post, we share how HAYAT HOLDING—a global player with 41 companies operating in different industries, including HAYAT, the world’s fourth-largest branded diaper manufacturer, and KEAS, the world’s fifth-largest wood-based panel manufacturer—collaborated with AWS to build a solution that uses Amazon SageMaker Model Training, Amazon SageMaker Automatic Model Tuning, and Amazon SageMaker Model Deployment to continuously improve operational performance, increase product quality, and optimize manufacturing output of medium-density fiberboard (MDF) wood panels.
Quality prediction using ML is powerful but requires effort and skill to design, integrate with the manufacturing process, and maintain. With the support of AWS Prototyping specialists, and AWS Partner Deloitte, HAYAT HOLDING built an end-to-end pipeline. Product quality prediction and adhesive consumption recommendation results can be observed by field experts through dashboards in near-real time, resulting in a faster feedback loop. Laboratory results indicate a significant impact equating to savings of $300,000 annually, reducing their carbon footprint in production by preventing unnecessary chemical waste.
📦 How AWS used ML to help Amazon fulfillment centers reduce downtime by 70%
The retail leader has announced that it uses Amazon Monitron, an end-to-end machine learning (ML) system for detecting abnormal behavior in industrial machinery that launched in December 2020, to provide predictive maintenance. As a result, Amazon has reduced unplanned downtime at its fulfillment centers by nearly 70%, which helps deliver more customer orders on time.
Monitron receives automatic temperature and vibration measurements every hour and can detect potential failures within hours, compared with four weeks for the previous manual techniques. In the year and a half since the fulfillment centers began using it, Monitron has helped avoid about 7,300 confirmed issues across 88 fulfillment center sites around the world.
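Monitron’s models are proprietary, but the underlying idea of flagging abnormal hourly readings can be illustrated with a simple z-score check against a healthy baseline (all readings below are invented):

```python
import statistics

# Minimal illustration of anomaly flagging on hourly vibration readings:
# a reading is flagged when it sits more than z standard deviations from
# the mean of a healthy baseline. Monitron's actual models are far richer.

def flag_anomalies(baseline: list[float], readings: list[float], z: float = 3.0) -> list[bool]:
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [abs(r - mean) / stdev > z for r in readings]

baseline = [2.0, 2.1, 1.9, 2.05, 1.95, 2.02, 2.08, 1.97]  # healthy vibration (mm/s)
flags = flag_anomalies(baseline, [2.03, 2.6, 4.8])
print(flags)  # [False, True, True]
```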
Boehringer Ingelheim: Healthy data creates a better world
How Corning Built End-to-end ML on Databricks Lakehouse Platform
Specifically for quality inspection, we take high-resolution images to look for irregularities in the cells, which can be predictive of leaks and defective parts. The challenge, however, is the prevalence of false positives due to the debris in the manufacturing environment showing up in pictures.
To address this, we manually brush and blow the filters before imaging. We discovered that by notifying operators of which specific parts to clean, we could significantly reduce the total time required for the process, and machine learning came in handy. We used ML to predict whether a filter is clean or dirty based on low-resolution images taken while the operator is setting up the filter inside the imaging device. Based on the prediction, the operator would get the signal to clean the part or not, thus reducing false positives on the final high-res images, helping us move faster through the production process and providing high-quality filters.
Building a Predictive Maintenance Solution Using AWS AutoML and No-Code Tools
In this post, we describe how equipment operators can build a predictive maintenance solution using AutoML and no-code tools powered by Amazon Web Services (AWS). This type of solution delivers significant gains to large-scale industrial systems and mission-critical applications where costs associated with machine failure or unplanned downtime can be high.
To implement a prototype of the RUL model, we use a publicly available dataset known as NASA Turbofan Jet Engine Data Set. This dataset is often used for research and ML competitions. The dataset includes degradation trajectories of 100 turbofan engines obtained from a simulator. Here, we explore only one of the four sub-datasets included, namely the training part of the dataset: FD001.
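Since FD001 contains run-to-failure trajectories, a common first step is deriving the Remaining Useful Life (RUL) label for each engine: RUL at any cycle is that engine’s final cycle minus the current cycle, as in this small sketch:

```python
# Derive RUL labels from run-to-failure trajectories, as is commonly done
# with the NASA Turbofan (C-MAPSS) FD001 training split: for each unit,
# RUL at a cycle = that unit's last observed cycle - current cycle.

def add_rul(rows: list[tuple[int, int]]) -> list[tuple[int, int, int]]:
    """rows: (unit_id, cycle) pairs -> (unit_id, cycle, rul) triples."""
    max_cycle: dict[int, int] = {}
    for unit, cycle in rows:
        max_cycle[unit] = max(max_cycle.get(unit, 0), cycle)
    return [(unit, cycle, max_cycle[unit] - cycle) for unit, cycle in rows]

rows = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]  # two toy engines
print(add_rul(rows))  # [(1, 1, 2), (1, 2, 1), (1, 3, 0), (2, 1, 1), (2, 2, 0)]
```

The resulting RUL column becomes the regression target that the AutoML tooling trains against.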
Building Industrial Digital Twins on AWS Using MQTT Sparkplug
Even better, a Sparkplug solution is built around an event-based, publish-subscribe architectural model that uses Report-By-Exception for communication, meaning that your Digital Twin instances are updated only when a change in a dynamic property is detected. Firstly, this saves computational and network resources such as CPU, memory, power, and bandwidth. Secondly, it results in a highly responsive system in which anomalies picked up by the analytics system can be addressed in real time.
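The Report-By-Exception idea can be sketched in a few lines: a publisher reports a metric only when it changes beyond a deadband. This is an illustration of the principle, not the Sparkplug B wire format:

```python
# Report-By-Exception sketch: emit a metric only when its value changes
# beyond a configurable deadband, so unchanged tags consume no bandwidth.

class RbePublisher:
    def __init__(self, deadband: float = 0.0):
        self.last: dict[str, float] = {}
        self.deadband = deadband

    def publish(self, metric: str, value: float) -> bool:
        """Return True if the change is worth reporting (and record it)."""
        prev = self.last.get(metric)
        if prev is not None and abs(value - prev) <= self.deadband:
            return False  # within deadband: suppress the report
        self.last[metric] = value
        return True

pub = RbePublisher(deadband=0.5)
sent = [pub.publish("motor/temp", v) for v in [70.0, 70.2, 71.0, 71.1]]
print(sent)  # [True, False, True, False]
```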
Further, due to the underlying MQTT infrastructure, a Sparkplug-based Digital Twin solution can scale to support millions of physical assets, which means that you can keep adding more assets with no disruptions. What’s more, MQTT Sparkplug’s definition of MQTT Session State Management ensures that your Digital Twin solution is always aware of the status of all your physical assets at any given time.
How KAMAX connected their industrial machines to AWS in hours instead of weeks
Every manufacturing customer these days is talking about Industry 4.0, digital transformation, or AI/ML, but these can be daunting topics for manufacturers. Historically, connecting industrial assets to the cloud has been a large and complicated undertaking. Older assets increase the complexity, leaving many manufacturers with legacy equipment stalled at the starting gate. KAMAX, a producer of cold-formed parts in the steel-processing sector, shows that transformation is not only possible but can be easy when working with the right partners. KAMAX wanted a fully managed shop floor solution to acquire data from industrial equipment, process the data, and make it available fast in order to improve operational efficiency. KAMAX engaged its subsidiary and digital incubator, nexineer digital, together with Amazon Web Services (AWS) and CloudRail. This Industrial IoT collaboration increased manufacturing efficiency and effectiveness within their plants by automating and optimizing traditionally manual tasks, increasing production capacity, and optimizing tool changeover times (planned downtimes) of machines. The solution helped KAMAX realize quantifiable time savings of 2.5% – 3.5%.
California’s AI-Powered Wildfire Prevention Efforts Contend With Data Challenge
Southern California Edison, San Diego Gas & Electric Co. and PG&E Corp. say they see promise in AI algorithms that use images captured by drones and other means to detect anomalies in infrastructure that could lead to wildfires. However, they say it will likely take years to gather enough data to deploy the algorithms at scale across their infrastructure, where they would augment ongoing manual inspections.
San Diego Gas & Electric said it has 75 working models designed to detect specific conditions or damages on company assets or third-party equipment. Gabe Mika, senior group product manager, said each is trained on anywhere from 100 to 5,000 images. SDG&E has leveraged several of Amazon Web Services’ machine-learning and computer vision tools to help build the models, the company said.
Visual search: how to find manufacturing parts in a cinch
In the modern world, advanced recognition technologies play an increasingly important role in various areas of human life. Recognizing the characteristics of vehicle tires is one such area where deep learning is making a valuable difference. Solving the problem of recognizing tire parameters can help to simplify the process of selecting tire replacements when you don’t know which tires will fit. This recognition can be useful both for customer-facing ecommerce and in-store apps used by associates to quickly read necessary tire specs.
During the research process, we decided that online stores and bulletin boards would be the main data sources, since there were thousands of images and, most importantly, almost all of them had structured descriptions. Images from search engines could only be used for training segmentation, because they did not contain the necessary structured features.
In this blog post we have described the complete process of creating a tire lettering recognition system from start to finish. Despite the large number of existing methods, approaches and functions in the field of image recognition and processing, there remains a huge gap in available research and implementation for very complex and accurate visual search systems.
Connecting an Industrial Universal Namespace to AWS IoT SiteWise using HighByte Intelligence Hub
Merging industrial and enterprise data across multiple on-premises deployments and industrial verticals can be challenging. This data comes from a complex ecosystem of industrial-focused products, hardware, and networks from various companies and service providers, which drives the creation of data silos and isolated systems that propagate a one-to-one integration strategy.
HighByte Intelligence Hub addresses this challenge. It is a middleware solution for a universal namespace that helps you build scalable, modern industrial data pipelines in AWS. It allows users to collect data from various sources, add context to the data being collected, and transform it to a format that other systems can understand.
Koch Ag & Energy High Value Digitalization Deployments Leverages AWS
This application uses existing plant sensors, Amazon Monitron sensors, Amazon Lookout, and Seeq software to implement predictive maintenance on more complex equipment. Successfully implementing predictive maintenance requires data from thousands of sensors to gain a clear understanding of unique operating conditions, along with machine learning models that achieve highly accurate predictions. In the past, modeling equipment behavior and diagnosing issues required significant investment in time and money, inhibiting scaling of this capability across all assets.
AWS Announces AWS IoT TwinMaker
Industrial companies collect and process vast troves of data about their equipment and facilities from sources like equipment sensors, video cameras, and business applications (e.g. enterprise resource planning systems or project management systems). Many customers want to combine these data sources to create a virtual representation of their physical systems (called a digital twin) to help them simulate and optimize operational performance. But building and managing digital twins is hard even for the most technically advanced organizations. To build digital twins, customers must manually connect different types of data from diverse sources (e.g. time-series sensor data from equipment, video feeds from cameras, maintenance records from business applications, etc.). Then customers have to create a knowledge graph that provides common access to all the connected data and maps the relationships between the data sources to the physical environment. To complete the digital twin, customers have to build a 3D virtual representation of their physical systems (e.g. buildings, factories, equipment, production lines, etc.) and overlay the real-world data on to the 3D visualization. Once they have a virtual representation of their real-world systems with real-time data, customers can build applications for plant operators and maintenance engineers that can leverage machine learning and analytics to extract business insights about the real-time operational performance of their physical systems. Because of the work required, the vast majority of organizations are unable to use digital twins to improve their operations.
Apollo Tyres Moves to AWS to Build Smart, Connected Factories
Apollo Tyres needed to upgrade its infrastructure to develop new ways of engaging with fleet operators, tyre dealers, and consumers, while delivering tyres and services efficiently at competitive prices. The company’s first step was to create a data lake on AWS, which centrally stores Apollo Tyres’ structured and unstructured data at scale. This data lake provides the foundation for an integrated data platform, which enables Apollo Tyres’ engineers around the world to collaborate in developing cloud-native applications and improve enterprise-wide decision making. The integrated data platform enables Apollo Tyres to innovate new products and services, including energy-efficient tyres and remote warranty fulfillment.
AWS, Google, Microsoft apply expertise in data, software to manufacturing
As manufacturing becomes digitized, Google’s methodologies that were developed for the consumer market are becoming relevant for industry, said Wee, who previously worked in the semiconductor industry as an industrial engineer. “We believe we’re at a point in time where these technologies—primarily the analytics and AI area—that have been very difficult to use for the typical industrial engineer are becoming so easy to use on the shop floor,” he said. “That’s where we believe our competitive differentiation lies.”
Meanwhile, Ford is also selectively favoring human brain power over software to analyze data and turning more and more to in-house coders than applications vendors. “The solution will be dependent upon the application,” Mikula said. “Sometimes it will be software, and sometimes it’ll be a data analyst who crunches the data sources. We would like to move to solutions that are more autonomous and driven by machine learning and artificial intelligence. The goal is to be less reliant on purchased SaaS.”
AWS IoT SiteWise Edge Is Now Generally Available for Processing Industrial Equipment Data on Premises
With AWS IoT SiteWise Edge, you can organize and process your equipment data in the on-premises SiteWise gateway using AWS IoT SiteWise asset models. You can then read the equipment data locally from the gateway using the same application programming interfaces (APIs) that you use with AWS IoT SiteWise in the cloud. For example, you can compute metrics such as Overall Equipment Effectiveness (OEE) locally for use in a production-line monitoring dashboard on the factory floor.
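As a concrete example of such a locally computed metric, OEE is conventionally the product of availability, performance, and quality; the shift figures below are illustrative:

```python
# OEE = Availability x Performance x Quality, as typically computed at the
# edge for a production-line dashboard. All figures are illustrative.

def oee(run_time_min: float, planned_time_min: float,
        ideal_cycle_s: float, total_count: int, good_count: int) -> float:
    availability = run_time_min / planned_time_min          # uptime vs plan
    performance = (ideal_cycle_s * total_count) / (run_time_min * 60)  # speed
    quality = good_count / total_count                      # first-pass yield
    return availability * performance * quality

score = oee(run_time_min=420, planned_time_min=480,
            ideal_cycle_s=30, total_count=800, good_count=784)
print(f"{score:.1%}")  # 81.7%
```

With 87.5% availability, 95.2% performance, and 98% quality, the line lands at roughly 82% OEE, a number a factory-floor dashboard can refresh continuously from the gateway’s local APIs.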
Seeq Accelerates Chemical Industry Success with AWS
Seeq Corporation, a leader in manufacturing and Industrial Internet of Things (IIoT) advanced analytics software, today announced agreements with two of the world’s premier chemical companies: Covestro and allnex. These companies have selected Seeq on Amazon Web Services (AWS) as their corporate solution, empowering their employees to improve production and business outcomes.
Amazon Lookout For Equipment – Predictive Maintenance Is Now Mature
Amazon Lookout for Equipment is designed for maintainers, not data scientists, and it comes from a place of knowledge. Incorporating expertise and insight gleaned from maintaining its own assets, Amazon aims to make it as easy as possible for users to get started and begin seeing value, addressing potential issues around usability and configurability.
In terms of technical abilities, it currently only covers simple assets like motors, conveyors, and servos – essentially, the kind of assets Amazon itself uses. It doesn’t yet monitor more sophisticated assets like robots or CNC machinery although, in time, I do not doubt that these, too, will be covered. As it stands, though, it will be competent for a lot of standard factory equipment.
How to build a predictive maintenance solution using Amazon SageMaker
Run Semiconductor Design Workflows on AWS
This implementation guide provides you with information and guidance to run production semiconductor workflows on AWS, from customer specification, to front-end design and verification, back-end fabrication, packaging, and assembly. Additionally, this guide shows you how to build secure chambers to quickly enable third-party collaboration, as well as leverage an analytics pipeline and artificial intelligence/machine learning (AI/ML) services to decrease time-to-market and increase return on investment (ROI). Customers that run semiconductor design workloads on AWS have designed everything from simple ASICs to large SOCs with tens of billions of transistors, at the most advanced process geometries. This guide describes the numerous AWS services involved with these workloads, including compute, storage, networking, and security. Finally, this paper provides guidance on hybrid flows and data transfer methods to enable a seamless hybrid environment between on-premises data centers and AWS.
Introducing Amazon SageMaker Reinforcement Learning Components for open-source Kubeflow pipelines
Woodside Energy uses AWS RoboMaker with Amazon SageMaker Kubeflow operators to train, tune, and deploy reinforcement learning agents to their robots to perform manipulation tasks that are repetitive or dangerous.
AWS Announces General Availability of Amazon Lookout for Vision
AWS announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called “few-shot learning,” Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers. Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service.
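As a sketch of scoring a production image once a model is trained, the snippet below builds the parameters for a Lookout for Vision DetectAnomalies call; the project name, model version, and image bytes are hypothetical:

```python
# Build a DetectAnomalies request for Amazon Lookout for Vision (hypothetical
# project/version/image). With credentials this dict would be passed to
# boto3.client("lookoutvision").detect_anomalies(**params); the response
# reports whether the image is anomalous and with what confidence.

def detection_request(project: str, version: str, image_bytes: bytes) -> dict:
    return {
        "ProjectName": project,
        "ModelVersion": version,
        "ContentType": "image/jpeg",
        "Body": image_bytes,
    }

params = detection_request("widget-inspection", "1", b"\xff\xd8\xff")  # JPEG magic bytes
print(params["ProjectName"])  # widget-inspection
```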
AWS Predictive Quality Industrial Demo
Facilitating IoT provisioning at scale
Whether you’re looking to design a new device or retrofit an existing one for the IoT, you will need to consider IoT provisioning, which brings IoT devices online to cloud services. Provisioning design requires decisions that impact user experience and security, both for network commissioning and for the credential provisioning mechanisms that configure digital identities, cloud endpoints, and network credentials so that devices can securely connect to the cloud.
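One common credential-provisioning pattern is AWS IoT fleet provisioning by claim, where a device uses a shared claim certificate to request a unique identity and then registers itself through a provisioning template over reserved MQTT topics. A sketch of those topics, with a hypothetical template name:

```python
# Reserved MQTT topics used in AWS IoT fleet provisioning by claim: the
# device first requests a unique certificate, then registers a Thing via a
# provisioning template. The template name below is hypothetical.

def provisioning_topics(template_name: str) -> dict[str, str]:
    return {
        "create_keys_and_certificate": "$aws/certificates/create/json",
        "register_thing": f"$aws/provisioning-templates/{template_name}/provision/json",
    }

topics = provisioning_topics("sensor-fleet-template")
print(topics["register_thing"])
```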
AI Solution for Operational Excellence
Falkonry Clue is a plug-and-play solution for predictive production operations that identifies and addresses operational inefficiencies from operational data. It is designed to be used directly by operational practitioners, such as production engineers, equipment engineers or manufacturing engineers, without requiring the assistance of data scientists or software engineers.
Unchain the ShopFloor through Software-Defined Automation
But what happens as soon as insight is generated and the status of the physical process needs to be changed to a better state? In manufacturing for discrete and process industries, the process is defined by fixed code routines and programmable parameters. It has its own world of control code languages and standards to define the behavior of controllers, robot arms, sensors, and actuators of all kinds. This world has remained remarkably stable over the past 40-plus years. Control code resides on a controller, and special tools, in the hands of highly skilled automation engineers, define the behavior of a specific production system. Changing the state of an existing, running production system means changing those programs and parameters, which requires physical access to the automation equipment: OT equipment needs to be re-programmed, often locally on every single component. To give a concrete example, let’s assume we can determine from field data, using applied machine learning (also referred to as Industrial IoT), that the behavior of a robotic handling process needs to be adapted. In the existing world, production needs to stop. A skilled engineer needs to physically re-teach or flash the robot controller. The new movement needs to be tested individually and in the context of the adjacent production components. Only then can production start again. This process can take minutes to hours, depending on the complexity of the production system.
Production systems will optimize themselves based on simulated and real experiments. Improvements will rapidly be propagated around the globe. Labor will optimize the learning, not the system. The optimization target could also differ over time or with external influences: in times when renewable energy was cheap, output could have been one of the core drivers for optimization, while the minimization of input factors could have been paramount in other circumstances.