LLM-based Control Code Generation using Image Recognition
LLM-based code generation could save significant manual effort in industrial automation, where control engineers manually produce control logic for sophisticated production processes. Previous attempts at control logic code generation lacked methods to interpret the schematic drawings produced by process engineers. Recent LLMs now combine image recognition, trained domain knowledge, and coding skills. We propose a novel LLM-based code generation method that generates IEC 61131-3 Structured Text control logic source code from Piping and Instrumentation Diagrams (P&IDs) using image recognition. We evaluated the method in three case studies with industrial P&IDs and provide first evidence of the feasibility of such code generation, along with experiences with image recognition glitches.
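As a minimal sketch of what such a pipeline might look like, the snippet below passes a P&ID image to a vision-capable chat model through the OpenAI Python SDK and asks for Structured Text; the model name, prompt, and file name are illustrative assumptions, not the paper's actual setup.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()

# Encode the P&ID drawing so it can be sent to a vision-capable model.
with open("pid_diagram.png", "rb") as f:  # placeholder file name
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Interpret this P&ID and generate IEC 61131-3 "
                     "Structured Text control logic for the depicted process. "
                     "Declare all I/O variables and comment each interlock."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # candidate Structured Text
```

As the authors note, the generated logic still has to be reviewed: image recognition glitches (misread tags, missed instruments) propagate directly into the code.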
AI for industry: Schaeffler and Siemens bring Industrial Copilot to shopfloor
To support engineers with various automation tasks, the AI-powered assistant is connected to Siemens’ engineering framework Totally Integrated Automation (TIA) Portal via the open API TIA Portal Openness. The Industrial Copilot helps Schaeffler’s automation engineers to generate code faster for programmable logic controllers (PLC), the devices that control most machines throughout the world’s factories. Engineering teams can significantly reduce time, effort, and the probability of errors by generating PLC code through natural language inputs.
Siemens Industrial Copilot has access to all relevant documentation, guidelines and manuals to assist shopfloor workers with identifying possible errors. These capabilities enable maintenance teams to identify errors and generate step-by-step solutions more quickly. This will help to significantly reduce machine downtime, make industrial companies more efficient and thus support sustainability efforts.
TwinCAT Chat integrates LLMs into the automation environment
Generative AI for Process Systems Engineering
Unleashing the Potential of Large Language Models in Robotics: RoboDK’s Virtual Assistant
The RoboDK Virtual Assistant is the first step towards a comprehensive generalized assistant for RoboDK. At its core is OpenAI's GPT-3.5-turbo-0613 model. The model is provided with additional context about RoboDK in the form of an indexed database containing the RoboDK website, documentation, forum threads, blog posts, and more. The indexing is done with LlamaIndex, a data framework specialized for this purpose. Thanks to this integration, the Virtual Assistant can swiftly provide valuable technical support for over 75% of user queries on the RoboDK forum, reducing the time spent manually searching through the website and documentation. Users can expect an answer to their question in 5 seconds or less.
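For readers who want to reproduce the pattern, a retrieval index of this kind takes only a few lines with LlamaIndex's high-level API; the directory path and question below are placeholders, and this is a sketch of the approach rather than RoboDK's actual pipeline.

```python
# pip install llama-index  (import layout of the v0.10+ llama_index.core API)
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Index local copies of the website, documentation, forum threads, blog posts...
documents = SimpleDirectoryReader("./robodk_docs").load_data()
index = VectorStoreIndex.from_documents(documents)

# Answer a support question against the indexed corpus.
query_engine = index.as_query_engine()
print(query_engine.query("How do I calibrate a robot in RoboDK?"))
```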
Fast and efficient PLC code generation and more with artificial intelligence
TwinCAT Chat was developed to offer users a clear advantage over the conventional use of, for example, ChatGPT in the web browser. The key added value lies in its deep integration, especially with regard to the specialized requirements of the automation industry. The core features include the direct integration of the chat function into the development environment (IDE). This greatly simplifies the development process, as communication and code exchange are seamlessly integrated. Furthermore, the basic initialization of our model has been tailored specifically to TwinCAT requests: you can ask your specific questions directly and don't have to tell the model that you are using TwinCAT and that you expect code examples in Structured Text. Another highlight is the ability to easily adopt generated code. This not only saves developers time, but also reduces the human errors that can occur during manual transfers. Interaction with TwinCAT Chat has been designed so that the need to type commands is reduced to a minimum. Instead, the user can simply click on pre-tested requests that are specifically designed to improve their workflow. These requests include actions such as:
- Optimize: The system can make suggestions to increase the performance or improve the efficiency of the code.
- Document: TwinCAT Chat helps to create comments and documentation so that the code is easier for other team members to understand.
- Complete: If code fragments are missing or incomplete, our system can generate suggestions to complete them to ensure functionality.
- Refactor: TwinCAT Chat can refactor code according to specified conventions so that it better aligns with company guidelines.
Overall, this system provides an efficient and intuitive user interface that greatly facilitates the development process.
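In outline, the combination of tailored initialization and pre-tested requests can be reproduced with a system prompt plus a small library of request templates. The sketch below assumes the OpenAI Python SDK and invented prompt wording; it illustrates the pattern, not Beckhoff's implementation.

```python
from openai import OpenAI

# Tailored initialization: every conversation starts from a TwinCAT context.
SYSTEM_PROMPT = (
    "You are an assistant inside the TwinCAT 3 IDE. Assume all questions "
    "concern TwinCAT and answer with IEC 61131-3 Structured Text examples."
)

# Pre-tested one-click requests, applied to the code currently in the editor.
REQUESTS = {
    "optimize": "Suggest performance and efficiency improvements:\n{code}",
    "document": "Add comments and documentation to this code:\n{code}",
    "complete": "Complete the missing parts of this code fragment:\n{code}",
    "refactor": "Refactor this code to follow our style guidelines:\n{code}",
}

def run_request(client: OpenAI, action: str, code: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": REQUESTS[action].format(code=code)},
        ],
    )
    return resp.choices[0].message.content
```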
Silicon Volley: Designers Tap Generative AI for a Chip Assist
The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.
The paper details how NVIDIA engineers created for their internal use a custom LLM, called ChipNeMo, trained on the company’s internal data to generate and optimize software and assist human designers. Long term, engineers hope to apply generative AI to each stage of chip design, potentially reaping significant gains in overall productivity, said Ren, whose career spans more than 20 years in EDA. After surveying NVIDIA engineers for possible use cases, the research team chose three to start: a chatbot, a code generator and an analysis tool.
On chip-design tasks, custom ChipNeMo models with as few as 13 billion parameters match or exceed performance of even much larger general-purpose LLMs like LLaMA2 with 70 billion parameters. In some use cases, ChipNeMo models were dramatically better.
Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning
A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can. The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
New Foundations: Controlling robots with natural language
The integration of Large Language Models (LLMs) in robotics is a rapidly evolving field, with numerous projects pushing the boundaries of what’s possible. These projects are not just isolated experiments, but pieces of a larger puzzle that collectively paint a picture of a future where robots are more intelligent, adaptable and interactive.
SayCan and Code as Policies are two early papers that indicate how an LLM can understand a task in natural language and create actions from it. “Code as Policies” leverages the ability of LLMs to output code and demonstrate how the language model can produce the actual code to perform a robotic action.
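The core pattern is compact enough to sketch: the prompt includes a description of the robot's primitive API, and the model writes the policy code that calls it. Everything below (the primitives, the prompt, the model choice) is a hypothetical illustration, not the paper's actual stack.

```python
from openai import OpenAI

# Hypothetical robot primitives that generated code is allowed to call.
API_DOC = """
detect(name: str) -> tuple[float, float]  # locate an object on the table
pick(x: float, y: float)                  # grasp at a position
place(x: float, y: float)                 # release at a position
"""

def generate_policy(client: OpenAI, task: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Using only this robot API:\n{API_DOC}\n"
                       f"Write Python code that performs this task: {task}",
        }],
    )
    # The returned code string would be reviewed and run in a sandboxed
    # interpreter that exposes only the primitives above.
    return resp.choices[0].message.content
```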
Instruct2Act connects this sense-making ability with vision capabilities. This way the robotic application (in this case a simulation) can identify, localize, and segment (define object outlines for the best grasping position) known or unknown objects according to the task. Similarly, NL-MAP connects the “SayCan” project with a mapping step, where the robot scans a room for objects before it can output tasks. The TidyBot research project focuses on a real-world application of LLMs and robotics. A team at Princeton University developed a robot that can tidy up a room. It adapts to personal preferences (“socks in 3rd drawer on the right”) and benefits from general language understanding. For example, it knows that trash should go into the trash bin because it was trained on internet-scale language data.
Interactive Language achieves robotic actions from spoken commands by training a neural network on demonstrated moves connected with language and vision data.
While much of the work related to this technology is still in its early stages and limited to lab research, some applications, such as PickGPT from logistics technology company Sereact, are starting to show the vast commercial potential.
Making Conversation: Using AI to Extract Intel from Industrial Machinery and Equipment
What if your machine could talk? This is the question Ron Di Carlantonio has grappled with since he founded iNAGO in 1998. iNAGO was onboard when the Government of Canada supported a lighthouse project led by the Automotive Parts Manufacturers’ Association (APMA) to design, engineer and build a connected and autonomous zero-emissions vehicle (ZEV) concept car and its digital twin that would validate and integrate autonomous technologies. The electric SUV is equipped with a dual-motor powertrain with total output of 550 hp and 472 lb-ft of torque.
The general use of AI-based solutions in the automotive industry stretches across the lifecycle of a vehicle, from design and manufacturing to sales and aftermarket care. AI-powered chatbots, in particular, deliver instant, personalized virtual driver assistance, are on call 24/7 and can evolve with the preferences of tech-savvy drivers. Di Carlantonio now sees an opportunity to extend the use of the intelligent assistant platform to the smart factory by making industrial equipment—CNC machines, presses, conveyors, industrial robots—talk.
Solution Accelerator: LLMs for Manufacturing
In this solution accelerator, we focus on item (3) above, which is the use case on augmenting field service engineers with a knowledge base in the form of an interactive context-aware Q/A session. The challenge that manufacturers face is how to build and incorporate data from proprietary documents into LLMs. Training LLMs from scratch is a very costly exercise, costing hundreds of thousands if not millions of dollars.
Instead, enterprises can tap into pre-trained foundational LLM models (like MPT-7B and MPT-30B from MosaicML) and augment and fine-tune these models with their proprietary data. This brings down the costs to tens, if not hundreds, of dollars, effectively a 10,000x cost saving.
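One common way to realize that saving is parameter-efficient fine-tuning, where only small adapter weights are trained. Below is a minimal sketch using Hugging Face `peft` with LoRA on MPT-7B; the hyperparameters are illustrative and the training loop is omitted.

```python
# pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# MPT models ship custom modeling code, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b", trust_remote_code=True
)

# LoRA trains small low-rank adapters instead of all ~7B weights, which is
# what keeps fine-tuning costs orders of magnitude below training from scratch.
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["Wqkv"],  # MPT's fused attention projection layer
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```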
The treacherous path to trustworthy Generative AI for Industry
Despite the awesome first impact ChatGPT made and the already significant efficiency gains that programming copilots are delivering to developers, making LLMs serve non-developers – the vast majority of the workforce, that is – by having them translate natural language prompts into API or database queries and deliver readily usable analytics outputs is not quite so straightforward. Three primary challenges are:
- Inconsistency of prompts to completions (no deterministic reproducibility between LLM inputs and outputs)
- Nearly impossible to audit or explain LLM answers (once trained, LLMs are black boxes)
- Coverage gap on niche domain areas that typically matter most to enterprise users (LLMs are trained on large corpora of internet data, heavily biased towards more generalist topics)
Lumafield Introduces Atlas, an AI Co-Pilot for Engineers
Lumafield today unveiled Atlas, a groundbreaking AI co-pilot that helps engineers work faster by answering questions and solving complex engineering and manufacturing challenges using plain language. Atlas is a new tool in Voyager, Lumafield’s cloud-based software for analyzing 3D scan and industrial CT scan data. Along with Atlas, Lumafield announced a major expansion of Voyager’s capabilities, including the ability to upload, analyze, and share data from any 3D scanner.
🦾 Doosan Robotics to develop GPT-based collaborative robots
Doosan Robotics, a subsidiary of South Korea’s Doosan Group specializing in robot solutions, is venturing into the development of collaborative robot solutions using AI-based GPT (generative pre-trained transformer) technology to enhance its software capabilities.
Doosan Robotics announced it has entered into a business agreement with Microsoft and Doosan Digital Innovation to develop a GPT-based robot control system utilizing Microsoft’s Azure OpenAI Service. Azure OpenAI provides cloud access to cutting-edge OpenAI systems, including GPT.
Doosan Robotics plans to apply GPT to its collaborative robots, enabling them to autonomously correct errors and perform tasks. Once the solution is developed, programming time will be reduced, leading to improved operational efficiency and utility.
🖨️ AI and 3D printing: Ai Build’s Daghan Cam and Luke Rogers on simplifying large-format 3D printing with AI
Ai Build has already partnered with a number of leading 3D printer hardware manufacturers, including Hans Weber Maschinenfabrik, Meltio, KUKA, Evo3D, CEAD, and Massive Dimension. Through these partnerships, the company incorporates a wide range of large-format 3D printers into their Ai Lab workshop. Here, the hardware is used to test, develop, verify, and integrate Ai Build’s software for a growing range of applications. Whilst Cam could not disclose too many names, global engineering solutions firm Weir Group and aerospace manufacturer Boeing were pinpointed as key customers employing AiSync software.
Ai Build’s key product is its AiSync software, an AI-driven toolpath optimization and quality control platform. Regarding toolpath optimization, it was announced earlier this year that Ai Build had developed a process which allows users to create advanced 3D printing toolpaths using natural language prompts. This feature, called Talk to AiSync, allows users to input simple text, such as “slice the part with 2mm layer height.” This text is then translated into machine instructions to produce the desired 3D printed part.
Key to this feature are large language models. AiSync uses OpenAI on the back end, with GPT-4 running the software’s natural language processing. “With the addition of large language models, we are able to translate simple English words, plain sentences, into a stack of workflow that we create on our software,” explained Cam. “The goal is to make it super accessible to inexperienced users by making the user experience really smooth.”
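Mechanically, text-to-machine-instructions can be implemented by having the model extract structured slicing parameters from the prompt. The sketch below uses OpenAI tool calling with a hypothetical `set_slicing_parameters` schema; it is not AiSync's actual interface.

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "set_slicing_parameters",  # hypothetical slicer entry point
        "parameters": {
            "type": "object",
            "properties": {
                "layer_height_mm": {"type": "number"},
                "infill_percent": {"type": "number"},
            },
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "slice the part with 2mm layer height"}],
    tools=tools,
    # Force the model to answer via the tool call.
    tool_choice={"type": "function",
                 "function": {"name": "set_slicing_parameters"}},
)
args = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
print(args)  # e.g. {"layer_height_mm": 2}
```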
Retentive Network: A Successor to Transformer for Large Language Models
In this work, we propose Retentive Network (RetNet) as a foundation architecture for large language models, simultaneously achieving training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. Then we propose the retention mechanism for sequence modeling, which supports three computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, which improves decoding throughput, latency, and GPU memory without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while the chunks are recurrently summarized. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. These intriguing properties make RetNet a strong successor to Transformer for large language models.
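The parallel/recurrent duality at the heart of the paper is easy to check numerically. Below is a single-head NumPy sketch of the core retention computation (omitting the paper's normalization, gating, and multi-scale decay); dimensions and the decay value are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4        # sequence length, head dimension
gamma = 0.9        # exponential decay factor
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Parallel form (training): O = (Q K^T * D) V with decay mask
# D[n, m] = gamma**(n - m) for m <= n, else 0.
n, m = np.arange(T)[:, None], np.arange(T)[None, :]
D = np.where(n >= m, gamma ** (n - m), 0.0)
O_parallel = (Q @ K.T * D) @ V

# Recurrent form (inference): a single d x d state updated per token,
# giving O(1) memory and compute per decoding step.
S = np.zeros((d, d))
O_recurrent = np.empty_like(V)
for t in range(T):
    S = gamma * S + np.outer(K[t], V[t])  # S_t = gamma * S_{t-1} + k_t^T v_t
    O_recurrent[t] = Q[t] @ S             # o_t = q_t S_t

assert np.allclose(O_parallel, O_recurrent)  # both paradigms agree
```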
LongNet: Scaling Transformers to 1,000,000,000 Tokens
Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithmic dependency between any two tokens in a sequence; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
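To make "expands the attentive field exponentially" concrete, here is a deliberately simplified sketch of the index pattern behind dilated attention: the sequence is split into segments and only every r-th token within each segment is attended to. Real LongNet mixes several (segment, dilation) configurations and shifts them across heads; this is an illustration of the idea only.

```python
import numpy as np

def dilated_indices(seq_len: int, segment: int, dilation: int):
    """Token indices kept in one (segment, dilation) configuration."""
    blocks = []
    for start in range(0, seq_len, segment):
        block = np.arange(start, min(start + segment, seq_len))
        blocks.append(block[::dilation])  # keep every `dilation`-th token
    return blocks

# Geometrically growing (segment, dilation) pairs keep the per-token cost
# roughly constant per configuration, so total cost stays linear in length.
for w, r in [(4, 1), (8, 2), (16, 4)]:
    print(w, r, [b.tolist() for b in dilated_indices(16, w, r)])
```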
Training ChatGPT on Omniverse Visual Scripting Using Prompt Engineering
Palantir AIP | Defense and Military
What does it take to talk to your Industrial Data in the same way we talk to ChatGPT?
The vast data set used to train LLMs is curated in various ways to provide clean, contextualized data. Contextualized data includes explicit semantic relationships within the data that can greatly affect the quality of the model’s output. Contextualizing the data we provide as input to an LLM ensures that the text consumed is relevant to the task at hand. For example, when prompting an LLM to provide information about operating industrial assets, the data provided to the LLM should include not only the data and documents related to those assets but also the explicit and implicit semantic relationships across different data types and sources.
An LLM is trained by parceling text data into smaller collections, or chunks, that can be converted into embeddings. An embedding is simply a sophisticated numerical representation of the ‘chunk’ of text that takes into consideration the context of surrounding or related information. This makes it possible to perform mathematical calculations to compare similarities, differences, and patterns between different ‘chunks’ to infer relationships and meaning. These mechanisms enable an LLM to learn a language and understand new data that it has not seen previously.
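A small, concrete version of that chunks-to-embeddings-to-similarity mechanism, using the open sentence-transformers library (the model choice and texts are illustrative):

```python
# pip install sentence-transformers
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Pump P-101 feeds the distillation column and trips on low suction pressure.",
    "Compressor K-200 maintenance is scheduled every 8,000 operating hours.",
    "The quarterly sales report is due at the end of March.",
]
query = "Why did the feed pump shut down?"

emb = model.encode(chunks + [query])                  # one vector per text
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
scores = emb[:-1] @ emb[-1]                           # cosine similarity to query
print(chunks[int(np.argmax(scores))])                 # pump chunk should rank highest
```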
How Large-Language Models Can Revolutionize Military Planning
What happens when you give military planners access to large-language models and other artificial intelligence and machine-learning applications? Will the planner embrace the ability to rapidly synthesize diffuse data streams or ignore the tools in favor of romanticized views of military judgment as a coup d’œil? Can a profession still grappling to escape its industrial-age iron cage and bureaucratic processes integrate emerging technologies and habits of mind that are more inductive than deductive?
We, a team that includes a professor from Marine Corps University and a portfolio manager from Scale AI, share our efforts to bridge new forms of data synthesis with foundational models of military decision-making. Based on this pilot effort, we see clear and tangible ways to integrate large-language models into the planning process. This effort will require more than just buying software. It will require revisiting how we approach epistemology in the military profession. The results suggest a need to expand the use of large-language models alongside new methods of instruction that help military professionals understand how to ask questions and interrogate the results. Skepticism is a virtue in the 21st century.
Will Generative AI finally turn data swamps into contextualized operations insight machines?
Generative AI, such as ChatGPT/GPT-4, has the potential to put industrial digital transformation into hyperdrive. Whereas a process engineer might spend several hours performing “human contextualization” (at an hourly rate of $140 or more) manually – again and again – contextualized industrial knowledge graphs provide the trusted data relationships that enable Generative AI to accurately navigate and interpret data for Operators without requiring data engineering or coding competencies.
Can Large Language Models Enhance Efficiency In Industrial Robotics?
One of the factors that slow down the penetration of industrial robots into manufacturing is the complexity of human-to-machine interfaces. This is where large language models, such as ChatGPT developed by OpenAI, come in. Large language models are a cutting-edge artificial intelligence technology that can understand and respond to human language in a way that is at times almost indistinguishable from human conversation. Their versatility has been proven in applications ranging from chatbots to language translation and even creative writing.
It turns out that large language models are quite effective at generating teach pendant programs for a variety of industrial robots, including those from KUKA, FANUC, Yaskawa, ABB, and others.
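A sketch of how such generation might be driven, parameterized by robot brand and pendant language; the wrapper and prompt are illustrative, and any generated program would still need validation on the real controller.

```python
from openai import OpenAI

# Pendant/controller languages per vendor.
PENDANT_LANGUAGES = {"KUKA": "KRL", "FANUC": "TP", "ABB": "RAPID",
                     "Yaskawa": "INFORM"}

def generate_pendant_program(client: OpenAI, brand: str, task: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": f"Write a {PENDANT_LANGUAGES[brand]} program for a "
                       f"{brand} industrial robot that does the following: "
                       f"{task}",
        }],
    )
    return resp.choices[0].message.content

# e.g. generate_pendant_program(client, "ABB",
#                               "pick parts from a conveyor and palletize "
#                               "them in a 3x4 grid")
```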
ChatGPT for Robotics: Design Principles and Model Abilities
ChatGPT unlocks a new robotics paradigm, allowing a (potentially non-technical) user to sit on the loop, providing high-level feedback to the large language model (LLM) while monitoring the robot’s performance. By following our set of design principles, ChatGPT can generate code for robotics scenarios. Without any fine-tuning, we leverage the LLM’s knowledge to control different robot form factors for a variety of tasks. In our work we show multiple examples of ChatGPT solving robotics puzzles, along with complex robot deployments in the manipulation, aerial, and navigation domains.
Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance
Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.