Transformer

Assembly Line

AI Transformer Models Enable Machine Vision Object Detection

📅 Date:

🔖 Topics: Machine Vision, Transformer


Machine vision is another key technology, and today AI and machine vision interact in a few ways. “First, machine vision output is fed to an AI engine to perform functions such as people counting, object recognition, etc., to make decisions,” said Arm’s Zyazin. “Second, AI is used to provide better quality images with AI-based de-noising, which then assists with decision-making. An example could be an automotive application where a combination of AI and machine vision can recognize a speed limit sign earlier and adjust the speed accordingly.”

“There are a few main directions for machine vision, including cloud computing to scale deep-learning solutions, automated ML architectures to improve the ML pipeline, transformer architectures that optimize computer vision (a superset of machine vision), and mobile devices incorporating computer vision technology on the edge,” Synopsys’ Andersen said.
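As a rough illustration of how a transformer-based detector fits into such a machine vision pipeline, the sketch below runs a pretrained DETR model on a single camera frame. The Hugging Face `transformers` library, the `facebook/detr-resnet-50` checkpoint, the `frame.jpg` input, and the 0.9 score threshold are assumptions chosen for illustration; none of them come from the article.

```python
# Hedged sketch: transformer-based object detection on a single camera frame.
# Assumes the Hugging Face `transformers` and `torch` packages and the public
# "facebook/detr-resnet-50" checkpoint (not mentioned in the article).
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("frame.jpg")            # hypothetical camera frame
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold and map boxes back to pixel coords.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```

The detections printed here are the kind of structured output (object class, confidence, bounding box) that downstream decision logic such as people counting would consume.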

Read more at Semiconductor Engineering

Inferring material properties from FRP processes via sim-to-real learning

📅 Date:

✍️ Authors: Simon Stieber, Niklas Schröter, Ewald Fauster, Marcel Bender, Alexander Schiendorfer, Wolfgang Reif

🔖 Topics: Materials Science, Fiber Reinforced Polymers, Transformer

🏢 Organizations: University of Augsburg, University of Leoben, Technical University Ingolstadt


Fiber reinforced polymers (FRP) provide favorable properties such as weight-specific strength and stiffness that are central to certain industries, such as aerospace or automotive manufacturing. Liquid composite molding (LCM) is a family of frequently employed, inexpensive, out-of-autoclave manufacturing techniques. Among them, resin transfer molding (RTM) offers a high degree of automation. Herein, textile preforms are saturated by a fluid polymer matrix in a closed mold. Both the impregnation quality and the level of fiber volume content are of crucial importance for the final part quality. We propose to simultaneously learn three major textile properties (fiber volume content and permeability in the X and Y directions), represented as a three-dimensional map, from a sequence of camera images acquired in flow experiments, and we compare CNNs, ConvLSTMs, and Transformers for this task. Moreover, we show how simulation-to-real transfer learning can improve a digital twin in FRP manufacturing compared to simulation-only models and models based on sparse real data. The best overall metrics are an IOU of 0.5031 and an accuracy of 95.929%, obtained by pretrained Transformer models.
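As a minimal sketch of the learning task described in the abstract, the model below encodes each flow-front camera frame with a small CNN, fuses the frame sequence with a Transformer encoder, and regresses a three-channel property map (fiber volume content plus permeability in X and Y). All layer sizes, the 32×32 output grid, and the pooling scheme are assumptions for illustration and do not reproduce the authors' architecture.

```python
# Hedged sketch of the sequence-to-property-map task: per-frame CNN features,
# Transformer fusion over time, and a 3-channel map head (fvc, Kx, Ky).
# Layer sizes and the output resolution are illustrative assumptions.
import torch
import torch.nn as nn

class SeqToPropertyMap(nn.Module):
    def __init__(self, d_model=128, map_size=32):
        super().__init__()
        self.map_size = map_size
        self.frame_encoder = nn.Sequential(            # per-frame CNN features
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 3 * map_size * map_size)   # fvc, Kx, Ky

    def forward(self, frames):                          # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        fused = self.temporal(feats).mean(dim=1)        # pool over the sequence
        return self.head(fused).view(b, 3, self.map_size, self.map_size)

# Usage with a dummy batch of two sequences of 8 frames at 128x128 resolution:
model = SeqToPropertyMap()
maps = model(torch.randn(2, 8, 1, 128, 128))            # -> (2, 3, 32, 32)
```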

Read more at The International Journal of Advanced Manufacturing Technology

Retentive Network: A Successor to Transformer for Large Language Models

📅 Date:

✍️ Authors: Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei

🔖 Topics: Retentive Network, Transformer, Large Language Model, Generative AI


In this work, we propose Retentive Network (RetNet) as a foundation architecture for large language models, simultaneously achieving training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. We then propose the retention mechanism for sequence modeling, which supports three computation paradigms: parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, which improves decoding throughput, latency, and GPU memory usage without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while the chunks themselves are summarized recurrently. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. These intriguing properties make RetNet a strong successor to the Transformer for large language models.
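The equivalence behind RetNet's parallel training and O(1)-per-token inference can be seen in a stripped-down sketch of the retention mechanism. The version below omits the paper's xPos-style rotation, multi-head split, and group normalization, and uses random projections; the sequence length, head dimension, and decay value are arbitrary assumptions.

```python
# Hedged sketch of retention in its parallel and recurrent forms (simplified).
# The point is that both views produce the same outputs, which is what gives
# parallel training and constant-memory, O(1)-per-token decoding.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                      # sequence length, head dimension (assumed)
gamma = 0.9                      # decay factor (assumed)
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Parallel form: Retention(X) = (Q K^T ⊙ D) V with decay mask D[n, m] = γ^(n-m) for n >= m.
n, m = np.arange(T)[:, None], np.arange(T)[None, :]
D = np.where(n >= m, gamma ** (n - m), 0.0)
parallel_out = (Q @ K.T * D) @ V

# Recurrent form: carry a d x d state S_n = γ S_{n-1} + K_n^T V_n, emit o_n = Q_n S_n.
S = np.zeros((d, d))
recurrent_out = np.zeros((T, d))
for t in range(T):
    S = gamma * S + np.outer(K[t], V[t])
    recurrent_out[t] = Q[t] @ S

print(np.allclose(parallel_out, recurrent_out))   # True: the two views agree
```

The chunkwise recurrent paradigm mentioned in the abstract combines the two: the parallel form inside each chunk, with the recurrent state carrying information across chunk boundaries.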

Read more at arXiv

LongNet: Scaling Transformers to 1,000,000,000 Tokens

📅 Date:

✍️ Authors: Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei

🔖 Topics: Transformer, Large Language Model, Generative AI


Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, leaving the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens without sacrificing performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has linear computational complexity and a logarithmic dependency between any two tokens in a sequence; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention and can be seamlessly integrated with existing Transformer-based optimizations. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
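A stripped-down sketch of a single dilated-attention pattern may help make the idea concrete: the sequence is split into segments, only every r-th position inside each segment attends, and the outputs are scattered back into place. LongNet actually mixes several (segment length, dilation) pairs across heads and distributes segments across devices; the single pattern, segment length w=8, and dilation r=2 below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of one dilated-attention pattern: segment the sequence, keep
# every r-th position per segment, attend within the sparsified segment, and
# scatter results back. Cost per segment is fixed, so total cost is linear in T.
import torch
import torch.nn.functional as F

def dilated_attention(q, k, v, w=8, r=2):
    """q, k, v: (T, d) with T divisible by w; returns (T, d)."""
    T, d = q.shape
    out = torch.zeros_like(q)
    for start in range(0, T, w):                      # independent segments
        idx = torch.arange(start, start + w, r)       # dilated positions
        qs, ks, vs = q[idx], k[idx], v[idx]
        attn = F.softmax(qs @ ks.T / d ** 0.5, dim=-1)
        out[idx] = attn @ vs                          # scatter back to place
    return out

q, k, v = (torch.randn(32, 16) for _ in range(3))
y = dilated_attention(q, k, v)                        # (32, 16), sparse attention pattern
```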

Read more at arXiv

Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance

📅 Date:

🔖 Topics: Large Language Model, Transformer

🏢 Organizations: Google


Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.
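For readers unfamiliar with the term, a "dense decoder-only Transformer" is a stack of causally masked self-attention blocks over the token sequence, as in the generic sketch below. This is not PaLM itself: among other differences, PaLM uses multi-query attention, SwiGLU activations, and a parallel attention/MLP formulation, and the sizes here are tiny and arbitrary.

```python
# Hedged sketch of a generic decoder-only block: causal self-attention plus an
# MLP, each wrapped in a residual connection. Illustrative only, not PaLM.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):                              # x: (B, T, d_model)
        T = x.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
        x = x + attn_out                               # residual around attention
        return x + self.mlp(self.ln2(x))               # residual around the MLP

y = DecoderBlock()(torch.randn(2, 16, 64))             # next-token features, (2, 16, 64)
```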

Read more at Google AI Blog