Robot Arm

Assembly Line

Anyware Robotics’ Pixmo Takes Unique Approach to Trailer Unloading

📅 Date:

✍️ Author: Evan Ackerman

🔖 Topics: Robot Arm, Trailer Unloading

🏢 Organizations: Anyware Robotics, Fanuc


While it’s likely true that there’s enough room for a whole bunch of different robotics companies in the trailer-unloading space, a given customer is probably only going to pick one, and they’re going to pick the one that offers the right combination of safety, capability, and cost. Anyware Robotics thinks it has that mix, aided by a box-handling solution that is both very clever and so obvious that I’m wondering why I didn’t think of it myself.

The overall design of Pixmo itself is fairly standard as far as trailer-unloading robots go, but some of the details are interesting. We’re told that Pixmo is the only trailer-unloading system that integrates a heavy-payload collaborative arm, actually a fairly new commercial arm from Fanuc. This means that Anyware Robotics doesn’t have to faff about with its own hardware, and also that its robot is arguably safer, being ISO-certified safe to work directly with people. The base is custom, but Anyware is contracting it out to a big robotics original equipment manufacturer.

That conveyor system in front of the boxes is an add-on used in support of Pixmo, and it offers two benefits. First, keeping the conveyor add-on aligned with the base of a box minimizes the amount of lifting that Pixmo has to do, which lets it handle boxes of up to 65 pounds with a lift-and-slide technique and puts it at the top end of trailer-unloading robot payloads. Second, the add-on shrinks the distance Pixmo has to move each box to just about as small as it can possibly be, eliminating the need for the arm to rotate around to place a box on a conveyor next to or behind itself. The resulting short cycle time means that Pixmo can achieve a throughput of up to 1,000 boxes per hour, or roughly one box every 3.6 seconds, which the Internet suggests is quite fast, even for a professional human.
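
A quick back-of-the-envelope check on that throughput figure; the per-phase timings below are made up purely for illustration, not Anyware’s actual numbers:

```python
# Sanity check: 1,000 boxes/hour leaves 3.6 seconds of cycle time per box.
boxes_per_hour = 1000
cycle_time_s = 3600 / boxes_per_hour
print(f"{cycle_time_s:.1f} s per box")  # -> 3.6 s per box

# Budgeting that cycle across assumed phases of a lift-and-slide pick:
phases = {"perceive": 0.8, "grasp": 1.0, "slide_to_conveyor": 1.2, "release": 0.6}
total_s = sum(phases.values())
print(f"budget: {total_s:.1f} s/box -> {3600 / total_s:.0f} boxes/hour")
```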

Read more at IEEE Spectrum

Design for Robotic Assembly

📅 Date:

✍️ Author: John Sprovieri

🔖 Topics: Industrial Robot, Design for X, Robot Arm

🏢 Organizations: SCHUNK, Bosch Rexroth


In reality, equating the abilities of robots and human assemblers is risky. What’s easy for a human assembler can be difficult or impossible for a robot, and vice versa. To ensure success with robotic assembly, engineers must adapt their parts, products and processes to the unique requirements of the robot.

Reorienting an assembly adds cycle time without adding value. It also increases the cost of the fixtures. And, instead of a SCARA or Cartesian robot, assemblers may need a more expensive six-axis robot.

Robotic grippers are not as nimble as human hands, and some parts are easier for robots to grip than others. A part with two parallel surfaces can be handled by a two-fingered gripper. A circular part can be handled by its outside edges, or, if it has a hole in the middle, its inside edges. Adding a small lip to a part can help a gripper reliably manipulate the part and increase the efficiency of the system. If the robot will handle more than one type of part, the parts should be designed so they can all be manipulated with the same gripper. A servo-driven gripper could also help in that situation, since engineers can program stroke length and gripping force.
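
That last point about servo-driven grippers lends itself to a short sketch: one gripper serving several part families through per-part parameters instead of per-part tooling. The `GripRecipe` structure and the gripper methods below are hypothetical, not a real vendor API:

```python
from dataclasses import dataclass

@dataclass
class GripRecipe:
    """Per-part gripping parameters for a servo-driven gripper (hypothetical)."""
    stroke_mm: float   # finger opening needed to clear the part
    force_n: float     # gripping force; keep low for fragile parts
    speed_mm_s: float  # closing speed

# One gripper, several part types: each part gets its own recipe
# rather than its own end effector.
RECIPES = {
    "housing":   GripRecipe(stroke_mm=42.0, force_n=80.0, speed_mm_s=50.0),
    "pcb":       GripRecipe(stroke_mm=25.0, force_n=15.0, speed_mm_s=20.0),
    "connector": GripRecipe(stroke_mm=12.0, force_n=30.0, speed_mm_s=35.0),
}

def grip(gripper, part_type: str) -> None:
    r = RECIPES[part_type]
    gripper.move_to(r.stroke_mm, r.speed_mm_s)  # pre-position the fingers
    gripper.close(force=r.force_n)              # force-limited close
```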

Read more at Assembly Magazine

🧠🦾 RT-2: New model translates vision and language into action

📅 Date:

🔖 Topics: Robot Arm, Transformer Net, Machine Vision, Vision-language-action Model

🏢 Organizations: Google


High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.

In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities.
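
The trick that makes this possible is representing robot actions as strings of discrete tokens, so the same model that outputs text can output motor commands. A minimal sketch of the de-tokenization step, where both the bin count and the action ranges are illustrative assumptions rather than the paper’s exact configuration:

```python
import numpy as np

NUM_BINS = 256  # bins per action dimension (assumed)

# Illustrative per-dimension ranges: xyz translation (m), rpy rotation (rad),
# gripper opening (0..1).
ACTION_LOW  = np.array([-0.1, -0.1, -0.1, -0.5, -0.5, -0.5, 0.0])
ACTION_HIGH = np.array([ 0.1,  0.1,  0.1,  0.5,  0.5,  0.5, 1.0])

def detokenize(action_tokens: list[int]) -> np.ndarray:
    """Map a sequence of integer tokens back to a continuous robot action."""
    bins = np.asarray(action_tokens, dtype=np.float64)
    frac = bins / (NUM_BINS - 1)  # normalize each token to 0..1
    return ACTION_LOW + frac * (ACTION_HIGH - ACTION_LOW)

# The VLM emits something like "132 128 127 64 190 128 255" as plain text;
# parsing and de-tokenizing it yields a 7-DoF command for the arm.
tokens = [int(t) for t in "132 128 127 64 190 128 255".split()]
print(detokenize(tokens))
```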

Read more at Deepmind Blog

This 3D Printed Gripper Doesn’t Need Electronics To Function

Rocsys wants to automate EV charging, starting in ports and yards

📅 Date:

✍️ Author: Rebecca Bellan

🔖 Topics: Robot Arm, Funding Event

🏢 Organizations: Rocsys, Hyster, Taylor Machine Works


Rocsys has created a robotic arm that can transform any electric vehicle charger into an autonomous charger. In yards and ports, where vehicle uptime is crucial and the margin for error is slim, being able to plug in and remove a charger without manual intervention is not only attractive to logistics operators, it has a use case today.

Aside from partnerships with companies like electric forklift company Hyster, industrial equipment supplier Taylor Machine Works, and port operator SSA Marine, Rocsys claims to have a commercial partnership in the works with one of the largest big-box retailers in North America.

Rocsys doesn’t intend to stop with heavy-duty industrial logistics. The startup closed a $36 million Series A made up of half equity and half debt. The funds will help the startup build out its North American division and support R&D into the automotive sector, which would include both mainstream consumer vehicles and self-driving robotaxi fleets.

Read more at TechCrunch

🧠🦾 RoboCat: A self-improving robotic agent

📅 Date:

🔖 Topics: Robot Arm, Transformer Net

🏢 Organizations: Google


RoboCat learns much faster than other state-of-the-art models. It can pick up a new task with as few as 100 demonstrations because it draws from a large and diverse dataset. This capability will help accelerate robotics research, as it reduces the need for human-supervised training, and is an important step towards creating a general-purpose robot.

RoboCat is based on our multimodal model Gato (Spanish for “cat”), which can process language, images, and actions in both simulated and physical environments. We combined Gato’s architecture with a large training dataset of sequences of images and actions of various robot arms solving hundreds of different tasks.

The combination of all this training means the latest RoboCat is based on a dataset of millions of trajectories, from both real and simulated robotic arms, including self-generated data. We used four different types of robots and many robotic arms to collect vision-based data representing the tasks RoboCat would be trained to perform.
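
The training loop behind that self-improvement is simple to express in outline. A minimal sketch, where every callable is a placeholder for components the blog post does not specify:

```python
from typing import Any, Callable, List

Trajectory = List[Any]  # a sequence of (observation, action) pairs

def self_improvement_cycle(
    finetune: Callable, rollout: Callable, train: Callable,
    generalist: Any, demos: List[Trajectory],
    prior_data: List[Trajectory], n_rollouts: int = 10_000,
) -> Any:
    """One RoboCat-style cycle (illustrative pseudocode):
    1. fine-tune the generalist on ~100 demos of the new task,
    2. let the resulting specialist generate many new trajectories,
    3. fold everything back in to train the next-generation generalist."""
    specialist = finetune(generalist, demos)
    self_generated = [rollout(specialist) for _ in range(n_rollouts)]
    return train(prior_data + demos + self_generated)
```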

Read more at Deepmind Blog

We 3D Printed End-of-Arm Tools with Rapid Robotics

SRI Robotics: BACH–Belt-Augmented Compliant Hand

How a robotic arm could help the US Army lift artillery shells

📅 Date:

✍️ Author: Kelsey Atherton

🔖 Topics: Robot Arm

🏭 Vertical: Defense

🏢 Organizations: US Army, Sarcos Robotics


To fire artillery faster, the US Army is turning to robotic arms. On December 1, Army Futures Command awarded a $1 million contract to Sarcos Technology and Robotics Corporation to test a robot system that can handle and move artillery rounds.

An automated system, using robot arms to fetch and ready artillery rounds, would function somewhat like a killer version of a vending-machine arm. The human gunner selects the type of ammunition from internal stores, and the robotic loader then finds it, grabs it, and places it on a lift. Should the robot arm perform as expected in testing, it will eliminate a job that is all repetitive strain, autonomously handling the dull and menial work of lifting, loading, and readying rounds to fire.
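
That workflow maps naturally onto a small state machine: select, find, grab, stage. A minimal sketch, with every class and method name invented for illustration:

```python
from enum import Enum, auto

class LoaderState(Enum):
    IDLE = auto()
    FETCHING = auto()
    LIFTING = auto()
    STAGED = auto()

class AmmoHandler:
    """Hypothetical controller mirroring the described workflow: the gunner
    selects a round type; the arm finds, grabs, and stages it on the lift."""

    def __init__(self, magazine, arm, lift):
        self.magazine, self.arm, self.lift = magazine, arm, lift
        self.state = LoaderState.IDLE

    def stage_round(self, round_type: str) -> None:
        slot = self.magazine.locate(round_type)  # find the selected round
        self.state = LoaderState.FETCHING
        self.arm.grasp(slot)                     # grab it from internal stores
        self.state = LoaderState.LIFTING
        self.arm.place_on(self.lift)             # place it on the lift
        self.state = LoaderState.STAGED
```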

Read more at Popular Science

How a universal model is helping one generation of Amazon robots train the next

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Robot Arm, Machine Learning, Warehouse Automation

🏢 Organizations: Amazon


In short, building a dataset big enough to train a demanding machine learning model requires time and resources, with no guarantee that the novel robotic process you are working toward will prove successful. This became a recurring issue for Amazon Robotics AI. So this year, work began in earnest to address the data scarcity problem. The solution: a “universal model” able to generalize to virtually any package segmentation task.

To develop the model, Meeker and her colleagues first used publicly available datasets to give their model basic classification skills — being able to distinguish boxes or packages from other things, for example. Next, they honed the model, teaching it to distinguish between many types of packaging in warehouse settings — from plastic bags to padded mailers to cardboard boxes of varying appearance — using a trove of training data compiled by the Robin program and half a dozen other Amazon teams over the last few years. This dataset comprised almost half a million annotated images.
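
That two-stage recipe, generic pretraining followed by domain-specific fine-tuning, is classic transfer learning. A minimal sketch of the pattern using a torchvision Mask R-CNN; Amazon’s actual architecture and data pipeline are not public, so everything here is illustrative:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Stage 1: start from weights pretrained on a public dataset (COCO),
# which already encode generic object-segmentation skills.
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Stage 2: swap the prediction heads for warehouse-specific classes and
# fine-tune on annotated warehouse images (bags, mailers, boxes, ...).
num_classes = 4  # background + 3 illustrative packaging classes
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# for images, targets in warehouse_loader:  # hypothetical DataLoader
#     losses = model(images, targets)
#     sum(losses.values()).backward()
#     optimizer.step(); optimizer.zero_grad()
```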

The universal model now includes images of unpackaged items, too, allowing it to perform segmentation across a greater diversity of warehouse processes. Initiatives such as multimodal identification, which aims to visually identify items without needing to see a barcode, and the automated damage detection program are accruing product-specific data that could be fed into the universal model, as could images taken on the fulfillment center floor by the autonomous robots that carry crates of products.

Read more at Amazon Science

How Soft Robotics Enables Peeps Packaging