Dexterous Manipulation
Assembly Line
DexterityGen: Foundation Controller for Unprecedented Dexterity
Teaching robots dexterous manipulation skills, such as tool use, presents a significant challenge. Current approaches fall broadly into two strategies: human teleoperation (for imitation learning) and sim-to-real reinforcement learning. The first is difficult because humans struggle to produce safe, dexterous motions on a different embodiment without touch feedback. The second struggles with the sim-to-real domain gap and requires highly task-specific reward engineering for complex tasks. Our key insight is that RL is effective at learning low-level motion primitives, while humans excel at providing coarse motion commands for complex, long-horizon tasks; the best solution may therefore combine both. In this paper, we introduce DexterityGen (DexGen), which uses RL to pretrain large-scale dexterous motion primitives, such as in-hand rotation and translation, and then distills this learned dataset into a dexterous foundation controller. In the real world, human teleoperation serves as a prompt to the controller, producing highly dexterous behavior. We evaluate DexGen in both simulation and the real world, demonstrating that it is a general-purpose controller that realizes input manipulation commands and improves stability, measured as the duration for which objects are held, by 10-100x across diverse tasks. Notably, with DexGen we demonstrate, for the first time, unprecedented dexterous skills including diverse object reorientation and the use of tools such as a pen, syringe, and screwdriver.
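The division of labor the abstract describes, a human supplying coarse commands while a pretrained controller turns them into safe low-level motions, can be sketched as follows. This is a minimal illustration of the interface only; the class, method names, and clipping logic are hypothetical and are not from the DexGen codebase, where a learned policy pretrained on RL-generated motion primitives would play this role.

```python
import numpy as np

class FoundationControllerSketch:
    """Hypothetical stand-in for a DexGen-style foundation controller:
    it refines a coarse human 'prompt' into a bounded low-level action."""

    def __init__(self, step_limit=0.05):
        # Maximum joint movement per control step (radians), an assumption
        # used here to mimic the controller keeping motions safe and stable.
        self.step_limit = step_limit

    def act(self, joint_state, coarse_command):
        # In the real system, a learned policy maps (state, command) to an
        # action. Here we simply move each joint toward the commanded target,
        # clipped to the per-step limit, to illustrate the prompting interface.
        joint_state = np.asarray(joint_state, dtype=float)
        delta = np.asarray(coarse_command, dtype=float) - joint_state
        delta = np.clip(delta, -self.step_limit, self.step_limit)
        return joint_state + delta

controller = FoundationControllerSketch()
state = np.zeros(3)                          # current joint angles
teleop_target = np.array([0.3, -0.2, 0.1])   # coarse human "prompt"
next_state = controller.act(state, teleop_target)
print(next_state)  # each joint moves at most step_limit toward the target
```

The point of the sketch is the asymmetry: the human command may be coarse and unsafe on its own, and the controller is responsible for realizing it as feasible motion.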
Shadow Robot DEX-EE hand takes manipulation to next level
Google DeepMind requested the inclusion of a high number of sensors to prioritize data collection, so Shadow Robot set about designing a hand with, as Walk put it, "far more sensors than would be sensible in any other context."
The goal was to create a robot hand with high dexterity, sensitivity, and robustness for real-world learning tasks, without replicating the appearance of a human hand. To meet these needs, the design relies on three robust fingers and a hand around 50% larger than a human's.
The result is DEX-EE, a robotic hand replete with high-speed sensor networks that provide rich data including position, force, and inertial measurement. This is augmented with hundreds of channels of tactile sensing per finger, giving a pressure sensitivity approaching that of a human hand.
Human hands are astonishing tools. Here's why robots are struggling to match them
Human sensory systems are so complex and our perceptive abilities so adept that reproducing dexterity at the level of the human hand remains a formidable challenge. But the level of sophistication is rapidly increasing. Enter the DEX-EE robot. Developed by the Shadow Robot Company in collaboration with Google DeepMind, it's a three-fingered robotic hand that uses tendon-style drivers to achieve 12 degrees of freedom. Designed for "dexterous manipulation research", the team behind DEX-EE hopes to demonstrate how physical interactions contribute to learning and the development of generalised intelligence.
Roboticists have long dreamed of automata with anthropomorphic dexterity good enough to perform undesirable, dangerous or repetitive tasks. Rustam Stolkin, a professor of robotics at the University of Birmingham, leads a project to develop highly dexterous AI-controlled robots capable of handling nuclear waste from the energy sector, for example. While this work typically uses remotely controlled robots, Stolkin is developing autonomous vision-guided robots that can go where it is too dangerous for humans to venture.
Over the course of a day, however, human hands undertake thousands of different tasks, adapting to handle a variety of shapes, sizes and materials. Robotics has some way to go to compete with that. One recent test of a robotic hand built from open-source components costing less than $5,000 (£4,000) found that it could be trained to reorientate objects in the air. However, when confronted with a challenging object, a rubber-duck-shaped toy, the robot still fumbled and dropped it around 56% of the time.