AI-Guided Robots Are Ready to Sort Your Recyclables
So how much of the material that goes into the typical bin avoids a trip to landfill? For countries that do curbside recycling, the number, called the recovery rate, appears to fall between 70 and 90 percent, though comprehensive data isn’t available. That doesn’t seem bad. But in some municipalities, it can drop as low as 40 percent.
Getting AI into the recycling business means combining pick-and-place robots with accurate real-time object detection. Pick-and-place robots combined with computer vision systems are used in manufacturing to grab particular objects, but they generally are just looking repeatedly for a single item, or for a few items of known shapes and under controlled lighting conditions. Recycling, though, involves infinite variability in the kinds, shapes, and orientations of the objects traveling down the conveyor belt, requiring nearly instantaneous identification along with the quick dispatch of a new trajectory to the robot arm.
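The identify-then-dispatch step can be sketched in a few lines. Everything here (the `Detection` class, the bin mapping, and the belt-latency compensation) is an illustrative stand-in, not any vendor's actual API:

```python
# Minimal sketch of the identify-then-dispatch step for a sorting robot.
# Detection, BIN_FOR_CLASS, and pick_command are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "PET", "HDPE", "cardboard"
    x: float     # belt position at image-capture time (metres)
    y: float

BIN_FOR_CLASS = {"PET": 0, "HDPE": 1, "cardboard": 2}

def pick_command(det, belt_speed_mps, latency_s):
    """Compensate for belt travel between image capture and the pick,
    then pair the corrected position with the target bin."""
    x_pick = det.x + belt_speed_mps * latency_s
    return {"x": x_pick, "y": det.y, "bin": BIN_FOR_CLASS[det.label]}
```

Because the item keeps moving while the detector and planner run, the latency compensation is what makes "nearly instantaneous identification" matter: every millisecond of delay shifts the pick point downstream.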
Robots learn how to shape Play-Doh
Meet the Robotiq Screwdriving Solution
A Step by Step Guide to Robot Arm Demo
Assume we are operating a smart warehouse optimized for an e-commerce company. In the warehouse, we employ several “intelligent robot movers” to help us move objects from spot to spot. In this demonstration, we used a miniaturized “intelligent robot mover” powered by Qeexo AutoML to determine whether the robot had gripped an object.
This blog post shows how to use Qeexo AutoML to build your own “intelligent robot mover” end to end, covering data collection, data segmentation, model training and evaluation, and live testing.
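As a rough, hand-rolled illustration of that end-to-end flow (Qeexo AutoML itself is a no-code tool, so none of this is its real API), here is a toy pipeline that segments an accelerometer stream into windows, "trains" a variance threshold on labeled windows, and classifies gripped versus empty:

```python
# Toy grip-detection pipeline: segment -> train -> predict.
# The variance feature and the assumption that a gripped object damps
# vibration (low variance) are illustrative choices, not Qeexo's models.
import numpy as np

def segment(signal, window=200):
    """Split a 1-D accelerometer stream into fixed-length windows."""
    n = len(signal) // window
    return signal[: n * window].reshape(n, window)

def train_threshold(windows, labels):
    """Pick the variance threshold halfway between the class means.
    Label 1 = gripped (damped, low-variance), label 0 = empty gripper."""
    var = windows.var(axis=1)
    return (var[labels == 0].mean() + var[labels == 1].mean()) / 2

def predict(windows, threshold):
    """Classify each window: low variance -> gripped (1)."""
    return (windows.var(axis=1) < threshold).astype(int)
```

The real product replaces the hand-picked feature and threshold with automatically searched features and models, but the collect/segment/train/evaluate loop is the same shape.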
Robots Handling Cables?! Automated Cable Applications with MIRAI
Ford rolls out autonomous robot-operated 3D printers in vehicle production
Leveraging an in-house-developed interface, Ford has managed to get the KUKA-built bot to ‘speak the same language’ as its other systems, and operate them without human interaction. So far, the firm’s patent-pending approach has been deployed to 3D print custom parts for the Mustang Shelby GT500 sports car, but it could yet yield efficiency savings across its production workflow.
“This new process has the ability to change the way we use robotics in our manufacturing facilities,” said Jason Ryska, Ford’s Director of Global Manufacturing Technology Development. “Not only does it enable Ford to scale its 3D printer operations, it extends into other aspects of our manufacturing processes – this technology will allow us to simplify equipment and be even more flexible on the assembly line.”
At present, the company is utilizing its setup to make low-volume, custom parts such as a brake line bracket for the Performance Package-equipped version of its Mustang Shelby GT500. Moving forward, though, Ford believes the program could also make other robots in its production line more efficient, and it has filed several patents covering not just the interface but also the positioning of its KUKA bot.
Real-world robotic-manipulation system
So the next phase of the project was to teach the robot to use video feedback to adjust trajectories on the fly. Until now, Tedrake’s team had been using machine learning only for the robot’s perceptual system; they’d designed the control algorithms using traditional control-theoretical optimization. But now they switched to machine learning for controller design, too.
To train the controller model, Tedrake’s group used data from demonstrations in which one of the lab members teleoperated the robotic arm while other members knocked the target object around, so that its position and orientation changed. During training, the model took as input sensor data from the demonstrations and tried to predict the teleoperator’s control signals.
This requires a combination of machine learning and the more traditional, control-theoretical analysis that Tedrake’s group has specialized in. From data, the machine learning model learns vector representations of both the input and the control signal, but hand-tooled algorithms constrain the representation space to optimize the control signal selection. “It’s basically turning it back into a planning and control problem, but in the feature space that was learned,” Tedrake explains.
Flexible robotic arm put to work with AR
According to Imperial, the flexible arm can twist and turn in all directions, making it customisable for applications in manufacturing, spacecraft maintenance, and injury rehabilitation. In use, people working with the robot would manually bend the arm into the precise shape needed for each task, a level of flexibility made possible by layers of Mylar sheets inside, which slide over one another and can lock into place. So far, configuring the robot into specific shapes without guidance has presented challenges.
To enhance the robot’s user-friendliness, researchers at Imperial’s REDS (Robotic manipulation: Engineering, Design, and Science) Lab designed a system that shows users in AR how to configure their robot. Through mixed-reality smartglasses, supported by motion-tracking cameras, users see templates and designs superimposed onto their real-world environment. They then adjust the robotic arm until it matches the template, which turns green on successful configuration so that the robot can be locked into place.
Evaluation Criteria for Trajectories of Robotic Arms
This paper presents a comprehensive trajectory-evaluation framework with high potential for use in many industrial applications. The framework focuses on evaluating robotic-arm trajectories that contain only robot states defined in joint space, without any time parametrization (velocities or accelerations). The solution consists of multiple criteria, mainly based on well-known trajectory metrics, slightly modified so they can be applied to this type of trajectory. Our framework provides a methodology for accurately comparing paths generated by randomized path planners with respect to numerous industrial optimization criteria, making it much easier to select the optimal path planner, or its configuration, for a specific application. The designed criteria were thoroughly evaluated experimentally using a real industrial robot, and the results confirmed the correlation between the predicted robot behavior and the robot's behavior during trajectory execution.
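Two of the most common metrics for such time-free, joint-space paths can be computed directly from the sequence of configurations. These are generic textbook criteria, offered as a sketch rather than the paper's exact set:

```python
# Generic joint-space path metrics for trajectories given only as a
# sequence of configurations (no velocities or timestamps).
import numpy as np

def joint_path_length(path):
    """Sum of Euclidean distances between consecutive joint states.
    path: (N, dof) array of joint configurations."""
    return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())

def max_joint_travel(path):
    """Total travel of the busiest single joint; with per-joint speed
    limits, this bounds how quickly the path can be executed."""
    return float(np.abs(np.diff(path, axis=0)).sum(axis=0).max())
```

Metrics like these let randomized planners, whose outputs vary run to run, be compared on equal footing before any time parametrization is chosen.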
Industrial Automation for Spraying and Shot Peening
Spraying and shot peening are processes used to increase the life span of metal parts in machinery. They are commonly used in the aerospace and marine industries, where metal structures must cope with harsh environmental conditions. As companies try to maximise profits, industrial automation for spraying and shot peening that helps minimise repair costs has gained popularity.
The Augmentus Platform eliminates the need for coding and robot teaching even for complex surface treatment applications. Our technology allows users to accurately 3D scan parts and automatically generate optimized robot path planning. Therefore, companies are able to easily and rapidly deploy robots without the need for robotic experts, even in a high-mix production environment.
GITAI’s Autonomous Robot Arm Finds Success on ISS
In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).
Break Through Supply Chain Blocks with Automated Container Unloading
Boston Dynamics is beginning to deploy Stretch, an autonomous case-handling robot poised to change the way warehouses and ports operate. Expected to be available later in 2022, the robot can work up to 16 hours on a single battery charge, so companies can send Stretch to unload trucks or containers for full shifts both day and night.
Built on a compact, wheeled base, Stretch can travel easily to each point of activity in a distribution center. The robot is self-reliant, untethered by power cables or air lines. Its vacuum-based gripper, at the end of a robotic arm with long reach, is designed to grasp a wide variety of box types required for a truly valuable solution in the logistics industry. With its small, pallet-sized footprint and embedded smarts, Stretch needs no pre-programming or overhaul of existing warehouse equipment to begin working, and is ready to deploy in just days.
MiR+UR autonomous picking and transport
Machine Shop Creates Robot Machining Cell Before There was Work for It
This machine shop’s self-integrated robot was purchased without a project in mind. However, when a particular part order came in, the robot paired with the proper machine tool was an optimal fit for the job, offering consistency and an increase in throughput.
The M-10 is a six-axis robot designed specifically for small work cells that can lift up to 12 kg. Young purchased the robot with a force sensor, which he highly recommends. Force sensors enable robots to detect the force and torque applied to the end effector, giving them an almost human sense of touch. To the surprise of Young and his team, the force sensor was not difficult to set up and use.
After the robot was purchased and the order came in, it was time to search for the right machine tool for the job. The Hardinge Bridgeport V480 APC VMC was attractive to Young because its pallet-changing system maximizes spindle uptime.
Custom Tool’s automated data collection and reporting system, developed by company president Gillen Young, uses a web-based Industrial Internet of Things (IIoT) platform to pull data from machines that have agents for the open-source MTConnect communication protocol, as well as from the company’s JobBoss enterprise resource planning (ERP) software. The platform is Devicewise for Factory from Telit, a company that offers IIoT modules, software, and connectivity services.
How DeepMind is Reinventing the Robot
To train a robot, though, such huge data sets are unavailable. “This is a problem,” notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What’s more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you’ll have a badly dented robot, if not worse.
There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones. “One of our classic examples was training an agent to play Pong,” says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, “then the performance will—boop!—go off a cliff.” Suddenly it will lose 20 to zero every time.
There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its network’s weights to its data storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.
But that strategy is limited. For one thing, it’s not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you’d have to train it on every single one of them. And if the environment is unstructured, you won’t even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn’t let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.
Hadsell’s preferred approach is something called “elastic weight consolidation.” The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights.
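The penalty this describes has a simple standard form: the loss on the new task plus a quadratic term that pulls each weight back toward its old value, scaled by how important that weight was to the old task (its Fisher information, F_i). A minimal numpy sketch:

```python
# Sketch of the elastic-weight-consolidation penalty: while training on
# task B, each weight is pulled toward its task-A value in proportion to
# its estimated importance F_i (Fisher information).
import numpy as np

def ewc_penalty(theta, theta_a, fisher, lam=1.0):
    """(lam / 2) * sum_i F_i * (theta_i - theta_A_i)^2"""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_a) ** 2))

def total_loss(task_b_loss, theta, theta_a, fisher, lam=1.0):
    """Task-B loss plus the consolidation term protecting task A."""
    return task_b_loss + ewc_penalty(theta, theta_a, fisher, lam)
```

Weights with near-zero F_i stay free to adapt to the new task, which is why this "partially freezes" rather than silos the network.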
Cable-path optimization method for industrial robot arms
The production line engineer’s task of designing the external path for cables feeding electricity, air, and other resources to robot arms is labor-intensive. Because the motions of robot arms are complex, manually designing their cable paths is a time-consuming, continual trial-and-error process. Herein, we propose an automatic optimization method for planning the cable paths of industrial robot arms. The proposed method applies current physics-simulation techniques to reduce the person-hours involved in cable-path design. Our method yields an optimal parameter vector (PV) specifying the cable length and cable-guide configuration by filtering the candidate PV set through a cable-geometry simulation based on the mass-spring model.
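The filtering step can be sketched generically: enumerate candidate parameter vectors and keep the one the simulation scores best. The cost function passed in below is a hypothetical stand-in for the paper's mass-spring cable-geometry simulation:

```python
# Generic sketch of candidate-PV filtering for cable-path design.
# simulate_cable_cost is a caller-supplied surrogate for the paper's
# mass-spring simulation (lower cost = better cable behavior).
import itertools

def candidate_pvs(lengths, guide_offsets):
    """Enumerate every (cable length, guide offset) combination."""
    return list(itertools.product(lengths, guide_offsets))

def best_parameter_vector(candidates, simulate_cable_cost):
    """Filter the candidate set through the simulation; keep the
    lowest-cost parameter vector."""
    return min(candidates, key=simulate_cable_cost)
```

In practice the expensive part is the simulation itself; the value of the method is replacing hands-on trial and error with this automated scoring loop.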
Plug-and-Play Robot Ecosystems on the Rise
Robot ecosystems are bringing plug-and-play ease to compatible hardware and software peripherals, while adding greater value and functionality to robots. Some might argue that the first robot ecosystem was the network of robot integrators that has expanded over the last couple decades to support robot manufacturers and their customers. Robot integrators continue to be vital to robotics adoption and proliferation. Yet an interesting phenomenon began to take shape a few years ago with the growing popularity of collaborative robots and the industry’s focus on ease of use.
Campbell describes the typical process for engineering a new gripping solution for a robot: “You have to first engineer a mechanical interface, which may mean an adapter plate, and maybe some other additional hardware. If you’re an integrator, it must be documented, because everything you do as an integrator you have to document. You have to engineer the electrical interface, how you’re going to control it, what kind of I/O signals, what kind of sensors. And then you have to design some kind of software.
“When I talk to integrators, they say it’s typically 1 to 3 days’ worth of work just to put a simple gripper on a robot. What we’ve been able to do in the UR+ program is chip away at time and cost throughout the project.”
Amazon’s robot arms break ground in safety, technology
Robin, one of the most complex stationary robot arm systems Amazon has ever built, brings many core technologies to new levels and acts as a glimpse into the possibilities of combining vision, package manipulation and machine learning, said Will Harris, principal product manager of the Robin program.
Those technologies can be seen when Robin goes to work. As soft mailers and boxes move down the conveyor line, Robin must break the jumble down into individual items. This is called image segmentation. People do it automatically, but for a long time, robots only saw a solid blob of pixels.
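As a toy illustration of what segmentation does (real systems like Robin use learned instance segmentation, not flood fill), here is a connected-components labeler that splits a binary "blob" mask into individual items:

```python
# Toy image segmentation: label 4-connected regions of a binary grid so
# a single blob of foreground pixels becomes separately numbered items.
from collections import deque

def segment_items(mask):
    """mask: 2-D list of 0/1. Returns (labels, count) where labels is a
    same-shape grid of item ids (0 = belt background)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                next_id += 1              # start a new item
                labels[r][c] = next_id
                queue = deque([(r, c)])
                while queue:              # flood-fill its pixels
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_id
                            queue.append((ny, nx))
    return labels, next_id
```

The hard part in a real depalletizing cell is that adjacent boxes touch, so simple connectivity sees one region; that is exactly where learned segmentation earns its keep.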