Committed to groundbreaking work in computing, CSAIL has played key roles in developing innovations like the World Wide Web, RSA encryption, Ethernet, parallel computing, and much of the technology underlying the ARPANET and the Internet.
Using artificial intelligence to control digital manufacturing
MIT researchers have now used artificial intelligence to streamline digital manufacturing. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real time. They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.
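A closed-loop controller of this kind can be sketched in a few lines. Everything below is illustrative: the `policy` function stands in for the trained neural network, and the one-line printer model is a hypothetical placeholder, not the researchers' actual system.

```python
# Illustrative closed-loop print controller: a vision system measures the
# deposition error each step, and a learned policy (here a stand-in
# proportional rule) adjusts a printing parameter to drive the error down.

def policy(error):
    """Stand-in for the trained neural network: maps the observed error
    to a correction of the flow-rate parameter (hypothetical gain)."""
    return -0.5 * error

def print_layer(flow_rate, target=1.0):
    """Toy printer model: deposited width scales with flow rate;
    returns the vision-measured error against the target width."""
    deposited = flow_rate * 1.0
    return deposited - target

flow_rate = 1.4  # start with a miscalibrated parameter
errors = []
for step in range(10):
    error = print_layer(flow_rate)
    errors.append(abs(error))
    flow_rate += policy(error)  # real-time correction, as in the article

# The error shrinks toward zero as the controller compensates.
```

The same loop structure applies when material or environmental conditions drift mid-print: the vision feedback re-measures the error each step, so the correction tracks the change.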
Robots learn how to shape Play-Doh
Simple, Cheap, and Portable: A Filter-Free Desalination System for a Thirsty World
A group of scientists from MIT has developed just such a portable desalination unit; it’s the size of a medium suitcase and weighs less than 10 kilograms. The unit’s one-button operation requires no technical knowledge. What’s more, it has a completely filter-free design. Unlike existing portable desalination systems based on reverse osmosis, the MIT team’s prototype does not need any high-pressure pumping or maintenance by technicians.
At Amazon Robotics, simulation gains traction
“To develop complex robotic manipulation systems, we need both visual realism and accurate physics,” says Marchese. “There aren’t many simulators that can do both. Moreover, where we can, we need to preserve and exploit structure in the governing equations — this helps us analyze and control the robotic systems we build.”
Drake, an open-source toolbox for modeling and optimizing robots and their control systems, brings together several desirable elements for online simulation. The first is a robust multibody dynamics engine optimized for simulating robotic devices. The second is a systems framework that lets Amazon scientists write custom models and compose these into complex systems that represent actual robots. The third is what he calls a “buffet of well-tested solvers” that resolve numerical optimizations at the core of Amazon’s models, sometimes as often as every time step of the simulation. The last is a robust contact solver, which calculates the forces that occur when rigid-body items interact with one another in a simulation.
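The systems-framework idea, custom models composed into a diagram that advances everything together each time step, can be sketched minimally in plain Python. This is an illustrative analogue, not Drake's actual API; the pendulum and controller models are made up for the example.

```python
# Minimal sketch of a systems framework in the spirit described above
# (NOT Drake's actual API): custom models expose a step() method and are
# composed into a diagram that advances them together each time step.

class Pendulum:
    """Custom model: a damped, linearized pendulum, explicit-Euler integrated."""
    def __init__(self, theta=1.0, omega=0.0):
        self.theta, self.omega = theta, omega
    def step(self, dt):
        self.omega += dt * (-9.81 * self.theta - 0.5 * self.omega)
        self.theta += dt * self.omega

class PDController:
    """Custom model: applies torque to drive the pendulum toward theta = 0."""
    def __init__(self, plant):
        self.plant = plant
    def step(self, dt):
        torque = -5.0 * self.plant.theta - 1.0 * self.plant.omega
        self.plant.omega += dt * torque

class Diagram:
    """Composition: advances every subsystem by one time step."""
    def __init__(self, *systems):
        self.systems = systems
    def simulate(self, dt, steps):
        for _ in range(steps):
            for s in self.systems:
                s.step(dt)

plant = Pendulum()
diagram = Diagram(plant, PDController(plant))
diagram.simulate(dt=0.01, steps=2000)
# The composed plant-plus-controller system settles near the origin.
```

The value of the framework is that the plant, the controller, and any sensor models stay separate, well-tested pieces that can be recomposed to represent different robots.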
Neuro-symbolic AI could provide machines with common sense
Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems.
“We’re trying to bring together the power of symbolic languages for knowledge representation and reasoning as well as neural networks and the things that they’re good at, but also with the idea of probabilistic inference, especially Bayesian inference or inverse inference in a causal model for reasoning backwards from the things we can observe to the things we want to infer, like the underlying physics of the world, or the mental states of agents,” Tenenbaum says.
There are several attempts to use pure deep learning for object position and pose detection, but their accuracy is low. In a joint project, MIT and IBM created “3D Scene Perception via Probabilistic Programming” (3DP3), a system that resolves many of the errors that pure deep learning systems fall into.
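The "reasoning backwards from observations" idea Tenenbaum describes can be illustrated with a tiny Bayesian inverse-inference example. This is not 3DP3's code; it is a one-dimensional toy with a made-up sensor model, showing how a posterior over object positions is computed from a noisy observation.

```python
# Toy example of probabilistic inverse inference (not 3DP3 itself):
# infer an object's 1-D position from a noisy depth observation by
# scoring each candidate position under a Gaussian likelihood.

import math

positions = [i * 0.1 for i in range(21)]          # candidate poses: 0.0 .. 2.0
prior = [1.0 / len(positions)] * len(positions)   # uniform prior

def likelihood(obs, pos, sigma=0.15):
    """Probability (up to a constant) of the observation given the pose."""
    return math.exp(-((obs - pos) ** 2) / (2 * sigma ** 2))

observation = 1.23  # noisy sensor reading

# Bayes' rule: posterior ∝ prior × likelihood, then normalize.
posterior = [p * likelihood(observation, x) for p, x in zip(prior, positions)]
total = sum(posterior)
posterior = [p / total for p in posterior]

best = positions[posterior.index(max(posterior))]
# The posterior peaks at the candidate nearest the observation (1.2 here).
```

Unlike a pure deep-learning pose estimator, this style of system also carries uncertainty: the full posterior says how confident the estimate is, not just which pose scored highest.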
Real-world robotic-manipulation system
So the next phase of the project was to teach the robot to use video feedback to adjust trajectories on the fly. Until now, Tedrake’s team had been using machine learning only for the robot’s perceptual system; they’d designed the control algorithms using traditional control-theoretical optimization. But now they switched to machine learning for controller design, too.
To train the controller model, Tedrake’s group used data from demonstrations in which one of the lab members teleoperated the robotic arm while other members knocked the target object around, so that its position and orientation changed. During training, the model took as input sensor data from the demonstrations and tried to predict the teleoperator’s control signals.
This requires a combination of machine learning and the more traditional, control-theoretical analysis that Tedrake’s group has specialized in. From data, the machine learning model learns vector representations of both the input and the control signal, but hand-tooled algorithms constrain the representation space to optimize the control signal selection. “It’s basically turning it back into a planning and control problem, but in the feature space that was learned,” Tedrake explains.
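Training a model to predict the teleoperator's control signals from sensor data, as described above, is behavior cloning. The following is a least-squares sketch on synthetic data, assuming a linear demonstrator policy for simplicity; it is not the group's actual learned representation or pipeline.

```python
# Behavior-cloning sketch: fit a linear map from sensor observations to the
# teleoperator's control signals recorded during demonstrations.
# Synthetic data stands in for the real demonstrations.

import numpy as np

rng = np.random.default_rng(0)

# Pretend the teleoperator's (unknown) policy is u = W_true @ x plus noise.
W_true = np.array([[0.8, -0.2],
                   [0.1,  0.5]])
X = rng.normal(size=(500, 2))                          # sensor observations
U = X @ W_true.T + 0.01 * rng.normal(size=(500, 2))    # demonstrated controls

# Least-squares fit: the learned controller predicts controls from sensors.
B, *_ = np.linalg.lstsq(X, U, rcond=None)
W_hat = B.T

x_new = np.array([0.3, -0.7])
u_pred = W_hat @ x_new  # control signal the cloned policy would issue
```

In the real system the map is nonlinear and operates on learned feature vectors rather than raw sensors, which is exactly where the hand-tooled constraints on the representation space come in.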
Toward smart production: Machine intelligence in business operations
Our research looked at five different ways that companies are using data and analytics to improve the speed, agility, and performance of operational decision making. This evolution of digital maturity begins with simple tools, such as dashboards to aid human decision making, and ends with true MI, machines that can adjust their own performance autonomously based on historical and real-time data.
Machine Learning Improves Fusion Modeling
If researchers hope to control fusion for energy production, they need a better understanding of the turbulent motion of ions and electrons in the plasmas moving through fusion reactors. In toroidal devices known as tokamaks, magnetic field lines guide the plasma particles; the intent is to confine them long enough to produce significant net energy gains, a challenge given the extraordinarily high temperatures and the small confinement volumes involved.
In a couple of recent publications, MIT researchers have begun to directly test the accuracy of this reduced model by combining physics with machine learning. According to MIT’s researchers, the model examines the dynamic relationship of physical variables such as density, electric potential, and temperature and, at the same time, quantities such as the turbulent electric field and electron pressure. The researchers discovered that the turbulent electric fields associated with pressure fluctuations predicted by the reduced fluid model are compatible with high-fidelity gyrokinetic predictions in plasmas relevant to existing fusion devices.
Tiny machine learning design alleviates a bottleneck in memory usage on internet-of-things devices
Researchers are working to reduce the size and complexity of the devices that these algorithms can run on, all the way down to a microcontroller unit (MCU) of the kind found in billions of internet-of-things (IoT) devices. An MCU is a memory-limited minicomputer housed in a compact integrated circuit that lacks an operating system and runs simple commands. These relatively cheap edge devices require little power, computation, and bandwidth, and offer many opportunities to inject AI technology to expand their utility, increase privacy, and democratize their use — a field called TinyML.
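One standard TinyML technique for fitting models into MCU memory is 8-bit quantization, storing each weight in one byte instead of four. The sketch below is illustrative and framework-free, not the specific memory optimization this research describes.

```python
# Minimal sketch of 8-bit weight quantization, a common TinyML technique for
# shrinking a model to fit MCU memory (illustrative, not a specific framework).

def quantize(weights):
    """Map float weights to int8 values plus one scale factor.

    Storage drops from 4 bytes per float32 weight to 1 byte per int8."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights for use at inference time."""
    return [q * scale for q in q_weights]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# The round trip costs at most half a quantization step of error per weight.
```

The trade is precision for memory: on a microcontroller with tens of kilobytes of SRAM, that fourfold reduction is often the difference between a model fitting on-device or not.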
Teaching Robots Dexterous Hand Manipulation
Roboat III: A Robotic Boat Transportation System
Machine-learning system accelerates discovery of new materials for 3D printing
The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses.
A material developer selects a few ingredients, inputs details on their chemical compositions into the algorithm, and defines the mechanical properties the new material should have. Then the algorithm increases and decreases the amounts of those components (like turning knobs on an amplifier) and checks how each formula affects the material’s properties, before arriving at the ideal combination.
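The knob-turning loop described above can be sketched as a simple coordinate-descent search over component amounts. The property model below is a made-up linear stand-in for real measurements, and the whole example is illustrative rather than the researchers' actual optimization algorithm.

```python
# Illustrative knob-turning search over ingredient amounts (coordinate
# descent). The property model is a hypothetical stand-in for real data.

def property_error(amounts, target=5.0):
    """Hypothetical model: predicted stiffness vs. the target value."""
    stiffness = 2.0 * amounts[0] + 1.5 * amounts[1] + 0.5 * amounts[2]
    return abs(stiffness - target)

amounts = [1.0, 1.0, 1.0]   # initial formula: one unit of each ingredient
step = 0.1
for _ in range(200):
    for i in range(len(amounts)):
        base = property_error(amounts)
        for delta in (step, -step):   # turn each knob up, then down
            trial = amounts[:]
            trial[i] = max(0.0, trial[i] + delta)  # amounts stay non-negative
            if property_error(trial) < base:
                amounts = trial
                break

# The search settles on a formula whose predicted property matches the target.
```

Real materials optimization replaces the toy model with experiments or learned surrogates and uses smarter search than fixed-step descent, but the loop structure, propose a formula, evaluate its properties, adjust, is the same.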
The researchers have created a free, open-source materials optimization platform called AutoOED that incorporates the same optimization algorithm. AutoOED is a full software package that also allows researchers to conduct their own optimization.
Using blockchain technology to protect robots
The use of blockchain technology as a communication tool for a team of robots could provide security and safeguard against deception, according to a study by researchers at MIT and Polytechnic University of Madrid. The research may also have applications in cities where multi-robot systems of self-driving cars are delivering goods and moving people across town.
A blockchain offers a tamper-proof record of all transactions — in this case, the messages issued by robot team leaders — so follower robots can eventually identify inconsistencies in the information trail. Leaders use tokens to signal movements and add transactions to the chain, and forfeit their tokens when they are caught in a lie, so this transaction-based communications system limits the number of lies a hacked robot could spread, according to Eduardo Castelló, a Marie Curie Fellow in the MIT Media Lab and lead author of the paper.
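The token-forfeiture rule described above can be sketched as a toy append-only ledger. This is a deliberately simplified illustration, not the researchers' actual protocol: leaders need tokens to transact, each block is hash-chained to the previous one for tamper evidence, and a leader caught lying forfeits a token.

```python
# Toy version of the token rule described above (not the actual system):
# leaders need tokens to append messages to a hash-chained ledger, and
# forfeit tokens when followers catch a lie, capping total misinformation.

import hashlib

class Ledger:
    def __init__(self):
        self.chain = []   # list of (message, hash) blocks
        self.tokens = {}  # leader -> remaining tokens

    def register(self, leader, tokens=3):
        self.tokens[leader] = tokens

    def append(self, leader, message):
        if self.tokens[leader] <= 0:
            return False  # no tokens left: leader can no longer transact
        prev = self.chain[-1][1] if self.chain else ""
        digest = hashlib.sha256((prev + leader + message).encode()).hexdigest()
        self.chain.append((message, digest))  # hash links block to its parent
        return True

    def report_lie(self, leader):
        """Followers caught an inconsistency: the leader forfeits a token."""
        self.tokens[leader] = max(0, self.tokens[leader] - 1)

ledger = Ledger()
ledger.register("leader_a", tokens=2)
ledger.append("leader_a", "move north")
ledger.report_lie("leader_a")                       # caught lying once
ledger.report_lie("leader_a")                       # caught again: tokens gone
accepted = ledger.append("leader_a", "move south")  # rejected: no tokens left
```

Because every message costs a token once the leader is caught, a hacked robot's influence is bounded by its token balance rather than by how long it goes undetected.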
Giving robots better moves
At the core of the RightHand Robotics solution is the idea of using machine vision and intelligent grippers to make piece-picking robots more adaptable. The combination also limits the amount of training needed to run the robots, equipping each machine with what the company equates to hand-eye coordination.
RightHand Robotics also utilizes an end-of-arm tool that combines suction with novel underactuated fingers, which Odhner says gives the robots more flexibility than robots relying solely on suction cups or simple pinching grippers. “Sometimes it actually helps you to have passive degrees of freedom in your hand, passive motions that it can make and can’t actively control,” Odhner says of the robots. “Very often those simplify the control task. They take problems from being heavily over-constrained and make them tractable to run through a motion planning algorithm.”
The data the robots collect are also used to improve reliability over time and shed light on warehouse operations for customers.
Classify This Robot-Woven Sneaker With 3D-Printed Soles as 'Footware'
For athletes trying to run fast, the proper shoe can be essential to achieving peak performance. For athletes trying to run as fast as humanly possible, a runner’s shoe can also become a work of individually customized engineering.
This is why Adidas has married 3D printing with robotic automation in a mass-market footwear project it calls Futurecraft.Strung, expected to be available for purchase as soon as later this year. Using a customized, 3D-printed sole, a Futurecraft.Strung manufacturing robot can place some 2,000 threads from up to 10 different sneaker yarns in one upper section of the shoe.
Using tactile-based reinforcement learning for insertion tasks
MERL and MIT researchers submitted a paper entitled “Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry” to the IEEE International Conference on Robotics and Automation (ICRA). In it, reinforcement learning enabled a robot arm, equipped with a parallel-jaw gripper carrying tactile sensing arrays on both fingers, to insert differently shaped novel objects into corresponding holes with an overall average success rate of 85 percent within three to four tries.