Robots Learn to Use Kitchen Tools by Watching YouTube Videos

Imagine having a personal robot prepare your breakfast every morning. Now, imagine that this robot didn't need any help figuring out how to make the perfect omelet, because it learned all the necessary steps by watching videos on YouTube. It might sound like science fiction, but a team at the University of Maryland has just made a significant breakthrough that will bring this scenario one step closer to reality.

Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) partnered with a scientist at the National Information Communications Technology Research Centre of Excellence in Australia (NICTA) to develop robotic systems that are able to teach themselves. Specifically, these robots are able to learn the intricate grasping and manipulation movements required for cooking by watching online cooking videos. The key breakthrough is that the robots can "think" for themselves, determining the best combination of observed motions that will allow them to efficiently accomplish a given task.

The work will be presented on Jan. 29, 2015, at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas. The researchers achieved this milestone by combining approaches from three distinct research areas: artificial intelligence, or the design of computers that can make their own decisions; computer vision, or the engineering of systems that can accurately identify shapes and movements; and natural language processing, or the development of robust systems that can understand spoken commands. Although the underlying work is complex, the team wanted the results to reflect something practical and relatable to people's daily lives.

"We chose cooking videos because everyone has done it and understands it," said Yiannis Aloimonos, UMD professor of computer science and director of the Computer Vision Lab, one of 16 labs and centers in UMIACS. "But cooking is complex in terms of manipulation, the steps involved and the tools you use. If you want to cut a cucumber, for example, you need to grab the knife, move it into place, make the cut and observe the results to make sure you did them properly."

One key challenge was devising a way for the robots to parse individual steps appropriately, while gathering information from videos that varied in quality and consistency. The robots needed to be able to recognize each distinct step, assign it to a "rule" that dictates a certain behavior, and then string together these behaviors in the proper order.
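The recognize-assign-sequence pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the step labels, the behavior "rules," and the function names are invented for this sketch and are not the researchers' actual system.

```python
# Illustrative sketch of the parse-and-sequence idea: each recognized
# video step maps to a behavior "rule," and the rules are strung
# together in order. All labels here are hypothetical.
RULES = {
    "grasp_knife": ["open_gripper", "move_to(knife)", "close_gripper"],
    "position_knife": ["move_above(cucumber)", "align_blade"],
    "cut": ["lower_blade", "raise_blade"],
    "check_result": ["look_at(cucumber)"],
}

def plan_from_steps(observed_steps):
    """String together the behaviors for each recognized step, in order."""
    plan = []
    for step in observed_steps:
        if step not in RULES:
            raise ValueError(f"unrecognized step: {step}")
        plan.extend(RULES[step])
    return plan

# A cooking video might yield this sequence of recognized steps:
steps = ["grasp_knife", "position_knife", "cut", "check_result"]
print(plan_from_steps(steps))
```

The hard part in practice is the recognition itself: real videos vary in quality and consistency, so mapping raw footage to discrete step labels is where the computer-vision work comes in.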

"We are trying to create a technology so that robots eventually can interact with humans," said Cornelia Fermüller, an associate research scientist at UMIACS. "So they need to understand what humans are doing. For that, we need tools so that the robots can pick up a human's actions and track them in real time. We are interested in understanding all of these components. How is an action performed by humans? How is it perceived by humans? What are the cognitive processes behind it?"

Aloimonos and Fermüller compare these individual actions to words in a sentence. Once a robot has learned a "vocabulary" of actions, it can then string them together in a way that achieves a given goal. In fact, this is precisely what distinguishes their work from previous efforts.

"Others have tried to copy the movements. Instead, we try to copy the goals. This is the breakthrough," Aloimonos explained. This approach allows the robots to decide for themselves how best to combine various actions, rather than reproducing a predetermined series of actions.
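The distinction between copying movements and copying goals can be illustrated with a toy planner: rather than replaying one fixed sequence, the robot searches its action "vocabulary" for any combination that reaches the goal state. The states and actions below are invented for illustration and do not reflect the team's actual representation.

```python
from collections import deque

# Toy goal-directed planner. Each "word" in the action vocabulary is a
# (precondition state -> resulting state) transition. All names invented.
ACTIONS = {
    "grab_knife": ("hands_free", "holding_knife"),
    "slice":      ("holding_knife", "cucumber_sliced"),
    "put_down":   ("holding_knife", "hands_free"),
}

def find_plan(start, goal):
    """Breadth-first search for any action sequence from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for name, (pre, post) in ACTIONS.items():
            if state == pre and post not in seen:
                seen.add(post)
                queue.append((post, plan + [name]))
    return None  # goal unreachable with the known vocabulary

print(find_plan("hands_free", "cucumber_sliced"))
# -> ['grab_knife', 'slice']
```

The point of the sketch is that the sequence is *derived* from the goal, so the same vocabulary can serve many different tasks, which is what a hand-programmed, fixed motion script cannot do.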

The work also relies on a specialized software architecture known as deep-learning neural networks. While this approach is not new, it requires lots of processing power to work well, and it took a while for computing technology to catch up. Similar versions of neural networks are responsible for the voice recognition capabilities in smartphones and the facial recognition software used by Facebook and other websites.

While robots have been used to carry out complicated tasks for decades--think automobile assembly lines--these must be carefully programmed and calibrated by human technicians. Self-learning robots could gather the necessary information by watching others, which is the same way humans learn. Aloimonos and Fermüller envision a future in which robots tend to the mundane chores of daily life while humans are freed to pursue more stimulating tasks.

"By having flexible robots, we're contributing to the next phase of automation. This will be the next industrial revolution," said Aloimonos. "We will have smart manufacturing environments and completely automated warehouses. It would be great to use autonomous robots for dangerous work--to defuse bombs and clean up nuclear disasters such as the Fukushima event. We have demonstrated that it is possible for humanoid robots to do our human jobs."

In addition to Aloimonos and Fermüller, study authors included Yezhou Yang, a UMD computer science doctoral student, and Yi Li, a former doctoral student of Aloimonos and Fermüller from NICTA.

This research was supported by the European Union (project POETICON++), the National Science Foundation (Award No. SMA 1248056) and the U.S. Army (Award No. W911NF-14-1-0384 - MSEE DARPA). The content of this article does not necessarily reflect the views of these organizations.

The study, "Robot Learning Manipulation Action Plans by 'Watching' Unconstrained Videos from the World Wide Web," Yezhou Yang, Yi Li, Cornelia Fermüller and Yiannis Aloimonos, will be presented on Jan. 29, 2015, at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas.
