Toyota is now training AI-powered breakfast robots in a "kindergarten for robots." These robots can't whip up a frittata just yet, but one thing they can already do is whip some eggs.
Toyota AI-Trained Breakfast Robots
The Toyota Research Institute (TRI) used generative AI in a "kindergarten for robots" to teach robots how to make breakfast, or at least the individual tasks needed to do so. And it didn't take hundreds of hours of coding, error hunting, and bug fixing. Instead, researchers accomplished this by giving the robots a sense of touch, plugging them into an AI model, and then showing them how, just as you would a human being.
The sense of touch is "one key enabler," the researchers say. By giving the robots the big, pillowy thumb (my term, not theirs) that you can see in the video TRI shared, the model can "feel" what it's doing, giving it more information. That makes difficult tasks much easier to carry out than with sight alone.
What the Lab’s Manager of Dexterous Manipulation Says About the Development
Ben Burchfiel, the lab’s manager of dexterous manipulation, says it’s “exciting to see them engaging with their environments.” First, a “teacher” demonstrates a set of skills; then, “over a matter of hours,” the model learns in the background. “It’s common for us to teach a robot in the afternoon, let it learn overnight, and then come in the next morning to a working new behavior,” he adds.
The Researchers Are Attempting To Create Large Behavior Models
The researchers say they are attempting to create “Large Behavior Models,” or LBMs (yes, I also want this term to stand for Large Breakfast Models), for robots. Much as LLMs are trained by noting patterns in human writing, Toyota’s LBMs would learn by observation and then “generalize, performing a new skill that they’ve never been taught,” said Russ Tedrake, an MIT robotics professor and VP of robotics research at TRI.
Using this process, the researchers say they have trained more than 60 challenging skills, such as “pouring liquids, using tools, and manipulating deformable objects.” They aim to raise that number to 1,000 by the end of 2024.
Other Tech Companies Making Similar Moves
Google has been doing similar research for some time with its Robotic Transformer, RT-2, as has Tesla. And much like Toyota’s researchers, their teams have robots use the experience they’ve been given to infer how to do things. In theory, AI-trained robots could eventually carry out tasks with little to no instruction beyond the kind of general direction you would give a human being (“clean that spill,” for instance).
But Google’s robots, at least, have a long way to go, as The New York Times noted in its coverage of the search giant’s research. The Times writes that this sort of work is usually “slow and labor-intensive,” and that providing enough training data is much harder than feeding an AI model gobs of data downloaded from the internet, as the article demonstrates when describing a robot that identified the color of a banana as white.