Be Tech Ready!!

AI can mimic human social skills in real time. Should we be scared?

Human smarts mostly rely on picking up information from other people, accumulated over generations as our culture evolves. This social learning, called cultural transmission in the research literature, lets us copy actions and behaviors on the spot. Now, can AI pick up social learning skills in a similar fashion?

Copying how humans do things has been a go-to method in teaching AI for a while. The idea is to make algorithms watch people perform a task and then have them give it a shot. However, AI usually requires loads of examples and exposure to tons of data to really nail down the imitation.

A groundbreaking study from DeepMind researchers suggests that AI agents can show off their social learning chops in real-time. They can imitate humans in new situations “without using any pre-collected human data,” according to the findings. Before we dive into the details, let’s understand imitation learning.

Imitation Learning explained

Imitation Learning, or Learning from Demonstration (LfD), is a machine learning technique where the goal is for the learning agent to copy human behavior. Unlike traditional machine learning, where an agent learns through trial and error in an environment with a reward function, imitation learning involves the agent learning from a dataset of demonstrations provided by an expert, usually a human. The aim is to reproduce the expert’s actions in similar, if not identical, situations.

Imitation learning is about watching an expert do something and picking up how to copy those moves. Usually, it goes through three main steps:

Gathering Data: The expert shows how to do the task, like maneuvering a robot arm or driving a car. Everything the expert does, all the actions and decisions, gets recorded as data.

Training: After collecting the data, it’s time to teach a machine learning model. The model learns a policy, which is basically a way to map what it sees in the environment to the actions it takes, all in an effort to mimic the expert’s moves.

Testing: The trained model gets put to the test in the real world to see how good it is at doing the task, comparing it to the expert. The aim is to make the agent’s performance as close as possible to the expert’s.
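The three steps above can be sketched in a few lines of code. This is a minimal, hypothetical toy example (not any production system): the "expert" steers against a heading error, we record its demonstrations, fit a simple linear policy by least squares, and then test the learned policy against the expert on fresh inputs.

```python
import numpy as np

# Hypothetical toy task: steer toward a target heading.
rng = np.random.default_rng(0)

# 1. Gathering data: record the expert's (observation, action) pairs.
observations = rng.uniform(-1.0, 1.0, size=(200, 1))
expert_actions = -0.8 * observations[:, 0]  # the expert steers against the error

# 2. Training: learn a policy mapping observations to actions.
# Here the "policy" is just a linear model fit by least squares.
X = np.hstack([observations, np.ones((200, 1))])  # add a bias column
weights, *_ = np.linalg.lstsq(X, expert_actions, rcond=None)

def policy(obs):
    return weights[0] * obs + weights[1]

# 3. Testing: compare the learned policy to the expert on unseen inputs.
test_obs = np.array([-0.5, 0.0, 0.5])
print(policy(test_obs))   # matches the expert's [0.4, 0.0, -0.4]
print(-0.8 * test_obs)    # the expert's actions, for comparison
```

Real imitation-learning systems replace the linear model with a neural network and the one-dimensional observation with camera images or sensor readings, but the gather-train-test loop is the same.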


What are the applications of imitation learning?

Imitation learning finds use in many areas, especially in situations where it’s tough to nail down a reward system or where human know-how is crucial:

Self-driving vehicles: Imitation learning proves valuable in instructing self-driving cars, allowing them to glean insights from human driving behavior. This assists them in mastering intricate maneuvers and comprehending real-world driving dynamics.

Robotics: When it comes to instructing robots in tasks that are straightforward for humans but challenging to articulate in code, imitation learning is the preferred approach. Tasks such as cooking or folding clothes fall into this category.

Gaming: In both video games and board games, imitation learning is a common technique for training AI agents to play at a level comparable to humans. Their learning process involves observing and emulating skilled players in action.

Healthcare: Imitation learning proves beneficial in surgical robotics, allowing robots to learn from skilled surgeons and assist in executing complex operations.

DeepMind researchers used reinforcement learning

DeepMind researchers focused on a form of cultural transmission known as observational learning or (few-shot) imitation. This entails replicating body movements. DeepMind carried out its experiment in GoalCycle3D, a simulated environment with uneven terrain, footpaths, and obstacles, where the AI agents navigated through the digital landscape.

To aid the AI in its learning process, the researchers utilized reinforcement learning. For those less acquainted with the field, this method (closer to Skinner's operant conditioning than Pavlov's reflexes) entails rewarding behaviors that contribute to achieving the desired outcome: in this case, finding the correct path. In the subsequent phase, the team introduced expert agents, whether hard-coded or human-controlled, who were already adept at navigating the simulation. The AI agents quickly grasped that the most efficient way to reach their destination was by learning from these seasoned experts.
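To see what "rewarding behaviors that reach the goal" means concretely, here is a minimal reinforcement-learning sketch. It is not DeepMind's GoalCycle3D setup, just textbook Q-learning on a hypothetical five-cell corridor where the agent is rewarded only for reaching the goal cell, and learns by trial and error to always move right.

```python
import numpy as np

# Toy environment: a 5-cell corridor. Start at cell 0, goal at cell 4.
rng = np.random.default_rng(1)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != 4:
        # Epsilon-greedy: usually exploit what we know, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0  # reward the desired outcome
        # Standard Q-learning update: nudge the estimate toward
        # reward plus the discounted value of the best next action.
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action]
        )
        state = next_state

# After training, the learned policy prefers "right" in every non-goal cell.
print(np.argmax(Q[:4], axis=1))  # [1 1 1 1]
```

DeepMind's twist was that, inside this reward-driven loop, the fastest way for an agent to earn reward was to copy a nearby expert, so imitation emerged from reinforcement learning rather than from a pre-collected dataset.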

The researchers noticed two key things. First off, they saw that the AI not only picked up things quicker when copying the experts but also put that knowledge to use on different virtual routes. Secondly, DeepMind found out that the AI agents could still use their newfound skills even when the experts weren’t around, which the study’s authors see as a case of social learning.

The authors acknowledge the need for more research, but they’re optimistic that their approach could open the door “for cultural evolution to play an algorithmic role in the development of artificial general intelligence.” They’re also excited about fostering more collaboration between the realms of AI and cultural evolutionary psychology.

Even though it’s in the early stages, DeepMind’s discovery could shake things up in the AI industry. This breakthrough might cut down the usual resource-heavy training for algorithms while boosting their problem-solving skills. Plus, it sparks the question of whether AI could eventually grasp the social and cultural aspects of human thinking.


Limitations of Imitation Learning

Even though imitation learning looks promising, it’s not without its share of challenges:

Data Quality: How good the learned policy turns out depends a lot on how good the demonstrations are. If the demonstrations are lousy, the behaviors the AI picks up might not work well or could even be wrong.

Distribution Shift: The AI might come across situations that weren’t part of the training demos, causing it to behave unpredictably. This is what’s called the distribution shift problem.

Scalability: Getting demos from experts can be a real hassle, not to mention expensive and time-consuming, especially for tricky tasks. This makes it tough to scale up the process.

Generalization: Making the AI generalize the learned behavior to handle new situations is a big challenge, especially in environments that are dynamic and unpredictable.
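The distribution shift and generalization problems above can be made concrete with a hypothetical toy sketch: a policy cloned from demonstrations covering only a narrow range of situations fits that range well, yet fails badly the moment it is asked about a situation far outside it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Expert demonstrations only cover small inputs in [-0.3, 0.3].
obs = rng.uniform(-0.3, 0.3, size=200)
expert = np.sin(obs)  # the expert's true (nonlinear) behavior

# Behavioral cloning with a linear policy fits the demo region well:
# least-squares slope through the origin.
slope = (obs * expert).sum() / (obs * obs).sum()

in_dist_error = abs(slope * 0.2 - np.sin(0.2))   # inside the demo range
out_dist_error = abs(slope * 3.0 - np.sin(3.0))  # far outside it
print(in_dist_error < 0.01)    # True: accurate where demonstrations exist
print(out_dist_error > 1.0)    # True: wildly wrong far from the training data
```

This is exactly why self-driving datasets chase rare "edge cases": the cloned policy is only trustworthy where the expert's demonstrations actually went.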

Imitation learning is like a game-changer in machine learning. It lets agents pick up tricky behaviors without relying on clear reward rules. This approach is super versatile, working in various areas. It opens the door to automating tasks that usually need human smarts. As research keeps moving forward, it seems like imitation learning will become even more crucial in shaping smart systems.

Vishal Kawadkar
About the author

With over 8 years of experience in tech journalism, Vishal is someone with an innate passion for exploring and delivering fresh takes. Embracing curiosity and innovation, he strives to provide an informed and unique outlook on the ever-evolving world of technology.