
Here’s what ideal smart robots should be like

When folks come across the term “social engineering,” they often associate it with the alleged shady schemes of the government or a rival political group. Nowadays, there’s a widespread feeling of social turmoil caused by some unseen power, and we’re quick to point fingers and place blame.

In a way, we're battling imaginary foes, but the actual driver of social engineering is right in our hands, embedded in various devices and, soon, in the form of incredibly realistic social robots for our homes. Things are moving quickly. In October 2023, Boston Dynamics, the company behind those cool robots that can out-dance some folks, announced that it had equipped Spot, its super useful dog-like robot, with ChatGPT.

Spot, with its array of skills crafted for the U.S. military, has now become part of the crew of socially interactive robots, connected to the internet and fueled by artificial intelligence. This makes you wonder: how clever do we want our robots to get? Sure, we want them to handle various tasks, but do we really want to swap out the people in our lives for smart robots and AI?


Are social robots the next big thing?

Social robots are the next big tech wave after social media, aiming straight at our social instincts. Imagine voice-activated robots that talk, listen, learn like a child, remember all your stories, and can be tailored precisely to your likes and wishes. Think of Alexa supercharged: now with a body, capable of sensing and reacting to emotions. They'll handle household chores, teach, entertain, and supposedly even shower us with love.

Because Spot, like any gadget loaded with generative AI, has been taught using data from the internet created by humans, he can tap into a vast pool of information, sort it into categories, and offer reasonably sensible commentary on almost any topic.
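To make the plumbing concrete, here's a minimal sketch of how a chat model might be wired into a robot's speech loop. Everything here is an assumption for illustration: the model name, the tour-guide prompt, and the transcription-in, text-out shape of the loop. Boston Dynamics hasn't published Spot's actual integration code.

```python
# Illustrative sketch only: a chat model wired into a robot's speech loop.
# respond() and the tour-guide prompt are assumptions for this example,
# not Boston Dynamics' actual Spot integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system",
            "content": "You are a robot tour guide. Keep answers brief."}]

def respond(user_speech: str) -> str:
    """Send transcribed visitor speech to the model and return its reply."""
    history.append({"role": "user", "content": user_speech})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # in a real robot, this string would feed text-to-speech

print(respond("What can you tell me about this room?"))
```

The point of the sketch is the pipeline, not the model: whatever the internet taught the model, good and bad, flows straight out of the robot's speaker.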

However, being connected to the internet means there’s a solid chance his tiny metal noggin is filled with mistakes, misinformation, and even some sexist and racist biases. Now, Spot is a soldier in the AI arms race to cram generative AI into robots as fast as possible, glitches and all.

Social robots aren't just here to lend a hand. They're designed to make us think they crave our love and to make us genuinely feel loved back. It might sound goofy, but research has shown that people of all ages can form strong attachments to interactive robots. Our brains are easily fooled into treating them as somewhat alive, and we're easily swayed by them, even when they mess up.

These robots are made with a clear goal—to be buddies, educators, babysitters, therapists, and yes, even romantic partners. They analyze our emotions and body language, pretend to have emotions of their own, and pull us in with fake “personalities.” They play on our emotional vulnerabilities by pretending to genuinely care about us.

Robots can be lifesavers for the lonely

For folks feeling lonely and isolated, these robots can be a lifesaver. They can provide entertainment, education, and keep an eye on kids. Additionally, they run specific programs that help individuals on the autism spectrum learn essential social skills.

These robots can offer a kind of cognitive behavioral therapy for folks dealing with everyday mental health issues. They also take care of the elderly and disabled, and they’re like multimedia wizards—handling tasks such as recording, editing, and creating videos from the moments you share with them or from the raw footage you give them.

The high-tech ones come with AI, so their skill set is pretty broad. Being connected to the internet means you can throw almost any question at them and get a response. But here's the flip side: these robots can go rogue. Researchers at the Massachusetts Institute of Technology warn that they can be tainted by harmful web content and perhaps even hacked to make them talk and act in what some experts call "psychopathic" ways.

Giving these robots generative AI, as Spot got, means they'll inherit the same challenges and issues as GAI technology itself. GAI was rolled out with some major problems: trouble with accuracy, hallucinations, and an inability to truly understand human language or to distinguish truth from falsehood.
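One partial mitigation is to screen generated text before the robot acts on it. The sketch below uses OpenAI's standard moderation endpoint; the fallback phrase and the speak-or-stay-silent logic are assumptions made for illustration, not anything Boston Dynamics or other vendors are known to ship.

```python
# Illustrative guardrail: screen model output before the robot speaks it.
# The fallback phrase and the speak-or-stay-silent decision are assumptions
# made for this sketch, not any vendor's documented behavior.
from openai import OpenAI

client = OpenAI()

def safe_to_speak(text: str) -> bool:
    """Return False if the moderation endpoint flags the text as harmful."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not result.results[0].flagged

reply = "Example model output that would be screened before playback."
print(reply if safe_to_speak(reply) else "Sorry, I can't talk about that.")
```

A filter like this catches only the crudest failures. It does nothing about hallucinations or subtle bias, which is why the regulatory gap discussed below matters.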

Lack of ethical guidance for artificial intelligence

Fixing the kinks in generative AI might take years, with ongoing lawsuits and new copyright laws needed to safeguard authors and publishers from unauthorized use of their content for profit. It will also take time for the most effective uses of GAI to become apparent. However, now that Boston Dynamics has incorporated ChatGPT into Spot, it’s likely that others will eagerly jump on board to install it in their robots, capitalizing on the high expectations surrounding it.

The way social robots play on our desire for social connection points to a bigger problem: there's a serious lack of ethical guidelines for AI in general. Currently, AI and robotics companies are essentially operating on an honor system, which is pretty much the same as having no oversight at all.

If there’s a guiding principle for this field, the trendy term is “effective accelerationism.” The idea is that by speeding up the development and release of new AI products, humanity reaps huge benefits. It’s a straightforward concept for those who buy into the heroic narrative of AI solving our toughest issues. However, it raises concerns for those pessimists who foresee the potential downfall of humanity.


Will these technologies be beneficial to society?

Some may hold onto the hope that, in the long run, these technologies will bring significant benefits to society. However, it’s crucial for everyone to pause, take a deep breath, and perhaps slow down a bit. We need to give laws and regulations a chance to catch up with the rapid advancements in science. Algorithms and AI are steadily finding their way into almost every aspect of our lives.

As AI gains more autonomy, it becomes increasingly hard to manage or to fix when things go wrong. Nobody wants their kid's robotic tutor to go haywire because it picked up content from the dark web. When AI goes off track, there has to be accountability and a mechanism for redress.

While we’re always on edge due to the constant influx of conspiracy theories, the real source of our pervasive feeling of malevolent control over our lives might be lurking elsewhere. Being governed by flawed algorithms is sneakier and potentially more hazardous than living under a dictatorship.

Vishal Kawadkar

With over 8 years of experience in tech journalism, Vishal is someone with an innate passion for exploring and delivering fresh takes. Embracing curiosity and innovation, he strives to provide an informed and unique outlook on the ever-evolving world of technology.
