Hear us out: Human-robot relationships are totally a good thing

Make no mistake about it: here in 2017, relationships between humans and robots are a very real thing. Given the amount of time we spend with our devices, that’s no great surprise, either.

While the most attention-grabbing examples may be the plethora of stories about sex robots, there are plenty of other ways we're interfacing with machines that are entirely different from any other point in history. Far from being some weird tech niche, developing relationships with our robots is going to be essential not only to us leading longer, happier lives, but also to building smarter machines that can get the best out of both us and them.

Fortunately, it’s a challenge that robotics and AI researchers are already busy solving.

The lovability of robots

For starters, robots are made to be loved. That might sound trite, but there's something fundamentally lovable about robots, and about computer software that attempts to approximate human intelligence, which is what much of AI aims to do.

If you saw the footage of Boston Dynamics’ Spot robot being kicked to test its balancing capabilities, there’s a good chance you felt a degree of sympathy for it. That’s because we tend to anthropomorphize and project emotions onto the more lifelike machines around us — in a way we’d never do with a regular desktop computer.

This effect was widely observed in the 1990s, when Tamagotchis and Furbies became the "must-have" toys of the season. In one notable anecdote, an airplane passenger disembarked vowing never to fly that airline again, because a flight attendant had told her to turn off her Tamagotchi for takeoff, which she knew would reset the device and thus "kill" her creature.

The effect isn’t just found in people who get strangely attached to toys, either. “U.S. soldiers have been known to form close bonds with their bomb-disposal robots, even though these robots are not particularly human-like in their features,” John Danaher, co-editor of MIT Press’ new book Robot Sex: Social and Ethical Implications, told Digital Trends.

New ways to build robots

This opens up entirely new opportunities when it comes to deploying machines: applications that were unavailable at any other point in history. One example of how the emotional component of a robot relationship can be harnessed is caregiving.

While robots are never going to be an adequate replacement for emotional human relationships or contact, there are scenarios where they can have an invaluable role to play. For example, with a limited number of caregivers available, and a growing elderly population, robots or chatbots with natural language processing (NLP) abilities could prove to be excellent companions for older people.

They can remind people to take their medication, help those with degenerative neurological disorders such as Alzheimer's by playing memory games, and even provide reassuring comfort.

The most famous therapeutic robot, more a pet than an approximation of human company, is Paro the robotic seal. Designed with the elderly market in mind, Paro can make eye contact by sensing the direction of a user's voice, has a limited vocabulary for "understanding" people, and is able to fine-tune its behavior depending on how it is treated. Stroke it softly or handle it roughly and its behavior will change to mirror the user's. This comforts users by appearing to empathize with them.
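
In spirit, that mirroring loop can be sketched in a few lines of code. The thresholds and behaviors below are entirely hypothetical, not Paro's actual firmware; the point is only to show how a single touch-pressure reading can be mapped to a matching response:

```python
# Toy sketch of Paro-style behavior mirroring. The pressure thresholds and
# behavior labels are invented for illustration, not Paro's real design.
def respond_to_touch(pressure: float) -> str:
    """Map a normalized touch-pressure reading (0.0-1.0) to a behavior."""
    if pressure < 0.3:
        return "purr softly and lean in"   # gentle stroke -> affectionate
    elif pressure < 0.7:
        return "blink and wag tail"        # firm stroke -> playful
    return "squeal and turn away"          # rough handling -> avoidant

print(respond_to_touch(0.2))  # -> purr softly and lean in
```

A real companion robot would fuse many sensors over time rather than reacting to one reading, but the design principle is the same: the machine's outward behavior tracks how it is treated.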

Intuition Robotics' ElliQ, meanwhile, is another robot designed with the elderly market in mind. A bit like a cross between an Amazon Echo and a robotic version of Pixar's animated lamp mascot, ElliQ uses AI and machine learning to help its users maintain physical and mental health, as well as offering companionship to older people.

“Given the general suspicions held by older members of the population surrounding AI, ElliQ is proactive rather than reactive,” Joe Lobo, a robot expert at AI and natural language processing company Inbenta, told Digital Trends. “It will learn the preferences and personality of the person to recommend relevant activities such as going for a walk, playing games or even calling members of the family. It will also remember important daily routines such as when to take medication or when there are upcoming hospital appointments.”

Helping teach social skills

In addition to helping the elderly, robots could prove invaluable communication tools for people with autism. It's easy to be cynical about the reductive relationship a robot or AI can have with a person, but sometimes that is exactly what is needed.

In 2014, three years after Apple's AI assistant Siri debuted on the iPhone 4S, the New York Times published a story titled "To Siri, With Love," describing how journalist Judith Newman's 13-year-old autistic son had forged a relationship with Siri that helped him develop his communication skills in the real world. Siri, Newman writes, is "wonderful for someone who doesn't pick up on social cues: [the] responses are not entirely predictable, but they are predictably kind." Is an AI that answers everything in a "predictably kind" way a realistic portrayal of human relationships? No way. Is it better in this case? You bet!

A few years on, a number of companies are now focused on building AI tools that help people with autism develop real-world communication skills. One company leading this charge is the Boston-based emotion-tracking company Affectiva. Unlike Siri, which recognizes only words, tools developed by Affectiva can also incorporate other signals, sometimes to cute effect, such as a video display that dispenses free chocolate samples when you smile at the screen.

Recently, Affectiva's technology was used in a smart-glasses "life coach" that gamifies the social experience by helping identify people's emotions and prompting conversations with autistic users about what those emotions might mean.

As chatbots and AI systems become smarter and more deeply embedded in our lives, the number of use cases will explode. By forming bonds with these robots and AI systems, we can carry out tasks more effectively while getting the same boost we get from speaking to a like-minded person. Just as Paro mirrors its users, AI assistants could learn our speech patterns, notice which ones prompt the best responses, and adopt them.

With AI increasingly used as a teaching aid, it may even be possible to introduce an AI assistant early in a person's life and have it grow and evolve with them, transferring its personality from machine to machine much as Siri is available on the iPhone, Mac, or Apple TV. The possibilities are endless.

The importance of reciprocity

So here's one more billion-dollar question: do robots need to be able to reciprocate in order for us to consider the relationships real? It's a difficult question. If we found out that our partner was a Truman Show-style actor, paid to act as though they were in love with us, it would be deeply traumatizing. The same would be true of our friends.

But does it have to be true with robots? And does it matter?

It’s a question that has been examined by people working in artificial intelligence for decades — and not just within the confines of science fiction stories. In the 1960s, researchers at MIT developed a computer psychotherapist named ELIZA, which was designed to carry out seemingly intelligent text-based conversations with users. By echoing fragments of a user’s language in a way that either seemed to support or question their statements, ELIZA was able to act like, well, a real psychotherapist.
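
ELIZA's trick can be reproduced in a few lines. The sketch below is illustrative, not Weizenbaum's original DOCTOR script: it matches a handful of keyword patterns, reflects the user's pronouns back at them, and falls back to open-ended therapist prompts when nothing matches:

```python
import random
import re

# Minimal ELIZA-style responder. These rules are invented for illustration;
# Weizenbaum's original script was far more elaborate.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment):
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

There is no understanding anywhere in that loop, only pattern matching and substitution, which is exactly what made the program's effect on its users so striking.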

So far, so good. But Joseph Weizenbaum, the researcher behind ELIZA, had an experience that’s not usually shared by people developing new technology: ELIZA worked too well. Although it had no actual “understanding” of what users were discussing, Weizenbaum was disturbed by the fact that it prompted people to reveal intimate details of their lives, including relationship difficulties or complex life issues. For Weizenbaum, there was something ethically troubling about it.

However, not everyone thinks this way. The Turing Test posits that, if a machine acts in an intelligent way, we should attribute intelligence to it. Is the same thing true with a robot relationship?


“I’m not entirely sure what reciprocity or mutuality really is and whether it is off-limits for machines,” said John Danaher. “I’m guessing people think of it as some inner mental state. But that’s problematic when it comes to human relationships. We never really know what another human being thinks of us. We only have their behavior to go on. If they consistently act as if they love us, we think there is mutual affection. I don’t see why robots couldn’t be behaviorally indistinguishable from human partners. It would require some pretty sophisticated technology – far more sophisticated than we currently have – but I don’t think it is impossible.”

On a technical level, it would be fascinating to see whether we could ever develop a machine that cares for us the way we might care for it. But that isn't necessary for a relationship to be meaningful and enrich our lives.

Even if it’s never going to be quite the same thing as a human relationship.

Published at Sat, 25 Nov 2017 11:15:15 +0000
