News

The robots are coming! Time for us to understand how they work — and don’t

Ross Sowell tests a TurtleBot’s capabilities in the Robot Room at ScienceWriters2022 in Memphis, Tenn. (Photo by Lila Levinson).

Ross Sowell is being followed.

When he walks back and forth, his shadower walks back and forth. When he stops, it stops. When he turns, it turns. Suddenly, he jumps sideways. His shadower stalls, unsure what to do.

Sowell walks away, having eluded his follower.

A sudden move might not have deterred a human pursuer or a persistent pet. But this confused companion is a robot.

This TurtleBot, which looks like a Roomba vacuum cleaner with a small table on its back, is a commercially available robot designed to be both powerful and easy to use. Sowell, a computer scientist, and his team at Rhodes College use open-source software to program TurtleBots to follow moving targets at a distance of a few meters. The team uses the robots to help lawmakers, policymakers and others better understand the capabilities and limitations of robots. The goal is to bridge the gap between the people who build robots and those who will write the laws that apply to them.
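TurtleBots are built on the open-source Robot Operating System (ROS). As a rough illustration of how a follow behavior like this one can work (a hypothetical sketch, not the Rhodes team's actual code), a ROS node might find the nearest return in each laser scan and steer toward it with simple proportional control, holding an assumed one-meter gap:

```python
#!/usr/bin/env python
# Hypothetical follow-the-nearest-object behavior for a TurtleBot,
# written as a minimal ROS node. Not the Rhodes College team's code.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

FOLLOW_DISTANCE = 1.0  # assumed gap to hold behind the target, in meters
MAX_RANGE = 3.0        # ignore returns beyond a few meters

def on_scan(scan):
    # Find the closest valid reading in the laser sweep.
    best_range, best_angle = MAX_RANGE, 0.0
    for i, r in enumerate(scan.ranges):
        if scan.range_min <= r < best_range:
            best_range = r
            best_angle = scan.angle_min + i * scan.angle_increment
    cmd = Twist()
    if best_range < MAX_RANGE:
        # Proportional control: turn toward the target and close
        # (or open) the gap until it matches FOLLOW_DISTANCE.
        cmd.angular.z = 0.5 * best_angle
        cmd.linear.x = 0.3 * (best_range - FOLLOW_DISTANCE)
    # If nothing is in range (say, the target jumped out of the
    # sensor's sweep), cmd stays all zeros and the robot stops.
    pub.publish(cmd)

rospy.init_node("follower")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rospy.Subscriber("scan", LaserScan, on_scan)
rospy.spin()
```

Note how literal the logic is: the robot chases whatever its laser currently sees as closest, which is why a sudden sideways jump out of the sensor's sweep can leave it stalled.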

“Most people have really high expectations for what they think robots can do,” Sowell said Oct. 24 during the Council for the Advancement of Science Writing’s New Horizons in Science briefings at the ScienceWriters2022 conference in Memphis, Tenn. “Robots … in reality generally have rather narrow skill sets.”

This misunderstanding likely will cause problems once robots and humans start interacting more frequently, which Sowell and other experts predict will happen quite soon. Humans don’t intuitively understand how robots work, so we, and our legal system, might not be prepared when they fail, he said.

Humans generally have good mental models of what other people can and can’t do, based on experience. They can imagine what someone might do upon encountering an unexpected obstacle: walk around it. But can they guess what a robot would do, or which obstacles would be particularly hard for it to notice? What would it be programmed to do if it collided with something? And who should be held liable for any damage it causes?

Sowell and his team develop interactive tools to help lawmakers and non-engineers get a better sense of how robots interact with the world. These tools include tabletop demonstrations and participatory games.

In the conference’s “Robot Room,” science writers tested the limits of the TurtleBot’s tailing abilities. Then, with an augmented reality tablet and a tabletop model of a building, they tested whether the bot’s laser sensor “eyes” could detect obstacles placed in its way.
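That demonstration reflects how narrow a laser sensor's view of the world is: the robot registers an obstacle only if one of its beams actually returns from it. A minimal, hypothetical check, again assuming ROS-style LaserScan messages rather than anything from Sowell's demo, might look like this:

```python
# Hypothetical obstacle check over a ROS-style LaserScan:
# an obstacle "exists" for the robot only if some beam in the
# forward arc returns from within the stopping distance.
import math
from sensor_msgs.msg import LaserScan

STOP_DISTANCE = 0.5  # assumed threshold, in meters

def obstacle_ahead(scan: LaserScan, arc=math.radians(30)) -> bool:
    for i, r in enumerate(scan.ranges):
        angle = scan.angle_min + i * scan.angle_increment
        if abs(angle) <= arc and scan.range_min <= r <= STOP_DISTANCE:
            return True
    return False
```

An obstacle below the scan plane, or one that reflects the beam poorly, never shows up in the readings at all, which is exactly the kind of limitation the Robot Room demos made visible.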

Sowell also asked volunteers to play the role of robot or programmer. One programmer attempted to guide a robot to a chair across a stage. The twist? The programmer couldn’t look at the robot, and the robot could only answer yes-no questions. When the robot was getting near the chair, Sowell quietly moved the target to a different part of the stage. Robot and programmer both struggled to adjust to the new environment.

Learning “how not-human” robots are

“I was really surprised by the limitations of robots and their ability to sort of visualize their environments,” said Robot Room visitor Casey Westlake. “This was a really good demonstration of how not-human they are in a lot of ways, and how they function.”

Understanding what circumstances can cause a robot to fail is a key first step toward developing laws to guide robot-human interactions, Sowell said. Knowing a robot’s limitations makes it easier to understand how it can be expected to fit into human life.

Already, small robot vacuums, like Roombas, are widely used in private homes. Food delivery robots are being used in some cities and on some college campuses. In the near future, package delivery robots sharing the sidewalk could bump into humans, and into some legal questions.

“If my three-year-old is driving their tricycle down the sidewalk and gets hit by a sidewalk delivery robot, who’s at fault?” Sowell asked. “What should the policy be? What should the robot be programmed to do?”

The answers should be figured out before cases are tested in court, Sowell said.

“I don’t want to make everyone have engineering degrees, but if we can do just enough so that people develop more accurate mental models, then I think that can have…a positive impact.”

Lila Levinson (she/her) is a PhD candidate in neuroscience at the University of Washington, where she studies the natural flexibility of the human brain. She can be reached at l.levinson.12@gmail.com. Levinson wrote this story as a participant in the ComSciCon-SciWri workshop at ScienceWriters2022.