What will it take for robot assistants to become more integrated into our daily lives? Assistant Professor Naomi Fitter thinks they’ll need to master the physical aspects of social interactions, while Associate Professor Cindy Grimm cautions against programming them to behave just like us.
Naomi Fitter, assistant professor of robotics, holds Cozmo, a workplace assistant robot programmed to encourage people to take breaks throughout their work day.
Cindy Grimm, associate professor of mechanical engineering, thinks through numerous issues, including privacy, liability, and implicit bias, that need to be considered as robots become a more integrated part of our daily lives.
[JANE JETSON]: Now, just make yourself at home, Rosey.
[ROSEY THE ROBOT]: Yes, ma'am. I'll get right to work, ma'am.
PALMER: That’s Rosey the robot from the classic cartoon, The Jetsons. She was both a maid and a trusted member of the family, teaching young Elroy how to play basketball and counseling teenage Judy about her love life. She barely looked human -- more like a tank wearing an apron. But what made Rosey an ingrained part of the Jetson household was her personality.
I’m your host, Chris Palmer, and in this episode we’re continuing the theme of robotics and AI by having a look at what it will take to successfully integrate robots into our daily lives.
[MUSIC: “The Ether Bunny,” by Eyes Closed Audio, used with permission of a Creative Commons Attribution License.]
PALMER: From the College of Engineering at Oregon State University, this is “Engineering Out Loud.”
PALMER: Roomba, Alexa, Tesla. Robots are appearing in more and more everyday places. Naomi Fitter, an assistant professor of robotics, is trying to endow our machine helpers with the social skills and physicality they’ll need to fit into our world.
FITTER: It's important that as these systems emerge in spaces where we're living and interacting and spending our day-to-day lives, they're intelligent and ethical and, uh, fun to interact with, hopefully.
PALMER: That’s Naomi. She builds robots that encourage people to practice motor skills, keep connected with family and friends, remember to exercise, and stay on task to work more productively.
FITTER: Those are all things that maybe we sort of know we should do in our day-to-day lives anyway, but it's often hard to push ourselves to actually do them.
PALMER: Like many roboticists, Naomi’s first encounter with robots was an undergraduate robotics team.
[MUSIC: New Morning by Track Tribe, part of the YouTube Audio Library. Licensed under a Creative Commons license]
FITTER: The secret backstory is that they had free pizza. And that was an appealing, uh, enticement to get me to come to the first meeting. But once I was there, I also saw that it was really intriguing that you could write software and see it in action, acting on the actual world, seeing robots moving around, for example. We did a lot of work with unmanned ground vehicles. So, we tried to do a lot of navigating around the hallway safely, avoiding pedestrians and um, yeah, making sure not to run into walls or damage objects. And from there I enjoyed the general experience of designing robots, writing software for robots.
PALMER: Even early on, Naomi’s intuition told her that, as social creatures, humans expect the robots they interact with on a regular basis to be able to engage them on a social level, especially robots meant to help people in spaces such as their homes or offices. Naomi’s quest to make robots more social took her out of the basement lab where her robotics team tested mobile robots and into the classroom.
FITTER: My mother teaches special education at the middle school level, and I've always been interested in helping her and working with her, often coming to her classroom on days off from school. That experience of trying to, sort of, design curricula and activities for students with learning differences, I think, contributed to my current interest, which is now using robots for more social applications, assistive applications. How can we use robots to sort of take that desire to help different learners learn successfully to the next level, and help folks in other ways, too?
PALMER: And some of those other ways have been pretty creative. Over the years, Naomi has built mobile teleconferencing robots, robots that make art, and, of course, Jon the Robot, whose jokes you’ve been enjoying throughout this season of the podcast.
[JON THE ROBOT]: Hello, I'm Jon. Of course, that is not my real name, but humans have trouble pronouncing [inaudible]
PALMER: One of Naomi’s more recent passions is a robot that assists infants with physical therapy.
FITTER: Often the state of the art is to wait until a later age for interventions or to maybe have very infrequent visits, infrequent sessions with clinicians. So, getting access to clinicians, being able to spend time with physical therapists -- that time is very in demand, it can be expensive, it can be hard to come by. That's where I think I've seen some promise: we can introduce a robot to help a child go through a physical therapy routine, and we can maybe make a difference
[MUSIC: New Morning by Track Tribe, part of the YouTube Audio Library. Licensed under a Creative Commons license]
or fill a gap in human needs that hasn't been filled before.
PALMER: Naomi’s hopeful that her robot can deliver successful interventions to children with developmental delays. For a recent study, she programmed a robot to model leg-extension kicks in front of 6-month-old infants. She found that the infants in the study imitated the leg motion more often when the robot engaged them with light and noise rewards.
FITTER: If we can teach young kids to do leg extension kicks, that's a sort of motion practice that can lead to taking your first steps or help children stay on track to hit various developmental milestones. And that's something we don't have many other technologies to do, which I find pretty exciting. Watching infants interact with robots is also very cute and fulfilling to your soul. So, it's nice in another way as well, watching the trials and just getting to interact with young babies. Always fun.
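[EDITOR’S NOTE: The design Naomi describes is, at its core, contingent reinforcement: the robot senses a kick and answers it right away with light and sound. Below is a minimal sketch of that loop. The sensing and reward functions are placeholders standing in for whatever hardware a real system would use; none of this is code from Fitter’s lab.]

```python
import random
import time

SESSION_SECONDS = 10  # shortened session length, chosen for the sketch

def detect_kick() -> bool:
    """Placeholder sensor: a real system would use motion tracking;
    here we simulate an occasional leg-extension kick."""
    return random.random() < 0.05

def play_reward() -> None:
    """Placeholder actuator: flash lights and play a sound on the robot."""
    print("robot: lights and sound!")

def run_session() -> None:
    end = time.time() + SESSION_SECONDS
    while time.time() < end:
        if detect_kick():
            # Respond immediately so the infant can connect the kick
            # to the robot's reaction (contingent reinforcement).
            play_reward()
        time.sleep(0.1)  # poll the simulated sensor at ~10 Hz

run_session()
```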
PALMER: Her next step is getting physical therapy robots, and other robots, to be, uh, more physical.
[MUSIC: Who’s using Who by The Mini Vandals, part of the YouTube Audio Library. Licensed under a Creative Commons license]
FITTER: We can sometimes get better outcomes with physically embodied robot systems, robots that actually take up space in the real world and can act on objects around them, compared to when we're interacting with something like an application on our phone.
PALMER: And more than just a robot that takes up space and moves around, Naomi wants to build robots that can physically interact with people.
FITTER: If you look at human-human bonding, how young children learn as they explore the world or connect with caregivers, it's very touch-centric. So, the underlying sense of touch is super important to human-human interaction, human learning, and knowledge and connection. And I think it needs to be part of the equation as robots appear in human-populated spaces as well. Even with robots that aren't powered on, as I've given a lot of lab tours and demonstrations in the past, people tend to want to reach out and touch the robot. They want to high five the robot’s end-effector or give it a hug. So, some of my initial research in the space of physical human-robot interaction has focused on trying to make robots better at that sort of thing. Can we make robots better high fivers? Can we make them safe and interesting and compelling to play physically interactive hand-clapping games with?
[MUSIC: Who’s using Who by The Mini Vandals, part of the YouTube Audio Library. Licensed under a Creative Commons license]
PALMER: While one could argue about what the perfect high five looks like, Naomi’s taking a scientific approach.
FITTER: The perfect high five is still a slightly open question, but from my work on it so far, I do know that having good planning and good arm stiffness can be key to enhancing the experience.
PALMER: And it doesn’t hurt to program in some flair.
FITTER: If you've done a fist bump, maybe you do the exploding rock as the robot retreats from the person.
PALMER: But it will take more than spicing up high fives for people to bond with robots. Naomi says that some surprising behaviors can help convince people to embrace their circuit-bound doppelgangers. One way robots can get our attention is to cheat.
FITTER: I personally haven't designed any studies that used deception, but there is some risk of misbehaving robots or malfunctioning robots being more fun or appealing to interact with. There are even some study results from the past in HRI, human-robot interaction, where a robot that cheats at rock, paper, scissors is more socially appealing to people.
PALMER: Finally, a robot that I can relate to! And it’s just that aspect of relatability that Naomi believes may get people to listen to their robot assistants. A simple robot she is working on at the moment has a fairly challenging task: getting desk jockeys to take a break and move around.
FITTER: If you sit at a computer all day, you can face some negative ocular effects -- staring at a screen all day, not great for you. You can face some negative musculoskeletal effects. If you're anything like me, right now I'm slumping over, slouching, not sitting with great posture. And we also see negative cardiovascular effects from not getting up and being physically active enough. So, in my lab right now, we're looking at whether we can put small tabletop robots in the workplace and motivate people to take breaks more frequently and effectively. The robot that we're using for that workplace assistance project is currently the commercial Cozmo robot. So, it can drive around your desk in two dimensions. It can also tilt its head up and down to kind of look at folks with whom it's interacting. It can make different facial expressions and maybe even play on people's emotions, guilt them, annoy them, or compassionately egg them on to get up.
PALMER: But there are only so many times you’ll listen to your robot tell you to get up and stretch. Socially assistive robots will need to recognize when the same old prompts are losing their mojo and move on to more effective tactics.
FITTER: Maintaining interest and novelty and relevance is a really big challenge in robotics right now. I think it's something that we haven't fully solved, but by introducing more principles from artificial intelligence, modeling situations more effectively, and using state of the art computer science thinking to design intervention strategies and adaptation strategies, we're starting to do better. But I think there's still a lot to learn in that space.
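[EDITOR’S NOTE: One standard way to build the kind of adaptation Naomi describes is a multi-armed bandit: try different prompt styles, track which ones actually get the person up, and gradually favor the winners. Here is a minimal epsilon-greedy sketch of that generic technique, with invented strategy names; it is not Fitter’s published method.]

```python
import random

# Candidate break-prompt styles (names invented for illustration).
STRATEGIES = ["gentle_nudge", "guilt_trip", "playful_annoyance", "compassionate_plea"]
EPSILON = 0.1  # fraction of the time we explore a random strategy

counts = {s: 0 for s in STRATEGIES}  # times each strategy was tried
wins = {s: 0 for s in STRATEGIES}    # times the person actually took a break

def choose_strategy() -> str:
    untried = [s for s in STRATEGIES if counts[s] == 0]
    if untried:
        return random.choice(untried)       # try everything at least once
    if random.random() < EPSILON:
        return random.choice(STRATEGIES)    # explore
    return max(STRATEGIES, key=lambda s: wins[s] / counts[s])  # exploit the best so far

def record_outcome(strategy: str, took_break: bool) -> None:
    """Update the running success rate after each prompt."""
    counts[strategy] += 1
    wins[strategy] += int(took_break)
```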
PALMER: Naomi also anticipates our personal robot assistants will one day be able to track our mood to see how the interaction is going and maybe make adaptive changes.
FITTER: Affective computing is one tool that we can use in this type of robot application. It's a big umbrella that includes different ideas, everything from understanding facial expression to the meaning of human gaze to vocal tone. So, I anticipate, or at least hope, that we'll find more opportunities to successfully use robots for this sort of positive nudging, supporting healthy habits,
[MUSIC: Frenchman Street by Otis McDonald, part of the YouTube Audio Library. Licensed under a Creative Commons license]
getting through therapy routines, having day-to-day check-ins and interventions that could help our physical and/or mental health and helping populations who need support, who currently aren't able to get it.
So, I'm hoping that robots are out there filling a role in helping us make these positive changes -- choose positive behaviors -- because I think in human developmental psychology that's something really challenging that hasn't been cracked yet. And I think there is a really big opportunity for robots to help out in that way.
[Clip from Futurama movie ‘Bender’s Game’ in which a faux Rosey the robot says: “Everything must be clean. Very clean. That's why the dog had to die. He was a dirty dog. Also, that boy Elroy. Dirty. Dirty.”]
PALMER: That’s Rosey the robot again, this time spoofed on another iconic cartoon, Futurama. While this clip gives us an extreme example of what can go wrong when our robot helpers are endowed with faulty programming, Cindy Grimm is concerned with more nuanced aspects of human-robot interactions. As an associate professor of mechanical engineering here at Oregon State, Cindy thinks through the numerous issues, ranging from privacy to liability to implicit bias, that need to be considered as we build robots to share our world.
GRIMM: We did a robot photography project many years ago, and basically the robot runs around and takes pictures. So, then we actually took it to a wedding. We actually took it to a couple of events. It was a lot of fun. But one of the things that you don't realize when you have a camera on top of a robot is that the camera can take pictures all the time, right? And one of the interesting things was that people quit noticing that the robot was taking pictures pretty quickly. So, they would come in, they'd go ‘Oh look, cool, the robot’s taking my picture,’ and then they'd stop and talk to their friends.
PALMER: That’s Cindy. She’s describing one of her first robot projects. She started her career as a graphic designer, so for her, the robot photographer was more of an art project. But the questions it raised about bringing robots into public spaces have motivated a large chunk of Cindy’s research.
GRIMM: It's a really interesting question about what's private and what's public. So, when you're in your house, your expectation is that nobody can see you because there's walls, right? When you're out on the sidewalk, people can see you. So you might behave a little bit differently. When you go to the park, you might bring your bicycle, right? You might bring a wagon with some toys for your kids. But now we suddenly have this ability to have robots, you know, drones or things that are wheeled, go with you. So, you could potentially go to the park with your wheeled sidewalk robot following you, your three drones over your head that are taking videos of your kids. Who's got responsibility for you and your drone flying around in the public space? Is it you? Is it the robot manufacturer? Is it the other people in that space? We just really haven't answered any of these questions.
PALMER: Liability is just one of the many legal ramifications of sharing our lives with robots. As we off-load more and more of our daily tasks to robot assistants, we’ll have to make sure they follow the same rules as humans. As much as I might be tempted, I probably shouldn’t program my self-driving car to tear through city streets at 100 miles an hour, right?
GRIMM: So, traffic laws are one of these things that are called soft laws. So, if you have code on the robot that says I can actually go slightly faster than 25 miles an hour in a 25-mile-an-hour zone, I can point to that bit of code and say you broke the law without even having you drive that speed, right? So, it's really problematic to enforce these types of laws. But another way to look at it is that these laws are not written for robots, and they are not written to actually be enforced as they're written. So, I think there's kinda going to be an adjustment period when we're asking, what do we actually want for appropriate behavior? And then how do we turn that into a set of numbers that our robot can actually listen to? One of my favorite examples is Google's cars: unlike every other person in the world, they actually stop at stop signs. So, they keep getting rear-ended.
PALMER: They’re obeying the law, but they aren't obeying normative behavior. That is, they're not driving the way people really drive in practice.
[MUSIC: Eighty Miles by VYEN, part of the YouTube Audio Library. Licensed under a Creative Commons license]
GRIMM: Now there are two ways this can go. Either we insist that everybody actually starts stopping at stop signs, or we basically relax that stop sign law and put it down as a little fuzzier than we would like it to be. So, we're gonna have to make those decisions for a lot of laws that we have out there.
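[EDITOR’S NOTE: Cindy’s point about turning behavior into “a set of numbers” is easy to see in code. In this hypothetical sketch of a speed governor, the tolerance constant is exactly the “bit of code” she describes: a written-down intent to exceed the posted limit, inspectable before the robot ever drives.]

```python
# Hypothetical speed-governor sketch, not code from any real vehicle.
SPEED_LIMIT_MPH = 25.0
TOLERANCE_MPH = 2.0  # the written-down intent to go "slightly faster"

def target_speed(posted_limit_mph: float = SPEED_LIMIT_MPH) -> float:
    # Matching normative traffic flow vs. the letter of the law:
    # this one line is auditable in a way a human driver's habit is not.
    return posted_limit_mph + TOLERANCE_MPH

print(target_speed())  # 27.0 -- evidence of intent, even if never driven
```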
PALMER: These decisions about how to program robots to handle the relatively black and white letter of the law will pale in comparison to the challenges of dealing with social norms.
GRIMM: People say that, uh, everybody should be kind to each other, right? That's a statement that they make. And then you look at any daily interaction, and people are not always kind to each other, for various reasons. So now you take a robot, and a robot can actually be programmed to be kind to everybody all the time, even when it's being abused, right? That might actually lead to people being more abusive, because it never pushes back, right? So, a lot of even really basic social interactions are based on these normative behaviors, and as we start to put those onto robots, we have to ask ourselves, do we actually want that behavior of everybody being kind all the time? Or do we need to recognize that sometimes it's okay to be a little grouchy if somebody is not behaving kindly towards you? And I think that's going to just take a long time to sort out.
PALMER: But Cindy doesn’t think the answer is to program robots to behave just like real people.
GRIMM: This is actually one thing that worries me. It's like you don't want Alexa talking back to you or refusing to answer your questions, right? So, you really want the perfect butler. You really want the perfect maid. Um, so, I actually think we don't want robots to behave like us. We want them to behave like our ideal vision of what a butler or a maid would look like, which has interesting social implications as well.
PALMER: But what does ideal behavior look like? And who gets to decide? As English-speaking people colonized the world from the seventeenth century onward, they left the English language and customs behind wherever they went. Likewise, there’s a danger that robot manufacturers will dictate what appropriate human-robot interactions look like.
GRIMM: Yeah. So, this is a big concern of mine: most of the people who are coding up robots and AI systems tend to be white males, usually with a European background and a European sentiment. And so they are actually coding their normative behavior into these robots. I mean, the famous one is that they didn't include any black women in their photo recognition. So, you know, their photo recognition thought all black women were gorillas, right? Because they just didn't put any in there, because they don't know any, so they wouldn't think to put it in their dataset. So, I do think there is a risk, too, that what gets implemented has that bias from day one. Like, Alexa has this problem too. Like, my husband's got a Scottish accent. Alexa has a really hard time with the Scottish accent because it's never been trained with it, despite the fact that there are millions of people who speak with a Scottish accent. So, I think this is a big risk, and this is why we need this notion of data transparency. Like, when you release Alexa, I want you to tell me: How many accents did you train this on? How many people did you train it on? Did you actually train it on people from all these different backgrounds? Did you actually go out and look at what the cultural norms are for asking questions and getting answers back? We just haven't done that. And we should be doing that.
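[EDITOR’S NOTE: The data transparency Cindy asks for could start with something as simple as a coverage report over a training manifest. Below is a sketch under assumed conventions; the manifest format and accent labels are hypothetical, not any vendor’s actual schema.]

```python
from collections import Counter

def accent_coverage(manifest: list[dict]) -> Counter:
    """Count training samples per self-reported accent label."""
    return Counter(sample.get("accent", "unlabeled") for sample in manifest)

# Tiny hypothetical manifest; a real one would hold millions of entries.
manifest = [
    {"utterance": "turn on the lights", "accent": "scottish"},
    {"utterance": "what's the weather", "accent": "us_general"},
    {"utterance": "play some music", "accent": "us_general"},
]

report = accent_coverage(manifest)
total = sum(report.values())
for accent, n in report.most_common():
    # Publishing numbers like these lets users see which accents
    # fall below an agreed-on floor of representation.
    print(f"{accent}: {n} samples ({n / total:.0%})")
```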
PALMER: While Cindy isn’t in the lab building or programming robots, she’s in a great position to influence the people making robots to better reflect the diversity of all of their potential users.
GRIMM: So, I like to teach the introduction to robotics class here at OSU, and I always make sure that whenever I'm talking about technology, when I'm talking about AI or whatnot, I point out that there are these issues of bias. There are these issues of privacy. And if you just sort of put that in front of the technology people often enough, maybe they'll listen to you. You know, you kind of need to remind technology people that their technology is going to go out in the world and it's going to impact people in some way. And so just remember: it might be easier to put cameras everywhere in the space so your robot has an easier time, but what happens when somebody hacks those cameras? Is there a solution that doesn't require cameras in every corner of the room? Can you keep the data locally? Can you process the data locally? So, actually think through those questions before you design your technology.
[MUSIC: “The Ether Bunny,” by Eyes Closed Audio, used with permission of a Creative Commons Attribution License.]
PALMER: This episode was produced by me, Chris Palmer, with help from Rachel Robertson and Steve Frandzel, and audio editing assistance by Molly Aton. Our intro music is “The Ether Bunny” by Eyes Closed Audio on SoundCloud and used with permission of a Creative Commons attribution license. Other music and effects in this episode were also used with appropriate licenses. You can find the links on our website. For more episodes, visit engineeringoutloud.oregonstate.edu or subscribe by searching “Engineering Out Loud” on your favorite podcast app.
I’m going to give Jon the Robot the last word. This clip is from the show “Singu-hilarity: A Robot Comedy Variety Show” organized by Naomi. Keep your eye out for upcoming shows at the Majestic Theatre in Corvallis.
[JON THE ROBOT]: I have updated my privacy policy in accordance with GDPR requirements. By being in the audience, you are agreeing to let me see your face. If you tell me to forget about you forever, I must comply, even if I love you. If you agree to these terms, please applaud now. [Applause]. Thank you for accepting my privacy policy. I have now signed you up for 10 new mailing lists.