Will swarms of unmanned aerial vehicles be able to aid humans in wildland firefighting or package delivery? Research summarized in a new paper in Field Robotics represents a big step towards realizing such a future. In this interview, Professor Julie A. Adams describes the research showing that one person can supervise more than 100 autonomous ground and aerial robots.
RACHEL ROBERTSON: Hey podcast friends, I’m Rachel Robertson, and in this episode, I’m going to do things a little differently from our usual style. Today you will hear me interview Julie A. Adams about a paper just published in Field Robotics called: Can a Single Human Supervise a Swarm of 100 Heterogeneous Robots?
At Oregon State University, Professor Adams is the associate director of research for the Collaborative Robotics and Intelligent Systems Institute and the director of the Human-Machine Teaming Laboratory.
You can find the transcript of this interview and related content at engineeringoutloud.oregonstate.edu.
[MUSIC: “The Ether Bunny,” by Eyes Closed Audio, licensed under CC BY 3.0]
ROBERTSON: From the College of Engineering at Oregon State University, this is Engineering Out Loud.
ROBERTSON: Can you explain the overall project, including what your role was or the role of OSU?
ADAMS: Sure. So, this project was intended to demonstrate, over the course of five years, the ability to build a system of up to 250 ground and aerial robots that could be deployed in a 10-block urban area for an extended mission time of about three to three and a half hours. Our team was led by Raytheon BBN; the PI was Shane Clark, and I was a co-PI on the grant from Oregon State University, along with a collaborator who was also a co-PI from Smart Information Flow Technologies. We played many roles over the course of the five years of the project, everything from, you know, charging batteries and putting systems out to actually doing some of the basic research for the project. This project required taking off-the-shelf technologies and building the autonomy needed for them to be deployed by a single human that we call the swarm commander. That work required developing not just the systems and the software to do what we needed, but also the interface for that swarm commander, to allow a single human to really deploy these ground and aerial systems in a three-dimensional environment. And for that, we developed a virtual reality interface called I3. My role was to contribute to the interface; we also contributed autonomous algorithms called tactics, and we analyzed the ability of the human to deploy the system in these real-world environments.
ROBERTSON: So, tell me about the interface.
ADAMS: So, the interface, I3, was developed specifically to allow someone who is deploying this many robots to do so with what we call high-level commands. They aren't physically driving each individual vehicle, because if you're deploying that many vehicles, a single human can't do that. So, the system is designed to have a plan that would be executed. And then there is something called plays, which are high-level plays similar to the plays a quarterback would use during a US football game. The idea is that the swarm commander can select a play to be executed and make minor adjustments to it, just like a quarterback would in the NFL. You know, the quarterback is looking at: what's the weather? Who are my teammates on the field? What is the defense doing? All in order to adjust that play appropriately.
And that's exactly what we wanted the swarm commander to be able to do. We want to minimize those interactions, but we want to be able to assign that task to large numbers of vehicles simultaneously. So, we had all these predefined plays in addition to the plan. When the plan was executed, if there were some additional plays that needed to be executed, the swarm commander would select the plays and make the necessary modifications in order to send them out to the vehicles so that they would go do the tasks. The swarm commander is also responsible for monitoring what's going on. And in these three-dimensional environments with this many vehicles, you are necessarily going to have vehicles that lose communication with the system. That happens if you've ever been in a downtown area with tall buildings where your cell phone can't get a GPS signal. It's a similar kind of thing for the vehicles: when, for example, the UAVs are flying down into that built environment to gather information, or the ground robots are driving down a road, you know, you're going to lose communication with them, and you have to be able to understand that that's happening. So, the swarm commander's responsibility is really to ensure that the system is deployed safely in this built environment and that we're achieving the things that we're supposed to be achieving in the plan.
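The quarterback analogy can be sketched in code. This is a minimal illustration only, not the actual I3 software; the `Play` structure, its field names, and the `dispatch` function are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Play:
    """A predefined high-level tasking, adjustable before dispatch."""
    name: str
    region: tuple                       # (x, y, width, height) of the target area
    params: dict = field(default_factory=dict)

def dispatch(play: Play, vehicle_ids: list) -> dict:
    """Issue one play to many vehicles at once; returns per-vehicle orders."""
    return {vid: (play.name, play.region, dict(play.params)) for vid in vehicle_ids}

# The commander picks a predefined play and makes a minor adjustment,
# like a quarterback adjusting at the line of scrimmage.
surveil = Play("surveil-block", region=(0, 0, 100, 100))
surveil.params["altitude_m"] = 30       # the tweak before issuing
orders = dispatch(surveil, vehicle_ids=list(range(40)))
print(len(orders))  # prints 40
```

The point of the design is the ratio: one selection plus a couple of parameter tweaks tasks dozens of vehicles, instead of dozens of individual piloting sessions.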
ROBERTSON: Alright, so let's talk about this particular study. What did you find?
ADAMS: So, over the course of the five years, there were multiple field exercises in which the team would take their systems out and test them. And with each field exercise, we increased the number of vehicles. But consistent through all of that was that the swarm commander was still using the interface and the system to deploy the vehicles. This is something that really hadn't been done in a built environment before with a single operator. And over that time period, we would collect what I call subjective data: asking the swarm commander to provide information about their workload and their stress level or their fatigue level. I would do that every 10 minutes during each deployment. A field exercise runs multiple days, they're very long days, and we would have at least one deployment a day. So, I was collecting that information usually over about five to 10 days, depending upon the total duration of the field exercise.
So, this study went a step further: we wanted to collect objective data using physiological sensors to objectively assess the human's workload and whether or not they could handle this number of vehicles. In the final field exercise, we deployed over a hundred vehicles simultaneously, and I collected data over the course of about 10 days. Each trial that we go out and deploy is different. There may be a different number of vehicles; there may be different weather conditions. During this final field exercise, we actually had one day with 29-mile-an-hour winds. We'd never flown our UAVs in wind gusts that high. So, you can't really control this like you would in the laboratory. And prior to this paper, there was a lot of concern about whether or not a single human could actually deploy this many vehicles simultaneously.
And if you think about, for example, delivery drones: what companies would like to do in the future, though it's not the case right now, is to have a single human responsible for a hundred or more drones. There are regulations right now that limit that, but we have to be able to demonstrate, for example to the FAA, that this is feasible. These results apply even though they come from a different scenario and also include ground vehicles; the majority of the vehicles were unmanned aerial vehicles, or UAVs. And we were able to show that humans can do this. What we find with the objective data is that, yes, the human's workload will spike and become what we call overloaded, too high, for a few minutes, but it's not sustained.
If you have really sustained overload, the human's performance goes down: they're gonna become fatigued very quickly, their stress is probably gonna go up, and their ability to really monitor and understand and deploy the system safely is going to diminish. What we found over the course of the multiple days was that you would see a few minutes, three to five minutes, of high workload, typically in very high-stress situations, for example, that day when the wind was very high and causing issues with our vehicles. But it would come back down. And when I interviewed them afterwards, I would ask, you know, did you ever feel that you were beyond … that you couldn't continue? And they said no. Throughout those same data collection periods, I also collected the subjective ratings that I had been collecting over the course of the prior field exercises. I do need to make it clear that these field experiments occurred in military airspace and not in the national airspace, which would require specific approvals from the FAA.
ROBERTSON: Okay, great. Are there any other findings that you feel would be important to add?
ADAMS: The most important thing is, you know, demonstrating objectively that humans can do this task under different conditions in a real-world environment. And this was an environment with diminished communications compared to what we would find in the commercial world, for example. In the commercial world and in the national airspace, there are going to be trackers on the unmanned aerial vehicles, just like we have on crewed aircraft, and you're going to have better communication systems. You're going to have cellular connectivity, which we didn't have. So, the ability to understand that the human can do this is one key thing. Another key aspect: I mentioned the plays and the mission plan, which reduce the work that the human has to do. But another really important aspect is how you visualize that information and provide it back to the human so that they can understand what's going on when they're monitoring, you know, these vehicles being deployed over a large area with multiple buildings, where the vehicles are dispersed and it's visually somewhat difficult to track them.
It's all in how you actually display that information, and in understanding what information has to be displayed and what information doesn't. One of the things we did was develop what we call glyphs; other people would call them icons. They carry the representative, meaningful information so that the operator can very quickly see the status of a vehicle if something's not going well. We also created an appropriate representation of the highest-priority information for the current context, getting rid of the clutter, the information that's not necessary, in order to help the human focus. That also helps keep their workload down. And we were able to demonstrate … so you might think that deploying a mission plan or creating a play would be the highest-workload segments. We do see spikes in workload as a play is being prepared, that massaging of the play that I was talking about, but it drops significantly once the play is issued. And with a mission plan, we really don't see any change in workload, because the mission plan is already set; all the operator, the swarm commander, is doing is hitting a button to deploy the vehicles.
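The declutter idea, showing the operator only the highest-priority, non-nominal vehicle statuses, can be sketched like this. The status labels and priority ordering here are invented for illustration and are not taken from I3:

```python
# Lower number = higher priority; labels are illustrative, not from I3.
PRIORITY = {"comms lost": 0, "battery low": 1, "off course": 2}

def glyph_annotations(statuses: dict, max_items: int = 5) -> list:
    """Filter per-vehicle statuses down to the few the operator must see.

    Nominal vehicles are hidden entirely; the rest are sorted by priority
    and truncated so the display never floods the swarm commander.
    """
    flagged = [(vid, s) for vid, s in statuses.items() if s != "nominal"]
    flagged.sort(key=lambda pair: PRIORITY.get(pair[1], 99))
    return flagged[:max_items]

statuses = {1: "nominal", 2: "battery low", 3: "comms lost", 4: "nominal"}
print(glyph_annotations(statuses))  # comms loss outranks a low battery
```

With a hundred-plus vehicles, suppressing the nominal cases is what keeps the display, and the operator's workload, manageable.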
ROBERTSON: Okay, great. So, a hundred seems like a lot. I mean, did it surprise you that one person would be able to control that many?
ADAMS: It absolutely did not surprise me. And it was more than a hundred. I have hypothesized for years now, because human multiple-robot interaction is an area that I've worked in for a very long time, that with an appropriate level of autonomy, with systems that are able to recover on their own, we can allow humans to supervise more vehicles and have them do so correctly with low workload. If you have systems that are resilient and able to handle the different unexpected events that will occur, and they're able to do that autonomously, so the human doesn't have to interact with them even on, you know, a 10-minute basis, then you're going to be able to manage the workload and deploy more vehicles. And when I say that: if you think about a commercial off-the-shelf unmanned aerial vehicle, the battery life is about 20 minutes.
So, if you're deploying these vehicles over the course of three hours, clearly you need those vehicles to come back and have their batteries changed. One of the things we developed, that my student Grace Diehl developed, was a tactic, an algorithm, that allows the vehicles to automatically swap when a battery is low. If the task is not a persistent one, the deployed vehicle can say, “Help, I need a replacement,” and the replacement flies out while that vehicle flies back to get its battery changed. If it's a persistent task, where a vehicle has to stay on task, it requests a replacement, that vehicle arrives, and then the prior vehicle goes back so the battery can be changed. That type of activity we can do autonomously, and those are exactly the kinds of behaviors that enable the human to deploy this many vehicles simultaneously. Not that specific one necessarily; it's just one example of a behavior.
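The two orderings of the swap tactic can be sketched as a simple sequencing rule. This is a paraphrase of the behavior as described in the interview, not Diehl's actual implementation; the function and step names are illustrative:

```python
def battery_swap_sequence(persistent: bool) -> list:
    """Order of events when a deployed vehicle reports a low battery.

    For a persistent task the replacement must be on station before the
    low-battery vehicle leaves; otherwise the two flights can overlap.
    """
    if persistent:
        return [
            "request replacement",
            "replacement arrives on task",
            "low-battery vehicle returns",
            "battery changed, vehicle available again",
        ]
    return [
        "request replacement",
        "replacement flies out while low-battery vehicle returns",
        "battery changed, vehicle available again",
    ]

# For a persistent task, coverage is never dropped: the replacement
# arrives before the original vehicle departs.
print(battery_swap_sequence(persistent=True))
```

The design choice is the same one Adams describes: the handoff needs no operator input at all, which is what keeps a hundred-vehicle deployment within one person's workload budget.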
ROBERTSON: Okay. Cool. And if you feel like you've already covered this, you don't have to answer, but what do you feel were the key accomplishments of the research?
ADAMS: There are a lot of accomplishments for the project, but focusing on this particular manuscript, the key accomplishment is the objective data that really shows a single human can deploy these systems in built environments. And that has very broad implications beyond this project. I mentioned delivery drones earlier. We don't see a lot of those yet in the United States, but there are companies such as Wing and Zipline who have been deploying delivery drones in other countries. And, you know, in order for those systems to be deployed at a scale that makes business sense, you have to have a single person responsible for very large numbers of drones in the national airspace. This set of data really demonstrates that that is viable. I'm not saying it's a final solution that shows everything is OK, but it is the first step towards getting additional data that would facilitate that kind of a system. And this isn't just for delivery. Even when we talk about trying to deploy multiple drones for, say, wildland fire response or disaster response, it's the same thing. Right now, you are not allowed to have a single person monitor multiple drones simultaneously unless you have very specific approvals from the FAA. So, these kinds of results really help build that case, so that in the future we can have systems that support not only the responders, but also civilians, and help save property.
ROBERTSON: Okay. Great. What do you want the public to know about this research?
ADAMS: I may sound like a broken record, but I think it's important for the public to realize that when these systems have been developed properly and tested, so that the autonomy and the artificial intelligence (not necessarily machine learning, large language models, et cetera) are developed and tested appropriately, they can be safely deployed and they can have a big impact on, basically, our lives, right?
If you want to get your package in a few minutes, you know, like they show in the videos online, then using an unmanned aerial vehicle to do that makes sense, rather than putting more cars on the road. But I think the bigger impact comes in places like wildland fire response, and since we're in Oregon, that's something I've been focusing on. You can look at deploying both larger and smaller unmanned aerial systems. For example, there are some systems that have been developed for doing ridgeline ignition: trying to ignite the underbrush in order to keep the fire from going over the ridgeline and spreading. That is typically a very dangerous operation. It has to be done at night, and currently you've got firefighters on the ground doing this in the dark, in very rugged areas. If you could have multiple unmanned aerial vehicles doing that, it would be more effective, as long as you've got good systems that are able to do it. And even post-fire, for example, deploying multiple vehicles to understand what the level of damage is, or how many trees are potentially unsafe and have to be dealt with, things of that nature, is really important.
ROBERTSON: So why is this kind of research, and maybe human-machine teaming in general, of interest to you?
ADAMS: I've been doing human-machine teaming research for well over 30 years. It originally started with a grant that was funded at the University of Pennsylvania, where I was a grad student; there was a piece of the project focused on the human interacting with the system, and that was pretty novel in the early 1990s. Back then, what the robotics community really wanted were fully autonomous robots. They didn't want a human in the loop. Clearly, we've come to the recognition at this point that humans have to be in the loop with these systems, and inevitably always will be, for various reasons. One is just to ensure safety, but in a lot of scenarios you also need a human who has some ability to provide feedback to the system, to ensure that it's doing the right thing, for ethical reasons, for safety issues, or something else.
So, that project … it involved five faculty and five students … and the reason I became interested in it was that it allowed me to bring together all the things I was really interested in. It required me to understand the robot autonomy, the sensors, and the sensor fusion, as well as the system development, and to bring that together into how we could actually use these systems. That broader picture has been an important key aspect of my research throughout my entire career. That project was for robots in basically a warehouse type of domain, which, you know, today we have those kinds of robots, right? And so, you solve some of those problems, but then you've got to solve the bigger problems, because you've always got to move to the next thing.
I've worked on human-swarm interaction for many years now, and with this particular project and this number of vehicles, being able to take the system from simulation to a real environment was really the key thing. It's all the pieces, because in order to develop the interaction, you have to understand how the systems work, what their limitations are, and what the human actually has to know. A lot of people want to provide the human with every piece of information. You're going to overload them; you're giving them information that doesn't matter. If you think about getting money out of an ATM, do you really need to know what's happening in the background for that money to get transferred from your bank to the machine? No, all you really care about is entering your PIN and getting your cash out of that machine, right? And probably what the bank fee is going to be.
ROBERTSON: Perfect. That was a really great description. Okay, so you mentioned Grace Diehl, who did some work related to this project. Do you want to talk a little bit about student involvement in these projects?
ADAMS: So, Grace Diehl came to Oregon State, and about three weeks after she started the term, I took her on her first field exercise for the DARPA OFFSET program. At that time, she and I were the only female teammates, and she was the only student on the grant, because our collaborators at Raytheon BBN and at Smart Information Flow Technologies are companies; I don't think they even had interns at that first field exercise. So, it could be a very intimidating environment. When we reached the final field exercise, Shane Clark, the PI on the grant at Raytheon BBN, commented that, aside from the PIs on the grant, Grace was the only team member who had been to every field exercise. So, it was a really great experience, I think, for a robotics graduate student: getting their hands on all the different things you have, and understanding everything you have to do, if you're going to deploy these systems, right?
You've got to be able to put the hardware together. You have to be able to test the systems, and do it in a structured way. You have to be able to do whatever's needed, whether it's putting a tarp over the robots quickly, or pulling robots off the field because it's starting to rain, or charging batteries, or writing code on the fly to fix things. Right? Getting that experience is really key. And Grace, in particular, actually got the job that she's starting later this year due to the project. At one point she did an internship with Shane and some of the other team members at their new company, and from that she was offered a job, which she has accepted and where she'll be going. So that's great. She's going to be working on some of the same things that she worked on here.
There have been a lot of students on the project over its course. Most recently, the other student was Robert Brown, who's now at SpaceX. Robert came to Oregon State as a PhD student and helped work on the project, but then decided that he wanted to finish with a master's degree because he got an opportunity to go work at his dream company. So, he did that. But, you know, the students really get this true fieldwork experience, and while it is stressful, and at times you're just so fatigued and drained, you know, that's an experience that most students don't get at this level. And that's why, for this type of project, the majority of the participants are corporate employees as opposed to grad students and university faculty. So, it was a very unique experience that way.
ROBERTSON: So, Grace got a dual degree, right? In computer science and robotics. That's a little different from what you would normally think of doing with computer science.
ADAMS: That's absolutely true. If you think of, you know, more traditional computer science departments, you could argue that the majority of students would not do fieldwork. Having said that, if you are at one of the top schools doing robotics, you do have some of those opportunities in computer science. But it very much depends on the school. With the dual degree, you know, our robotics program does require students to have worked on real robots before they graduate, whether it's a master's degree or a Ph.D. Grace definitely got that, and all of the students on the project got that.
ROBERTSON: Okay. Great. Is there anything else that you want to say about the project?
ADAMS: Yeah, so I do need to say that this was a DARPA-funded project, and the things that I've stated were either parts of the paper or other things that have already been approved by DARPA for public release. They reflect my opinions and not those of DARPA.
ROBERTSON: Thanks for listening. Let me know what you think of this Q-and-A-style podcast at firstname.lastname@example.org.