Co-Evolution

by
Scope Correspondent

I sit in the small windowless room at the back of the MIT Humans and Automation Laboratory, staring excitedly at the computer screen in front of me. The tutorial for the simulation begins. I will have four drones at my disposal, or unmanned vehicles, as those in the know call them. Three are aerial vehicles. The fourth is a watercraft that navigates a river cutting through the center of the digital battleground. A series of controls will allow me to direct these drones, but there is a catch.

I cannot micromanage the drones the way one would move pieces on a chessboard. Instead, the interface asks me to pick from a variety of priorities, such as seeking out potential targets, babysitting ones I have already found, and destroying hostile ones. A complicated algorithm, I am told, will ensure my priorities are doled out to my machines in the most efficient way possible. In other words, my puny mind can’t fly three planes at once, but a computer can. The question is, how?
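The article never says how the lab’s allocator actually works. Purely for intuition, here is a minimal sketch in Python of one standard approach to the problem: rank the outstanding tasks by the operator’s priority weights, then greedily hand each one to the nearest idle vehicle. Every name, weight, and coordinate below is a hypothetical stand-in, not the laboratory’s algorithm.

# Hypothetical sketch of priority-weighted task allocation.
# All names and numbers are illustrative assumptions only.
from dataclasses import dataclass
from math import hypot

@dataclass
class Task:
    kind: str   # "search", "track", or "destroy"
    x: float
    y: float

@dataclass
class Vehicle:
    name: str
    x: float
    y: float
    busy: bool = False

def allocate(vehicles, tasks, weights):
    """Assign each task, highest operator priority first,
    to the nearest idle vehicle."""
    assignments = {}
    for task in sorted(tasks, key=lambda t: weights[t.kind], reverse=True):
        idle = [v for v in vehicles if not v.busy]
        if not idle:
            break
        nearest = min(idle, key=lambda v: hypot(v.x - task.x, v.y - task.y))
        nearest.busy = True
        assignments[nearest.name] = task
    return assignments

# The operator dials in priorities; the allocator does the
# flying-three-planes-at-once part.
weights = {"search": 3.0, "track": 1.0, "destroy": 2.0}
drones = [Vehicle("uav1", 0, 0), Vehicle("uav2", 5, 5), Vehicle("boat", 2, 8)]
tasks = [Task("search", 9, 9), Task("track", 1, 1), Task("destroy", 6, 4)]
print(allocate(drones, tasks, weights))

Real multi-vehicle schedulers replace the greedy loop with optimization or auction methods, but the division of labor is the same: the human sets the weights, and the machine does the matching.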

At first my vehicles and I seem to be getting along swimmingly. A performance graph in the lower-left corner tells me that my human/drone relationship is going better than that of most operators. We have covered quite a lot of search area. A window pops up asking me to rate my trust in my little drones and the system. Quite high, I tell the simulation. Thanks for asking.

The next simulation does not go so smoothly. I have ordered my drones to search for and identify new targets, but we’ve covered very little ground and my performance graph is plummeting into the netherworld. Why are they babysitting old targets when I’ve asked them to look for new ones? I am getting frustrated. The trust window pops up again, and I swat it away with the lowest rating it allows. Bad drones. Baaaad drones, I mentally tell my new pets. A chat window activates in the bottom right corner. My commander wants to talk: How many hostiles have you destroyed? I try to think back but can’t seem to hold a steady tally in my mind. How could I not know how many targets I have destroyed? I don’t remember, I type. The researcher giggles behind me.

I am admittedly not the best candidate to be pondering the future of drones. I have a horrible relationship with machines, even when they come in packs of one. I audibly scold my car when it beeps at me to put on my seatbelt, and I have attempted to wrestle a vending machine more than once. I am that girl in the restroom waving her hand wildly in front of the paper towel machine and walking back and forth in front of the toilet, trying to get it to flush. Even my own computer begs me to update its programs regularly.

The simulation ends and I look at the researcher guiltily, like a house-trained puppy who has peed on the rug. But the researcher soothes my shame. The point, he tells me, is to learn how humans and machines interact in a system where the operator is dealing with multiple vehicles. The results from my test will be one small data point in a larger project: finding an unmanned-vehicle interface and algorithm that gives both the vehicles and the operator (me) the right balance between autonomy and control. Humans have more or less figured out how to interact trustingly with one drone, but when it comes to many, the issues seem to multiply exponentially.

Solving this problem may very well be the key to creating drone swarms or armies. And in a world where people consume science fiction films like chewing gum, it is difficult not to wonder what it would be like if drones took over. Are researchers afraid of this dystopian future? “No, ’cause I know how bad this stuff is. I know how fragile and incompetent it is,” a researcher tells me. One time he watched a UAV (unmanned aerial vehicle) spend ten minutes trying to fly through the ceiling until its batteries died. Another drone crossed a busy Nevada highway, entirely of its own accord, and was never seen again. The perfection of drone swarms (and therefore a symbiotic drone/human society) may be a long way off, but in a world that has been evolving toward increasing automation, it seems like an inevitable future all the same.

Wars have already become somewhat automated. The famous example is, of course, the Predator drones, helping us fight battles overseas from the confines of Nevada base stations. But the process of military automation began long before that. The introduction of radars, radios, and recorders into the fighter pilot’s repertoire, beginning in the 1950s, increasingly exported his job to machines and to those on the ground. It did not just change the pilot’s role; it changed his identity. He could no longer be the lonely hero soaring through the sky.

If technology can turn even the job of sky cowboy into a desk job, how might daily human/drone symbiosis change the role of the everyday person? Though multi-drone technology is not ready yet, many things are more automated than people realize. Our commercial airplanes practically fly themselves, and we already have self-driving cars roaming the streets. I shudder to think of the future: getting into an argument with the milk-delivery drone. I ordered 2%, not skim! I tell it. If there is nothing colder in this world than the meter maid, what will an automated meter maid be like?

A dystopian future may be unlikely, but the effect drones will have on our role as humans is still unclear. Will this new brand of automation free people up for more important pursuits, the way agriculture freed early humans to pursue more intellectual pastimes? Or, like the Industrial Revolution, will it serve to further separate us from our means of production? Either way, it may be time to redefine co-evolution.
