Something remarkable happened yesterday, not remarkably good but remarkably crazy. I was riding in one of the new group taxis that have taken over New York City, and we were traveling from Midtown West to Midtown East. I was the next to be dropped off, and there were umpteen routes we could take to get to where I was headed. The black Suburban’s GPS, which had the singing voice of a chirping bird, pointed us to cross the island of Manhattan, not through the park, but via a particular commercial street. And so we did.
The problem is that anyone with a brain who knew anything about Manhattan would also know that the street the GPS was telling us to cross was a terrible option and the last street on earth one would want to choose in good conditions, much less the conditions on that particular day. A human brain with intelligence and life experience, that could factor in the context of rush hour, pouring rain, construction, and a bridge set at the east end of exactly that street, would know that any other path would be a better option to get to where I was going. But alas, technology told us to go that way—and so we did.
After sitting in entirely stopped traffic for ten minutes and then crawling bumper to bumper for another ten, just to travel half a city block, I asked the driver if he could get off this particular street and take a different route, to which he replied, “But the GPS tells me that this is my path.” “But what happens if we know better than what it tells you to do?” I asked. While I don’t remember his exact words, the message was that regardless of what we in the car knew to be true, he had to follow the directions of the computer. If the computer chirps it, we do it.
The fact that this path might be the shortest physical distance between the two points was irrelevant at this time of day, with this particular weather, and with the reality of urban planning. Nonetheless, we honored the computer’s determinant, geographical distance, as if it were the only important element in making this decision.
Five minutes later, still moving an inch at a time, I asked the driver if it would be possible for him to text the company and tell them that unforeseen (by the computer) conditions had rendered its usual genius inaccurate, and to inquire whether we humans could override its intelligence and take another route. He told me at this point, 25 minutes into the street crossing, that only the passenger could text the office to tell them that real life dictated a route other than what the computer indicated. But he certainly couldn’t do that. When I then asked him why he had not suggested that I text the company earlier, when we were talking about the traffic, he looked confused and reiterated that he had to do what the computer told him to do.
I didn’t say anything after that, but I did get out of the van and walk in the pouring rain the rest of the way. What I knew about traffic and my city didn’t matter, but what I knew about myself did matter, and that was that I needed to be out of that black Suburban as soon as possible.
Have we gone mad as a species? Are we so anxious to surrender our authority, to not have to think, not be in charge, that we will follow any computer that tells us what to do—even when we know better? Do we really want to be passive lab rats? What has happened to our respect for and trust in our own intelligence, and our ability to figure things out for ourselves?
While algorithms can decide a lot of things, they cannot substitute for human intelligence, which can factor in the wisdom of experience, context, circumstance, psychology and a whole lot of other factors too, all at once. To make wise decisions we need a lot more than just facts, and yet, we are behaving as if data is the central key to a good life.
In truth, the expression on my driver’s face when I asked him if he could take another route was the spookiest thing I encountered, and what made me feel most hopeless. This grown man, who I am sure has lived a life filled with experience, and who probably has a tremendous amount of wisdom, looked like someone who had been vacuumed of his own life force, his basic humanness. He looked, dare I say it, like a robot.
How can we regain authority in our own lives? This is the question that is not just interesting, but existentially urgent. How can we stop ourselves from becoming robots, handing over our intelligence and life force to the computer? How far are we from a time when the computer chirps us a message that is not just inconvenient, but actually destructive?
The human brain has the capacity not just to gather, store, and link data, but also to bring to that data an intelligence and wisdom of experience that is not just profoundly important, but also changes that data into something else. We need more than information to live a good life; we need the ability to process and to make meaning, which (still) only humans can provide.
In the meantime, use the computer to text the head office and tell them that the human on board knows better. Grab the reins back in your own life. And remember, we humans, at least for now, are still the ones in charge—if we decide to be.
Copyright 2015 Nancy Colier