Where’s the human touch?

By David Byrne

Last Thanksgiving I went to Whole Foods to get some last-minute odds and ends. Some months prior to that, a self-checkout line had appeared as an option at the store. On that November day, there were maybe three human checkout people, and since each one had a little line of people queued up, I tried the self-checkout. I had trouble scanning the Brussels sprouts, and a man came over and showed me how to do it. At the CVS close to me there is only one checkout person and four self-checkout devices. There is also a person there to help train customers to use the self-checkout machines. When I first saw that person in the store I thought, “These folks are training us to adapt to the ways of the machines, and in so doing are eliminating their own jobs.” At Whole Foods there used to be maybe a dozen checkout people. Now, as I said, there were maybe three.

Now, scanning groceries may not be the most fulfilling job in the world, and it’s true that retail’s narrow profit margins incentivize businesses to cut costs. But the disappearance of these jobs is not just about shrinking employment opportunities; it’s also about one more little human interaction going away.

Many of us enjoy our ability to shop, watch movies, and work remotely. I have a Zoom meeting in about half an hour, with people in different parts of the country. Things like that would be difficult to do in person. But I do start to ask myself, what is the trade-off here? We’re getting some amazing convenience, but are we losing something else, something that might be at least as valuable as convenience?

This separation from our fellows is not new. Cars granted individuals the choice to go where they wanted, when they wanted — and many of them chose to do it alone. Television and radio allowed us to be entertained at home; we didn’t have to go out and mix with other people. The telephone allowed us to communicate (orally, at least) with folks without meeting them face to face. I am on the board of a nonprofit, and we did our meetings via Zoom during the pandemic. It worked, and we stayed in touch, but the meetings were often enervating, leaving me drained. We communicate via a variety of signals — voice, body language, facial expressions — not all of which translate to picture-phone technology. When we started meeting in person again, the reverse happened: The meetings energized me, and I was excited by the ideas and proposals.

I am reminded of a famous quote by Margaret Thatcher, frequently (though incompletely) rendered as: “There is no such thing as society. There are only individuals.” I disagree with her: We are social animals, and that has not changed. But I can see that the concept of the autonomous “free” individual has become a commonly accepted idea, more or less, at least in the West. This idea implies that the individual has primacy, and that the rights of the individual should in many cases trump those of the group. In extreme cases this means folks will run red lights and decide that the truth is up to them.


This world of individuals is not a new idea, but it’s not that old either. In her recent book, Magnificent Rebels: The First Romantics and the Invention of the Self, Andrea Wulf proposes that the idea of the primacy of the individual gained favor and was first promoted in the 1790s by a handful of intellectuals in Jena, a university town in Germany. So, maybe not coincidentally, this emphasis on the individual emerged around the same time as the Industrial Revolution, when people were being herded into cities to perform repetitive, tedious jobs in concert with new machines. I wonder if something similar is happening today. Are new technologies atomizing us under the guise of convenience and choice? Are we becoming nations of individuals, as Thatcher viewed us, rather than a society?

Does all this technology have an agenda? Is it truly neutral, or could that espoused neutrality be hiding a bias toward a particular way of looking at the world and at people, and toward a particular way of being? Could this bias be unconscious, as most of our biases and assumptions are? Could the programmers and designers be leading us somewhere that they themselves are unaware of? Technology shapes us. It is not neutral. When you’re a hammer, everything looks like a nail. But what if everything is not a nail? Could what is paraded as inevitable actually be just one of many ways things could be?

As someone who uses a bicycle as my main means of transportation, I’m all too aware of the assumption in previous decades that we must accommodate and prioritize private cars. But that outcome was one possible path among many. In New York, the urban planner Robert Moses decided that the middle class and its cars should be prioritized over neighborhoods and public transportation. This enabled folks to isolate themselves in the suburbs, where, too often, everyone is alone in their own bubble.

I look around and it seems that much tech development and innovation over the last couple of decades has had an unspoken overarching agenda. This agenda has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug. It's not an unintended side effect; it's a desired feature. We might think Amazon was about making books available to us that we couldn't find locally — and it was, and what a brilliant idea — but maybe it was also just as much about eliminating human contact in finding and purchasing books.

The consumer technologies I am talking about don't claim or acknowledge that eliminating the need to deal directly with humans is their primary goal, but that seems to be the outcome in a surprising number of cases. Given how often that is the outcome, maybe it actually is the primary goal, even if no one consciously aimed at it. Judging by the evidence, it is hard to avoid that conclusion.

This, then, is the new norm. I am not saying that these tools, apps, and other technologies are not efficient and convenient; this is not a judgment. I am simply noticing a pattern and wondering if, in recognizing that pattern, we might realize that the technology we adopt and that gets developed is only one possible trajectory of many. There are other roads we could be going down, and the one we're on is not inevitable or the only one; it has been (possibly unconsciously) chosen.

I realize I'm making some wild assumptions and generalizations with this proposal, but I can empathize with the programmers and creators. I grew up happy but also found many social interactions extremely uncomfortable. I often wondered if there were secret rules that would explain human expressions and interactions to me. I still sometimes have social niceties "explained" to me. So I believe I can claim some insight into where this unspoken urge to avoid social interaction might come from.

To an engineer's mindset, human interaction can seem complicated, inefficient, noisy, and slow. Part of making something "frictionless" is getting the human part out of the way. The point is not that making a world to accommodate this mindset is bad, but that when one group has as much power over everyone else as the tech sector does, including over folks who might not share that worldview, there is the risk of a strange imbalance. The tech world is predominantly male. Testosterone combines with a drive to eliminate as much interaction with real humans as possible for the sake of "simplicity and efficiency." Do the math, and there's the future.

The evidence

Here are some examples of fairly ubiquitous consumer technologies that allow for less human interaction.

Online ordering and home delivery: Online ordering is hugely convenient. But Amazon, Instacart, and Deliveroo have not just cut out interactions at bookstores and checkout lines; they have eliminated all human interaction from these transactions, other than the (sometimes paid) online recommendations.

Digital music: With downloads and streaming, there is no physical store, so there are no snobby, know-it-all clerks to deal with. Whew, you might say. Some services offer algorithmic recommendations, so you don't even have to discuss music with your friends to know what they like or might recommend to you. The service knows what they like, and you can know, too, without actually talking to them. Is the function of music as a kind of social lubricant also being eliminated?

Ride-hailing apps: There is minimal interaction — people don't even have to tell the driver the address, or interact at all, if they don't want to.

Friction-free in-person shopping: Amazon has stores — even grocery stores! — with automated shopping. They're called Amazon Go and Amazon Fresh. The idea is that sensors will know what you've picked up. You can simply walk out with purchases that will be charged to your account, without any human contact.

Artificial intelligence: AI is often, though not always, better at decision making than humans. In some areas, we might expect this. For example, AI will suggest the fastest route on a map, accounting for traffic and distance, while we as humans would be prone to taking our tried-and-true route. Computer programs will soon complete much routine legal work, and financial assessments are now being done by machines. But some less-expected areas where AI is better than humans are also opening up. For example, it is getting better at spotting melanomas than many doctors.

Robot workforce: Factories increasingly have fewer human workers, which means no personalities to deal with, no agitating for overtime, and no illnesses. Using robots avoids an employer's need to think about workers' comp, health care, Social Security, Medicare taxes, and unemployment benefits.

Personal assistants: With improved speech recognition, people can increasingly talk to a machine, such as Siri, Google Assistant, or Alexa, rather than a person. Amusing stories abound as the bugs get worked out. A child says, "Alexa, I want a dollhouse" ... and lo and behold, the parents find one on their doorstep.

Video games (and virtual reality): Yes, some online games are interactive, but they are typically played in a room by one person jacked into the game. The interaction is virtual.

Automated high-speed stock buying and selling: A machine crunching huge amounts of data can spot trends quickly and act on them faster than a person can.

MOOCs: Massive open online courses offer opportunities for education with limited direct teacher interaction.

Social media: This is social interaction that isn't really social. While Facebook and others frequently claim to offer connection, and do offer the appearance of it, the fact is that a lot of social media is a simulation of real connection.

The metaverse: It's a digital world with no in-person human interaction.

What are the effects of less interaction?

Minimizing interaction has some knock-on effects — some of them good, some not. These are the externalities of efficiency, one might say.

For us as a society, less contact and fewer real interactions would seem to lead to less tolerance and understanding of difference, as well as more envy and antagonism. As we've seen recently, social media can increase divisions by amplifying echo-chamber effects and allowing us to live in cognitive bubbles. We are fed what we already like or what our friends like — or what someone has paid for us to see in an ad that mimics content. The algorithms exploit human nature and direct us toward more extreme content in an effort to keep us engaged. Needless to say, the social result is devastating. In this way, we actually become less connected, except to those in our in-group. While these technologies claim to connect us, the surely unintended effect is that they also drive us apart and make us sad and envious. That's the price of growth and engagement.

We have evolved as social creatures; that's what we are, and our ability to cooperate is one of the largest factors in our success. I would argue that social interaction and cooperation are things our tools can augment, if so designed, but will never replace.

When interaction becomes a strange, uncomfortable, and unfamiliar thing, then our environment will have changed, but we will not have changed who and what we are as a species. We will have simply disconnected from who and what we are. Often our rational thinking convinces us that much of our interaction can be reduced to a series of logical decisions, but we are not even aware of many of the layers and subtleties of those interactions. As behavioral economists will tell us, we don't behave rationally, even though we think we do. And followers of the 18th-century English statistician Thomas Bayes will tell us that interaction is how we revise our picture of what is going on and what will happen next.

I'd argue that in this world of less interaction there is a danger to democracy as well. Less interaction, even casual interaction, means one inevitably lives in a tribal bubble — and we know where that leads.

Is it possible that less human interaction might save us?

Humans are capricious, erratic, emotional, irrational, and biased in what sometimes seem like counterproductive ways. It often seems that our quick thinking and selfish nature will be our downfall. There are, it would seem, lots of reasons why getting humans out of the equation in many aspects of life might be a good thing.

But I'd argue that while our various irrational tendencies might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they persist because, more often than not, they offer the best way to deal with a situation.

Our irrational and malleable selves also mean that we are not fixed. We don't have to remain the racist homophobe we might have been at 12 years old. We change, we evolve — often due to having come in contact with other people.

What are we?

Antonio Damasio, a neuroscientist at the University of Southern California, wrote about a patient he called Elliot who had damage to his frontal lobe that made him unemotional. In all other respects he was fine — intelligent, healthy — but emotionally he was Spock. Elliot couldn't make decisions. He'd waffle endlessly over details. Damasio concluded that although we think decision making is rational and machinelike, it's our emotions that enable us to actually make decisions.

With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents, and unexpected connections and intuitions. Interaction, cooperation, and collaboration with others multiplies those opportunities.

We're a social species; we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot do alone. In his book Sapiens, Yuval Noah Harari suggests this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by an ability to believe in "fictions," such as nations, money, religions, and legal institutions. Machines don't believe in fictions — or not yet, anyway. That's not to say they won't surpass us. If less human interaction leads us to forget how to cooperate, then we lose our advantage.

Our random accidents and odd behaviors are fun — they make life enjoyable. I'm wondering what we're left with when there are fewer and fewer human interactions. Remove humans from the equation, and we are less complete as individuals and as a society.

"We" do not exist as isolated individuals. Sorry, Margaret. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.

David Byrne is a musician and artist who lives in New York City. He is the author of several books, including How Music Works. Versions of this piece originally appeared on his website, davidbyrne.com, and in MIT Technology Review.

This story originally appeared in the April 2023 issue of Rotary magazine.