Stray Thought: Adam Stone's Home on the Web

Good Technology Doesn't Pretend to Be Human

The patent illustration from the 1979 application for the Lego minifigure, showing four perspective views of the figure in walking and seated poses.
USPTO, public domain

Confessions of a Self-Checkout Devotee

I like using self-checkout at the store. This comes as no surprise to my mother. As she tells it, when I was a kindergartener, any time I was asked what I wanted to be when I grew up, I’d happily declare that I would be a cashier at the supermarket. Years later, a summer behind the register at the Almacs in town cured me of whatever remained of that ambition, but even today, most of my grocery and pharmacy shopping trips end with me ringing myself up.

That’s not to say that I enjoy all self-checkout experiences. I particularly dislike those that slow down the process with clumsy and frankly insulting anti-theft measures. Despite retailers’ claims, I just don’t believe that self-checkout increases shoplifting risk. In fact, it’s well established that retailers lie about the amount and causes of theft they experience. It seems to me that the traditional approaches to shoplifting, like concealing items in your clothes, remain far more practical than trying to perform self-checkout sleight of hand.

Even accounting for that caveat, I accept I’m in the minority. I suspect media coverage overstates the case when outlets claim that self-checkout is a “failed experiment” (Tasting Table) that “nobody likes” (CNN) and “hasn’t delivered” (BBC). But I do accept that, on balance, people prefer human cashiers. Incidentally, my own experience as a human cashier does not suggest that this preference manifests as kindness.

So be it—I don’t mind being out of step with the mainstream. As an introvert, I don’t see paying for groceries or toilet paper as an experience that demands the human touch. I feel perfectly comfortable interacting with a tool to scan my items, calculate the total, and pay. My father falls into the opposite camp. I don’t know if he’s ever used a self-checkout lane, and he avoids ATMs so he can chat up the bank tellers.

A Waiter is More Than a Talking Order Form

Both of us, I submit, have reasonable preferences. It should be no problem for some of us to prefer interacting with people and others to prefer interacting with tools. I also don’t think these preferences are fixed. I might prefer the self-checkout lane, but I still value recommendations from a knowledgeable butcher or cheesemonger.

But my father and I both prefer to know when we’re dealing with a person and when we’re interacting with a tool. Until recently, this distinction was obvious, but we now live in the post-Turing-test age. Large language models (LLMs) give product designers the power to build conversational interfaces that mimic human interaction, and I think we are only beginning to grapple with the most effective and responsible ways to use these technologies.

This problem isn’t completely novel. Even before computers, people developed ways to address some of the shortcomings of natural language as an interface. Consider humble technologies such as invoices or order forms, perhaps like those used at sushi restaurants. These tools work in contexts where structured data is superior to narrative, even without computers involved, and their nature as tools is transparent.
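The advantage of a form over free-form prose can be sketched in a few lines of Python. This is purely illustrative: the `OrderLine` type and the sushi items are hypothetical, standing in for any paper or digital order form.

```python
from dataclasses import dataclass

# A hypothetical order-form line: every field is explicit, so nothing
# has to be inferred from narrative prose.
@dataclass
class OrderLine:
    item: str
    quantity: int

def total_pieces(order: list[OrderLine]) -> int:
    """Sum the quantities across all lines of the order."""
    return sum(line.quantity for line in order)

# A filled-in form is unambiguous in a way that the sentence
# "a couple of tuna rolls and some salmon nigiri, please" is not.
order = [OrderLine("tuna roll", 2), OrderLine("salmon nigiri", 4)]
print(total_pieces(order))  # 6
```

The tool's nature is transparent here in the same way a paper checklist's is: the customer fills in slots rather than negotiating with something pretending to converse.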

Beyond Preference: A Question of Trust

Maintaining that transparency is going to be the real challenge. Judging by the current crop of LLM-powered products hitting the market, it seems many designers assume that people who prefer human interaction will also prefer tools with human-like interfaces. I believe they are gravely mistaken. Even worse, I think this misunderstanding suggests that some designers would deliberately deceive their users into believing they are interacting with a human being when they are not.

Not only does this deeply misread users’ preferences; I contend that, as a rule, it’s unethical for product designers to mislead their users.

True, not every user cares whether they are interacting with a person or a tool. In fact, I must admit that I’m somewhat surprised by how many people don’t. But many care profoundly about that distinction, seeing it as lying at the heart of ethical reasoning.

This isn’t just a philosophical question. It’s an emerging challenge that should prompt us to rethink the way we design and use technology. The traditional divide between preferring human or automated service may soon be less relevant than a new question: Do you want to know whether you’re talking to a human or a machine?