Stray Thought: Adam Stone's Home on the Web


Why Every "AI" Conversation Feels Like Nonsense

Another “AI” Post?

I really didn’t want to write about artificial intelligence (AI) again so soon, and I promise this will be the last one for a little while, but I came away from my last two posts with a very unsettled feeling. It wasn’t the subject matter itself that bothered me, but the way I ended up writing about it.

In writing those posts, I made an intentional decision to be consistent with the vernacular use of the term “AI” when discussing a specific set of popular tools for generating text. Surely, you’ve heard of tools like ChatGPT, Claude, and Gemini. I did this because using more precise language is (1) awkward, (2) inconsistent with the terminology used in the podcast that prompted my posts, and (3) so unfamiliar to a layperson that accurate language comes off as pedantic and unapproachable. In the interest of making the posts simpler and more focused on the issues raised by the way these tools are being used, I ended up just accepting the common language used to talk about the tools.

But the truth is that I know better than to do that, and it bothered me. ChatGPT ≠ AI, even if it’s very common to talk and write about it that way. As time passed, I felt worse and worse about the possibility that by accepting that language, I gave the impression that I accept the conceptual framing that these tools are AI. I do not. In this post, I intend to make an addendum/correction to clarify my position and add some context.

A still image from the movie The Princess Bride showing Inigo Montoya looking at Vizzini. A caption in meme font reads: You keep using that word. I do not think it means what you think it means.

What’s Wrong with the Term “Artificial Intelligence”?

People have been confused about what “artificial intelligence” means for as long as they’ve used the term. AI, like all of computer science, is very young. Most sources point to the 1955 proposal for the Dartmouth Workshop as the term’s origin, and right from the start it seems to have been chosen in part to encompass a broad, uncoordinated field of academic inquiry.

By the time I was a computer science undergraduate (and, later, graduate student responsible for TAing undergrads) in the late 90s, little had changed. The textbook I used, Artificial Intelligence: A Modern Approach (AIMA) by Russell and Norvig, didn’t even attempt to offer its own definition of the term in its introduction. Instead, it surveyed other introductory textbooks and classified their definitions into four groups:

  • Systems that think like humans
  • Systems that act like humans
  • Systems that think rationally
  • Systems that act rationally

The authors describe the advantages and drawbacks of each definition and point out later chapters that relate to them, but they never return to the definitional question itself. I don’t think this is an accident. If you take AIMA as an accurate but incomplete outline, AI consists of a vast range of technologies including search algorithms, logic and proof systems, planners, knowledge representation schemes, adaptive systems, language processors, image processors, robotic control systems, and more.

Machine Learning Takes Over

So, what happened to all this variety? The very short answer is that one approach really started to bear fruit in a way nothing else had before. Since the publication of the first edition of AIMA, researchers produced a host of important results in the field of machine learning (ML) that led to its successful application across many domains. These successes attracted more attention and interest, and over the course of a generation, ML became the center of gravity of the entire field of AI.

If you think back to the technologies that were attracting investment and interest about 10 years ago, many (perhaps most) were being driven by advances in ML. One example is image processing—also called computer vision (CV)—which rapidly progressed to make object and facial recognition systems widely available. As someone who worked on CV back in the bad old days of the 90s, I can tell you these used to be Hard Problems. Another familiar example is the ominous “algorithm” that drives social media feeds, which largely refers to recommendation systems based on learning models. Netflix’s movie suggestions, Amazon’s product recommendations, and even your smartphone’s autocorrect all rely on ML techniques that predate ChatGPT.

Herein lies the first problem with the way people talk about AI today. AI originally encompassed an expansive collection of disciplines and techniques outside of machine learning, even if much of that diversity has since withered away. Today, I imagine most technologists don’t even consider things like informed search or propositional logic models to be AI anymore.

The Emergence of Large Language Models

In the midst of this ML boom, something unexpected happened. One branch of the AI family tree, natural language processing (NLP), advanced with startling speed once ML techniques were applied to it. Going back to my copy of AIMA, the chapter on NLP describes an incredibly tedious formalization of the structure and interpretation of human language. But bringing ML tools to bear on this domain obviated the need for formal analysis of language almost entirely. In fact, one of the breakthrough approaches is literally called the “bag-of-words model”.

The first edition of AIMA open to the NLP chapter. Visible on these pages are a complicated parse chart for a simple sentence (I feel it), a table labelling the chart with parsing procedures and grammatical rules, and a parse tree for a slightly more complex sentence.
A sample of what the NLP chapter of AIMA looked like before everything became a bag of words.
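
For readers who haven’t run into it, here is a minimal sketch of the bag-of-words idea in Python. It’s my own illustration, not anything from AIMA or from the products discussed here; the point is simply how much linguistic structure the approach throws away.

    # A minimal sketch of the bag-of-words idea: discard word order and
    # grammar entirely and keep only word counts.
    from collections import Counter

    def bag_of_words(text: str) -> Counter:
        """Map text to its word counts, ignoring order and structure."""
        return Counter(text.lower().split())

    a = bag_of_words("the dog bit the man")
    b = bag_of_words("the man bit the dog")
    print(a == b)  # True: the two sentences are indistinguishable to this
                   # representation, even though they mean opposite things.

Real systems layer weighting schemes and far larger vocabularies on top of this, but the core simplification is the same, and it stands in stark contrast to the parse charts pictured above.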

What’s more, ML-based language systems demonstrated emergent behavior, meaning they do things that aren’t clearly explained by the behavior of the components from which they are built. Even though early large networks trained on language data contained no explicit reasoning functionality, they seemed to reason anyway. This was the dawn of the large language model (LLM), the technology at the core of the major chatbot products and of nearly every other much-talked-about product under the colloquial AI umbrella today.

Here’s the second problem: people often use the term “AI” when they really mean this specific set of products and technologies, excluding everything else happening in the field of ML. When someone says “AI is revolutionizing healthcare,” they might be referring to diagnostic imaging systems, drug discovery algorithms, or robotic surgery assistance, or they could be talking about a system that processes insurance claim letters or transcribes and catalogs a provider’s notes. That uncertainty makes such claims nearly impossible to evaluate.

The Generativity Divide

There’s another important term to consider: “generative AI.” It describes tools that produce content, like LLM chatbots and image generation tools like Midjourney, as opposed to other ML technologies, like image processors, recommendation engines, and robot control systems. Often, replacing the overbroad “AI” in casual use with “generative AI” captures the right distinction.

And that’s an important distinction to draw! One unfortunate result of current “AI” discourse is that the failings of generative tools, such as their tendency to bullshit, become associated with non-generative ML technologies. Analyzing mammograms to diagnose breast cancer earlier is an extraordinarily promising ML application. Helping meteorologists create better forecasts is another. But they get unfairly tainted by association with chatbots when we lump them all together under the “AI” umbrella.

Consider another example: ML-powered traffic optimization that adjusts signal timing based on real-time conditions to reduce congestion. Such systems don’t generate content and don’t lie to their users. But when the public hears “the city is using AI to manage traffic,” they naturally imagine the same unreliable systems that invent bogus sources to cite, despite the vast underlying differences in the technologies involved.

That said, we can’t simply call generative AI risky and other AI safe. “Generative AI” is a classification based on a technology’s use, not its structure. And while most critiques of AI, such as its impact on education, are specific to its use as a content generator, others are not. All learning models, generative and otherwise, require energy and data to train, and there are valid concerns about where that data comes from and whether it contains (and thus perpetuates) undesirable bias.

The Business Case for Vague Language

Why does this all have to be so confusing? The short answer is that the companies developing LLMs and other generative tools are intentionally using imprecise language. It would be easy to blame this on investors, marketing departments, or clueless journalists, but that ignores the ways technical leadership—people who should know better—have introduced and perpetuated this sloppy way of talking about these products.

One possible reason for this relates to another term floating around: artificial general intelligence (AGI). This is also a poorly-defined concept, but researchers who favor building it generally mean systems with some level of consciousness, if not independent agency. For better or worse, many of the people involved in the current AI boom don’t only want to create AGI, they believe they are already doing so. Putting aside questions of both feasibility and desirability, this may explain some of the laxity in the language used. AGI proponents may be intentionally using ambiguous, overgeneralized terminology because they don’t want to get bogged down in the specifics of the way the technology works now. If you keep your audience confused about the difference between what’s currently accurate and what’s speculative, they are more likely to swallow predictions about the future without objection.

But I think that’s only part of what’s happening. Another motivation may be to clear the way for future pivots to newer, more promising approaches. Nobody really understands what allows LLMs to exhibit the emergent behaviors we observe, and nobody knows how much longer we’ll continue to see useful emergent behavior from them. By maintaining a broad, hand-wavey association with the vague concept of “AI” rather than more specific technologies like LLMs, it’s easier for these companies to jump on other, unrelated technologies if the next breakthrough occurs elsewhere.

Speaking Clearly in a Messy World

That makes it all the more important for those of us who do not stand to gain from this confusion to resist it. Writing clearly about these topics is challenging. It’s like walking a tightrope with inaccuracy on one side and verbosity on the other. But acquiescing to simplistic and vague language serves the interests of the AI promoters, not the community of users (much less the larger world!).

From now on, I’m committing to being more intentional about my language choices when discussing these technologies. When I mean large language models, I’ll say LLMs. When I’m writing about generative tools specifically, I’ll use “generative AI.” When I’m talking about machine learning more generally, I’ll be explicit about that, too. It might make my writing a bit more cumbersome, but this is a case where I think precise language and clear thinking go hand in hand. And anyone thinking about this field needs to be very clear about the real capabilities, limitations, and implications of these tools.

The stakes are too high for sloppy language. How we talk about these technologies shapes how we think about them, how we regulate them, and how we integrate them into our lives and work. And those are all things that we have to get right.


Is The Thought Still What Counts?

Updated 7 June 2025:
I use the term "AI" in this post in its non-specific, colloquial form, which I regret. I'm leaving this post as written, but I encourage you to read my subsequent post where I clarify my thinking and discuss better ways to write and think about this topic.

Last week I posted some thoughts on the impact new artificial intelligence (AI) tools are having on education. As if on cue, this article came out a few days later. It should come as no surprise that an article subheaded, “ChatGPT has unraveled the entire academic project,” missed the mark by a wide margin in my view. The article largely blamed students and placed effectively no responsibility on educators or administrators for actually ensuring that students learn. The only educational alternatives mentioned in the article are found in a throwaway line about blue books and oral exams.

A photo collage depicting a birthday card divided in half between a traditional card with cartoon lettering and syntax-highlighted JavaScript code over a black background

As it happens, this discourse touches on some additional thoughts I had about the Search Engine podcast episode (“Playboi Farti and his AI Homework Machine”) that prompted my previous post. In the middle of the episode, a guest relates to host PJ Vogt the story of a man who used ChatGPT to write a birthday card for his wife. She was moved to tears and told him it was the best card he’d ever given her. Both Vogt and his guest were disturbed by this (and, in the interest of full disclosure, I was too), but even more disturbed that teenagers who heard the story saw nothing wrong with it.

This story raises a couple of fascinating questions that I chose not to address in my previous post. First, when could AI be appropriate as a creative tool? And second, why do different people form such different moral judgments about its use?

The Classroom Context: A Different Issue

Before diving deeper, I want to clarify something I took for granted in my previous post about AI in education. Using AI to complete homework assignments is wrong because it defeats the purpose of education: providing students with experiences from which they learn and grow. When a student avoids writing an essay, regardless of exactly how they do it, they lose the opportunity to learn.

This is an age-old problem. For as long as there have been students, their short-term incentives haven’t been aligned with their long-term learning needs. From this perspective, asking what ought to be done about AI is akin to asking what ought to be done about essay mills or CliffsNotes. It’s a worthy question, but it doesn’t illuminate the larger ethical questions around responsible AI use in everyday life.

Getting Past Absolutism

There’s another argument I want to acknowledge without addressing in full. Many people consider all AI tools to be inherently unethical and strictly avoid their use. I’ve seen this view justified by concerns over the origins of the training data, the motives and behavior of the people and companies who profit from AI products, and environmental impacts.

For the purposes of this post, I’m going to put aside this valid perspective for a few reasons. First, it’s highly contingent on facts that are both disputable and subject to change. Models can be trained on legitimately obtained data, open source systems can mitigate concerns over profit-driven corporate actors, and environmental arguments overstate current resource consumption and underestimate efficiency gains.

Second, while I respect anyone who’s made the decision not to use AI themselves, this stance provides no guidance beyond withdrawal from a world where these tools are becoming commonplace. That’s not to say that everyone must accept any way any other person chooses to use AI. Rather, I think even those who find all AI use objectionable will find some uses more troubling than others. It’s worth exploring why those feelings differ, even if you have resolved to avoid these tools entirely.

Why Does the AI Birthday Card Feel Wrong?

What I find fascinating about the birthday card story is how differently people react to it. Let’s consider some thought experiments to clarify these differences.

Most people find store-bought greeting cards acceptable, even if they aren’t the most personal or thoughtful ways to express affection. What if instead, the man had asked his eloquent friend “Cyrano” to write the card for him? If he passed off Cyrano’s work as his own, I suspect most people would feel the same unease as with the AI-generated card. If, on the other hand, he was up front about commissioning Cyrano to write a special birthday message, it feels much less troubling.

I see three factors at work in my reactions to these scenarios:

  • Honesty vs. deception: Misleading someone about authorship feels wrong regardless of whether a source is human or AI. The teenagers who see no problem with the AI-generated card seem focused on the outcome rather than the deception. This may reflect a broader cultural shift regarding the importance of truth and authenticity that I find quite disturbing.
  • Authentic engagement: The husband’s own poorly written but heartfelt card would express authentic feelings. Even a card written by a friend who knows his wife could engage meaningfully with her and the relationship. The AI-generated card, on the other hand, seems more like the store-bought card: generic and lacking an authentic connection to the specific individuals involved.
  • Meaningful effort: We value gifts in part based on the sacrifice they represent, not just in terms of money but time and attention. Since AI tools are specifically designed to achieve better results with less effort, that sacrifice is explicitly reduced, diminishing the value of the gesture.

Of these three factors, honesty seems most fundamental, in that it’s difficult to imagine valuing work presented deceptively, no matter how much effort went into creating it or how well it engages with its subject. But all three factors contribute positively to the value and meaning of communication.

What Our Tools Say About Us

Perhaps this offers some insight into why some of us react so negatively to others’ use of AI. If you are more inclined to use it for work that you personally value less, then your decision to use AI communicates how little you value the task at hand. It should be no wonder that someone might take offense at receiving a supposedly personal token that signals it wasn’t worth your time or effort.

There’s another possibility that I think is worth considering. The teenagers who approved of the AI birthday card might be signalling something else entirely: feelings of incompetence. Growing up involves struggling with skills like self-expression while gradually developing mastery and confidence. Psychologist Erik Erikson describes this in the fourth stage of his developmental model as a crisis between “Industry and Inferiority.” According to Erikson, failing to successfully navigate that crisis could produce individuals who internalize a sense of inadequacy.

I wonder whether this reveals a cohort of students who, faced with the common struggle of learning how to express themselves (perhaps magnified by the trauma of pandemic-related isolation and educational disruption), have simply concluded that they aren’t capable of writing competently. If so, the resulting dynamic, where young people try to express themselves using AI because they feel incapable while others interpret that as insincerity or disengagement, would be tragic in more ways than one.

Finding a Way Forward Together

A world with AI in it is undoubtedly more ethically complicated than one without. All of us are navigating this new terrain together, even if we don’t see everything the same way.

In my own work, I have been applying three key questions when considering AI use:

  • Am I presenting my work honestly? Am I transparent about how I’ve used AI?
  • Am I engaging authentically with my subject? Does my use of AI allow me to maintain a genuine connection to the people and ideas involved?
  • Does my work meaningfully reflect my effort? Am I using AI as an enhancement or a substitute?

My thinking here is still in the formative stages, and I hesitate to call this even a framework. At best, I’m pulling together field notes that may some day mature into something more robust.

I am genuinely curious: What considerations beyond efficacy influence your decisions to use or avoid AI? Do the three factors I’ve identified resonate with you, or have you found other considerations more useful?

The birthday card anecdote may seem trivial, but it sits atop an iceberg of profound questions about the impact new technologies have on human relationships. Ethical questions that seemed academic until recently are now implicated in our everyday choices. A person moved to tears by a computer’s words under false pretenses is not a hypothetical—it is now a fact of life. The urgency with which we approach the task of developing ethical guardrails for these tools should reflect that.


AI and Homework in the Long Shadow of Long Division

Updated 7 June 2025:
I use the term "AI" in this post in its non-specific, colloquial form, which I regret. I'm leaving this post as written, but I encourage you to read my subsequent post where I clarify my thinking and discuss better ways to write and think about this topic.

I was catching up on my podcast backlog when I came upon an episode of Search Engine entitled “Playboi Farti and his AI Homework Machine” that raised some important questions about education in a world where students have access to artificial intelligence (AI) tools. In it, the podcast’s host, PJ Vogt, interviews a 13-year-old using the pseudonym Playboi Farti (😚🤌), who happily admits to using ChatGPT for his homework assignments. That conversation got me thinking about how AI tools are changing early education and led me to a key question: How can schools move beyond reactive policies and truly give students what they need?

A poster advertising the Pittsburg Visible-Writing machine from around 1900 illustrated in the art nouveau style
New York Public Library, The Miriam and Ira D. Wallach Division of Art, Prints and Photographs

It’s worth giving the entire episode a listen. In fact, there was a segment that sent me down another line of thinking that I’ll revisit in a later post. For now, I’ll focus on two segments from the podcast that illuminate the challenges of adapting pedagogy to changing technology.

The Unlearned Lessons of Long Division

In one early segment of the podcast, PJ Vogt recounts his childhood frustration with learning long division. As a student, he considered it “hard and stupid” since calculators could do the work more efficiently. As an adult, he feels justified in this assessment because he never uses long division in his daily life.

This anecdote highlights a fundamental problem with traditional math education that remains relevant today. When schools emphasize memorizing algorithms over developing conceptual understanding, students may never come to understand the point of learning math in the first place. Vogt’s experience is shared by generations of students (myself included) who encountered early math the same way: rote memorization, repetitive drills, and endless worksheets of identical problems. The result is as unfortunate as it is predictable: many students develop math anxiety, if not outright aversion, and never progress to understanding core mathematical concepts.

If you associate math with mechanical calculation, it makes sense to view a calculator as a math machine. In reality, calculators excel at computation but can’t contextualize a math problem, determine which operations to perform, or make meaning of results. In fact, people who don’t really understand math have an unnerving tendency to get wrong answers even with calculators at their disposal. In contrast, the everyday mathematical skill I use most often isn’t calculation but estimation, a feel for quantities and relations that helps me move through the world without reaching for my phone constantly.

America’s struggle with math education has created a vicious cycle that hinders reform. Both “new math” in the mid-20th century and more recent “common core” math standards faced resistance in part from adults who never developed comfort with basic arithmetic reasoning. Many of the parents who complain that they can’t help their kids with elementary school math homework are revealing gaps in their own understanding that their educational experiences should have addressed.

The underlying cause here is that memorizing tables and practicing algorithms does not by itself develop reasoning skills. Before students had access to calculators, it made more sense to build a math curriculum on the foundation of calculation. But schools adapted too slowly to the decreasing importance of calculation as a skill, and now the public is facing the consequences of that delay.

The lesson here is critical: new technologies can change the meaning and value of certain skills. When this happens, schools shouldn’t cling to outdated methods. Instead, they should focus on the underlying thinking skills that remain durably valuable. This lesson applies directly to the way schools should approach writing in the age of AI.

Beyond the Five-Paragraph Essay

The central question of the podcast—what should be done about school essay assignments—bears a striking parallel to the long division issue Vogt mentioned in passing. If educators simply ban AI tools and continue using traditional writing assignments, they’ll likely repeat the mistakes that have undermined early math education.

Treating the five-paragraph essay as inherently valuable is just as misguided as valorizing long division as a crucial mathematical skill. Both may have served purposes in their time, but their educational value is diminishing as new technology becomes widespread. If teachers continue to assign easily generated essays, I expect students will look back on them in later years as “hard and stupid” tasks that taught them little of lasting value.

Anyone who claims to know the best way to integrate AI tools into schools today should be viewed with extreme suspicion. But purely from a practical standpoint, AI detection tools simply don’t work well enough for teachers to enforce outright bans. Educators’ time is better spent rethinking the way writing is taught than policing its production.

In a world where a book report or informational essay can be generated in seconds, teachers need new methods to ensure that students learn how to take in information, think critically, and express themselves effectively. Just as elementary math education is gradually moving beyond calculation drills, writing instruction needs to decrease its reliance on formulaic, trivially automatable exercises. This will require creativity from educators, and flexibility and openness from parents.

One possible approach might be to tease apart the educational goals of traditional writing assignments. In the same way that older math curricula attempted to develop both calculation proficiency and numerical reasoning through arithmetic drills, a five-paragraph essay creates opportunities to develop multiple skills at once: research, reasoning, argumentation, and composition. Now that students can generate these essays automatically, these skills need more direct attention, perhaps in the form of structured class discussions and collaborative writing projects supported with continuous feedback.

I’m confident that creative, dedicated educators can develop effective new writing curricula if they aren’t forced to spend their limited time playing AI detective.

Teaching All Tomorrow’s Playboi Fartis

The concern over AI is just the latest example of an issue that has recurred throughout the history of education. Changes in the world, especially new technologies, have always disrupted established educational practices and forced schools to reconsider what and how they teach. The cultural anxiety and corporate hype surrounding AI make today’s stakes feel particularly high, but whether or not this is the greatest technological disruption in the history of education, it surely won’t be the last.

What’s most important is maintaining focus on the deeper purpose of education. Calculation shouldn’t overshadow mathematical reasoning. The five-paragraph essay can’t take priority over critical thinking and communication skills. Distinguishing between process, purpose, and outcome is essential when reevaluating longstanding practices.

Educational methods are themselves technologies, and like all technologies, they evolve over time. But education’s core mission—to develop intellectual capabilities and skills that help students thrive—remains constant. Holding fast to outdated approaches undermines that mission. Although adapting to technological change is challenging, it’s the only way to ensure that students get an education worth having.

It’s futile and counterproductive to engage in an arms race against the likes of Playboi Farti. Rather than focusing on preventing students from using AI tools, educators should develop teaching methods that can’t be undermined by those tools. Only then will they have any chance of finding ways to incorporate them constructively.


If You're Looking for Inspiring Technology, Grab a Guitar

It wasn’t long ago that computers were considered “high technology.” The “high” modifier mostly goes unspoken these days. When people say “tech,” they usually mean computing technology: networks, hardware, software, and the companies that build them. Other technological domains need to be distinguished with special monikers: biotech, cleantech, medtech, fintech, and the like. And increasingly, when people talk about tech, they don’t have much positive to say. This growing disenchantment stems from two sources: the business practices of tech companies and tech products themselves.

Google’s recent decision to force Gemini AI tools into all Workspace accounts exemplifies the way tech companies prioritize their business interests over user needs. The saying, “If you’re not the customer, you’re the product” may not be completely true, but it captures how it feels when companies alienate (if not outright exploit) their users. Thanks to this contentious relationship, the term “enshittification” has entered the lexicon to describe the gradual degradation of tech products once they have an established user base.

But bad business practices don’t tell the whole story. Many modern tech products disappoint by design. With a focus on capturing the attention of distracted users, designers pursue simplicity above all else, creating experiences where all meaningful choice has been stripped away. As users, we find ourselves senselessly plodding through predetermined flows or endlessly scrolling across a sea of worthless content. We’re missing out on more than just enjoyment; we’re losing our ability to imagine something better.

We’re Asking Too Little of Our Tools

A particularly apt expression of our diminished technological aspirations can be found in a viral tweet by author Joanna Maciejewska, who wrote:

You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.

This tweet resonated with so many people because the sentiment at its core is correct. Technology—not just AI—should free us from drudgery. But if that’s all we aspire to achieve, we’re aiming too low. Truly powerful technologies don’t just mitigate the tedium of daily life. Great technologies help us live in ways we never thought possible.

The patent illustration on the application for the Gibson Les Paul electric guitar from 1955 showing the shape of the guitar head, neck, and body with detail views of the bridge
USPTO public domain

Of Guitars and Chainsaws

Some of the best examples of transformative technology aren’t “high tech” at all. Consider the guitar. It might not be the first thing that comes to mind as a form of technology, but that’s exactly what it is: a human-designed tool created to achieve human ends.

What makes a guitar special is that it does more than merely enhance its user’s capabilities—it creates new ones. Put another way, a guitar doesn’t just improve the human body’s natural music-making ability; it creates distinct new modes of musical expression. Generations of musicians have explored those modes without exhausting them. And, importantly, a guitar player and a piano player are not exploring a shared space of musical possibilities with different tools. Instead, each one is engaging with a distinct set of complex interactions between player and instrument. The music is shaped by the instrument, and that relationship expands the musician’s creative potential.

Another example may help illustrate this point. When you use a chainsaw to cut branches, it’s simply a better tool for an existing task. But in the hands of a chainsaw sculptor, it’s far more than a faster chisel. It’s a distinctly different way to sculpt, as reflected in both the creative process and the final work.

Modern Tools, Traditional Principles

Nothing prevents software and other “high tech” products from offering this kind of expressive range, and some do. A graphics editor like Adobe Photoshop isn’t an improved paintbrush; it’s an entirely distinct tool for visual creation. A spreadsheet like Microsoft Excel, far from being merely a better calculator, gives users entirely new ways to model, analyze, and express data. The integrated development environments (IDEs) used by software engineers don’t just make coding faster, they fundamentally change the relationship between code and coder.

The world of digital music, which dates back to the earliest days of computing, offers still more examples. Today’s digital audio workstations let musicians mix and edit tracks in ways that were impossible with the studio tools of yesterday. Audio programming languages allow artists to express music in mathematical terms. Synthesizers create entirely new instruments, free from the limitations of the physical world. It would be a mistake to consider these simply better versions of traditional music tools. They are better thought of as wholly new tools that open new creative possibilities for artists to explore.

Building Tools Worth Mastering

To create more transformative technologies, those of us in the industry must fundamentally reconsider our approach. The push to build simplistic, restrictive products for minimally engaged users is self-defeating. The tech industry serves neither itself nor its users by building every product as if it were a social media app or a shopping website.

This misguided approach was recently highlighted when an AI company’s CEO claimed that people don’t enjoy making music. Satisfying as it would be, I’m not going to spend a lot of time dropping the elbow on these shallow, foolish statements. I bring them up because they so clearly articulate how poorly many industry leaders understand their users’ needs.

Conventional wisdom in our industry would consider guitars “bad” products. They take time to learn, allow for mistakes, and can be frustrating to master. But these are not flaws; they are common characteristics of profoundly empowering tools. To be clear, I am not arguing against user-friendly design. Rather, I’m saying that too often, we build products with a feeble grasp of who the user is and what genuinely serves their interests. After all, I’ve never met a guitarist who complained that their instrument isn’t user friendly enough.

The tech industry, driven by social and commercial forces, puts ever-increasing investment behind products that are easy to pick up but have limited value. That’s the right approach in some cases, but it shouldn’t narrow our broader understanding of what technology can be. We risk forgetting that the most powerful technologies are those that grow with us and challenge us to do and be more.

The people of the future deserve tools that expand human potential. The goal shouldn’t just be to make life easier—it should be to make new ways of living possible.


Good Technology Doesn't Pretend to Be Human

The patent illustration on the application for the Lego minifigure from 1979 showing four perspective views of the figure in walking and seated poses
USPTO public domain

Confessions of a Self-Checkout Devotee

I like using self-checkout at the store. This comes as no surprise to my mother. As she tells it, when I was a kindergartener, any time I was asked what I wanted to be when I grew up, I’d happily declare that I would be a cashier at the supermarket. Years later, a summer behind the register at the Almacs in town cured me of whatever remained of that ambition, but even today, most of my grocery and pharmacy shopping trips end with me ringing myself up.

That’s not to say that I enjoy all self-checkout experiences. I particularly dislike those that slow down the process with clumsy and frankly insulting anti-theft measures. Despite retailers’ claims, I just don’t believe that self-checkout increases shoplifting risk. In fact, it’s well established that retailers lie about the amount and causes of theft they experience. It seems to me that the traditional approaches to shoplifting, like concealing items in your clothes, remain far more practical than trying to perform self-checkout sleight of hand.

Even accounting for that caveat, I accept I’m in the minority. I suspect media coverage overstates the case in claiming that self-checkout is a “failed experiment” (Tasting Table) that “nobody likes” (CNN) and “hasn’t delivered” (BBC). But I do accept that, on balance, people prefer human cashiers. Incidentally, my own experience as a human cashier does not suggest that this preference manifests as kindness.

So be it—I don’t mind being out of step with the mainstream. As an introvert, I don’t see paying for groceries or toilet paper as an experience that demands the human touch. I feel perfectly comfortable interacting with a tool to scan my items, calculate the total, and pay. My father falls into the opposite camp. I don’t know if he’s ever used a self-checkout lane, and he avoids ATMs so he can chat up the bank tellers.

A Waiter is More Than a Talking Order Form

Both of us, I submit, have reasonable preferences. It should be no problem for some of us to prefer interacting with people and others to prefer interacting with tools. I also don’t think these preferences are fixed. I might prefer the self-checkout lane, but I still value recommendations from a knowledgeable butcher or cheesemonger.

But my father and I both prefer to know when we’re dealing with a person and when we’re interacting with a tool. Until recently, this distinction was obvious, but we now live in the post-Turing-test age. Large language models (LLMs) give product designers the power to build conversational interfaces that mimic human interaction, and I think we are only beginning to grapple with the most effective and responsible ways to use these technologies.

This problem isn’t completely novel. Even before computers, people developed ways to address some of the shortcomings of natural language as an interface. Consider humble technologies such as invoices or order forms, perhaps like those used at sushi restaurants. These tools work in contexts where structured data is superior to narrative, even without computers involved, and their nature as tools is transparent.

Beyond Preference: A Question of Trust

Maintaining that transparency is going to be the real challenge. Judging by the current crop of LLM-powered products hitting the market, it seems many designers assume that people who prefer human interaction will also prefer tools with human-like interfaces. I believe they are gravely mistaken. Even worse, I think this misunderstanding suggests that some designers would deliberately deceive their users into believing they are interacting with a human being when they are not.

Not only does this deeply misread users’ preferences; I contend that, as a rule, it’s unethical for product designers to mislead their users.

True, not every user cares whether they are interacting with a person or a tool. In fact, I must admit that I’m somewhat surprised by how many people don’t. But many care profoundly about that distinction, seeing it at the heart of ethical reasoning.

This isn’t just a philosophical question. It’s an emerging challenge that should prompt us to rethink the way we design and use technology. The traditional divide between preferring human or automated service may soon be less relevant than a new question: Do you want to know whether you’re talking to a human or a machine?