It can be hard to perceive the subtle and gradual ways I change over time. Rationally, I know it’s happening, but until something specific tells me otherwise, I tend to think that whatever is true about me today was true about me in the past and vice versa. Evidence of these long, slow changes can be hard to come by. There are obvious things, of course: I have no shortage of pictures that prove I don’t look like I did when I was younger. But it’s not every day that I’m confronted with a way my moral intuitions have shifted.
It came up unexpectedly, as moments of self-realization often seem to. My daughter and I were watching an episode of Mythbusters. If you’re unfamiliar with the series, it aired on Discovery from 2003 to 2018, and its original premise was for a crew of special effects experts to film experiments confirming or disproving urban legends and other popular folklore. Early in the show’s run, it expanded its focus to include movie scenes, internet rumors, news stories, and more.
The specific episode we watched was “Paper Crossbow” from 2006. At this point in the show’s run, most episodes had a two-story structure. The original hosts, Adam and Jamie, tested one myth while the “build team” worked on another, with the edit cutting back and forth between the two. In this episode, the hosts tested whether they could build a lethal crossbow from newspaper while the build team experimented with household uses for vodka.
As a side note, we watched this episode as a family not long after we ourselves used some cheap vodka to get the smoky smell out of some clothes and camping equipment that had been too close to a fire burning uncured firewood, so we already knew firsthand the result of one of the myths tested in that episode.
It was a different vodka myth that caught my attention, and it wasn’t the result of the test but the experimental setup that was striking. The build team tested whether vodka would act as an insecticide if sprayed on bees. They got bees from a beekeeper, divided them into two ventilated acrylic boxes, and sprayed the contents of one box with water and the other with vodka to see which would kill more bees. Now, with a little bit of thought, it’s clear that even if no bees were killed by the vodka or water (and, in fact, only two of the bees in the water box died during the experiment), all of the bees were going to die. Honeybees depend on their hives to survive; removed from the hive, they die.
Don't worry, they're only sleeping... for now
I have no doubt that when I originally watched this episode, I had no problem with it. But now, it strikes me as needlessly cruel to treat bees this way. Insofar as this change is part of a larger shift in my moral intuitions around the treatment of animals, it’s not a very large shift. I still eat meat, though perhaps less than I did 20 years ago, and I am not shy about killing bugs in my living space. But I can say with some confidence that if I had been part of the Mythbusters build team in 2006, I would have seen nothing wrong with the experiment, while I would object to running it in 2025.
It’s possible that some of this shift has to do with the bees themselves. Not long after the episode originally aired, I, like many other Americans, started to learn about colony collapse disorder and the ecological and agricultural importance of honeybees more generally. Being aware of those issues certainly makes me wonder what the beekeeper who provided the bees was thinking, but I don’t think it explains how my reaction changed. After all, I’m experiencing an intuitive response to the experiment on moral terms, not a calculated analysis of the benefits and drawbacks of killing important and valuable animals for a science education show.
Seriously, what were you thinking?
Instead, I think this is just an example of an imperceptibly gradual change at the heart of the way I see and judge the world. I hope it’s for the better: I would like to think that I grow more attuned to suffering and more willing to speak and act against it as I get older. Or, maybe the duty of care I feel for the world applies to a larger set of things than it did before. I don’t know for sure, but I can say that I’m clearly not the same as I was when I watched this show back in 2006. And that’s a bit of valuable self-understanding.
I encourage you, too, to consider how your moral orientation has changed over time, and how it hasn’t. Just as you might look at your picture in an old yearbook, consider how the passage of time has deflected your moral compass. Don’t just think about when you have been most comfortable—it’s almost a cliche that the worst people lose the least sleep over their behavior. Try to find these points of difference and explore them. Because the deeper truth about the way time changes us is that it reveals we are never finished products. There is always more change to come.
Updated 7 June 2025:
I use the term "AI" in this post in its non-specific, colloquial form, which I regret. I'm leaving this post as written, but I encourage you to read my subsequent post where I clarify my thinking and discuss better ways to write and think about this topic.
Last week I posted some thoughts on the impact new artificial intelligence (AI) tools are having on education. As if on cue, this article came out a few days later. It should come as no surprise that an article subheaded, “ChatGPT has unraveled the entire academic project,” missed the mark by a wide margin in my view. The article largely blamed students and placed effectively no responsibility on educators or administrators for actually ensuring that students learn. The only educational alternatives mentioned in the article are found in a throwaway line about blue books and oral exams.
As it happens, this discourse touches on some additional thoughts I had about the Search Engine podcast episode (“Playboi Farti and his AI Homework Machine”) that prompted my previous post. In the middle of the episode, a guest relates to host PJ Vogt the story of a man who used ChatGPT to write a birthday card for his wife. She was moved to tears and told him it was the best card he’d ever given her. Both Vogt and his guest were disturbed by this (and, in the interest of full disclosure, I was too), but even more disturbed that teenagers who heard the story saw nothing wrong with it.
This story raises a couple of fascinating questions that I chose not to address in my previous post. First, when could AI be appropriate as a creative tool? And second, why do different people form such different moral judgments about its use?
The Classroom Context: A Different Issue
Before diving deeper, I want to clarify something I took for granted in my previous post about AI in education. Using AI to complete homework assignments is wrong because it defeats the purpose of education: providing students with experiences from which they learn and grow. When a student avoids writing an essay, regardless of exactly how they do it, they lose the opportunity to learn.
This is an age-old problem. For as long as there have been students, their short-term incentives haven’t been aligned with their long-term learning needs. From this perspective, asking what ought to be done about AI is akin to asking what ought to be done about essay mills or CliffsNotes. It’s a worthy question, but it doesn’t illuminate the larger ethical questions around responsible AI use in everyday life.
Getting Past Absolutism
There’s another argument I want to acknowledge without addressing in full. Many people consider all AI tools to be inherently unethical and strictly avoid their use. I’ve seen this view justified by concerns over the origins of the training data, the motives and behavior of the people and companies who profit from AI products, and environmental impacts.
For the purposes of this post, I’m going to put aside this valid perspective for a few reasons. First, it’s highly contingent on facts that are both disputable and subject to change. Models can be trained on legitimately obtained data, open source systems can mitigate concerns over profit-driven corporate actors, and environmental arguments overstate current resource consumption and underestimate efficiency gains.
Second, while I respect anyone who’s made the decision not to use AI themselves, this stance provides no guidance beyond withdrawal from a world where these tools are becoming commonplace. That’s not to say that everyone must accept any way any other person chooses to use AI. Rather, I think even those who find all AI use objectionable will find some uses more troubling than others. It’s worth exploring why those feelings differ, even if you have resolved to avoid these tools entirely.
Why Does the AI Birthday Card Feel Wrong?
What I find fascinating about the birthday card story is how differently people react to it. Let’s consider some thought experiments to clarify these differences.
Most people find store-bought greeting cards acceptable, even if they aren’t the most personal or thoughtful ways to express affection. What if instead, the man had asked his eloquent friend “Cyrano” to write the card for him? If he passed off Cyrano’s work as his own, I suspect most people would feel the same unease as with the AI-generated card. If, on the other hand, he was up front about commissioning Cyrano to write a special birthday message, it feels much less troubling.
I see three factors at work in my reactions to these scenarios:
Honesty vs. deception: Misleading someone about authorship feels wrong regardless of whether a source is human or AI. The teenagers who see no problem with the AI-generated card seem focused on the outcome rather than the deception. This may reflect a broader cultural shift regarding the importance of truth and authenticity that I find quite disturbing.
Authentic engagement: The husband’s own poorly written but heartfelt card would express authentic feelings. Even a card written by a friend who knows his wife could engage meaningfully with her and the relationship. The AI-generated card, on the other hand, seems more like the store-bought card: generic and lacking an authentic connection to the specific individuals involved.
Meaningful effort: We value gifts in part based on the sacrifice they represent, not just in terms of money but time and attention. Since AI tools are specifically designed to achieve better results with less effort, that sacrifice is explicitly reduced, diminishing the value of the gesture.
Of these three factors, honesty seems most fundamental, in that it’s difficult to imagine valuing work presented deceptively, no matter how much effort went into creating it or how well it engages with its subject. But all three factors contribute positively to the value and meaning of communication.
What Our Tools Say About Us
Perhaps this offers some insight into why some of us react so negatively to others’ use of AI. If you are more inclined to use it for work that you personally value less, then your decision to use AI communicates how little you value the task at hand. It should be no wonder that someone might take offense at receiving a supposedly personal token that signals it wasn’t worth your time or effort.
There’s another possibility that I think is worth considering. The teenagers who approved of the AI birthday card might be signaling something else entirely: feelings of incompetence. Growing up involves struggling with skills like self-expression while gradually developing mastery and confidence. Psychologist Erik Erikson describes this in the fourth stage of his developmental model as a crisis between “Industry and Inferiority.” According to Erikson, failing to successfully navigate that crisis can produce individuals who internalize a sense of inadequacy.
I wonder whether this reveals a cohort of students who, faced with the common struggle of learning how to express themselves (perhaps magnified by the trauma of pandemic-related isolation and educational disruption) have simply concluded that they aren’t capable of writing competently. If so, the resulting dynamic, where young people try to express themselves using AI because they feel incapable while others interpret that as insincerity or disengagement, would be tragic in more ways than one.
Finding a Way Forward Together
A world with AI in it is undoubtedly more ethically complicated than one without. All of us are navigating this new terrain together, even if we don’t see everything the same way.
In my own work, I have been applying three key questions when considering AI use:
Am I presenting my work honestly? Am I transparent about how I’ve used AI?
Am I engaging authentically with my subject? Does my use of AI allow me to maintain a genuine connection to the people and ideas involved?
Does my work meaningfully reflect my effort? Am I using AI as an enhancement or a substitute?
My thinking here is still in the formative stages, and I hesitate to call this even a framework. At best, I’m pulling together field notes that may someday mature into something more robust.
I am genuinely curious: What considerations beyond efficacy influence your decisions to use or avoid AI? Do the three factors I’ve identified resonate with you, or have you found other considerations more useful?
The birthday card anecdote may seem trivial, but it sits atop an iceberg of profound questions about the impact new technologies have on human relationships. Ethical questions that seemed academic until recently are now implicated in our everyday choices. A person moved to tears by a computer’s words under false pretenses is not a hypothetical—it is now a fact of life. The urgency with which we approach the task of developing ethical guardrails for these tools should reflect that.
I first encountered Bryan Stevenson and his work nine years ago. Google, my employer at the time, invited him to speak about his nonprofit, the Equal Justice Initiative (EJI), which provides legal representation to people who have been subjected to unjust treatment during criminal proceedings or incarceration. His talk, which is posted on YouTube, moved me profoundly, and I went on to read his book, Just Mercy.
A core idea motivating Bryan Stevenson’s work, laid out both in that talk and in Just Mercy, is that “Each of us is more than the worst thing we’ve ever done.” In his context, that “worst thing” could be quite terrible. While its work includes representing the falsely accused, EJI often represents people who truly have committed heinous crimes.
It takes a special kind of moral clarity to represent and seek justice for a murderer who has been abused in prison. After reading Just Mercy, I found myself much more attuned to the many ways our society discards those deemed unworthy of fair treatment. We can’t call justice a foundational social value if it is contingent. If there are things we can do to have the protection of the law withdrawn from us, then that protection isn’t really meaningful. Everyone whose work touches criminal investigation, trial, or punishment should be held to the highest standards because of, not despite, their impact on those who may have done horrible things.
Lately, however, I’m troubled to hear this language of mercy and second chances voiced in some unexpected places.
When venture capital firm Andreessen Horowitz (a16z) hired a man freshly acquitted after choking Jordan Neely to death on the New York subway, they told their investors, “We don’t judge people for the worst moment in their life.” Notably, nobody disputes the fact that a16z’s new investment associate killed a man on the floor of a subway car. He was acquitted only because a jury did not deem that act to be a crime.
Similarly, when a 25-year-old staff member of the new “Department” of Government Efficiency resigned over vicious tweets endorsing racism and eugenics, the Vice President of the United States dismissed it as “stupid social media activity” that shouldn’t “ruin a kid’s life.” The staffer was promptly reinstated into his role as a “special government employee.”
These echoes of Stevenson’s words might sound familiar, but they deserve careful scrutiny. Have a16z or the current administration ever invoked mercy as a broader goal? Not so far as I can tell. Beyond a handful of podcast episodes posted five years ago, Andreessen Horowitz’s engagement with criminal justice seems limited to investing in drones and surveillance technology. Their founding partner made such large personal donations to the Las Vegas police that he felt the need to write a petulant response when investigated by journalists. Regardless of how worthwhile it may be to buy drones and cappuccino machines for the police, I think it’s fair to say that these investments do nothing to advance the cause of building a merciful society. As for the administration, its tough-on-crime rhetoric speaks for itself.
So what’s really happening here? One obvious answer is that hiring a killer and refusing to accept the resignation of an unapologetic bigot are ways to sanction their behavior. We like to believe we maintain social norms against killing someone when it could be avoided and against racist public statements. Rejecting these norms requires some affirmative signal, and that’s what a16z and the administration are providing.
This reveals the first crucial distinction: Stevenson calls on society to recognize the rights of those who do wrong. In contrast, these recent cases are declarations that certain acts are not wrong at all, and even more troublingly, they suggest that those who still maintain longstanding social norms are themselves the wrongdoers.
But there is a second important difference, one hidden by rhetorical sleight of hand. When EJI takes a case, they fight for fundamental civil rights: the right to a fair trial, to representation, to be sentenced fairly, and to humane punishment. By contrast, in these other cases, the talk of “ruined lives” is a misdirection disguising what’s really happening: the loss of positions of extraordinary influence and privilege.
When the Vice President claims a racist’s life has been ruined, he’s talking about someone losing a powerful role in the federal government, treating an immense privilege as though it were a basic right. And if we take a16z at their word, they seem to be claiming that it would be unfair not to elevate a killer to the investing team at one of Silicon Valley’s biggest venture firms. They’ll have to forgive my skepticism that you or I would get very far with that approach if we applied for such a coveted role.
I’m reminded of the 2018 Brett Kavanaugh Supreme Court confirmation hearings, when supporters claimed that sexual assault allegations against him were intended to “ruin his life.” Obviously, they didn’t, but even if they had prevented his appointment to the Supreme Court, was that his right? Would his life have been ruined had he instead returned to his previous role as a federal judge on the DC Circuit Court of Appeals? I’d wager many people would gladly accept that kind of “ruined” life.
But of course, this is nothing more than a flimsy, bad faith attempt to cloak approval for violence and eliminationism in lofty language. If you spend your time, like Bryan Stevenson, ensuring that the poor and marginalized are afforded basic rights and dignities, then I believe in your commitment to mercy. If, on the other hand, you spend your time granting vast privileges to people who harm the poor and marginalized, then you’re not showing mercy—you’re showing your hand.
One of the best things I did last year was to read Robert Caro’s classic 1974 biography The Power Broker: Robert Moses and the Fall of New York. I finally tackled all 1,162 pages of it (plus 80 more pages of notes), so it can be retired from the shelf of shame where I keep the massive books I want to read but never get around to. (Look out, Infinite Jest.)
I was inspired by the 99% Invisible podcast, which covered The Power Broker in a series of episodes that ran like a virtual book club. Now that all 12 episodes have been posted, you can follow along at your own pace, but I recommend taking your time with it, because there is so much to enjoy about the book.
Calling The Power Broker a biography doesn’t do it justice. More than a picture of one (admittedly significant) individual, it is an exploration of power, how power flows both through and around democratic institutions in America, and how power etches itself into the physical structure of our cities, highways, parks, and homes. It is also, in its way, a love letter to New York in all its multitudinous possibilities: the many faces of New York experienced by its residents as well as the many New Yorks that once existed or might have existed. It is a monumental work written with extraordinary grace.
Unlike readers in 1974, I approached the book already aware of Robert Moses’ reputation as a villain of urban planning. While I found ample evidence supporting that view, I also found something more profound: an invitation to understand a man who caused immense harm to the city he claimed to serve without excusing or minimizing that harm. Nowhere is this challenge more powerfully presented than in the book’s conclusion, where Robert Caro frames and reframes Moses, forcing readers to confront the complexity of this figure, contradictions and all.
The final chapter, simply entitled “Old,” finds Moses just after the end of his nearly 44-year-long career as a public official, a time during which he never once held elected office. By this point, readers have witnessed how Moses reshaped America’s largest city through an unprecedented campaign of public development. The New York we know today would be unrecognizable without his influence. But readers have also seen his utter contempt for many of the city’s people, including but not limited to his famous and flagrant racism. Again and again, Moses callously dismissed those he deemed unworthy simply for being in his way.
The cost New York paid to realize Moses’ vision is staggering. From the benighting of Sunset Park beneath the Gowanus Parkway to the way the Cross-Bronx Expressway chopped up East Tremont like a meat axe—Moses’ words, not Caro’s—the impact on these broken neighborhoods and displaced families is impossible to fully comprehend. The bill for all this so-called development has still not been fully paid. Today, it’s a cost paid by travelers and commuters wasting their time in traffic and by residents leading lives made shorter and sicker by pollution.
Yet in the final pages of the book, Caro achieves something remarkable. He presents us Moses at nearly 80 years old, stripped of power and fading from relevance. Thrust into retirement, Moses shrinks into a frustrated and disappointed figure. Despite receiving occasional recognition that falls short of his grandiose expectations, he finds himself increasingly criticized by the very public that once celebrated him. Somehow, Caro manages to inspire a flicker of sympathy for this vain, arrogant bully who deceived and mistreated so many for so long in so many ways.
And then, in the famous closing paragraphs of the book, Caro subtly, masterfully, fractures that sympathy.
In private, his conversation dwelt more and more on a single theme—the ingratitude of the public toward great men. And once, invited by the Church to speak at the dedication in Flushing Meadows Park of the Excedra, a huge, marble bench for reflection donated by the Roman Catholic Diocese of New York, he gave vent to his feeling in public. Turning to a high church official who was also an old friend, his voice booming out over the public address system, he said:
“Someday, let us sit on this bench and reflect on the gratitude of man.”
Down in the audience, the ministers of the empire of Moses glanced at one another and nodded their heads. [Moses] was right as usual, they whispered. Couldn’t people see what he had done?
Why weren’t they grateful?
Caro spends the bulk of the last chapter reminding readers that, despite his posturing, Moses is merely a man, as pitiful and flawed as any. But this final anecdote is a warning that sympathy cannot excuse Moses. He is humiliated in his dotage, but unremorseful. He demands thanks, not forgiveness, for the damage he caused.
There is a powerful lesson here. It’s tempting to write off people who do harm as monsters, but at some level, this means falling into the same trap that enabled Moses’ worst acts: the failure to see others as fully human. Caro challenges readers to fully apprehend Moses—or, I’d argue, any historic wrongdoer—by maintaining moral clarity about the damage he did while resisting the urge to anathematize him as a caricature of evil.
In our current moment, when public discourse rewards superficial judgments that fit into tweet-sized declarations, The Power Broker is more valuable than ever. It demonstrates that embracing nuance does not demand surrender to moral ambivalence and shallow bothsidesism. Within this deeply researched and profoundly humane work, readers have room to stretch out and wrestle with complex questions about the public good while maintaining clear-eyed judgment about right and wrong. Even 50 years after its publication, it remains a unique and precious meditation on cities and the flawed humans who live in and shape them. I’ve thought about it every day since I finished it and will undoubtedly revisit it here in the future.
I like using self-checkout at the store. This comes as no surprise to my mother. As she tells it, when I was a kindergartener, any time I was asked what I wanted to be when I grew up, I’d happily declare that I would be a cashier at the supermarket. Years later, a summer behind the register at the Almacs in town cured me of whatever remained of that ambition, but even today, most of my grocery and pharmacy shopping trips end with me ringing myself up.
That’s not to say that I enjoy all self-checkout experiences. I particularly dislike those that slow down the process with clumsy and frankly insulting anti-theft measures. Despite retailers’ claims, I just don’t believe that self-checkout increases shoplifting risk. In fact, it’s well established that retailers lie about the amount and causes of theft they experience. It seems to me that the traditional approaches to shoplifting, like concealing items in your clothes, remain far more practical than trying to perform self-checkout sleight of hand.
Even accounting for that caveat, I accept I’m in the minority. I suspect media coverage overstates the case when it claims that self-checkout is a “failed experiment” (Tasting Table) that “nobody likes” (CNN) and “hasn’t delivered” (BBC). But I do accept that, on balance, people prefer human cashiers. Incidentally, my own experience as a human cashier does not suggest that this preference manifests as kindness.
So be it—I don’t mind being out of step with the mainstream. As an introvert, I don’t see paying for groceries or toilet paper as an experience that demands the human touch. I feel perfectly comfortable interacting with a tool to scan my items, calculate the total, and pay. My father falls into the opposite camp. I don’t know if he’s ever used a self-checkout lane, and he avoids ATMs so he can chat up the bank tellers.
A Waiter is More Than a Talking Order Form
Both of us, I submit, have reasonable preferences. It should be no problem for some of us to prefer interacting with people and others to prefer interacting with tools. I also don’t think these preferences are fixed. I might prefer the self-checkout lane, but I still value recommendations from a knowledgeable butcher or cheesemonger.
But my father and I both prefer to know when we’re dealing with a person and when we’re interacting with a tool. Until recently, this distinction was obvious, but we now live in the post-Turing-test age. Large language models (LLMs) give product designers the power to build conversational interfaces that mimic human interaction, and I think we are only beginning to grapple with the most effective and responsible ways to use these technologies.
This problem isn’t completely novel. Even before computers, people developed ways to address some of the shortcomings of natural language as an interface. Consider humble technologies such as invoices or order forms, perhaps like those used at sushi restaurants. These tools work in contexts where structured data is superior to narrative, even without computers involved, and their nature as tools is transparent.
Beyond Preference: A Question of Trust
Maintaining that transparency is going to be the real challenge. Judging by the current crop of LLM-powered products hitting the market, it seems many designers assume that people who prefer human interaction will also prefer tools with human-like interfaces. I believe they are gravely mistaken. Even worse, I think this misunderstanding suggests that some designers would deliberately deceive their users into believing they are interacting with a human being when they are not.
Not only does this deeply misread users’ preferences, but I also contend, as a rule, that it’s unethical for product designers to mislead their users.
True, not every user cares whether they are interacting with a person or a tool. In fact, I must admit that I’m somewhat surprised by how many people don’t. But many care profoundly about that distinction, seeing it at the heart of ethical reasoning.
This isn’t just a philosophical question. It’s an emerging challenge that should prompt us to rethink the way we design and use technology. The traditional divide between preferring human or automated service may soon be less relevant than a new question: Do you want to know whether you’re talking to a human or a machine?