It can be hard to perceive the subtle and gradual ways I change over time. Rationally, I know it’s happening, but until something specific tells me otherwise, I tend to think that whatever is true about me today was true about me in the past and vice versa. Evidence of these long, slow changes can be hard to come by. There are obvious things, of course: I have no shortage of pictures that prove I don’t look like I did when I was younger. But it’s not every day that I’m confronted with a way my moral intuitions have shifted.
It came up unexpectedly, as moments of self-realization often seem to. My daughter and I were watching an episode of Mythbusters. If you’re unfamiliar with the series, it aired on Discovery from 2003 to 2018, and its original premise was for a crew of special effects experts to film experiments confirming or disproving urban legends and other popular folklore. Early in the show’s run, it expanded its focus to include movie scenes, internet rumors, news stories, and more.
The specific episode we watched was “Paper Crossbow” from 2006. By that point in the show’s run, most episodes had a two-story structure: the original hosts, Adam and Jamie, tested one myth while the “build team” worked on another, with the edit cutting back and forth between the two. In this episode, the hosts tested whether they could build a lethal crossbow from newspaper while the build team experimented with household uses for vodka.
As a side note, we watched this episode as a family not long after we ourselves used some cheap vodka to get the smoky smell out of some clothes and camping equipment that had been too close to a fire burning uncured firewood, so we already knew firsthand the result of one of the myths tested in that episode.
It was a different vodka myth that caught my attention, and it wasn’t the result of the test but the experimental setup that was striking. The build team tested whether vodka would act as an insecticide if sprayed on bees. They got bees from a beekeeper, divided them into two ventilated acrylic boxes, and sprayed the contents of one box with water and the other with vodka to see which would kill more bees. Now, with a little bit of thought, it’s clear that even if no bees were killed by the vodka or water (and, in fact, only two of the bees in the water box died during the experiment), all of the bees were going to die. Honeybees depend on their hives to survive; removed from the hive, they die.
Don't worry, they're only sleeping... for now
I have no doubt that when I originally watched this episode, I had no problem with it. But now, it strikes me as needlessly cruel to treat bees this way. Insofar as this change is part of a larger shift in my moral intuitions around the treatment of animals, it’s not a very large shift. I still eat meat, though perhaps less than I did 20 years ago, and I am not shy about killing bugs in my living space. But I can say with some confidence that if I had been part of the Mythbusters build team in 2006, I would have seen nothing wrong with the experiment, while I would object to running it in 2025.
It’s possible that some of this shift has to do with the bees themselves. Not long after the episode originally aired, I, like many other Americans, started to learn about colony collapse disorder and the ecological and agricultural importance of honeybees more generally. Being aware of those issues certainly makes me wonder what the beekeeper who provided the bees was thinking, but I don’t think it explains how my reaction changed. After all, I’m experiencing an intuitive response to the experiment on moral terms, not a calculated analysis of the benefits and drawbacks of killing important and valuable animals for a science education show.
Seriously, what were you thinking?
Instead, I think this is just an example of an imperceptibly gradual change at the heart of the way I see and judge the world. I hope it’s for the better: I would like to think that I grow more attuned to suffering and more willing to speak and act against it as I get older. Or, maybe the duty of care I feel for the world applies to a larger set of things than it did before. I don’t know for sure, but I can say that I’m clearly not the same as I was when I watched this show back in 2006. And that’s a bit of valuable self-understanding.
I encourage you, too, to consider how your moral orientation has changed over time, and how it hasn’t. Just as you might look at your picture in an old yearbook, consider how the passage of time has deflected your moral compass. Don’t just think about when you have been most comfortable—it’s almost a cliche that the worst people lose the least sleep over their behavior. Try to find these points of difference and explore them. Because the deeper truth about the way time changes us is that it reveals we are never finished products. There is always more change to come.
Recently, for family movie night, my wife and I watched the original Naked Gun (1988) with our not-quite-teenage daughter. The idea was prompted in part by all of the advertising we’ve seen around LA for the new Naked Gun reboot coming out this weekend. The experience spurred me to read about what makes things funny and to reflect on how audiences engage with comedy over time and how younger generations bring fresh perspectives to old humor.
My first takeaway is simple: the movie is still pretty funny. I laughed at many of the jokes, and my daughter found it funny (in parts), too. Part of this is because the Zucker-Abrahams-Zucker filmmaking team is like what basketball fans call a “volume shooter”—if you take a lot of shots, you’re going to score some baskets. Or, to look at it another way, packing a movie as full of gags as possible means that they don’t all need to land.
According to my daughter, the funniest scene in the movie comes just after the credits, when the hapless Nordberg unsuccessfully attempts to singlehandedly bust a drug deal. I tend to agree that one of the true pleasures of this movie is the amount of punishment it dishes out to OJ Simpson’s character, whose limited screen time is unrelentingly humiliating.
I found the jokes she didn’t get just as interesting. It was no surprise that the cold open, featuring lookalikes playing a rogues’ gallery of foreign adversaries in the typical 80s-era American mind, didn’t connect. But I didn’t expect her to ask what was going on during the title sequence, shot from the siren’s-eye view of a police cruiser’s roof. I guess police emergency lights no longer look anything like spinning reflectors.
Watching the movie got me thinking about what makes things funny in the first place. Researcher Peter McGraw has proposed the Benign Violation Theory, which might come as close as I’ve ever seen to a theory’s name explaining itself. The core idea is that humor arises from situations that an audience perceives simultaneously as both incorrect and acceptable—that is, both a violation and benign. There’s definitely room to argue with this theory, but I do think it explains a lot about what makes things funny.
Consider a few jokes from the Nordberg drug bust sequence through the lens of the Benign Violation Theory. Most viewers are familiar with the idea that a police officer might break down a door to apprehend a criminal. So, when Nordberg attempts this move only to get his leg stuck in the door, viewers experience it as a violation. When Nordberg demands a gang of armed criminals drop their weapons, and one dimwittedly obeys, both characters have violated our expectations of reasonable behavior. Importantly, the sequence only plays as benign because the film has already established (in the cold open) that it is operating in a world where violence does not harm its characters. The scene would not be funny if the audience didn’t already know that Nordberg will be okay no matter how many times he gets shot or how many bear traps he steps in.
Benign Violation Theory also explains why the cold open wasn’t funny to my daughter. For example, she had no reaction to seeing Leslie Nielsen’s character wipe Mikhail Gorbachev’s famous birthmark off his head. To a kid who may never have seen an image of him before, neither Gorbachev nor his birthmark carries any significance, so it doesn’t register as a violation.
This theory helps explain why some humor goes stale and how comedians can create humor that stays funny. While nobody would mistake me for a comedian, I think much of the work of being funny lies in creating the violation, which is more often than not a subversion of the audience’s expectations. The classic structure of a joke involves a setup, in which the comedian does something to trigger an expectation, and a punchline, which violates that expectation. Something purely surprising can be a good punchline, but relying on surprise is one way comedy becomes stale. True surprise is a one-time thing, and once the audience knows the surprise, the trick loses its magic.
Great comedy isn’t only funny the first time around. When you know the joke by heart—perhaps because it’s in a 37-year-old movie you’ve seen more times than you can count—and it remains funny, it’s because the comedian has managed to violate your expectations even though you knew what was going to happen. Sometimes the punchline is truly outstanding or delivered in a uniquely perfect way. But I think most enduring jokes owe their longevity to the way they are set up. A great setup leads audience members almost subconsciously into an expectation, which gets punctured by the violation no matter how familiar they may be with the joke.
Although the classic joke structure persists because it works, comedians can also subvert it to great effect. One thing I noticed while watching The Naked Gun was the number of jokes that have little or no setup at all. These jokes depend on the audience to come preloaded with expectations that the jokes violate. A running gag throughout the Naked Gun movies is how dangerous and violent the police officers are. This is one way the movie reads very differently in 2025, especially the line Nielsen delivers after his character is dismissed from Police Squad, “Just think, the next time I shoot someone, I could be arrested.” These jokes worked on the adolescent version of myself who walked into a movie theater with the expectation that police—especially the main characters of detective shows—are helpful authority figures worthy of unquestioned respect. I suppose I find these jokes less funny now for two reasons: my changed expectations diminish the impact of the violation, and my awareness of real police violence makes the joke less benign.
These setup-less jokes are vulnerable not only to changes in audience expectations but to changes in the audience itself. When future audiences lack the expectations comedians assume they will have—as with my daughter and Gorbachev’s birthmark—the would-be punchlines become confusing nonsense. Comedy that relies too much on specific audience expectations is liable to lose its punch with each passing generation.
The passage of time can also add new context, altering humor beyond the comedian’s original intent. These context shifts can be dramatic, as with OJ Simpson, whose murder trial and resultant infamy could not have been predicted by The Naked Gun’s filmmakers. Seeing a wheelchair-bound OJ Simpson roll down a stadium staircase and tumble helplessly over the railing is funnier to me today than it could possibly have been in 1988. But I can just as easily understand someone who can’t laugh at Simpson no matter how much physical abuse his character endures on screen.
The experience of sharing comedy across generations fascinates me. Sharing something I found funny in the past with my daughter means confronting the ways I have changed, the ways the world has changed, and the ways the two of us are different. But it’s magical when some throwaway gag manages, despite all odds, to connect my past, my present, and my child, like a thread stretching across time. All culture is like this, I suppose, but comedy, with its dependence on shared meanings and expectations, seems especially susceptible to the passage of time. It somehow feels reassuring to know that I’m not the only one who finds “Hey! It’s Enrico Pallazzo!” as funny as I did when I first saw it in my own not-quite-teenage years.
I hope this little digression into humor was enjoyable, or at least interesting. As I think about the direction I want to take this blog, I’d like to make room to write about topics outside of my usual areas of expertise, and I hope I can address them in ways that express how much I enjoy exploring new ideas and learning new things. Explaining a joke may kill the humor, but understanding the universal human experience of laughter feels deeply worthwhile.
One final digression
In the extreme, a joke can be nothing more than its structure. This is what’s going on in one of my favorite species of joke, the shaggy dog story, which was perhaps perfected by the late Norm Macdonald over his many appearances on various Conan O’Brien shows. (Here’s a great example.) These jokes consist of the comedian telling a long story, often full of digressions and unnecessary details, culminating in an anti-climax: either a weak punchline or no punchline at all. The humor arises from the audience’s violated expectations around the structure of the joke itself: setups are supposed to be economical, or at least interesting, and punchlines are supposed to be unexpected and funny. The shaggy dog story does neither of these things, and the audience starts laughing as the realization gradually dawns on them that they’re not getting the joke they anticipated.
I was originally working on a longer post to finish off the month of June, but I couldn’t get it to a place where I was happy with it. There was, however, an element of that post that I thought was both important and worthwhile. So, instead of what I had originally planned, I’m polishing up and posting my notes on the topic of contemporary attention economics.
The concept of the “attention economy” was originally developed by the researcher Herbert A. Simon in the early 1970s, and in recent years it has attracted renewed interest, including popular books by Jenny Odell and Chris Hayes, among others. Simon studied how information affected organizational decision making, work that earned him the Nobel Prize in Economics in 1978. He observed that information had historically been a scarce resource, but that changes in computing and communication technologies were making it more abundant and accessible.
Simon’s critical insight was that information on its own is not all that valuable. To be useful, information must be combined with human attention. Intuitively, this makes sense. If you sit in a room and read a single book, the same amount of information has been transmitted whether you read the only book in the room or you were in a library surrounded by endless shelves of books. It’s your attention to the information that matters.
This key insight leads to an important secondary observation: attention is itself a scarce resource. In fact, attention isn’t merely scarce but fundamentally limited by the hard fact that we each have only 24 hours in a day. Simon theorized that as the supply of information grew, the scarcity of its necessary complement, attention, would become the limiting factor preventing organizations from successfully incorporating the available information.
What Simon predicted would be a concern for strategic decision makers within firms has become an issue faced by every individual on earth with a smartphone, but our challenge is complicated by conditions he never anticipated.
For one thing, the growth of companies whose business models depend on monetizing popular attention has created an incentive structure that would have been alien to Simon. These companies have innovated upon information itself, broadening and flattening it to invent the more flexible concept of “content,” a superset of information that encroaches on all aspects of human attention. Information bears some relationship to facts and data, but content can consist of anything that consumes attention regardless of its truth, meaning, or substance.
Not only do these actors have strong incentives to generate endlessly growing mountains of content, but they also face an economic imperative to capture more and more of their audience’s attention as directly as possible. This means that the amount of content is growing faster than ever, and that platforms actively drive audiences to spend more and more time consuming their content, without regard for its value to them.
The other condition Simon didn’t anticipate is the combination of global-scale social media platforms and ubiquitously connected smartphones with push notifications, which are the technological changes that truly enabled the modern information environment. It’s characterized not only by an extreme abundance of content, but also by continuous, relentless efforts to seize the audience’s attention through distraction, interruption, heightened emotional intensity, and exaggeration of urgency, importance, and relevance. Innovations like infinite scrolling feeds, algorithmic recommendation, and engagement-based content ranking are all designed to manipulate individuals into consuming more content than they would have otherwise.
Our good friend the logistic function, courtesy Wikimedia CC BY-SA 3.0: the basic mathematical model for a quantity that grows exponentially at first but levels off as it approaches a hard limit.
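For readers who want the math, the standard form of the logistic curve is

$$ f(x) = \frac{L}{1 + e^{-k(x - x_0)}} $$

where L is the ceiling on growth, k the growth rate, and x_0 the midpoint of the curve. Growth looks roughly exponential while f(x) is far below L, then flattens out as it approaches the limit, which is exactly the dynamic I have in mind for attention below.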
Simon rightly predicted that informational abundance would cause an organization’s (or the public’s) consumption of new information to grow until it ran up against the scarcity of attention. But nobody knew exactly how that limitation would manifest itself.
I suspect that what’s currently happening is the process of arriving at a new attention equilibrium, as limits on content consumption come to be driven entirely by attention scarcity. Anecdotally at least, both online and offline I keep encountering people who seem exhausted and troubled by the information overload they experience every day. If I’m right, and the world is starting to bump up against the upper limit of the public’s capacity to absorb content, it’s reasonable to be concerned about how that equilibration will happen.
Because the socioeconomic configuration that created these conditions is exploitative and unregulated, the process of arriving at this new equilibrium is most likely going to be chaotic, ugly, and painful. It would be naive to expect anything else, with platforms using any technique at their disposal to drive ever higher levels of content consumption, combining content of all types (e.g., current events, political opinion, entertainment, intentional misrepresentation, personal communications) into a single, undifferentiated stream, and competing viciously amongst themselves for an increased share of the audience’s limited time.
These changing conditions shape mass public opinion, which seems to be souring on technology in general, and they affect individual experiences as well. Consider a recent report from the American Psychiatric Association suggesting that diagnoses for adult ADHD have increased in recent years. Could this be an indication that people are feeling the effects of the fierce competitive struggle going on over their limited attentive capacity? Maybe people feel overwhelmed and conclude that the problem must be a deficit in their ability to pay attention, when what’s really happening is that their perfectly normal attention is being undermined by an information environment designed to manipulate and exploit it.
Speaking for myself, I’m looking for ways to reclaim my attention and be more intentional about how I spend it. This summer, I’m taking a more active role in choosing where I get my information from and resisting being led by algorithmic feeds. I’m also getting offline more, seeing people in person, and being more present in my community. These are small steps, to be sure, but critical ones. If you are also paying attention to your attention, I’d like to hear from you. Please let me know what steps you’re taking, what’s working, and what’s not.
Back in January when I launched this blog, I wrote that I planned to post content actively in “seasons,” recognizing that neither my inclinations nor my track record made it wise to expect consistent, uninterrupted posting over the long haul. The first season, I thought, would run through May, at which point I would figure out what to do over the summer.
Now that we’re well into June, it’s time to take stock of how this first season has gone and where things will go from here.
First and foremost, I learned that I was terribly out of practice at this kind of writing. Nearly all of the writing I’ve done since retiring my old blog 20-some years ago has been in either a business or an academic context. I think I got pretty good at both, but there is a bigger difference between that kind of writing and what I’m doing here than I anticipated.
The change has been a positive one. It’s good to practice a kind of writing that requires me to devote some effort to welcoming the audience and convincing them to keep reading. It’s also been worthwhile to put effort into cultivating a more aesthetically appealing writing style, a major departure from typical business writing, which usually aims to be generic and unadorned. I don’t know how successful I’ve been, but the mindset shift alone feels worthwhile.
Another thing I’ve learned is that my instincts are driving me toward longer posts. Most of my posts so far have exceeded 1,000 words (including this one), and a few drafts passed the 2,000 word mark before I edited them down. Although I did end up breaking one post into multiple parts, I decided to let my interests guide the writing, so I haven’t prioritized brevity. Instead, I’m writing as much as I feel I need to get my message across.
On my old blog, I experimented with microblogging avant la lettre, posting occasional tiny updates ranging from a phrase or two to a couple of sentences. This feels especially out of place today given the immense malign influence of Twitter on contemporary online culture. These days, the topics that draw my attention seem to demand substantially more space to articulate my ideas. In fact, I’ve determined that some of the topics I’m most eager to write about won’t fit into even a couple of thousand-word posts. As I started to gather my thoughts on them, they felt like longer series that might stretch on for a half dozen or more entries. But given how much effort it took to complete more modest posts, I put those series ideas aside for the time being.
The Practical Challenges
Beyond the craft of writing itself, I’ve also encountered some practical challenges. Each post has taken me longer than I expected to complete. Much as I enjoy outlining, drafting, reworking, and editing, every one of those steps has eaten up more time than I had hoped. My rustiness and preference for longer posts certainly contributed to the slow pace, and I’m considering tracking my time more systematically to identify ways to speed things along.
Speaking of time, my other work has also reduced my writing productivity. Surprisingly, my consulting practice proved less disruptive than a side project I started. That project has been quite fulfilling, drawing on my coding experience while giving me a chance to explore creative work that’s well outside my comfort zone. But a combination of factors caused it to take up more of my time in March and April than I expected, which led to an unplanned two-month hiatus. The obvious remedy to that situation is just to post about my side projects, which I will probably start doing, perhaps once the current one is a little farther along.
Plans for the Summer
I’m thinking about the next phase of this project with these lessons in mind. Summer is coming, and between work, family stuff, a bit of travel, and projects on the side, I’ve got a lot going on. I’m going to use the next couple of months to regroup a bit. Taking an intentional break should feel a lot better than living with the angst of an unplanned posting drought hovering over my shoulder.
That said, I don’t mean to disappear entirely. My plan, after probably one more regular post this month, is to post one update a month for July and August, then pick up again with a second season in September. I might experiment with micro-posts as well, but I won’t be putting pressure on myself to post a ton of new content while I take some time to recharge and reorient myself.
A Preview of Next Season
Part of what I hope to accomplish over the next couple of months is to form a clearer picture of what the next season of this blog project will look like. But I do have a few ideas in mind that I can share now.
First, as I mentioned above, I am itching to write about a couple of topics that I think will each require a series of posts to cover in the depth I feel they deserve. Assuming I am able to complete my research and get myself organized, I would like to launch at least one of those series this autumn.
Second, I’d like to figure out what it would take for me to post weekly. That will require me to address at least one of the issues I described earlier around my preparation, pace, and post length. I’m working to resolve the tension between my inclination to write longer posts and my desire to post more often. It’s possible that I’ll come up with a plan for that over the summer, but since I intend to use the time primarily as a break, I will most likely aim for a more manageable posting schedule—something like a major post every other week—when things pick up again in September and try to make incremental improvements from there.
I remain energized by this project, and I encourage anyone who’s thinking about getting (back?) into blogging in 2025 to do it. Despite big platforms’ outsized influence over public attention, the Web as it was originally conceived—distributed, democratic, and a little bit weird—isn’t dead yet. It’s up to those of us who hope to revive it to create and maintain online spaces we control, and I couldn’t be happier to be playing my small part in that movement.
I really didn’t want to write about artificial intelligence (AI) again so soon, and I promise this will be the last one for a little while, but I came away from my last two posts with a very unsettled feeling. It wasn’t the subject matter itself that bothered me, but the way I ended up writing about it.
In writing those posts, I made an intentional decision to be consistent with the vernacular use of the term “AI” when discussing a specific set of popular tools for generating text. Surely, you’ve heard of tools like ChatGPT, Claude, and Gemini. I did this because using more precise language is (1) awkward, (2) inconsistent with the terminology used in the podcast that prompted my posts, and (3) so unfamiliar to a layperson that accurate language comes off as pedantic and unapproachable. In the interest of making the posts simpler and more focused on the issues raised by the way these tools are being used, I ended up just accepting the common language used to talk about the tools.
But the truth is that I know better than to do that, and it bothered me. ChatGPT ≠ AI, even if it’s very common to talk and write about it that way. As time passed, I felt worse and worse about the possibility that by accepting that language, I gave the impression that I accept the conceptual framing that these tools are AI. I do not. In this post, I intend to make an addendum/correction to clarify my position and add some context.
What’s Wrong with the Term “Artificial Intelligence”?
People have been confused about what “artificial intelligence” means for as long as they’ve used the term. AI, like all of computer science, is very young; most sources point to the 1955 proposal for the Dartmouth Workshop as the term’s origin. Right from the start, the name seems to have been chosen in part to encompass a broad, uncoordinated field of academic inquiry.
By the time I was a computer science undergraduate (and, later, graduate student responsible for TAing undergrads) in the late 90s, little had changed. The textbook I used, Artificial Intelligence: A Modern Approach (AIMA) by Russell and Norvig, didn’t even attempt to offer its own definition of the term in its introduction. Instead, it surveyed other introductory textbooks and classified their definitions into four groups:
Systems that think like humans
Systems that act like humans
Systems that think rationally
Systems that act rationally
The authors describe the advantages and drawbacks of each definition and point out later chapters that relate to them, but they never return to the definitional question itself. I don’t think this is an accident. If you take AIMA as an accurate but incomplete outline, AI consists of a vast range of technologies including search algorithms, logic and proof systems, planners, knowledge representation schemes, adaptive systems, language processors, image processors, robotic control systems, and more.
Machine Learning Takes Over
So, what happened to all this variety? The very short answer is that one approach really started to bear fruit in a way nothing else had before. Since the publication of the first edition of AIMA, researchers produced a host of important results in the field of machine learning (ML) that led to its successful application across many domains. These successes attracted more attention and interest, and over the course of a generation, ML became the center of gravity of the entire field of AI.
If you think back to the technologies that were attracting investment and interest about 10 years ago, many (perhaps most) were being driven by advances in ML. One example is image processing—also called computer vision (CV)—which rapidly progressed to make object and facial recognition systems widely available. As someone who worked on CV back in the bad old days of the 90s, I can tell you these used to be Hard Problems. Another familiar example is the ominous “algorithm” that drives social media feeds, which largely refers to recommendation systems based on learning models. Netflix’s movie suggestions, Amazon’s product recommendations, and even your smartphone’s autocorrect all rely on ML techniques that predate ChatGPT.
Herein lies the first problem with the way people talk about AI today. Though much of the diversity in the field has withered away, AI originally encompassed an expansive collection of disciplines and techniques outside of machine learning. Today, I imagine most technologists don’t even consider things like informed search or propositional logic models to be AI any more.
The Emergence of Large Language Models
In the midst of this ML boom, something unexpected happened. One branch of the AI family tree, natural language processing (NLP), advanced with surprising speed once ML techniques were applied to it. Going back to my copy of AIMA, the chapter on NLP describes an incredibly tedious formalization of the structure and interpretation of human language. But bringing ML tools to bear on this domain obviated the need for formal analysis of language almost entirely. In fact, one of the breakthrough approaches is literally called the “bag-of-words model”.
A sample of what the NLP chapter of AIMA looked like before everything became a bag of words.
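To give a sense of how radical that simplification is, here’s a toy sketch (my own, not anything from AIMA) of a bag-of-words representation in Python. It discards grammar and word order entirely and keeps nothing but word counts:

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Represent a text purely as word counts, ignoring order and grammar."""
    return Counter(text.lower().split())

print(bag_of_words("the dog chased the cat"))
# Counter({'the': 2, 'dog': 1, 'chased': 1, 'cat': 1})
```

That a representation this crude turned out to be useful at all says a lot about how the field shifted.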
What’s more, ML-based language systems demonstrated emergent behavior, meaning they do things that aren’t clearly explained by the behavior of the components from which they are built. Even though early large neural networks trained on language data contained no explicit reasoning functionality, they seemed to exhibit reasoning anyway. This was the dawn of the large language model (LLM), the basis of all of the major AI products in the chatbot space. In fact, this technology is the core of all of the most talked-about products under the colloquial AI umbrella today.
Here’s the second problem: people often use the term “AI” when they really mean this specific set of products and technologies, excluding everything else happening in the field of ML. When someone says “AI is revolutionizing healthcare,” they might be referring to diagnostic imaging systems, drug discovery algorithms, or robotic surgery assistance, or they could be talking about a system that processes insurance claim letters or transcribes and catalogs a provider’s notes. That ambiguity makes such claims nearly impossible to evaluate.
The Generativity Divide
There’s another important term to consider: “generative AI.” It describes tools that produce content, like LLM chatbots and image generation tools like Midjourney, as opposed to other ML technologies, like image processors, recommendation engines, and robot control systems. Often, replacing the overbroad “AI” in casual use with “generative AI” captures the right distinction.
And that’s an important distinction to draw! One unfortunate result of current “AI” discourse is that the failings of generative tools, such as their tendency to bullshit, become associated with non-generative ML technologies. Analyzing mammograms to diagnose breast cancer earlier is an extraordinarily promising ML application. Helping meteorologists create better forecasts is another. But they get unfairly tainted by association with chatbots when we lump them all together under the “AI” umbrella.
Consider another example: ML-powered traffic optimization that adjusts signal timing based on real-time conditions to reduce congestion. Such systems don’t generate content and don’t lie to their users. But when the public hears “the city is using AI to manage traffic,” they naturally imagine the same unreliable systems that invent bogus sources to cite, despite the vast underlying differences in the technologies involved.
That said, we can’t simply call generative AI risky and other AI safe. “Generative AI” is a classification based on a technology’s use, not its structure. And while most critiques of AI, such as its impact on education, are specific to its use as a content generator, others are not. All learning models, generative and otherwise, require energy and data to train, and there are valid concerns about where that data comes from and whether it contains (and thus perpetuates) undesirable bias.
The Business Case for Vague Language
Why does this all have to be so confusing? The short answer is that the companies developing LLMs and other generative tools are intentionally using imprecise language. It would be easy to blame this on investors, marketing departments, or clueless journalists, but that ignores the ways technical leadership—people who should know better—have introduced and perpetuated this sloppy way of talking about these products.
One possible reason for this relates to another term floating around: artificial general intelligence (AGI). This is also a poorly defined concept, but researchers who favor building it generally mean systems with some level of consciousness, if not independent agency. For better or worse, many of the people involved in the current AI boom don’t just want to create AGI; they believe they are already doing so. Putting aside questions of both feasibility and desirability, this may explain some of the laxity in the language used. AGI proponents may be intentionally using ambiguous, overgeneralized terminology because they don’t want to get bogged down in the specifics of the way the technology works now. If you keep your audience confused about the difference between what’s currently accurate and what’s speculative, they are more likely to swallow predictions about the future without objection.
But I think that’s only part of what’s happening. Another motivation may be to clear the way for future pivots to newer, more promising approaches. Nobody really understands what allows LLMs to exhibit the emergent behaviors we observe, and nobody knows how much longer we’ll continue to see useful emergent behavior from them. By maintaining a broad, hand-wavey association with the vague concept of “AI” rather than more specific technologies like LLMs, it’s easier for these companies to jump on other, unrelated technologies if the next breakthrough occurs elsewhere.
Speaking Clearly in a Messy World
That makes it all the more important for those of us who do not stand to gain from this confusion to resist it. Writing clearly about these topics is challenging. It’s like walking a tightrope with inaccuracy on one side and verbosity on the other. But acquiescing to simplistic and vague language serves the interests of the AI promoters, not the community of users (much less the larger world!).
From now on, I’m committing to being more intentional about my language choices when discussing these technologies. When I mean large language models, I’ll say LLMs. When I’m writing about generative tools specifically, I’ll use “generative AI.” When I’m talking about machine learning more generally, I’ll be explicit about that, too. It might make my writing a bit more cumbersome, but this is a case where I think precise language and clear thinking go hand in hand. And anyone thinking about this field needs to be very clear about the real capabilities, limitations, and implications of these tools.
The stakes are too high for sloppy language. How we talk about these technologies shapes how we think about them, how we regulate them, and how we integrate them into our lives and work. And those are all things that we have to get right.
Updated 7 June 2025:
I use the term "AI" in this post in its non-specific, colloquial form, which I regret. I'm leaving this post as written, but I encourage you to read my subsequent post where I clarify my thinking and discuss better ways to write and think about this topic.
Last week I posted some thoughts on the impact new artificial intelligence (AI) tools are having on education. As if on cue, this article came out a few days later. It should come as no surprise that an article subheaded, “ChatGPT has unraveled the entire academic project,” missed the mark by a wide margin in my view. The article largely blamed students and placed effectively no responsibility on educators or administrators for actually ensuring that students learn. The only educational alternatives mentioned in the article are found in a throwaway line about blue books and oral exams.
As it happens, this discourse touches on some additional thoughts I had about the Search Engine podcast episode (“Playboi Farti and his AI Homework Machine”) that prompted my previous post. In the middle of the episode, a guest relates to host PJ Vogt the story of a man who used ChatGPT to write a birthday card for his wife. She was moved to tears and told him it was the best card he’d ever given her. Both Vogt and his guest were disturbed by this (and, in the interest of full disclosure, I was too), but even more disturbed that teenagers who heard the story saw nothing wrong with it.
This story raises a couple of fascinating questions that I chose not to address in my previous post. First, when could AI be appropriate as a creative tool? And second, why do different people form such different moral judgments about its use?
The Classroom Context: A Different Issue
Before diving deeper, I want to clarify something I took for granted in my previous post about AI in education. Using AI to complete homework assignments is wrong because it defeats the purpose of education: providing students with experiences from which they learn and grow. When a student avoids writing an essay, regardless of exactly how they do it, they lose the opportunity to learn.
This is an age-old problem. For as long as there have been students, their short-term incentives haven’t been aligned with their long-term learning needs. From this perspective, asking what ought to be done about AI is akin to asking what ought to be done about essay mills or CliffsNotes. It’s a worthy question, but it doesn’t illuminate the larger ethical questions around responsible AI use in everyday life.
Getting Past Absolutism
There’s another argument I want to acknowledge without addressing in full. Many people consider all AI tools to be inherently unethical and strictly avoid their use. I’ve seen this view justified by concerns over the origins of the training data, the motives and behavior of the people and companies who profit from AI products, and environmental impacts.
For the purposes of this post, I’m going to put aside this valid perspective for a few reasons. First, it’s highly contingent on facts that are both disputable and subject to change. Models can be trained on legitimately obtained data, open source systems can mitigate concerns over profit-driven corporate actors, and environmental arguments may overstate current resource consumption while underestimating efficiency gains.
Second, while I respect anyone who’s made the decision not to use AI themselves, this stance provides no guidance beyond withdrawal from a world where these tools are becoming commonplace. That’s not to say that everyone must accept any way any other person chooses to use AI. Rather, I think even those who find all AI use objectionable will find some uses more troubling than others. It’s worth exploring why those feelings differ, even if you have resolved to avoid these tools entirely.
Why Does the AI Birthday Card Feel Wrong?
What I find fascinating about the birthday card story is how differently people react to it. Let’s consider some thought experiments to clarify these differences.
Most people find store-bought greeting cards acceptable, even if they aren’t the most personal or thoughtful ways to express affection. What if instead, the man had asked his eloquent friend “Cyrano” to write the card for him? If he passed off Cyrano’s work as his own, I suspect most people would feel the same unease as with the AI-generated card. If, on the other hand, he was up front about commissioning Cyrano to write a special birthday message, it feels much less troubling.
I see three factors at work in my reactions to these scenarios:
Honesty vs. deception: Misleading someone about authorship feels wrong regardless of whether a source is human or AI. The teenagers who see no problem with the AI-generated card seem focused on the outcome rather than the deception. This may reflect a broader cultural shift regarding the importance of truth and authenticity that I find quite disturbing.
Authentic engagement: The husband’s own poorly written but heartfelt card would express authentic feelings. Even a card written by a friend who knows his wife could engage meaningfully with her and the relationship. The AI-generated card, on the other hand, seems more like the store-bought card: generic and lacking an authentic connection to the specific individuals involved.
Meaningful effort: We value gifts in part based on the sacrifice they represent, not just in terms of money but time and attention. Since AI tools are specifically designed to achieve better results with less effort, that sacrifice is explicitly reduced, diminishing the value of the gesture.
Of these three factors, honesty seems most fundamental, in that it’s difficult to imagine valuing work presented deceptively, no matter how much effort went into creating it or how well it engages with its subject. But all three factors contribute positively to the value and meaning of communication.
What Our Tools Say About Us
Perhaps this offers some insight into why some of us react so negatively to others’ use of AI. If you are more inclined to use it for work that you personally value less, then your decision to use AI communicates how little you value the task at hand. It should be no wonder that someone might take offense at receiving a supposedly personal token that signals it wasn’t worth your time or effort.
There’s another possibility that I think is worth considering. The teenagers who approved of the AI birthday card might be signalling something else entirely: feelings of incompetence. Growing up involves struggling with skills like self-expression while gradually developing mastery and confidence. Psychologist Erik Erikson describes this in the fourth stage of his developmental model as a crisis between “Industry and Inferiority.” According to Erikson, failing to successfully navigate that crisis could produce individuals who internalize a sense of inadequacy.
I wonder whether this reveals a cohort of students who, faced with the common struggle of learning how to express themselves (perhaps magnified by the trauma of pandemic-related isolation and educational disruption), have simply concluded that they aren’t capable of writing competently. If so, the resulting dynamic, where young people turn to AI because they feel incapable of expressing themselves while others interpret that choice as insincerity or disengagement, would be tragic in more ways than one.
Finding a Way Forward Together
A world with AI in it is undoubtedly more ethically complicated than one without. All of us are navigating this new terrain together, even if we don’t see everything the same way.
In my own work, I have been applying three key questions when considering AI use:
Am I presenting my work honestly? Am I transparent about how I’ve used AI?
Am I engaging authentically with my subject? Does my use of AI allow me to maintain a genuine connection to the people and ideas involved?
Does my work meaningfully reflect my effort? Am I using AI as an enhancement or a substitute?
My thinking here is still in the formative stages, and I hesitate to call this even a framework. At best, I’m pulling together field notes that may some day mature into something more robust.
I am genuinely curious: What considerations beyond efficacy influence your decisions to use or avoid AI? Do the three factors I’ve identified resonate with you, or have you found other considerations more useful?
The birthday card anecdote may seem trivial, but it sits atop an iceberg of profound questions about the impact new technologies have on human relationships. Ethical questions that seemed academic until recently are now implicated in our everyday choices. A person moved to tears by a computer’s words under false pretenses is not a hypothetical—it is now a fact of life. The urgency with which we approach the task of developing ethical guardrails for these tools should reflect that.
Updated 7 June 2025:
I use the term "AI" in this post in its non-specific, colloquial form, which I regret. I'm leaving this post as written, but I encourage you to read my subsequent post where I clarify my thinking and discuss better ways to write and think about this topic.
I was catching up on my podcast backlog when I came upon an episode of Search Engine entitled “Playboi Farti and his AI Homework Machine” that raised some important questions about education in a world where students have access to artificial intelligence (AI) tools. In it, the podcast’s host, PJ Vogt, interviews a 13-year-old using the pseudonym Playboi Farti (😚🤌), who happily admits to using ChatGPT for his homework assignments. That conversation got me thinking about how AI tools are changing early education and led me to a key question: How can schools move beyond reactive policies and truly give students what they need?
It’s worth giving the entire episode a listen. In fact, there was a segment that sent me down another line of thinking that I’ll revisit in a later post. For now, I’ll focus on two segments from the podcast that illuminate the challenges of adapting pedagogy to changing technology.
The Unlearned Lessons of Long Division
In one early segment of the podcast, PJ Vogt recounts his childhood frustration with learning long division. As a student, he considered it “hard and stupid” since calculators could do the work more efficiently. As an adult, he feels justified in this assessment because he never uses long division in his daily life.
This anecdote highlights a fundamental problem with traditional math education that remains relevant today. When schools emphasize memorizing algorithms over developing conceptual understanding, students may never come to understand the point of learning math in the first place. Vogt’s experience is shared by generations of students (myself included) who encountered early math the same way: rote memorization, repetitive drills, and endless worksheets of identical problems. The result is as unfortunate as it is predictable: many students develop math anxiety, if not outright aversion, and never progress to understanding core mathematical concepts.
If you associate math with mechanical calculation, it makes sense to view a calculator as a math machine. In reality, calculators excel at computation but can’t contextualize a math problem, determine which operations to perform, or make meaning of results. In fact, people who don’t really understand math have an unnerving tendency to get wrong answers even with calculators at their disposal. In contrast, the everyday mathematical skill I use most often isn’t calculation but estimation, a feel for quantities and relations that helps me move through the world without reaching for my phone constantly.
America’s struggle with math education has created a vicious cycle that hinders reform. Both “new math” in the mid-20th century and more recent “common core” math standards faced resistance in part from adults who never developed comfort with basic arithmetic reasoning. Many of the parents who complain that they can’t help their kids with elementary school math homework are revealing gaps in their own understanding that their educational experiences should have addressed.
The underlying cause here is that memorizing tables and practicing algorithms does not by itself develop reasoning skills. Before students had access to calculators, it made more sense to build a math curriculum on the foundation of calculation. But schools adapted too slowly to the decreasing importance of calculation as a skill, and now the public is facing the consequences of that delay.
The lesson here is critical: new technologies can change the meaning and value of certain skills. When this happens, schools shouldn’t cling to outdated methods. Instead, they should focus on the underlying thinking skills that remain durably valuable. This lesson applies directly to the way schools should approach writing in the age of AI.
Beyond the Five-Paragraph Essay
The central question of the podcast—what should be done about school essay assignments—bears a striking parallel to the long division issue Vogt mentioned in passing. If educators simply ban AI tools and continue using traditional writing assignments, they’ll likely repeat the mistakes that have undermined early math education.
Treating the five-paragraph essay as inherently valuable is just as misguided as valorizing long division as a crucial mathematical skill. Both may have served purposes in their time, but their educational value is diminishing as new technology becomes widespread. If teachers continue to assign easily generated essays, I expect students will look back on them in later years as “hard and stupid” tasks that taught them little of lasting value.
Anyone who claims to know the best way to integrate AI tools into schools today should be viewed with extreme suspicion. But purely from a practical standpoint, AI detection tools simply don’t work well enough for teachers to enforce outright bans. Educators’ time is better spent rethinking the way writing is taught than policing its production.
In a world where a book report or informational essay can be generated in seconds, teachers need new methods to ensure that students learn how to take in information, think critically, and express themselves effectively. Just as elementary math education is gradually moving beyond calculation drills, writing instruction needs to decrease its reliance on formulaic, trivially automatable exercises. This will require creativity from educators, and flexibility and openness from parents.
One possible approach might be to tease apart the educational goals of traditional writing assignments. In the same way that older math curricula attempted to develop both calculation proficiency and numerical reasoning through arithmetic drills, a five-paragraph essay creates opportunities to develop multiple skills at once: research, reasoning, argumentation, and composition. Now that students can generate these essays automatically, these skills need more direct attention, perhaps in the form of structured class discussions and collaborative writing projects supported with continuous feedback.
I’m confident that creative, dedicated educators can develop effective new writing curricula if they aren’t forced to spend their limited time playing AI detective.
Teaching All Tomorrow’s Playboi Fartis
The concern over AI is just the latest example of an issue that has recurred throughout the history of education. Changes in the world, especially new technologies, have always disrupted established educational practices and forced schools to reconsider what and how they teach. The cultural anxiety and corporate hype surrounding AI make today’s stakes feel particularly high, but whether or not this is the greatest technological disruption in the history of education, it surely won’t be the last.
What’s most important is maintaining focus on the deeper purpose of education. Calculation shouldn’t overshadow mathematical reasoning. The five-paragraph essay can’t take priority over critical thinking and communication skills. Distinguishing between process, purpose, and outcome is essential when reevaluating longstanding practices.
Educational methods are themselves technologies, and like all technologies, they evolve over time. But education’s core mission—to develop intellectual capabilities and skills that help students thrive—remains constant. Holding fast to outdated approaches undermines that mission. Although adapting to technological change is challenging, it’s the only way to ensure that students get an education worth having.
It’s futile and counterproductive to engage in an arms race against the likes of Playboi Farti. Rather than focusing on preventing students from using AI tools, educators should develop teaching methods that can’t be undermined by those tools. Only then will they have any chance of finding ways to incorporate them constructively.
If you spend time in the board gaming community, you’ll eventually hear people talk about the “Golden Age of Board Games.” The one small problem is that nobody seems to agree on exactly when or what this golden age is. Historians may place it in the distant past, while newer gamers and industry voices tend to say we’re living through it right now, or perhaps that it just recently ended.
I believe the story is more complicated. The past few years have undoubtedly been good ones for certain types of games and certain industry business models. These prosperous times have attracted investment and attention, but they’ve also created a market crowded with similar products and a more homogenous gaming culture. And as any ecologist will tell you, homogenous ecosystems also tend to be fragile. That fragility leads publishers to grow conservative, fearful that changing customer preferences or economic conditions could threaten their businesses. It’s not easy being golden.
Setup Phase: Laying the Board on the Table
I don’t claim to be a historian of games, just someone with the perspective of a lifelong gamer. I grew up in a house where my parents kept a closet full of board games we played regularly. As a kid, I pored over my dad’s SPI games, careful not to lose a counter or damage a paper map. Over the years, my gaming touchstones grew to include Dungeons and Dragons, Axis & Allies, and Diplomacy. I learned to play Magic: The Gathering on the wildly unbalanced Unlimited Edition. And sometime in college, between games of Axis & Allies and Diplomacy, I played my first game of Settlers of Catan (now simply CATAN).
After college, I got more involved with Play by Email Diplomacy. I have never been a big convention-goer, but I attended a few around this time, including DipCon XXXIV in Denver, where I was playing at the table next to Edi Birsan’s legendary Immaculate Concession. My wargaming peaked around these years with an epic World in Flames campaign and explorations of classic games from before my time, like Flat Top. I have even beheld firsthand the immensity of The Campaign for North Africa, considered by some the longest and most complex wargame ever published.
Like many American game enthusiasts in the 1990s, my perspective on board games’ potential was reshaped by contact with contemporary European design. While “eurogame” has become a vaguely defined term today, it originally referred specifically to games developed and published in Europe, characterized by language-independent design, indirect player conflict, and non-military themes.
Although I remain a game omnivore who enjoys everything from wargames to party games, most of my current gaming falls into what I would categorize as “hobby gaming,” which I describe broadly as the activity of the dedicated community that has arisen around contemporary tabletop games. As in most hobbies, the gaming community is integrally connected to a network of professional and commercial interests, including publishers, retailers, conventions, and media.
Opening Moves: What Makes an Age Golden?
When people claim we’re in a golden age of board gaming, they typically point to the volume of new releases, the quality of those releases (meaning variously their designs or their production values), the size of the market, and the proliferation of board gaming events.
I used to hear more celebration of game diversity, but this seems less common now than it was 20 years ago. This reflects a real shift in the industry toward larger, more elaborately produced games and away from smaller, more economical ones. Crowdfunding platforms, which have become critical channels for discovery and marketing, contribute to the escalating expectations around new major releases. Where crowdfunding once largely served to help small publishers validate demand for projects, it now draws consumers’ attention to projects’ growing scale and ambition. You’re not just buying a game, you’re participating in a campaign! As more and more money rolls in and the game swells with stretch goals and bonus materials, the excitement of the campaign can overshadow the long-delayed enjoyment of playing the finished product.
Publishers have adapted to these raised expectations by adopting familiar strategies from other parts of the media world, relying on established intellectual property, sequels, and series to manage risk. This isn’t entirely new, of course. Before Settlers was CATAN, there were Seafarers, Cities & Knights, expansions, and expansions for expansions. But when one of the most popular recent releases, Wingspan, spawned fantasy (Wyrmspan) and aquatic (Finspan) followups, the community couldn’t help poking a little fun at the emerging pattern of “Thingspans.”
Early Game: The Play Remains the Same
The repetitiousness of popular new releases is no joke. When Wingspan became a commercial success with its then-unusual birdwatching theme, the market responded with all manner of plant-, animal-, and nature-themed card-collecting games. Even visual presentation serves a commercial end: talented artists do amazing work, but once they succeed, the industry transforms them into marketable brands. And that’s not even considering the recognizably one-note visual slop of AI-generated graphics, sadly becoming more common.
Gameplay itself is growing monotonous, in part due to the expanding role of conventions in the hobby. Playing with strangers in a convention setting is very different from playing with friends at home. Some players react badly when they feel they’re being attacked, so managing conflict at a table can be challenging. It’s even more difficult when friends and strangers play together, as often happens at conventions. It’s no accident that designers and publishers attuned to the needs of convention play tend to create games with limited player interaction, sometimes derisively called “multiplayer solitaire.”
This may be my own bias talking here. I’ve never been comfortable at large conventions, and even my limited participation has exposed me to bad experiences with problem players. But interestingly, Diplomacy—a game with an undeserved reputation as a “friendship killer”—has one of the most cordial communities I’ve encountered in the hobby. I have only ever had positive experiences playing this high-conflict, betrayal-heavy game face-to-face with strangers at conventions. One reason may be that it’s an established game with a dedicated community, whereas mainstream hobby gaming focuses on newer releases with fewer established expectations.
Midgame: The Consumption Machine
The focus on new releases is driven by board game media and by the publishers who rely on those channels for publicity. Gaming content increasingly centers on acquiring new games, creating a whole subgenre dedicated to reviews and buying guides. Content creators face tremendous pressure from both audiences and sponsors to produce definitive verdicts on games as early as possible in their release cycles.
With the predominance of crowdfunding, much of this coverage occurs before games are widely available, requiring advance copies and publisher collaboration. Unsurprisingly, this arrangement creates significant conflicts of interest and conditions ripe for misbehavior, as we’ve seen in cases like that of the Quackalope YouTube channel.
Even honest reviewers work under intense time pressure, creating a media environment full of superficial impressions based on little play time and even less reflection. It’s just not possible to produce thoughtful analysis of an unreleased game comparable to what you could create for a game that’s 20 years old, like Twilight Struggle, or 30 years old, like Settlers of Catan, or 66 years old, like Diplomacy.
But this isn’t solely a media issue, and the fact is that most new games aren’t designed to be played in 66 years, or even 20. They’re created to look impressive, drive successful crowdfunding campaigns, be easy to review, learn, and teach, and play reasonably well during early learning sessions (often with unfamiliar players you won’t see again). If particularly ambitious and farsighted, they’ll include hooks for expansions or sequels. In short, they primarily serve the industry’s commercial interests rather than hobbyists’ personal enjoyment.
Interlude: The Things We Lost in the Dark
One example of what today’s “Golden Age” seems unable to produce is Nacht der Magier. This innovative game casts players as wizards gathered around a magic campfire, with a unique twist—it’s played in complete darkness. The elevated board features only a few glow-in-the-dark elements to guide players as they try to land one of their glowing cauldrons in the central divot. They cannot see most of the obstacles that fill the board, and they must end their turn as soon as any piece falls off, signaled by the sound of wood hitting the tabletop.
Such a game would struggle to succeed today. Requiring darkness and light-charged phosphorescent pieces, it’s essentially impossible to play at a convention. It can’t be effectively demonstrated in a YouTube video. It’s an intimate experience, played huddled together in the dark with hushed voices listening for the clatter of falling pieces to break the tension.
It’s also a bit complex to produce, requiring custom wooden and plastic pieces with glow-in-the-dark printing. And unfortunately, for a game with targeted appeal, production complexity alone likely takes it out of the realm of economic feasibility.
The game may not be for you personally. Its BoardGameGeek reviews are not all glowing. But even though Nacht der Magier isn’t for everyone, I find it hard to argue that the hobby benefits from leaving so little room for innovative designs like it. Hobbyists sometimes look down on “kids’ games,” but it’s precisely through these games—designs for kids that can be enjoyed by grownups—that we transmit our love for this hobby to the next generation of players. My school-aged daughter had a sleepover a few months ago, and I can confirm that Nacht der Magier was a hit.
Late Game: Bright Spots
Despite the challenging environment, designers and publishers continue to do interesting and ambitious work. Stonemaier Games, despite receiving gentle ribbing for their Thingspans, consistently produces a variety of high-quality designs. Niche publishers like Hollandspiele and New Mill Industries use small print runs and just-in-time production to publish designs that larger publishers can’t or won’t.
On the design side, Cole Wehrle has found commercial and critical success doing diverse and innovative work. His recent games range from asymmetric conflict in a fantasy (Root) or historical setting (Pax Pamir) to sci-fi campaign games that creatively combine gameplay mechanics (Arcs), to an ambitious and earnest exploration of colonialism (John Company). Even if these games don’t appeal to you, they clearly avoid repetition.
While I appreciate these creative outliers, they remain exceptions to the rule. As the industry mainstream grows larger and louder, the vital work being done outside of it seems increasingly marginalized by comparison.
Final Scoring: Reconsidering the Golden Age
Considering how much the hobby has changed during my life, I can’t deny that we’ve likely been living through some kind of golden age. What remains unclear is whether it has ended, and whether it was as golden as it could have been. I can’t celebrate the commercial pressures that have limited our imagination as players and reduced the space for innovation. And I remain particularly skeptical of the role commercial interests play in prioritizing consumption and acquisition over enjoyment.
Speaking for myself, I like variety, creativity, and innovation. I believe a healthy hobby publishes diverse games, even if not all appeal to me personally. And while the end of this golden age may hurt commercial interests, it might not be so bad for the hobby as a whole. After all, many of us now have shelves full of great games, and we’ll have plenty of time to enjoy them while we wait for the next golden age to begin.
I first encountered Bryan Stevenson and his work nine years ago. Google, my employer at the time, invited him to speak about his nonprofit, the Equal Justice Initiative (EJI), which provides legal representation to people who have been subjected to unjust treatment during criminal proceedings or incarceration. His talk, which is posted on YouTube, moved me profoundly, and I went on to read his book, Just Mercy.
A core idea motivating Bryan Stevenson’s work, laid out both in that talk and in Just Mercy, is that “Each of us is more than the worst thing we’ve ever done.” In his context, that “worst thing” could be quite terrible. While its work includes representing the falsely accused, EJI often represents people who truly have committed heinous crimes.
It takes a special kind of moral clarity to represent and seek justice for a murderer who has been abused in prison. After reading Just Mercy, I found myself much more attuned to the many ways our society discards those deemed unworthy of fair treatment. We can’t call justice a foundational social value if it is contingent. If there are things we can do to have the protection of the law withdrawn from us, then that protection isn’t really meaningful. Everyone whose work touches criminal investigation, trial, or punishment should be held to the highest standards because of, not despite, their impact on those who may have done horrible things.
Lately, however, I’m troubled to hear this language of mercy and second chances voiced in some unexpected places.
When venture capital firm Andreessen Horowitz (a16z) hired a man freshly acquitted after choking Jordan Neely to death on the New York subway, they told their investors, “We don’t judge people for the worst moment in their life.” Notably, nobody disputes the fact that a16z’s new investment associate killed a man on the floor of a subway car. He was acquitted only because a jury did not deem that act to be a crime.
Similarly, when a 25-year-old staff member of the new “Department” of Government Efficiency resigned over vicious tweets endorsing racism and eugenics, the Vice President of the United States dismissed it as “stupid social media activity” that shouldn’t “ruin a kid’s life.” The staffer was promptly reinstated into his role as a “special government employee.”
These echoes of Stevenson’s words might sound familiar, but they deserve careful scrutiny. Have a16z or the current administration ever invoked mercy as a broader goal? Not so far as I can tell. Beyond a handful of podcast episodes posted five years ago, Andreessen Horowitz’s engagement with criminal justice seems limited to investing in drones and surveillance technology. Their founding partner made such large personal donations to the Las Vegas police that he felt the need to write a petulant response when investigated by journalists. Regardless of how worthwhile it may be to buy drones and cappuccino machines for the police, I think it’s fair to say that these investments do nothing to advance the cause of building a merciful society. As for the administration, its tough-on-crime rhetoric speaks for itself.
So what’s really happening here? One obvious answer is that hiring a killer and reinstating an unapologetic bigot are ways to endorse their behavior. We like to believe we maintain social norms against killing someone when it could be avoided and against racist public statements. Rejecting these norms requires some affirmative signal, and that’s what a16z and the administration are providing.
This reveals the first crucial distinction: Stevenson calls on society to recognize the rights of those who do wrong. In contrast, these recent cases are declarations that certain acts are not wrong at all, and even more troublingly, they suggest that those who still maintain longstanding social norms are themselves the wrongdoers.
But there is a second important difference, one hidden by rhetorical sleight of hand. When EJI takes a case, they fight for fundamental civil rights: the right to a fair trial, to representation, to be sentenced fairly, and to humane punishment. By contrast, in these other cases, the talk of “ruined lives” is a misdirection disguising what’s really happening: the loss of positions of extraordinary influence and privilege.
When the Vice President claims a racist’s life has been ruined, he’s talking about someone losing a powerful role in the federal government, treating an immense privilege as though it were a basic right. And if we take a16z at their word, they seem to be claiming that it would be unfair not to elevate a killer to the investing team at one of Silicon Valley’s biggest venture firms. They’ll have to forgive my skepticism that you or I would get very far with that approach if we applied for such a coveted role.
I’m reminded of the 2018 Brett Kavanaugh Supreme Court confirmation hearings, when supporters claimed that sexual assault allegations against him were intended to “ruin his life.” Obviously, they didn’t, but even if they had prevented his appointment to the Supreme Court, was that his right? Would his life have been ruined had he instead returned to his previous role as a federal judge on the DC Circuit Court of Appeals? I’d wager many people would gladly accept that kind of “ruined” life.
But of course, this is nothing more than a flimsy, bad faith attempt to cloak approval for violence and eliminationism in lofty language. If you spend your time, like Bryan Stevenson, ensuring that the poor and marginalized are afforded basic rights and dignities, then I believe in your commitment to mercy. If, on the other hand, you spend your time granting vast privileges to people who harm the poor and marginalized, then you’re not showing mercy—you’re showing your hand.
One of the best things I did last year was to read Robert Caro’s classic 1974 biography The Power Broker: Robert Moses and the Fall of New York. I finally tackled all 1,162 pages of it (plus 80 more pages of notes), so it can be retired from the shelf of shame where I keep the massive books I want to read but never get around to. (Look out, Infinite Jest.)
I was inspired by the 99% Invisible podcast, which covered The Power Broker in a series of episodes that ran like a virtual book club. Now that all 12 episodes have been posted, you can follow along at your own pace, but I recommend taking your time with it, because there is so much to enjoy about the book.
Calling The Power Broker a biography doesn’t do it justice. More than a picture of one (admittedly significant) individual, it is an exploration of power, how power flows both through and around democratic institutions in America, and how power etches itself into the physical structure of our cities, highways, parks, and homes. It is also, in its way, a love letter to New York in all its multitudinous possibilities: the many faces of New York experienced by its residents as well as the many New Yorks that once existed or might have existed. It is a monumental work written with extraordinary grace.
Unlike readers in 1974, I approached the book already aware of Robert Moses’ reputation as a villain of urban planning. While I found ample evidence supporting that view, I also found something more profound: an invitation to understand a man who caused immense harm to the city he claimed to serve without excusing or minimizing that harm. Nowhere is this challenge more powerfully presented than in the book’s conclusion, where Robert Caro frames and reframes Moses, forcing readers to confront the complexity of this figure, contradictions and all.
The final chapter, simply entitled “Old,” finds Moses just after the end of his nearly 44-year-long career as a public official, a time during which he never once held elected office. By this point, readers have witnessed how Moses reshaped America’s largest city through an unprecedented campaign of public development. The New York we know today would be unrecognizable without his influence. But readers have also seen his utter contempt for many of the city’s people, including but not limited to his famous and flagrant racism. Again and again, Moses callously dismissed those he deemed unworthy simply for being in his way.
The cost New York paid to realize Moses’ vision is staggering. From the benighting of Sunset Park beneath the Gowanus Parkway to the way the Cross-Bronx Expressway chopped up East Tremont like a meat axe—Moses’ words, not Caro’s—the impact on these broken neighborhoods and displaced families is impossible to fully comprehend. The bill for all this so-called development has still not been fully paid. Today, it’s a cost paid by travelers and commuters wasting their time in traffic and by residents leading lives made shorter and sicker by pollution.
Yet in the final pages of the book, Caro achieves something remarkable. He presents us Moses at nearly 80 years old, stripped of power and fading from relevance. Thrust into retirement, Moses shrinks into a frustrated and disappointed figure. Despite receiving occasional recognition that falls short of his grandiose expectations, he finds himself increasingly criticized by the very public that once celebrated him. Somehow, Caro manages to inspire a flicker of sympathy for this vain, arrogant bully who deceived and mistreated so many for so long in so many ways.
And then, in the famous closing paragraphs of the book, Caro subtly, masterfully, fractures that sympathy.
In private, his conversation dwelt more and more on a single theme—the ingratitude of the public toward great men. And once, invited by the Church to speak at the dedication in Flushing Meadows Park of the Excedra, a huge, marble bench for reflection donated by the Roman Catholic Diocese of New York, he gave vent to his feeling in public. Turning to a high church official who was also an old friend, his voice booming out over the public address system, he said:
“Someday, let us sit on this bench and reflect on the gratitude of man.”
Down in the audience, the ministers of the empire of Moses glanced at one another and nodded their heads. [Moses] was right as usual, they whispered. Couldn’t people see what he had done?
Why weren’t they grateful?
Caro spends the bulk of the last chapter reminding readers that, despite his posturing, Moses is merely a man, as pitiful and flawed as any. But this final anecdote is a warning that sympathy cannot excuse Moses. He is humiliated in his dotage, but unremorseful. He demands thanks, not forgiveness, for the damage he caused.
There is a powerful lesson here. It’s tempting to write off people who do harm as monsters, but at some level, this means falling into the same trap that enabled Moses’ worst acts: the failure to see others as fully human. Caro challenges readers to fully apprehend Moses—or, I’d argue, any historic wrongdoer—by maintaining moral clarity about the damage he did while resisting the urge to anathematize him as a caricature of evil.
In our current moment, when public discourse rewards superficial judgments that fit into tweet-sized declarations, The Power Broker is more valuable than ever. It demonstrates that embracing nuance does not demand surrender to moral ambivalence and shallow bothsidesism. Within this deeply researched and profoundly humane work, readers have room to stretch out and wrestle with complex questions about the public good while maintaining clear-eyed judgment about right and wrong. Even 50 years after its publication, it remains a unique and precious meditation on cities and the flawed humans who live in and shape them. I’ve thought about it every day since I finished it and will undoubtedly revisit it here in the future.