Stray Thought: Adam Stone’s Home on the Web

Links: Newspaper Stories

The Serious Pages

With the state of American news media today being a very live concern, I was reminded of two films I saw a few years ago describing the production of news in another era. The two films were both produced by the Encyclopedia Britannica, whose film output was previously unknown to me, and both are named “Newspaper Story.” The first is from 1950 and describes a seemingly fictional local newspaper in a midwestern town. The second, from 1973, documents the production process of the Los Angeles Times.

I find both of these films fascinating. One reason is the approach to gathering and reporting on news described in these pieces, which strikes me as speaking for an optimistic American post-war media consensus, idealized and imperfect as it is. The other is the amount of effort that went into the physical production of the papers themselves. The advent of near-instantaneous digital communication completely restructured these enterprises and shifted media companies’ primary efforts away from what was previously the thrust of their work: producing some physical product, whether bundles of printed pages or towers that beam radio signals to receivers.

Both of these are great watches, but if you’re only going to watch one, I recommend the LA Times film from the 70s.

Newspaper Story, 1950 via A/V Geeks 16mm Films
Newspaper Story, 1973 via A/V Geeks 16mm Films

The Funny Pages

This announcement got a lot of publicity when it came out a few weeks ago, but it’s better late than never to note that Gary Larson (of The Far Side fame) is making new comics again.


Link Round Up: Word Puzzles and Software

Dispatches from the world of daily puzzles

I already link to Bracket City, a daily word puzzle, from my links page. But I haven’t yet linked to the Bracket City Dispatch, the newsletter that for some reason hasn’t moved over to The Atlantic with its namesake game. It’s an interesting word-of-the-day newsletter. I’d really love to find something to replace Paul McFedries’ old Word Spy newsletter, which focused on neologisms, but for now, this will do.

Speaking of daily puzzles, I recently discovered Doople, in which players find pairs of words that link together to build a chain. It’s a lot like Puzzmo’s Circuits, which came out just a month or two ago and uses the same core puzzle concept with a more variable structure.

They don’t make it (software) like they used to

I have seen a bunch of good essays and blog posts from technologists trying to grapple with the weaknesses of LLMs without sliding into facile dismissal or doomerism. One that really helped me with my own thinking on the matter is Why your boss isn’t worried about AI by Boyd Kane, which does a good job breaking down key ways LLMs don’t work the way software typically does. It’s hard to emphasize enough how important determinism is for programming, and the way LLMs rip away that assumption (possibly irrevocably so) is extraordinarily disorienting. But, as this piece lays out, that’s not the only assumption being broken.
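As a toy illustration of how fundamental that assumption is (my own sketch, not an example from Kane’s essay): a conventional function is a fixed mapping from inputs to outputs, which is what makes it testable, while anything built on sampling is not.

    import random

    def slugify(title: str) -> str:
        """Deterministic: the same input always produces the same output."""
        return "-".join(title.lower().split())

    def generate_tagline(topic: str) -> str:
        """Stochastic stand-in for an LLM call: repeated runs may differ."""
        templates = [f"{topic} made simple", f"rethinking {topic}", f"{topic}, but better"]
        return random.choice(templates)

    assert slugify("Stray Thought") == slugify("Stray Thought")  # always holds
    print(generate_tagline("blogging") == generate_tagline("blogging"))  # may print False

A test suite can pin down slugify forever; there is no equivalent assertion you can write about generate_tagline’s output.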

The Great Software Quality Collapse: I look at this as a kind of companion piece to Boyd Kane’s essay. Even traditional, non-LLM software is going through a crisis of quality. It would be very difficult to argue that commercial software hasn’t become a profligate resource waster. The headline example in this essay, the macOS calculator app leaking 32GB of memory, is a grievous example that should be a wake-up call for the industry. Not because of its cost—for better or worse, memory is cheap these days—but because of what it says about how the industry is failing to do quality work. Wasting 32GB of memory, especially for an app as simple as a desktop calculator, is hard, and actually shipping such an enormous flaw should make everyone think twice about how risky software is.
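As an aside on how a leak like that typically happens (a hypothetical pattern of my own, not the actual macOS bug, whose cause I don’t know): in garbage-collected environments, leaks usually come from unbounded collections that keep references alive.

    class Calculator:
        def __init__(self):
            self.history = []  # kept "for undo," but never pruned

        def add(self, a: float, b: float) -> float:
            result = a + b
            # Hypothetical bug: every operation is retained forever, so
            # memory grows without bound for as long as the app runs.
            self.history.append((a, b, result))
            return result

Each individual append is harmless; it’s the absence of any bound that eventually eats gigabytes.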

No seriously, they don’t make it like they used to

This charming little training film illustrates the way people thought about bringing computer technology into the enterprise almost 50 years ago. One of the bitterest ironies in the state of computing today is that there are still plenty of corners of the business world that could benefit from adopting very basic information technology, but that’s not where the money is right now.


Link Round Up: Interactive Edition

Things You Can Play With

Generally speaking, these collections of links point to things out on the broader Web, but I reserve the right on occasion to highlight my own stuff. Earlier this week, in a fit of early-aughts nostalgia, I recreated a web toy that had been a very popular feature of my ancient, defunct blog: The Potty Humor Name Generator. Rebuilt from scratch as a client-side single page app, this little amusement will help you, too, relive the joy of pointless Web tools featuring the comedic stylings of the Captain Underpants series of kids’ books.
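If you’re curious how little machinery a toy like this needs, the core is just random selection from word lists. A minimal sketch of the idea (the word lists here are my own placeholders, not the generator’s actual vocabulary or logic):

    import random

    FIRST_NAMES = ["Stinky", "Squishy", "Loopy"]
    LAST_PREFIXES = ["Toilet", "Pickle", "Burger"]
    LAST_SUFFIXES = ["pants", "brains", "face"]

    def potty_name() -> str:
        """Combine one random word from each list into a silly name."""
        return (f"{random.choice(FIRST_NAMES)} "
                f"{random.choice(LAST_PREFIXES)}{random.choice(LAST_SUFFIXES)}")

    print(potty_name())  # e.g., "Loopy Picklepants"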

After you’ve wrung all the enjoyment out of my own humble contribution to making the Web fun again (which shouldn’t take long), a more worthwhile way to spend your time is the new (launched today!) daily game Tiled Words, an interesting cross between a crossword and a tile-laying puzzle.

There is also the 10,000 Drum Machines project, whose name describes its goal better than its current state. Still, 55 Web-based interactive drum machines (as of today) is a pretty impressive collection. I love these kinds of creative tools and really want to see more of them.

Things To Think About

I was recently thinking about contronyms, which are words that can be their own opposites. One classic example of this is the word “dust,” which can refer both to removing dust (e.g., “he dusted the bookshelves”) and adding dust or powder (e.g., “he dusted the counter with flour”). In my case, I was thinking about the word “deliver,” which means both providing something (“deliver the package”) and taking something away (“deliver me from evil”). In the course of my wonderings, I happened upon the phrase “skunked term”, which to me captures not only the specific linguistic experience of a term’s usefulness diminishing because of changes in its usage but also something larger about the experience of trying to communicate more generally as semantic drift seems to accelerate in these first few decades of the millennium.

Things To Watch

You think you know about bowling, but do you know about duckpin bowling? What if I told you there are a whole bunch of YouTube videos containing full telecasts of duckpin bowling matches just waiting for you to watch? When I was a kid, it felt like this stuff was on local TV all the time, and honestly, we’re worse off without it.

And as a bonus this week, a modern classic of the YouTube Poop genre, especially for Columbo superfans such as myself:


Link Round Up

Music

The years when my musical taste really formed came somewhat later than Gang of Four’s prime creative period in the late 70s and early 80s, but their influence was clearly legible in many of my favorite bands. This live TV performance of “He’d Send in the Army” is such a perfect rendering of Andy Gill’s unique approach to the guitar, set within the driving groove and the urgency of the subject matter (sadly still all too relevant over 40 years later).

It happens so rarely that a recommendation algorithm brings me good new music that I take note when it happens. Most recently, The Delvon Lamarr Organ Trio happened across my feed, and I have been hooked. They play a kind of small-combo funk that really lands for me, and while their studio recordings are great, you can also find some excellent videos of their live sets, one of which I’ve included below.

(Non-Musical) Notes

  • Ian Leslie’s notes on growing older: For something that happens constantly to every living thing, it’s strange how little we talk or write about aging. If the theory here is that we’re all in a state of collective denial, that can’t be contributing to a healthy society.
  • Adam Aleksic’s notes on slop: I’m still chewing on this list, but one major takeaway for me has been to remember that we had slop before we had AI to create it for us. I think the impact of algorithmic feeds and automated content generation is only beginning to be felt, much less reckoned with.

Podcast

I’ve already written two posts inspired by episodes of PJ Vogt and Sruthi Pinnamaneni’s Search Engine podcast, but this recent episode with Ryan Broderick discussing the “Dubai Chocolate theory of the internet” is, I think, a very illuminating look at the path an idea takes through social media to ubiquity. The number of highly specialized influencers (e.g., hot women eating food on camera) and their reach, combined with the incentives at play (e.g., people can’t taste food through social media, so it has to be visually striking—not necessarily in a good way—to become popular), are elements that drive culture, even for people who don’t participate in them directly.

Search Engine: A Dubai Chocolate theory of the internet

Gym Class Games

Diagram of the phys ed game 'Pinball' showing a playing area set up on a basketball court with hockey nets at each end, surrounded by bowling pins placed variably around the goals, with red and blue stars showing where players might position themselves

This is a site and a YouTube channel I happened across that catalog all kinds of games invented for physical education classes in schools. I have fond memories of playing floor hockey and matball, especially when it was too rainy to use the fields, and I found myself unexpectedly tickled to see such a collection of games invented by gym teachers for their students. Some of these games look pretty good!


The Next Season of Stray Thought

Closeup of the roots of an enormous fallen tree. The roots are dried and weathered from exposure to the sun, and moss has started to grow on some of the more sheltered surfaces.

A few months ago, I laid out a plan for slowing down posting on this blog for the summer. That break has gone largely as I expected it would, giving me the chance to focus on a couple of other projects. Throughout September, when I had originally planned to get back to regular updates here, I was instead preoccupied with getting one of those projects over the finish line. Though I didn’t quite manage to complete it—and I’ll write more about it here when I do—I am close enough at this point that I’m ready to get back into more regular Stray Thought updates.

Here’s the Posting Plan

I intend to write here on a regular schedule for the rest of 2025. About every other week, I plan to post a piece along the same lines as the other bits and pieces on the site.

I’m also going to start a new series of posts. Each week, I plan to post a collection of links to things I find interesting. I’ve come to the conclusion that if I don’t want algorithmic social media feeds to drive everyone’s experience of the Web, I should try to provide an alternative, even if only in some small way.

So, over the course of the week I’ll keep track of worthwhile articles, videos, podcasts, interactive tools, and whatever else I happen to come across, and then drop a little bundle of goodies on the site. Human-curated online content for human consumption, just like an old-fashioned weblog. I haven’t decided on the best day to post those, so I may play around a bit at first, but I’ll get the first one up this coming weekend.

Topics to Look Forward To

A few topics I kicked off 2025 with will likely be making repeat appearances sometime around the New Year. I plan to do the Advent of Code again this year, and while I don’t think I’ll post through that event, I will gather my thoughts into a wrap-up post afterward. I’ll also be posting a 2025 ludography reviewing my board gaming activity.

That’s in addition to some reflections I hope to share about the side projects I’ve been working on over the summer. Right now I’m too engaged in the “doing” to be doing much “reflecting,” but I’m fairly certain that there will be something worthwhile to write when the time comes.

Thanks

Thanks for sticking with me as the site grows through these seasons of activity. I’m feeling recharged after the break, and it’s so fulfilling to share my renewed creative energy with you.


On Bee Boxes and Moral Compasses

It can be hard to perceive the subtle and gradual ways I change over time. Rationally, I know it’s happening, but until something specific tells me otherwise, I tend to think that whatever is true about me today was true about me in the past and vice versa. Evidence of these long, slow changes can be hard to come by. There are obvious things, of course: I have no shortage of pictures that prove I don’t look like I did when I was younger. But it’s not every day that I’m confronted with a way my moral intuitions have shifted.

It came up unexpectedly, as moments of self-realization often seem to. My daughter and I were watching an episode of Mythbusters. If you’re unfamiliar with the series, it aired on Discovery from 2003 to 2018, and its original idea was for a crew of special effects experts to film experiments confirming or disproving urban legends and other popular folklore. Early on in the show’s run, it expanded its focus to include movie scenes, internet rumors, news stories, and more.

The specific episode we watched was “Paper Crossbow” from 2006. By this point in the show’s run, most episodes had a two-story structure: the original hosts, Adam and Jamie, would test one myth while the “build team” worked on another, with the edit cutting back and forth between the two. In this episode, the hosts tested whether they could build a lethal crossbow from newspaper while the build team experimented with household uses for vodka.

As a side note, we watched this episode as a family not long after we ourselves used some cheap vodka to get the smoky smell out of some clothes and camping equipment that had been too close to a fire burning uncured firewood, so we already knew firsthand the result of one of the myths tested in that episode.

It was a different vodka myth that caught my attention, and it wasn’t the result of the test but the experimental setup that was striking. The build team tested whether vodka would act as an insecticide if sprayed on bees. They got bees from a beekeeper, divided them into two ventilated acrylic boxes, and sprayed the contents of one box with water and the other with vodka to see which would kill more bees. Now, with a little bit of thought, it’s clear that even if no bees were killed by the vodka or water (and, in fact, only two of the bees in the water box died during the experiment), all of the bees were going to die. Honeybees depend on their hives to survive; removed from the hive, they die.

Screencap of the episode showing a few wet bees lying on their backs on a paper towel
Don't worry, they're only sleeping... for now

I have no doubt that when I originally watched this episode, I had no problem with it. But now, it strikes me as needlessly cruel to treat bees this way. Insofar as this change is part of a larger shift in my moral intuitions around the treatment of animals, it’s not a very large shift. I still eat meat, though perhaps less than I did 20 years ago, and I am not shy about killing bugs in my living space. But I can say with some confidence that if I had been part of the Mythbusters build team in 2006, I would have seen nothing wrong with the experiment, while I would object to running it in 2025.

It’s possible that some of this shift has to do with the bees themselves. Not long after the episode originally aired, I, like many other Americans, started to learn about colony collapse disorder and the ecological and agricultural importance of honeybees more generally. Being aware of those issues certainly makes me wonder what the beekeeper who provided the bees was thinking, but I don’t think it explains how my reaction changed. After all, I’m experiencing an intuitive response to the experiment on moral terms, not a calculated analysis of the benefits and drawbacks of killing important and valuable animals for a science education show.

Screencap of the episode showing a smiling woman in a protective bee suit and veil. The caption reads, LYNN ARCHER "Beekeeper"
Seriously, what were you thinking?

Instead, I think this is just an example of an imperceptibly gradual change at the heart of the way I see and judge the world. I hope it’s for the better: I would like to think that I grow more attuned to suffering and more willing to speak and act against it as I get older. Or, maybe the duty of care I feel for the world applies to a larger set of things than it did before. I don’t know for sure, but I can say that I’m clearly not the same as I was when I watched this show back in 2006. And that’s a bit of valuable self-understanding.

I encourage you, too, to consider how your moral orientation has changed over time, and how it hasn’t. Just as you might look at your picture in an old yearbook, consider how the passage of time has deflected your moral compass. Don’t just think about when you have been most comfortable—it’s almost a cliche that the worst people lose the least sleep over their behavior. Try to find these points of difference and explore them. Because the deeper truth about the way time changes us is that it reveals we are never finished products. There is always more change to come.


Family Movie Night: The Naked Gun (1988)

Recently, for family movie night, my wife and I watched the original Naked Gun (1988) with our not-quite-teenage daughter. The idea was prompted partly by all of the advertising we’ve seen around LA for the new Naked Gun reboot coming out this weekend. The experience spurred me to read about what makes things funny and to reflect on how audiences engage with comedy over time and how younger generations react to old humor with fresh perspectives.

My first takeaway is simple: the movie is still pretty funny. I laughed at many of the jokes, and my daughter found it funny (in parts), too. Part of this is because the Zucker-Abrahams-Zucker filmmaking team is what basketball fans would call a “volume shooter”—if you take a lot of shots, you’re going to score some baskets. Or, to look at it another way, packing a movie as full of gags as possible means that they don’t all need to land.

According to my daughter, the funniest scene in the movie comes just after the credits, when the hapless Nordberg unsuccessfully attempts to singlehandedly bust a drug deal. I tend to agree that one of the true pleasures of this movie is the amount of punishment it dishes out to OJ Simpson’s character, whose limited screen time is unrelentingly humiliating.

I found the jokes she didn’t get just as interesting. It was no surprise that the cold open, featuring lookalikes playing a rogues’ gallery of foreign adversaries in the typical 80s-era American mind, didn’t connect. But I didn’t expect her to ask what was going on during the title sequence, shot from the siren’s-eye-view of a police cruiser’s roof. I guess police emergency lights no longer look anything like spinning reflectors.

Closeup of an old-style cruiser-top police emergency light
They don't make 'em like they used to.
Cropped from Federal Signal CJ-284 4-bean Beacon by Rei Findley, CC BY-SA 2.0

Watching the movie got me thinking about what makes things funny in the first place. Researcher Peter McGraw has proposed the Benign Violation Theory, which might come as close as I’ve ever seen to a theory’s name explaining itself. The core idea is that humor arises from situations that an audience perceives simultaneously as both incorrect and acceptable—that is, both a violation and benign. There’s definitely room to argue with this theory, but I do think it explains a lot about what makes things funny.

Consider a few jokes from the Nordberg drug bust sequence through the lens of the Benign Violation Theory. Most viewers are familiar with the idea that a police officer might break down a door to apprehend a criminal. So, when Nordberg attempts this move only to get his leg stuck in the door, viewers experience it as a violation. When Nordberg demands a gang of armed criminals drop their weapons, and one dimwittedly obeys, both characters have violated our expectations of reasonable behavior. Importantly, the sequence only plays as benign because the film has already established (in the cold open) that it is operating in a world where violence does not harm its characters. The scene would not be funny if the audience didn’t already know that Nordberg will be okay no matter how many times he gets shot or how many bear traps he steps in.

Benign Violation Theory also explains why the cold open wasn’t funny to my daughter. For example, she had no reaction to seeing Leslie Nielsen’s character wipe Mikhail Gorbachev’s famous birthmark off his head. To a kid who may never have seen an image of him before, neither Gorbachev nor his birthmark carries any significance, so it doesn’t register as a violation.

This theory helps explain why some humor goes stale and how comedians can create humor that stays funny. While nobody would mistake me for a comedian, I think much of the work of being funny lies in creating the violation, which is more often than not a subversion of the audience’s expectations. The classic structure of a joke involves a setup, in which the comedian does something to trigger an expectation, and a punchline, which violates that expectation. Something purely surprising can be a good punchline, but relying on surprise is one way comedy becomes stale. True surprise is a one-time thing, and once the audience knows the surprise, the trick loses its magic.

Great comedy isn’t only funny the first time around. When you know the joke by heart—perhaps because it’s in a 37-year-old movie you’ve seen more times than you can count—and it remains funny, it’s because the comedian has managed to violate your expectations even though you knew what was going to happen. Sometimes the punchline is truly outstanding or delivered in a uniquely perfect way. But I think most enduring jokes owe their longevity to the way they are set up. A great setup leads audience members almost subconsciously into an expectation, which gets punctured by the violation no matter how familiar they may be with the joke.

Although the classic joke structure persists because it works, comedians can also subvert it to great effect. One thing I noticed while watching The Naked Gun was the number of jokes that have little or no setup at all. These jokes depend on the audience to come preloaded with expectations that the jokes violate. A running gag throughout the Naked Gun movies is how dangerous and violent the police officers are. This is one way the movie reads very differently in 2025, especially the line Nielsen delivers after his character is dismissed from Police Squad, “Just think, the next time I shoot someone, I could be arrested.” These jokes worked on the adolescent version of myself who walked into a movie theater with the expectation that police—especially the main characters of detective shows—are helpful authority figures worthy of unquestioned respect. I suppose I find these jokes less funny now for two reasons: my changed expectations diminish the impact of the violation, and my awareness of real police violence makes the joke less benign.

These setup-less jokes not only risk changes in audience expectations, but changes in the audience itself. When future audiences lack the expectations comedians assume they will have—as with my daughter and Gorbachev’s birthmark—the would-be punchlines become confusing nonsense. Comedy that relies too much on specific audience expectations is liable to lose its punch with each passing generation.

The passage of time can also add new context, altering humor beyond the comedian’s original intent. These context shifts can be dramatic, as with OJ Simpson, whose murder trial and resultant infamy could not have been predicted by The Naked Gun’s filmmakers. Seeing a wheelchair-bound OJ Simpson roll down a stadium staircase and tumble helplessly over the railing is funnier to me today than it could possibly have been in 1988. But I can just as easily understand someone who can’t laugh at Simpson no matter how much physical abuse his character endures on screen.

The experience of sharing comedy across generations fascinates me. Sharing something I found funny in the past with my daughter means confronting the ways I have changed, the ways the world has changed, and the ways the two of us are different. But it’s magical when some throwaway gag manages, despite all odds, to connect my past, my present, and my child, like a thread stretching across time. All culture is like this, I suppose, but comedy, with its dependence on shared meanings and expectations, seems especially susceptible to the passage of time. It somehow feels reassuring to know that I’m not the only one who finds “Hey! It’s Enrico Pallazzo!” as funny as I did when I first saw it in my own not-quite-teenage years.

I hope this little digression into humor was enjoyable, or at least interesting. As I think about the direction I want to take this blog, I’d like to make room to write about topics outside of my usual areas of expertise, and I hope I can address them in ways that express how much I enjoy exploring new ideas and learning new things. Explaining a joke may kill the humor, but understanding the universal human experience of laughter feels deeply worthwhile.

One final digression

In the extreme, a joke can be nothing more than its structure. This is what’s going on in one of my favorite species of joke, the shaggy dog story, which was perhaps perfected by the late Norm Macdonald over his many appearances on various Conan O’Brien shows. (Here’s a great example.) These jokes consist of the comedian telling a long story, often full of digressions and unnecessary details, culminating in an anti-climax: either a weak punchline or no punchline at all. The humor arises from the audience’s violated expectations around the structure of the joke itself: setups are supposed to be economical, or at least interesting, and punchlines are supposed to be unexpected and funny. The shaggy dog story does neither of these things, and the audience starts laughing as the realization gradually dawns on them that they’re not getting the joke they anticipated.


Notes on Attention

I was originally working on a longer post to finish off the month of June, but I couldn’t get it to a place where I was happy with it. There was, however, an element of that post that I thought was both important and worthwhile. So, instead of what I had originally planned, I’m polishing up and posting my notes on the topic of contemporary attention economics.

A user profile icon marked with a red low battery icon
Graphic composed from Head icon by heisenberg_jr and Low battery icon by afif fudin via Flaticon

The concept of the “attention economy” was originally developed by the researcher Herbert A. Simon in the early 1970s, and in recent years it has attracted renewed interest, including popular books by Jenny Odell and Chris Hayes. Simon studied how information affected organizational decision making, work that earned him the Nobel Prize in Economics in 1978. He observed that information had historically been a scarce resource, but that changes in computing and communication technologies were making information more abundant and accessible.

Simon’s critical insight was that information on its own is not all that valuable. To be useful, information must be combined with human attention. Intuitively, this makes sense. If you sit and read a single book, the same amount of information is transmitted whether it’s the only book in the room or you’re in a library surrounded by endless shelves of them. It’s your attention to the information that matters.

This key insight leads to an important secondary observation: attention is itself a scarce resource. In fact, attention isn’t merely scarce but fundamentally limited by the hard fact that we each have only 24 hours in a day. Simon theorized that as the supply of information grew, the scarcity of its necessary complement, attention, would become the limiting factor preventing organizations from successfully incorporating the available information.

What Simon predicted would be a concern for strategic decision makers within firms has become an issue faced by every individual on earth with a smartphone, but our challenge is complicated by conditions he never anticipated.

For one thing, the growth of companies whose business models depend on monetizing popular attention has created an incentive structure that would have been alien to Simon. These companies have innovated upon information itself, broadening and flattening it to invent the more flexible concept of “content,” a superset of information that encroaches on all aspects of human attention. Information bears some relationship to facts and data, but content can consist of anything that consumes attention regardless of its truth, meaning, or substance.

Not only do these actors have strong incentives to generate endlessly growing mountains of content, but they also face an economic imperative to capture more and more of their audience’s attention as directly as possible. This means that the amount of content is growing faster than ever, and that platforms actively drive audiences to spend more and more time consuming their content, without regard for its value to them.

The other condition Simon didn’t anticipate is the combination of global-scale social media platforms and ubiquitously connected smartphones with push notifications, the technological changes that truly enabled the modern information environment. It’s characterized not only by an extreme abundance of content, but also by continuous, relentless efforts to seize the audience’s attention through distraction, interruption, heightened emotional intensity, and exaggeration of urgency, importance, and relevance. Innovations like infinite scrolling feeds, algorithmic recommendation, and engagement-based content ranking are all designed to manipulate individuals into consuming more content than they otherwise would.

Graph of the logistic or sigmoid function rising gradually from a base value through a rapid exponential growth phase and ending by gradually approaching a limit value
Our good friend the logistic function, courtesy Wikimedia CC BY-SA 3.0: the basic mathematical model for a property that grows exponentially within a limited range.
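For anyone who wants the notation behind that figure, the curve is the solution to the standard logistic growth equation, where r is the growth rate, K is the limiting value (the carrying capacity), and N_0 is the initial value:

    \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
    \qquad\Longrightarrow\qquad
    N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-rt}}

Early on, when N is small relative to K, growth looks exponential; as N approaches K, growth flattens out.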

Simon rightly predicted that informational abundance would cause an organization’s (or the public’s) ability to absorb new information to grow until it started to be limited by scarcity of attention. But nobody knew exactly how that limitation would manifest itself.

I suspect that what’s currently happening is the process of arriving at a new attention equilibrium as limits on content consumption become driven entirely by attention scarcity. Anecdotally at least, both online and offline, I keep encountering people who seem exhausted and troubled by the information overload they experience every day. If I’m right, and the world is starting to bump up against the upper limit of the public’s capacity to absorb content, it’s reasonable to be concerned about how that equilibration will happen.

Because the socioeconomic configuration that created these conditions is exploitative and unregulated, the process of arriving at this new equilibrium is most likely going to be chaotic, ugly, and painful. It would be naive to expect anything else, with platforms using any technique at their disposal to drive ever higher levels of content consumption, combining content of all types (e.g., current events, political opinion, entertainment, intentional misrepresentation, personal communications) into a single, undifferentiated stream, and competing viciously amongst themselves for an increased share of the audience’s limited time.

These changing conditions shape mass public opinion, which seems to be souring on technology in general, and they affect individual experiences as well. Consider a recent report from the American Psychiatric Association suggesting that diagnoses for adult ADHD have increased in recent years. Could this be an indication that people are feeling the effects of the fierce competitive struggle going on over their limited attentive capacity? Maybe people feel overwhelmed and conclude that the problem must be a deficit in their ability to pay attention, when what’s really happening is that their perfectly normal attention is being undermined by an information environment designed to manipulate and exploit it.

Speaking for myself, I’m looking for ways to reclaim my attention and be more intentional about how I spend it. This summer, I’m taking a more active role in choosing where I get my information from and resisting being led by algorithmic feeds. I’m also getting offline more, seeing people in person, and being more present in my community. These are small steps, to be sure, but critical ones. If you are also paying attention to your attention, I’d like to hear from you. Please let me know what steps you’re taking, what’s working, and what’s not.


Reflections on the First "Season" of Stray Thought

Back in January when I launched this blog, I wrote that I planned to post content actively in “seasons,” recognizing that neither my inclinations nor my track record made it wise to expect consistent, uninterrupted posting over the long haul. The first season, I thought, would run through May, at which point I would figure out what to do over the summer.

Now that we’re well into June, it’s time to take stock of how this first season has gone and where things will go from here.

A close up shot of a pencil with a chewed-up ferrule resting on a notebook in front of a crumpled sheet of paper
Image by congerdesign via Pixabay

Learning to Write Again

First and foremost, I learned that I was terribly out of practice at this kind of writing. Nearly all of the writing I’ve done since retiring my old blog 20-some years ago has been in either a business or an academic context. I think I got pretty good at both, but there is a bigger difference between that kind of writing and what I’m doing here than I anticipated.

The change has been a positive one. It’s good to practice a kind of writing that requires me to devote some effort to welcoming the audience and convincing them to keep reading. It’s also been worthwhile to put effort into cultivating a more aesthetically appealing writing style, a major departure from typical business writing, which usually aims to be generic and unadorned. I don’t know how successful I’ve been, but the mindset shift alone feels worthwhile.

Another thing I’ve learned is that my instincts are driving me toward longer posts. Most of my posts so far have exceeded 1,000 words (including this one), and a few drafts passed the 2,000 word mark before I edited them down. Although I did end up breaking one post into multiple parts, I decided to let my interests guide the writing, so I haven’t prioritized brevity. Instead, I’m writing as much as I feel I need to get my message across.

On my old blog, I experimented with microblogging avant la lettre, posting occasional tiny updates ranging from a phrase or two to a couple of sentences. That format feels especially out of place today given the immense malign influence of Twitter on contemporary online culture. These days, the topics that draw my attention seem to demand substantially more space to articulate my ideas. In fact, I’ve determined that some of the topics I’m most eager to write about won’t fit into even a couple of thousand-word posts. As I started to gather my thoughts on them, they felt like longer series that might stretch on for a half dozen or more entries. But given how much effort it took to complete more modest posts, I put those series ideas aside for the time being.

The Practical Challenges

Beyond the craft of writing itself, I’ve also encountered some practical challenges. Each post has taken me longer than I expected to complete. Much as I enjoy the process of outlining, drafting, reworking, and editing, each of these things has taken me more time than I had hoped. My rustiness and preference for longer posts certainly contributed to the slow pace, and I’m considering tracking my time more systematically to identify ways to speed things along.

Speaking of time, my other work has also reduced my writing productivity. Surprisingly, my consulting practice proved less disruptive than a side project I started. That project has been quite fulfilling, drawing on my coding experience while giving me a chance to explore creative work that’s well outside my comfort zone. But a combination of factors caused it to take up more of my time in March and April than I expected, which led to an unplanned two-month hiatus. The obvious remedy to that situation is just to post about my side projects, which I will probably start doing, perhaps once the current one is a little farther along.

Plans for the Summer

I’m thinking about the next phase of this project with these lessons in mind. Summer is coming, and between work, family stuff, a bit of travel, and projects on the side, I’ve got a lot going on. I’m going to use the next couple of months to regroup a bit. Taking an intentional break should feel a lot better than living with the angst of an unplanned posting drought hovering over my shoulder.

That said, I don’t mean to disappear entirely. My plan, after probably one more regular post this month, is to post one update a month for July and August, then pick up again with a second season in September. I might experiment with micro-posts as well, but I won’t be putting pressure on myself to post a ton of new content while I take some time to recharge and reorient myself.

A Preview of Next Season

Part of what I hope to accomplish over the next couple of months is to form a clearer picture of what the next season of this blog project will look like. But I do have a few ideas in mind that I can share now.

First, as I mentioned above, I am itching to write about a couple of topics that I think will each require a series of posts to cover in the depth I feel they deserve. Assuming I am able to complete my research and get myself organized, I would like to launch at least one of those series this autumn.

Second, I’d like to figure out what it would take for me to post weekly. That will require me to address at least one of the issues I described earlier around my preparation, pace, and post length. I’m working to resolve the tension between my inclination to write longer posts and my desire to post more often. It’s possible that I’ll come up with a plan for that over the summer, but since I intend to use the time primarily as a break, I will most likely aim for a more manageable posting schedule—something like a major post every other week—when things pick up again in September and try to make incremental improvements from there.

I remain energized by this project, and I encourage anyone who’s thinking about getting (back?) into blogging in 2025 to do it. Despite big platforms’ outsized influence over public attention, the Web as it was originally conceived—distributed, democratic, and a little bit weird—isn’t dead yet. It’s up to those of us who hope to revive it to create and maintain online spaces we control, and I couldn’t be happier to be playing my small part in that movement.


Why Every "AI" Conversation Feels Like Nonsense

Another “AI” Post?

I really didn’t want to write about artificial intelligence (AI) again so soon, and I promise this will be the last one for a little while, but I came away from my last two posts with a very unsettled feeling. It wasn’t the subject matter itself that bothered me, but the way I ended up writing about it.

In writing those posts, I made an intentional decision to be consistent with the vernacular use of the term “AI” when discussing a specific set of popular tools for generating text. Surely, you’ve heard of tools like ChatGPT, Claude, and Gemini. I did this because using more precise language is (1) awkward, (2) inconsistent with the terminology used in the podcast that prompted my posts, and (3) so unfamiliar to a layperson that accurate language comes off as pedantic and unapproachable. In the interest of making the posts simpler and more focused on the issues raised by the way these tools are being used, I ended up just accepting the common language used to talk about the tools.

But the truth is that I know better than to do that, and it bothered me. ChatGPT ≠ AI, even if it’s very common to talk and write about it that way. As time passed, I felt worse and worse about the possibility that by accepting that language, I gave the impression that I accept the conceptual framing that these tools are AI. I do not. In this post, I intend to make an addendum/correction to clarify my position and add some context.

A still image from the movie The Princess Bride showing Inigo Montoya looking at Vizzini. A caption in meme font reads: You keep using that word. I do not think it means what you think it means.

What’s Wrong with the Term “Artificial Intelligence”?

People have been confused about what “artificial intelligence” means for as long as they’ve used the term. AI, like all of computer science, is very young. Most sources seem to point to the 1955 proposal for the Dartmouth Workshop as its origin. Right from the start, it seems to have been chosen in part to encompass a broad, uncoordinated field of academic inquiry.

By the time I was a computer science undergraduate (and, later, graduate student responsible for TAing undergrads) in the late 90s, little had changed. The textbook I used, Artificial Intelligence: A Modern Approach (AIMA) by Russell and Norvig, didn’t even attempt to offer its own definition of the term in its introduction. Instead, it surveyed other introductory textbooks and classified their definitions into four groups:

  • Systems that think like humans
  • Systems that act like humans
  • Systems that think rationally
  • Systems that act rationally

The authors describe the advantages and drawbacks of each definition and point out later chapters that relate to them, but they never return to the definitional question itself. I don’t think this is an accident. If you take AIMA as an accurate but incomplete outline, AI consists of a vast range of technologies including search algorithms, logic and proof systems, planners, knowledge representation schemes, adaptive systems, language processors, image processors, robotic control systems, and more.

Machine Learning Takes Over

So, what happened to all this variety? The very short answer is that one approach really started to bear fruit in a way nothing else had before. Since the publication of the first edition of AIMA, researchers produced a host of important results in the field of machine learning (ML) that led to its successful application across many domains. These successes attracted more attention and interest, and over the course of a generation, ML became the center of gravity of the entire field of AI.

If you think back to the technologies that were attracting investment and interest about 10 years ago, many (perhaps most) were being driven by advances in ML. One example is image processing—also called computer vision (CV)—which rapidly progressed to make object and facial recognition systems widely available. As someone who worked on CV back in the bad old days of the 90s, I can tell you these used to be Hard Problems. Another familiar example is the ominous “algorithm” that drives social media feeds, which largely refers to recommendation systems based on learning models. Netflix’s movie suggestions, Amazon’s product recommendations, and even your smartphone’s autocorrect all rely on ML techniques that predate ChatGPT.

Herein lies the first problem with the way people talk about AI today. Though much of the diversity in the field has withered away, AI originally encompassed an expansive collection of disciplines and techniques outside of machine learning. Today, I imagine most technologists don’t even consider things like informed search or propositional logic models to be AI any more.

The Emergence of Large Language Models

In the midst of this ML boom, something unexpected happened: one branch of the AI family tree, natural language processing (NLP), advanced remarkably quickly when ML techniques were applied. The NLP chapter in my old copy of AIMA describes an incredibly tedious formalization of the structure and interpretation of human language. But bringing ML tools to bear on this domain obviated the need for formal analysis of language almost entirely. In fact, one of the breakthrough approaches is literally called the “bag-of-words model”.

The first edition of AIMA open to the NLP chapter. Visible on these pages are a complicated parse chart for a simple sentence (I feel it), a table labelling the chart with parsing procedures and grammatical rules, and a parse tree for a slightly more complex sentence.
A sample of what the NLP chapter of AIMA looked like before everything became a bag of words.
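For contrast with all of that parsing machinery, here is roughly what a bag-of-words representation does: it throws away word order and grammar entirely and keeps only counts. A minimal sketch of the general idea (not any particular system’s implementation):

    from collections import Counter

    def bag_of_words(text: str) -> Counter:
        """Reduce a sentence to word counts, discarding order and grammar."""
        return Counter(text.lower().split())

    # The parse chart's sample sentence "I feel it" and its reverse
    # are indistinguishable once word order is discarded.
    print(bag_of_words("I feel it"))  # Counter({'i': 1, 'feel': 1, 'it': 1})
    print(bag_of_words("I feel it") == bag_of_words("it feel I"))  # True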

What’s more, ML-based language systems demonstrated emergent behavior, meaning they do things that aren’t clearly explained by the behavior of the components from which they are built. Even though early large networks trained on language data contained no explicit reasoning functionality, they seemed to exhibit that behavior. This was the dawn of the large language model (LLM), the basis of all of the major AI products in the chatbot space. In fact, this technology is the core of all of the most talked-about products under the colloquial AI umbrella today.

Here’s the second problem: people often use the term “AI” when they really mean this specific set of products and technologies, excluding everything else happening in the field of ML. When someone says “AI is revolutionizing healthcare,” they might be referring to diagnostic imaging systems, drug discovery algorithms, or robotic surgery assistance, or they could be talking about a system that processes insurance claim letters or transcribes and catalogs a provider’s notes. That ambiguity makes such claims almost impossible to evaluate.

The Generativity Divide

There’s another important term to consider: “generative AI.” It describes tools that produce content, like LLM chatbots and image generation tools like Midjourney, as opposed to other ML technologies, like image processors, recommendation engines, and robot control systems. Often, replacing the overbroad “AI” in casual use with “generative AI” captures the right distinction.

And that’s an important distinction to draw! One unfortunate result of current “AI” discourse is that the failings of generative tools, such as their tendency to bullshit, become associated with non-generative ML technologies. Analyzing mammograms to diagnose breast cancer earlier is an extraordinarily promising ML application. Helping meteorologists create better forecasts is another. But they get unfairly tainted by association with chatbots when we lump them all together under the “AI” umbrella.

Consider another example: ML-powered traffic optimization that adjusts signal timing based on real-time conditions to reduce congestion. Such systems don’t generate content and don’t lie to their users. But when the public hears “the city is using AI to manage traffic,” they naturally imagine the same unreliable systems that invent bogus sources to cite, despite the vast underlying differences in the technologies involved.

That said, we can’t simply call generative AI risky and other AI safe. “Generative AI” is a classification based on a technology’s use, not its structure. And while most critiques of AI, such as its impact on education, are specific to its use as a content generator, others are not. All learning models, generative and otherwise, require energy and data to train, and there are valid concerns about where that data comes from and whether it contains (and thus perpetuates) undesirable bias.

The Business Case for Vague Language

Why does this all have to be so confusing? The short answer is that the companies developing LLMs and other generative tools are intentionally using imprecise language. It would be easy to blame this on investors, marketing departments, or clueless journalists, but that ignores the ways technical leadership—people who should know better—have introduced and perpetuated this sloppy way of talking about these products.

One possible reason for this relates to another term floating around: artificial general intelligence (AGI). This is also a poorly-defined concept, but researchers who favor building it generally mean systems with some level of consciousness, if not independent agency. For better or worse, many of the people involved in the current AI boom don’t only want to create AGI; they believe they are already doing so. Putting aside questions of both feasibility and desirability, this may explain some of the laxity in the language used. AGI proponents may be intentionally using ambiguous, overgeneralized terminology because they don’t want to get bogged down in the specifics of the way the technology works now. If you keep your audience confused about the difference between what’s currently accurate and what’s speculative, they are more likely to swallow predictions about the future without objection.

But I think that’s only part of what’s happening. Another motivation may be to clear the way for future pivots to newer, more promising approaches. Nobody really understands what allows LLMs to exhibit the emergent behaviors we observe, and nobody knows how much longer we’ll continue to see useful emergent behavior from them. By maintaining a broad, hand-wavey association with the vague concept of “AI” rather than more specific technologies like LLMs, it’s easier for these companies to jump on other, unrelated technologies if the next breakthrough occurs elsewhere.

Speaking Clearly in a Messy World

That makes it all the more important for those of us who do not stand to gain from this confusion to resist it. Writing clearly about these topics is challenging. It’s like walking a tightrope with inaccuracy on one side and verbosity on the other. But acquiescing to simplistic and vague language serves the interests of the AI promoters, not the community of users (much less the larger world!).

From now on, I’m committing to being more intentional about my language choices when discussing these technologies. When I mean large language models, I’ll say LLMs. When I’m writing about generative tools specifically, I’ll use “generative AI.” When I’m talking about machine learning more generally, I’ll be explicit about that, too. It might make my writing a bit more cumbersome, but this is a case where I think precise language and clear thinking go hand in hand. And anyone thinking about this field needs to be very clear about the real capabilities, limitations, and implications of these tools.

The stakes are too high for sloppy language. How we talk about these technologies shapes how we think about them, how we regulate them, and how we integrate them into our lives and work. And those are all things that we have to get right.

A view down a jetty looking toward a breakwater and the ocean beyond under a clear blue sky