I spent some time toward the end of last year experimenting with Claude Code, building a little project that I may still finish and release. A lot of other people had the same idea, and many of them shared their experiences before I had a chance to collect my thoughts. Steve Yegge’s Gas Town is the one I’ve seen circulated the most, though to be honest, most of the attention it has received has been more about gawking than serious consideration.
This post from Jason Perlow called Adventures in Extreme Vibecoding came closest to my experience. I may post my own thoughts in more detail later, but I think Perlow does a good job getting across what it feels like to start using these new tools. On one hand, you can get a new greenfield idea off the ground remarkably quickly, and at times the tools deliver good results quite directly. But just as often, I was frustrated by misunderstandings and compounding errors.
The best approach seems to be to create frequent checkpoints and be ready to go back and start fresh as soon as the agent gets off track. Version control is not optional, even for little personal projects! Frequently throwing away work feels very counterintuitive when you’re used to treating programming effort as the most critical resource to optimize for. In the end, I have to think that the viability of coding with LLM-powered agents is going to depend a lot on how expensive they are. Repeated experimentation and frequent false starts only work as long as costs remain low, which makes me wary of how long this approach will make sense.
More Web Games
Word Grid is a clever game that feels inspired by the Immaculate Grid genre of sports trivia. The goal is to fill each square of the 3x3 grid with a word that meets the criteria specified for its row and column. The rarer your word, the better your score.
Chains is a neat word association game that doesn’t seem to be updated every day, but the puzzles that have been posted so far have been enjoyable.
I’ve had this R.E.M. song stuck in my head lately. It says something great about R.E.M. and sad about the state of the world that a 34-year-old political song is still so relevant today.
With another year of tabletop gaming behind me, it’s time to look back on my 2025 at the game table. I wrote up a similar post last year, for comparison, which you can find here. And as with last year, I logged my plays in the Board Game Stats app.
Summary Statistics
I squeezed in a lot more gaming this year, totaling 330 plays. I think this is due to two things. First, I was a much more regular attendee at my weekly game group, and second, I played many more games online this year. The latter was driven in part by a dear gaming friend who moved out of the country in January, which led us to keep a pretty continuous stream of asynchronous games going on Board Game Arena. In total, I played exactly half of my logged games online, mostly on BGA or Yucata.
My biggest month for games was March, when I played 37 games, but I also hit 36 plays in August and October. In total, I played 154 different titles this year, exactly 100 of which I played once. I’m very fortunate to be part of a gaming group that likes sharing new games with each other, and this brings a lot of variety to our table.
Fives and Dimes
Of the games that hit the table more than once in 2025, the pattern looks a bit similar to last year’s. I played one quarter (25+ plays), two dimes (10+ plays), and 11 nickels (5+ plays):
I won’t go through every game at the top of this year’s list, but I do want to comment on some notable appearances:
No surprise, Mottainai again tops my list. It is one of Carl Chudyk’s designs that feature multi-use cards (along with the outstanding Glory to Rome, Innovation, and others), and the one major downside shared by all of these designs is a challenging teach. Using the same cards in multiple ways can be hard to grasp for new players, and it takes quite a while to get into the game. Once players have the hang of things, though, games typically breeze by; the trick is getting over that hump. Fortunately, the lovely community of players on Yucata makes it possible to pick up a quick game anytime.
Also unsurprisingly, Clank Legacy 2 clocked in at 10 plays. Last year my group got two sessions into the twelve-session campaign, and although we had a multi-week layoff over the summer, we did reach the end by autumn.
Rebirth is a Reiner Knizia design that arrived in late 2024 but didn’t hit the table until this year. The good doctor has returned to tile-laying mechanics, and I am a huge fan of this one. Rebirth is reminiscent of Knizia tile-layers like Samurai and Babylonia, in terms of both mechanics and quality. Players add tiles to a shared board, trying to create large contiguous areas of identical tiles, surround resources, and occupy settlements. This edition contains two similar but intriguingly distinct variants, one for Ireland and the other for Scotland, which suggests expansions may someday follow. Of the games I played for the first time this year, it’s far and away my favorite.
Flip 7 is a quick and easy push-your-luck card game that looks a bit like the casino game Blackjack. Players receive cards one at a time into a face-up tableau, most of which are number cards worth points equal to their value, but drawing a duplicate number causes you to bust, losing all of your cards and scoring 0 points for the round. Added to this basic framework are just a handful of additional cards that offer insurance against busting or force players to stop or take cards, creating a very swingy but accessible family game. Perhaps unsurprisingly, it seemed to gain some purchase even outside the hobby, reaching the shelves of just about every big box store I’ve been in lately. The production quality feels a bit flimsy, but the publisher has clearly prioritized wide distribution at an affordable (~$8 USD) price point. I could see this game joining Uno, Skip-Bo, and Phase 10 in the pantheon of family card game classics.
Heul doch! Mau Mau (or, as my game group calls it, “Onions”) is a card game I had never heard of before a frequent tablemate picked it up at a game convention flea market. Like a lot of the games that hit our table frequently, it’s a quick filler that we play while we’re waiting for players to arrive or as an end-of-night palate cleanser. Each player is dealt a hand of cards that they try to use to build a scoring pile in front of them, adding cards that match the top card in either number or suit, with a couple of important hitches. First, if a card is playable on a neighbor’s score pile, it must go there instead of one’s own. Second, if a player has no legal play (or simply chooses not to make one), they may play a card face down on their score pile, which may be topped in the next round with any card. In final scoring, face up cards are worth their face values, but the count of face down cards in your pile wipes out every card of that rank. (That is, if you wind up with five face down cards in your pile, you don’t score any of your fives.) With a couple of action cards to add variety, there’s enough strategy to be interesting but enough chaos to keep things light. It doesn’t appear to be widely available, but if you happen across a cheap copy, its unique design and simple gameplay make it well worth picking up.
I saw The Gang on a few year-end recommendation lists, and I understand why. It’s a cooperative card game based on the Texas Hold ‘Em variant of poker. It helps to be familiar with the ranking of poker hands, but I think the game is pretty accessible to novices as well. At its heart, the game consists of guessing the relative ranks of all players’ hands solely through a bidding mechanic. After each set of community cards is revealed, players rebid, just as in a normal game of hold ‘em. There are some additional mechanics that advanced players can add on, but its core alone is a clever twist on a classic table game that plays quite well.
Finally, Oracle of Delphi is an older (2016) Stefan Feld game that I encountered for the first time this year and loved. Players take on the role of classical Greek heroes racing to complete 12 trials and be the first to return home, but the clever card-based action selection and the various “divine favors” players can earn create interesting choices and strategic depth. Sadly, it’s out of print and used copies are somewhat rare, but thanks to Yucata, I wouldn’t be surprised to see it on this list again next year.
Arcs
One notable game that didn’t make it to the top of my play list was the much hyped and much debated Arcs. I played a couple of standalone games in 2024, but not the Blighted Reach campaign, which I really hoped to try in 2025. Unfortunately, after one more standalone game and one campaign session, I don’t think Arcs will make it to my game group’s table again. I would absolutely love to give it another go, but I’ll need to find tablemates who aren’t on the “hate it” side of the “love it or hate it” divide.
I understand at least some reasons for Arcs’ divisiveness. It looks like a space-themed 4X game, but it doesn’t play out as a satisfying power fantasy like most games in that genre. The play experience of Arcs does not promise you a sense of continuous growth—you can’t guarantee that your little planetary empire will get bigger, stronger, or more powerful. Indeed, because of its unusual card driven action selection mechanic (which, as an aside, is only obliquely related to trick taking), you can’t even be sure you can take strategically optimal actions on any given turn, and that clearly frustrates some players.
I don’t think I’ve played Arcs enough to explain how to play it best, but I have played it enough to say with confidence that no matter what its detractors say, it’s not a broken design. Rather, it seems to be trying to force players to attend carefully to their opponents’ strengths and strategies and move boldly and opportunistically when they can. I believe the design is an attempt to capture the dramatic sweep of a classic space opera, but you can’t deliver the feeling of a miraculous victory against impossible odds if players always feel strong, powerful, and capable. Steadily climbing a power curve doesn’t encourage dramatic, hopeful gambits. Quite the opposite: it’s much smarter under those conditions to bide your time until success is assured.
Cole Wehrle, Arcs’ designer, has once again taken an ambitious risk and created a design that runs counter to commercial trends in the hobby. It’s heartening that he is able to enjoy success and acclaim while doing this. Arcs is not a “feel bad” game, but the market is so overcrowded with hyper-palatable games determined to avoid any experience players won’t enjoy that I understand why some react negatively to it. Still, I sense something powerful and rewarding at the heart of Arcs’ design, and I still hope to get the full experience sometime in the future.
Other Games
I also had thoughts on a few games that didn’t make it to the table as much in 2025.
Fishing: Of the new trick taking games I played in 2025, this is the one that sticks out the most to me for its creativity and strategic potential. Fishing is played over the course of several hands, with stronger and stronger cards being introduced after each one. The clever hook here is that each player’s hand is dealt first from a private deck formed by the cards they won in prior hands, so winning tricks scores points but guarantees weaker hands in later rounds. I only encountered this one near the end of the year, but I’m hoping to get deeper into it in the year to come.
Molly House: This is another innovative design that came out in 2025 with Cole Wehrle’s name on it (co-designed by him and Jo Kelly). At a high level, players are Mollies, members of a community of gender-defying libertines in 18th century London who must work together to create joy or be destroyed by the Society for the Reformation of Manners. At this level, there is a loyalty/betrayal mechanic, where players who cannot win through joy creation can risk turning in the other Mollies as an alternative path to victory. At a more granular level, the gameplay is built around Festivities where joy is created through cardplay. Each player contributes cards in turn, and if the right combination of cards is present, those who contributed them score joy. The catch is that some cards put participants at risk of exposure, and failing to build a scoring combo results in a boring party, which is a thief of joy. The game’s semi-competitive feel is very evocative of a clandestine social scene full of risky relationships that may suddenly transform from supportive to spiteful. It’s not a game we’ll play every night, but I definitely intend to play more.
I enjoy games that try to model politics, and Votes for Women was one I played for the first time this year that immediately calls to mind its inspiration, 1960: The Making of the President. While the latter models a presidential campaign and the electoral college, the former models the campaign to pass a constitutional amendment, specifically the Nineteenth. In many ways, it’s a simpler, streamlined design that incorporates the specific history of the fight for women’s suffrage with fairly clean mechanics. It can be played solo or competitively with up to 4 players, and the 3 player configuration takes advantage of one of the game’s more interesting design choices: modeling the challenge of coordinating the Suffragist side of the conflict with two distinct sets of player resources that are aligned but not interchangeable. This is Tory Brown’s debut design, and a promising one at that.
A licensed title based on the Alfred Hitchcock movie, Rear Window is a mystery/deduction game whose core mechanic of a silent “Director” laying out cards to help detectives solve a murder is reminiscent of Mysterium. And while the core gameplay is good, I was even more impressed by the art direction, which did a fantastic job of capturing the aesthetic of both the film and the era from which it originated. All of the components have a midcentury illustrated style that would feel perfectly at home on Jimmy Stewart’s coffee table. With so many licensed titles being cheap cash grabs, Rear Window was a truly impressive effort from the sadly now-defunct Funko Games.
Looking Ahead to 2026
It’s going to be a lot harder to predict the coming year at my game table. I think it quite unlikely that I will log anywhere near as many games as I did last year. I expect to be traveling for work a bit more than I have of late, so that will reduce the number of game nights I can attend. But, if my recent trend of playing more games online continues, that might somewhat make up for it.
This is also the first year in a long time that I don’t have any new games I’m keen to get playing. My longstanding goal, going back well over a year, is to get a proper in-person session of Dune going. Maybe this will be the year for that. Perhaps there will be another campaign game as well, but at this point, nothing is scheduled. Truthfully, I’ll be very happy if I play enough Fishing to get a real feel for it.
In any case, I go into 2026 with an open mind and an open dance card. If you’re in the hobby (or want to be), I wish you a great year of gaming, and if you’re interested, get in touch and we can meet up sometime around the virtual table.
I participated in Advent of Code for the second time this year. My notes on last year’s challenge were one of the first things I posted here, so let’s call this post a tentative continuation of that nascent series.
The most substantial change introduced in this year’s challenge was its shortening. The global leaderboards were also removed, but that wasn’t as meaningful to me, since I don’t care about leaderboards. The organizer, Eric Wastl, posted comments explaining this decision, and while some participants seemed quite put out, I fully support the decision to continue an ongoing project in a more sustainable form. Judging by some of the public negativity, I think Eric would have been more than justified in shutting the project down altogether—running a fun public project for free doesn’t obligate him to keep it going in any form—so I appreciate him taking steps to keep things going.
I have written more code in the past year than in any period since I stopped working professionally as a software engineer. Last year’s challenge, by contrast, represented a return to programming for me, and it’s remarkable how different this year felt. Last year, solving even the simplest problems brought me satisfaction, much of which came from focusing on doing things in the most Pythonic¹ way.
This time around, I still found satisfaction in figuring out how to crack the problems. I think this is because programming challenges present a much higher ratio of problem to boilerplate than real world programming does. Honestly, nobody would do programming challenges if they were more realistic in that way. I can’t imagine anyone wants to spend any more time than necessary negotiating data format conversions, handling obscure protocol issues, or any of the other gruntwork that comes with real world software engineering.
What felt most different was being more familiar with Python convention. This meant that while I spent less time working out the best way to express my logic in the language, I also didn’t get as much enjoyment out of reaching those conclusions. For working programmers, that’s no loss. You literally don’t want to spend a lot of time reasoning through the right way to express yourself unless you’re writing something novel and distinctive. The point of idiom and convention is to allow a community to make correct assumptions about what a particular chunk of code does. But for hobbyists and dabblers, in whose company I count myself, it’s certainly less fun.
Thoughts on LLM Coding Assistants
I wrote all of my code myself, but because I use a modern editor, I received inline suggestions from an LLM coding assistant. If you’re unfamiliar, think of it as a kind of autocomplete for programming. Editors have offered support tools like these for decades to help programmers fill in variable names or get function signatures correct, and bringing LLMs to bear on this problem makes a lot of sense, given how unreasonably effective LLMs have proven at digesting and generating code.
Earlier this year when I was working on a different project, I found these suggestions annoying and unhelpful. In part, this was because I was doing some fairly technical processing that the assistant was clearly familiar with, but whose logic it could not accurately generate. That experience actually lowered my estimation of the usefulness of coding assistants in general, but in retrospect, I was probably coming at the problem in the wrong way.
Unlike coding agents like Claude Code, whose purpose is to build working software according to a user’s specification, inline coding assistants really do need to be approached like autocomplete. They provide a potential end for something the author has begun, and much of the time, they are incorrect, because it’s often hard to know for certain what should come next. However, when the next thing really is an obvious extension of what came before, it’s quite useful to have the editor handle the rote typing after the thinking is done.
An Example
The problem for Day 4 this year involved looking at items laid out on a grid and checking for adjacent items. My solution for Part 1 processes the input and stores the locations of the items as coordinate pairs like (1,2) to represent an item in column 1 and row 2. To look for adjacencies, my code checks all of the coordinate pairs that are one away from at least one of the coordinates, so in this case, (0,2), (2,2), (0,1), (1,1), and so on. I decided to do this by storing a list of offsets that could be added to each coordinate pair. Once I started the expression adjacency_offsets = [, the correct inline suggestion appeared:
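The assistant filled in the eight offsets for the surrounding squares, roughly like the sketch below. This is a reconstruction rather than the verbatim completion, and the neighbors helper and the item_locations set are my own illustration of how the offsets get used, not part of the suggestion.

```python
# Reconstruction of the kind of completion the assistant offered:
# offsets for the eight squares surrounding a given (column, row) pair.
adjacency_offsets = [
    (-1, -1), (0, -1), (1, -1),
    (-1,  0),          (1,  0),
    (-1,  1), (0,  1), (1,  1),
]

# Hypothetical usage: item_locations is a set of (column, row) pairs
# parsed from the puzzle input.
def neighbors(col, row, item_locations):
    """Yield the stored items adjacent to (col, row)."""
    for dc, dr in adjacency_offsets:
        if (col + dc, row + dr) in item_locations:
            yield (col + dc, row + dr)
```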
That’s a pretty helpful suggestion, and a substantial improvement over earlier systems that could complete token names but not much else. These autocompletions don’t directly save a ton of time, but they can help programmers stay focused on problem solving by obviating some of the toil that accompanies it.
One Minor Issue
As I mentioned, I fully support shortening Advent of Code, especially if doing so makes it more likely to remain an ongoing annual event. However, I would like to suggest one adjustment to the new format for future years.
In previous years, it seems to have been typical for the final day’s challenge to be simpler than the rest. I think of it as a kind of Christmas gift to participants, one that respects the likelihood that even dedicated participants would prefer to spend Christmas Day doing holiday things other than coding.
This year, the final challenge was released on December 12, but it seems the organizer still decided to treat it as a “gift” day, providing a challenge that seemed tricky but was actually trivially solvable without any coding. In my case, I examined the puzzle input with a spreadsheet that ended up generating the answer for me. I saw quite a few disappointed comments from participants online, and I tend to agree that Day 12 didn’t feel great. With the last day of the event no longer falling on Christmas, I think it makes sense to retire the gift day tradition and make all 12 challenges more or less equally substantial.
My Most/Least Favorite Problem
For me, the most notable problem this year bears striking similarities to something I wrote a bit about last year. For the second year in a row, one of the challenges boiled down to a linear algebra problem that nearly cost me a star. Much as I tried, I struggled to write a solution that could calculate answers for Part 2 of Day 10 efficiently enough.
I ended up breaking my own rule about sticking to standard libraries and brought in SciPy to handle the linear programming, rather than stumble my way through implementing Gaussian elimination by hand. On one hand, I wish I had been able to meet the challenge myself, but it’s not so bad to have a chance to refresh my memory and dip back into an extremely useful Python library. At least this year, I recognized the structure of the problem at the outset instead of banging my head against it futilely for far too long.
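For a sense of what that approach looks like, here is a minimal sketch of handing an equality-constrained system to SciPy’s linear programming solver. The coefficients below are placeholders rather than the actual puzzle values; the real solution builds the constraint matrix from the parsed input.

```python
# Minimal sketch: let SciPy's linear programming solver find a non-negative
# solution to a small linear system instead of doing Gaussian elimination
# by hand. The numbers here are placeholders, not actual puzzle values.
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[2.0, 1.0],
                 [1.0, 3.0]])   # constraint coefficients from the parsed input
b_eq = np.array([12.0, 15.0])   # required totals
c = np.ones(2)                  # objective: minimize the sum of the unknowns

result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2, method="highs")
if result.success:
    print(result.x)  # [4.2, 3.6] for these placeholder numbers
```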
I’ll conclude by encouraging anyone who’s interested to review my best effort using only standard libraries. Let me know if I’m missing any ways to further reduce the search space. I’m reasonably confident that the code I’ve written is correct, but it’s just not efficient enough to come up with solutions for larger cases in a reasonable time on my laptop.
In fact, just for kicks, I posted all of my solutions, which you’re welcome to check out. Until next year!
1. For non-coders: the ways programmers use a language’s particular capabilities tend to coalesce into a shared style that the community views as being most consistent with the language’s strengths and weaknesses. The community for Python, the language I used, calls this style “Pythonic.”
The Best Video on Board Game Boxes You’ll See This Year
The main link I wanted to share this week is Amabel Holland’s latest video contemplation of board game boxes. As someone who doesn’t mind—even kinda likes—the beige-cardboard-everywhere feel of 90s-2000s eurogames, I’ve watched with fascination over the past few years as the hobby game world has become much more aware of the power of image and marketing. It takes someone like Amabel, who takes both art games and the game business seriously, to delve productively into areas like boxes and physical presentation, where the reality of games as commercial entities collides with their various other natures—artistic, ludic, or otherwise.
The Best Apocalyptic Synthwave Hammer Time Video You’ll See This Year
The Best Flickr Gallery of Public Domain Images That Go Hard You’ll See This Year
This is the latest in an unintentional series I’m apparently doing in which I write about topics related to the contemporary debate over the use, usefulness, impact, and harms of generative Artificial Intelligence (genAI), inspired by weeks-old podcasts. I write about old podcast episodes entirely because my listening backlog regularly runs weeks or months long.
This time, I’m writing about an interesting passage from an episode of This American Life entitled My Other Self. The bit I’m discussing starts around 27:45 and runs for about two minutes. You can listen to it directly on the episode page or any number of locations—indeed, wherever you listen to podcasts, as the saying goes.
The part I’m interested in happens during a segment by reporter Evan Ratliff, in which he is experimenting with hooking up ChatGPT to ElevenLabs’ voice synthesizer and a telephone number. He uses this stack to call friends, waste scammers’ and telemarketers’ time, and entertain his father. In the relevant portion of the story, he invites strangers to call the number to participate in a research interview discussing their experiences with AI. Of course, none of the callers know ahead of time that they are speaking to a bot.
A remarkable exchange occurs with a survey respondent named Stephanie. Answering a question from the bot, she says she has probably interacted with AI without even knowing it. Then, half-joking, she exclaims, “Jesus! I’m probably talking to an AI right now!” The bot asks her why she said that, and she explains that the bot is speaking a bit awkwardly. The interaction is brief, and they quickly move on.
The truly remarkable moment happens just afterward. Stephanie calls back after the end of the initial call, and without exactly apologizing, expresses clear regret for hurting the interviewer’s feelings by calling them a bot.
To me, this illustrates an underappreciated effect of the proliferation of genAI agents, particularly those that are foisted on the public without notice or consent. Clearly, Stephanie was uncertain whether or not she was speaking with a bot. After she got off the phone, she did some kind of moral reckoning: what would it mean if she were in fact talking to a bot? What if she weren’t? What obligation did she feel she had to a possibly human interviewer whom she’d just suggested might be a computer? Conversely, what were her obligations to a computer that she had been led to interact with under false pretenses?
However she happened to arrive at her decision, she concluded it would be worth it to call back and try to smooth things over. And she did that knowing there was a chance she was offering a token of respect to a bot that would not only be unable to appreciate it, but in fact didn’t deserve it.
I think that the more the public is forced or tricked into using genAI, the less common Stephanie’s reaction will be. I’m not talking specifically about apologizing to ChatGPT; I’m talking about considering the feelings of an interlocutor whose humanity can’t be conclusively established.
Any time you interact with an entity that could be either human or AI, you have to decide whether to treat that entity like a person or a computer. As long as your powers of discrimination are imperfect, you run the risk of treating a person like a computer or treating a computer like a person. However, in my view, these two errors are not morally equivalent. Treating a computer like a person might be a little embarrassing, but treating a person like a computer is potentially harmful. The harms are not necessarily large: failing to apologize to someone you’ve mistakenly accused of deceiving you won’t likely do much more than hurt their feelings. But the inverse case, in which you apologize to a computer that actually was deceiving you, costs only the effort that goes into the apology.
I think a world where people don’t treat each other like people because they might be bots is unlikely to be a good world, much less a happy one.
I’ve got a lot going on for this holiday week, even before I caught whatever bug laid me low for the past 24 hours, so for links I offer a few entertaining videos: two older songs and one wild compilation.
First, a music video for a song that I remember as one of the first that really attracted me to the then-emerging “alternative” scene. Of course, at the time the only way for me to glimpse this scene was through the tiny keyholes of college radio and pre-Matt-Pinfield 120 Minutes. It’s one of those songs that takes me back to a specific point from the past, and I think the video really captures the aesthetic of a moment:
This other video, from a few years later, is for a song that I have known and loved for a long time. Strangely, though, I hadn’t seen the video until recently.
And finally, there’s this wonderful little clip compilation. As you watch it, I want to remind you that this was posted by the official Jeopardy! YouTube channel:
I was out for a run not far from the house when I fell. I turned the corner past an intersection and saw a slow pedestrian not far ahead. Rather than slow down, I tried to pass around a traffic light control box. Just as I did, I lost my footing and slapped down onto the concrete. I wasn’t badly hurt—just scrapes on my hands and an inexplicable shoulder bruise—but I was stunned. For just a moment, I stayed down, felt my body’s weight on the dirty sidewalk, leg hanging over the blacktop, head resting on concrete. Then I picked myself up, literally dusted myself off, and went on with my run.
Everyone falls, has fallen, will fall again.
Little kids—especially the really little ones—fall down all the time. Most of the time, they are fine. It can be hard to tell which falls they’ll happily toddle away from and which ones will require comforting and bandaids, and even those are usually unworrying in the end. When they have love and support, kids are resilient, mentally and physically. When I learned to ski in my 30s, I envied the little kids I shared the bunny hill with for their ability to fall hard and pop right back up unfazed. Learning to ski is a good way to reacquaint yourself with falling if you’re out of practice as a grownup.
Young children can achieve a kind of falling-drunk ecstatic state. I’ve seen it. Sometimes, when they’re playing at something that causes them to fall a lot, they’ll abandon the game and just start falling down for the sheer joy of it, spreading like contagion. Hurling themselves on the ground again and again, laughing uncontrollably, they fall every way they can imagine: tumbling headfirst, flopping like a ragdoll, sliding longways, jumping for hangtime, clutching each other to land in a pile.
Some unlucky kids get hurt in a fall, but I estimate that most don’t come to understand the potential for injury until they’re much older, maybe teens. By then, the simple geometry of bigger bodies makes falling a riskier proposition. I remember taking a couple of painful bumps at that age, but looking back now, it’s almost comical how quickly my body recovered from those injuries. They were painful enough to remember, but largely not serious enough to scar.
Most teenagers have gotten pretty good at the mechanical aspects of staying upright, which greatly mitigates one of the major reasons people fall. Falling at that age is usually the result of mental error. Kids largely figure out their bodies relatively early; dealing with the rest is the work of a lifetime.
A fortunate thing about middle age is that falling hasn’t yet taken on the grim implications it has for the elderly. Still, I’m somewhere on that road. Unless something else takes me out before I get there, it’s the destination I’ll eventually reach, where the prospect of a fall is persistent and life-threatening.
For now, at least, I’m fortunate enough to be able to get back up after a brief rest on the pavement. I’m still green enough to take bumps and learn from them, and in so doing avoid more dire falls later on. Scrapes and bruises (even inexplicable ones) don’t heal as fast as they used to. The little aches seem to last longer, too. But I’m on my feet, one after the other, picking up speed down the sidewalk. I’ll slow down before I go around the next signal box, and I’ll feel my feet steady-sure to the ground.
Paged Out! is, in its own words, an experimental (one article == one page) technical magazine about programming (especially programming tricks), hacking, security hacking, modern computers, electronics, demoscene, and other similar topics. I just got turned on to this, but I am really loving the most recent issue, Issue 7, which came out last month.
The Boston Public Library has posted digital copies of its M. C. Escher prints, and they are really something to behold. I remember a high school art project where my teacher, Mrs. Allard, asked the class to create something inspired by Escher. I did a transitional pen-and-ink pattern-filling based on Day and Night, and I’ve had a soft spot for his work ever since.
I have been thinking about game design a lot lately, so Raph Koster’s latest post, titled “Game design is simple, actually,” is very timely for me. Despite its trollish title, it covers a lot of ground laying out the complexities of game design, but in a very well structured way—better structured, in fact, than any treatment of the subject I’ve seen. There’s a ton of depth here; he’s not kidding when he says each of the 12 items could be (indeed, is) a whole shelf of books. I expect I’ll be returning to this for quite some time.
Imagery
Context-Free Patent Art: Probably more accurately “Patent Art Taken out of Context,” but close enough.
Is it me or do the issues from this complete collection of Swedish IKEA catalogs look way more interesting than the American versions of the same? At least the ones from the late 90s and early 2000s, which is when I would have seen them.
Adam Aleksic wrote recently on the similarities between the harms we popularly attribute to social media and those attributed to television. I grew up squarely in the midst of the moral panic that Baby Boomer parents felt over the malign influence of television on their children. No doubt influenced by their own tender years spent in front of the “boob tube,” there seemed to be endless hand wringing over not whether but how much TV would damage kids’ brains. I see the parallels between that shared freakout and today’s panic over the impact of social media, even as I harbor plenty of concern myself.
The first and most obvious parallel to me is that so much of the anxiety seems to be displaced from parents to children. That is, the young parents of the 70s and 80s were the first generation to grow up with TV in the house, and they were the ones expressing worry about its influence on their own children. It’s not hard to see how much of that concern was motivated by fear of the damage their own exposure to television may have caused. Likewise, today’s young parents come from the first generation of social media users, and one way they are grappling with the harms they experienced is to express worry for their children. Like many parents, I’m not only trying to minimize my daughter’s exposure to social media, I’m also minimizing my own.
I think Aleksic’s notes on the similarities between TV and social media should prompt those of us who see the harms as distinct to be more clear about what those differences are. Two immediately come to mind for me:
The first is accessibility and ubiquity, as noted in his piece. The availability of social media on personal, portable, handheld devices greatly increases its reach, and its ability to send urgent-seeming notifications and alerts allows it to interrupt non-social media activities and colonize more of our time and attention. Television, much as its critics may have complained about its supposed inescapability, could never have such direct, immediate, and intrusive access to its viewers.
The other is the very tight feedback loop of individualized, algorithmic adjustments these apps use. Any individual TV channel was (and is) the same for all viewers. But modern social media apps learn from their users, reading both overt and subtle behavioral cues to shape what they serve up. It’s critical to remember that social media apps are designed with a goal in mind—almost always to encourage ever more time and “engagement” in the app—and they pursue that goal by constantly tweaking the way they work. These adjustments can be so effective because they are based on a direct stream of individualized behavioral data (each user’s activity within the app), and they do not need to compromise their effectiveness by targeting more than one individual at a time. In other words, it poses no problem if the thing that would make your social media feed maximally engaging to you would be extremely distasteful to me, because I’ll never see your feed.
The propagandists of the mass media era had to develop grand theories of messaging and communication to underlie their efforts to manipulate public sentiment (see, e.g., Edward Bernays’ Propaganda), because they could only address their messages to the collective public. Modifying behavior through social media requires no larger framework to inform what works and what doesn’t. Its infinitely granular addressability and adaptability simply require the diligence to conceive of and execute as many experiments as necessary to achieve the desired result, because the effectiveness of each can be easily and directly measured.
This effect is magnified by how much tighter the experimental iteration cycle is in the social media world than it is in television. In just a few seconds of scrolling through a feed, a social media app may run dozens of experiments to gauge a user’s engagement and adjust its behavior in real time. The amount of time it takes to produce a piece of television media, broadcast it, gather audience feedback, and incorporate changes into more content is better measured with a calendar than a stopwatch. And this means that even if each refinement has only a tiny effect, the overall impact is much greater because refinements can be made in much greater volume.
It is certainly instructive to consider modern attitudes toward technologies like social media in the context of earlier generations’ reactions to media innovation. It’s important to remember that, in their day, the invention of the novel and the spread of mass market paperback publication were both considered socially harmful by at least some concerned authorities. But it’s also important to identify what’s distinctive about new media technology and consumption, and to be able to articulate how those distinctions might represent new risks. It’s far harder to mitigate or manage unidentified risks than those we assume intentionally with good information.