My Advent of Code 2025
I participated in Advent of Code for the second time this year. My notes on last year’s challenge were one of the first things I posted here, so let’s call this post a tentative continuation of that nascent series.
The most substantial change introduced in this year’s challenge was its shortening. The global leaderboards were also removed, but that wasn’t as meaningful to me, since I don’t care about leaderboards. The organizer, Eric Wastl, posted comments explaining this decision, and while some participants seemed quite put out, I fully support the decision to continue an ongoing project in a more sustainable form. Judging by some of the public negativity, I think Eric would have been more than justified in shutting the project down altogether—running a fun public project for free doesn’t obligate him to keep it going in any form—so I appreciate him taking steps to keep things going.
Overall Thoughts
I have written more code in the past year than in any period since I stopped working professionally as a software engineer. Last year's challenge, in contrast, represented a return to programming for me, and it's remarkable how different this one felt. Last year, solving even the simplest problems brought me satisfaction, much of which came from focusing on doing things in the most Pythonic way.
This time around, I still found satisfaction in figuring out how to crack the problems. I think this is because programming challenges present a much higher ratio of problem to boilerplate than real-world programming does. Honestly, nobody would do programming challenges if they were more realistic in that way. I can't imagine anyone wants to spend any more time than necessary negotiating data format conversions, handling obscure protocol issues, or doing any of the other gruntwork that comes with real-world software engineering.
What felt most different was being more familiar with Python convention. This meant that while I spent less time working out the best way to express my logic in the language, I also didn’t get as much enjoyment out of reaching those conclusions. For working programmers, that’s no loss. You literally don’t want to spend a lot of time reasoning through the right way to express yourself unless you’re writing something novel and distinctive. The point of idiom and convention is to allow a community to make correct assumptions about what a particular chunk of code does. But for hobbyists and dabblers, in whose company I count myself, it’s certainly less fun.
Thoughts on LLM Coding Assistants
I wrote all of my code myself, but because I use a modern editor, I received inline suggestions from an LLM coding assistant. If you’re unfamiliar, think of it as a kind of autocomplete for programming. Editors have offered support tools like these for decades to help programmers fill in variable names or get function signatures correct, and bringing LLMs to bear on this problem makes a lot of sense, given how unreasonably effective LLMs have proven at digesting and generating code.
Earlier this year when I was working on a different project, I found these suggestions annoying and unhelpful. In part, this was because I was doing some fairly technical processing that the assistant was clearly familiar with, but whose logic it could not accurately generate. That experience actually lowered my estimation of the usefulness of coding assistants in general, but in retrospect, I was probably coming at the problem in the wrong way.
Unlike coding agents like Claude Code, whose purpose is to build working software according to a user’s specification, inline coding assistants really do need to be approached like autocomplete. They provide a potential end for something the author has begun, and much of the time, they are incorrect, because it’s often hard to know for certain what should come next. However, when the next thing really is an obvious extension of what came before, it’s quite useful to have the editor handle the rote typing after the thinking is done.
An Example
The problem for Day 4 this year involved looking at items laid out on a grid and checking for adjacent items. My solution for Part 1 processes the input and stores the locations of the items as coordinate pairs like (1,2) to represent an item in column 1 and row 2. To look for adjacencies, my code checks all of the coordinate pairs that differ by one in at least one coordinate: in this case, (0,2), (2,2), (0,1), (1,1), and so on. I decided to do this by storing a list of offsets that could be added to each coordinate pair. Once I started typing adjacency_offsets = [, the correct inline suggestion appeared:
adjacency_offsets = [
(-1, -1),
(0, -1),
(1, -1),
(-1, 0),
(1, 0),
(-1, 1),
(0, 1),
(1, 1),
]

That’s a pretty helpful suggestion, and a substantial improvement over earlier systems that could complete token names but not much else. These autocompletions don’t directly save a ton of time, but they can help programmers stay focused on problem solving by obviating some of the toil that accompanies it.
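To illustrate how offsets like these get used, here's a minimal sketch of an adjacency check along the lines described above. The names `items` and `count_adjacent` are my own inventions for this example, not taken from my actual solution:

```python
# Eight offsets covering every neighboring cell, including diagonals.
ADJACENCY_OFFSETS = [
    (-1, -1), (0, -1), (1, -1),
    (-1, 0),           (1, 0),
    (-1, 1),  (0, 1),  (1, 1),
]

def count_adjacent(items: set[tuple[int, int]], col: int, row: int) -> int:
    """Count how many items occupy cells adjacent to (col, row)."""
    return sum((col + dc, row + dr) in items for dc, dr in ADJACENCY_OFFSETS)

# Items at (1, 2) and (2, 2) are adjacent; (5, 5) is off on its own.
items = {(1, 2), (2, 2), (5, 5)}
print(count_adjacent(items, 1, 2))  # 1
```

Storing the locations in a set makes each of the eight membership checks a constant-time lookup, which is why this offset-list approach scales comfortably even on large grids.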
One Minor Issue
As I mentioned, I fully support shortening Advent of Code, especially if doing so makes it more likely to remain an ongoing annual event. However, I would like to suggest one adjustment to the new format for future years.
In previous years, it seems to have been typical for the final day’s challenge to be simpler than the rest. I think of it as a kind of Christmas gift to participants, and respectful of the likelihood that even dedicated participants would prefer to spend Christmas Day doing holiday things other than coding.
This year, the final challenge was released on December 12, but the organizer still seems to have treated it as a “gift” day, providing a challenge that looked tricky but was actually trivially solvable without any coding. In my case, I examined the puzzle input in a spreadsheet, which ended up generating the answer for me. I saw quite a few disappointed comments from participants online, and I tend to agree that Day 12 didn’t feel great. With the last day of the event no longer falling on Christmas, I think it makes sense to retire the gift-day tradition and make all 12 challenges more or less equally substantial.
My Most/Least Favorite Problem
For me, the most notable problem this year bears striking similarities to something I wrote a bit about last year. For the second year in a row, one of the challenges boiled down to a linear algebra problem that nearly cost me a star. Much as I tried, I struggled to write a solution that could calculate answers for Part 2 of Day 10 efficiently enough.
I ended up breaking my own rule about sticking to standard libraries and brought in SciPy to handle the linear programming, rather than stumble my way through implementing Gaussian elimination by hand. On the one hand, I wish I had been able to meet the challenge myself; on the other, it’s not so bad to have a chance to refresh my memory and dip back into an extremely useful Python library. At least this year, I recognized the structure of the problem at the outset instead of banging my head against it futilely for far too long.
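For anyone curious what the by-hand route might have looked like, here's a rough sketch of Gaussian elimination using only the standard library, with the fractions module providing exact arithmetic. This is illustrative only, not code from my solutions, and it handles just the simple case of a square system with a unique solution:

```python
from fractions import Fraction

def solve(A, b):
    """Solve the square system A x = b exactly via Gauss-Jordan elimination.

    A is a list of rows; b is the right-hand side.
    Raises ValueError if the matrix is singular.
    """
    n = len(A)
    # Build an augmented matrix of exact Fractions.
    M = [[Fraction(x) for x in row] + [Fraction(y)]
         for row, y in zip(A, b)]
    for col in range(n):
        # Partial pivoting: pick the remaining row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("singular matrix")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    # Each row is now diagonal; read off the solution.
    return [M[i][n] / M[i][i] for i in range(n)]

# 2x + y = 5, x - y = 1  →  x = 2, y = 1
print(solve([[2, 1], [1, -1]], [5, 1]))  # [Fraction(2, 1), Fraction(1, 1)]
```

Using Fraction sidesteps the floating-point error issues that make naive elimination unreliable on puzzle inputs, at some cost in speed, which is exactly the kind of tradeoff SciPy's battle-tested routines handle for you.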
I’ll conclude by encouraging anyone who’s interested to review my best effort using only standard libraries. Let me know if I’m missing any ways to further reduce the search space. I’m reasonably confident that the code I’ve written is correct, but it’s just not efficient enough to come up with solutions for larger cases in a reasonable time on my laptop.
In fact, just for kicks, I posted all of my solutions, which you’re welcome to check out. Until next year!
