Stray Thought Adam Stone's Home on the Web

Is The Thought Still What Counts?

Updated 7 June 2025:
I use the term "AI" in this post in its non-specific, colloquial form, which I regret. I'm leaving this post as written, but I encourage you to read my subsequent post, where I clarify my thinking and discuss better ways to write and think about this topic.

Last week I posted some thoughts on the impact new artificial intelligence (AI) tools are having on education. As if on cue, this article came out a few days later. It should come as no surprise that an article subheaded, “ChatGPT has unraveled the entire academic project,” missed the mark by a wide margin in my view. The article largely blamed students and placed effectively no responsibility on educators or administrators for actually ensuring that students learn. The only educational alternatives mentioned in the article are found in a throwaway line about blue books and oral exams.

[Image: A photo collage depicting a birthday card divided in half between a traditional card with cartoon lettering and syntax-highlighted JavaScript code over a black background]

As it happens, this discourse touches on some additional thoughts I had about the Search Engine podcast episode (“Playboi Farti and his AI Homework Machine”) that prompted my previous post. In the middle of the episode, a guest relates to host PJ Vogt the story of a man who used ChatGPT to write a birthday card for his wife. She was moved to tears and told him it was the best card he’d ever given her. Both Vogt and his guest were disturbed by this (and, in the interest of full disclosure, I was too), but even more disturbed that teenagers who heard the story saw nothing wrong with it.

This story raises a couple of fascinating questions that I chose not to address in my previous post. First, when could AI be appropriate as a creative tool? And second, why do different people form such different moral judgments about its use?

The Classroom Context: A Different Issue

Before diving deeper, I want to clarify something I took for granted in my previous post about AI in education. Using AI to complete homework assignments is wrong because it defeats the purpose of education: providing students with experiences from which they learn and grow. When a student avoids writing an essay, regardless of exactly how they do it, they lose the opportunity to learn.

This is an age-old problem. For as long as there have been students, their short-term incentives haven’t been aligned with their long-term learning needs. From this perspective, asking what ought to be done about AI is akin to asking what ought to be done about essay mills or CliffsNotes. It’s a worthy question, but it doesn’t illuminate the larger ethical questions around responsible AI use in everyday life.

Getting Past Absolutism

There’s another argument I want to acknowledge without addressing in full. Many people consider all AI tools to be inherently unethical and strictly avoid their use. I’ve seen this view justified by concerns over the origins of the training data, the motives and behavior of the people and companies who profit from AI products, and environmental impacts.

For the purposes of this post, I’m going to put aside this valid perspective for a few reasons. First, it’s highly contingent on facts that are both disputable and subject to change. Models can be trained on legitimately obtained data, open source systems can mitigate concerns over profit-driven corporate actors, and environmental arguments overstate current resource consumption and underestimate efficiency gains.

Second, while I respect anyone who’s made the decision not to use AI themselves, this stance provides no guidance beyond withdrawal from a world where these tools are becoming commonplace. That’s not to say that everyone must accept any way any other person chooses to use AI. Rather, I think even those who find all AI use objectionable will find some uses more troubling than others. It’s worth exploring why those feelings differ, even if you have resolved to avoid these tools entirely.

Why Does the AI Birthday Card Feel Wrong?

What I find fascinating about the birthday card story is how differently people react to it. Let’s consider some thought experiments to clarify these differences.

Most people find store-bought greeting cards acceptable, even if they aren’t the most personal or thoughtful ways to express affection. What if instead, the man had asked his eloquent friend “Cyrano” to write the card for him? If he passed off Cyrano’s work as his own, I suspect most people would feel the same unease as with the AI-generated card. If, on the other hand, he was up front about commissioning Cyrano to write a special birthday message, it feels much less troubling.

I see three factors at work in my reactions to these scenarios:

  • Honesty vs. deception: Misleading someone about authorship feels wrong regardless of whether a source is human or AI. The teenagers who see no problem with the AI-generated card seem focused on the outcome rather than the deception. This may reflect a broader cultural shift regarding the importance of truth and authenticity that I find quite disturbing.
  • Authentic engagement: The husband’s own poorly written but heartfelt card would express authentic feelings. Even a card written by a friend who knows his wife could engage meaningfully with her and the relationship. The AI-generated card, on the other hand, seems more like the store-bought card: generic and lacking an authentic connection to the specific individuals involved.
  • Meaningful effort: We value gifts in part based on the sacrifice they represent, not just in terms of money but time and attention. Since AI tools are specifically designed to achieve better results with less effort, that sacrifice is explicitly reduced, diminishing the value of the gesture.

Of these three factors, honesty seems most fundamental, in that it’s difficult to imagine valuing work presented deceptively, no matter how much effort went into creating it or how well it engages with its subject. But all three factors contribute positively to the value and meaning of communication.

What Our Tools Say About Us

Perhaps this offers some insight into why some of us react so negatively to others’ use of AI. If you are more inclined to use it for work that you personally value less, then your decision to use AI communicates how little you value the task at hand. It should be no wonder that someone might take offense at receiving a supposedly personal token that signals it wasn’t worth your time or effort.

There’s another possibility that I think is worth considering. The teenagers who approved of the AI birthday card might be signaling something else entirely: feelings of incompetence. Growing up involves struggling with skills like self-expression while gradually developing mastery and confidence. Psychologist Erik Erikson describes this in the fourth stage of his developmental model as a crisis between “Industry and Inferiority.” According to Erikson, failing to successfully navigate that crisis can produce individuals who internalize a sense of inadequacy.

I wonder whether this reveals a cohort of students who, faced with the common struggle of learning how to express themselves (perhaps magnified by the trauma of pandemic-related isolation and educational disruption), have simply concluded that they aren’t capable of writing competently. If so, the resulting dynamic, where young people try to express themselves using AI because they feel incapable while others interpret that as insincerity or disengagement, would be tragic in more ways than one.

Finding a Way Forward Together

A world with AI in it is undoubtedly more ethically complicated than one without. All of us are navigating this new terrain together, even if we don’t see everything the same way.

In my own work, I have been applying three key questions when considering AI use:

  • Am I presenting my work honestly? Am I transparent about how I’ve used AI?
  • Am I engaging authentically with my subject? Does my use of AI allow me to maintain a genuine connection to the people and ideas involved?
  • Does my work meaningfully reflect my effort? Am I using AI as an enhancement or a substitute?

My thinking here is still in the formative stages, and I hesitate to call this even a framework. At best, I’m pulling together field notes that may someday mature into something more robust.

I am genuinely curious: What considerations beyond efficacy influence your decisions to use or avoid AI? Do the three factors I’ve identified resonate with you, or have you found other considerations more useful?

The birthday card anecdote may seem trivial, but it sits atop an iceberg of profound questions about the impact new technologies have on human relationships. Ethical questions that seemed academic until recently are now implicated in our everyday choices. A person moved to tears by a computer’s words under false pretenses is not a hypothetical—it is now a fact of life. The urgency with which we approach the task of developing ethical guardrails for these tools should reflect that.