One Generation's Technological Vice is A Later One's Virtue
Looks Like AI Adoption Will Repeat This Familiar Pattern
Parents and teachers in 2025 struggle, as if pulling teeth, to get teenagers to read a good book, like To Kill a Mockingbird or The Catcher in the Rye. That would surprise our 18th century forebears, who thought reading novels was a waste of time – practically the TikTok of its day. But then many a new practice is treated as a vice until a newer one takes that role, making the former Bad Thing look like a Good Thing. Inexpensive and widely available novels stopped being seen as frivolous and became virtuous. Movies went from inconsequential twaddle to serious art. Comic books went from brain-rotting slop for children to solemn works of genius. Videogames went from pixelated silliness to museum-worthy creations.
In my own craft of journalism, calling people on the phone has gone from a mark of laziness (you should go in person!) to a token of journalistic virtue (well done, you didn’t just send an email or crib it off the web).
Now, perhaps, the humdrum web search is going from “the lazy way out” to “the thing serious thinkers do instead of just asking AI.”
That, at least, was the way some media outlets interpreted this study, in which Nataliya Kosmyna of the MIT Media Lab and her co-authors say they found that people who used ChatGPT for essay-writing had less engagement with their work, less measurable brain activity, less of a feeling of ownership over it, and less memory of what they had written, compared to people who used good old-fashioned web search. (Importantly, a third group, who used only their noggins, did best on those measures.) It’s a small study (54 undergrad and grad students, drawn from MIT and nearby universities) but it seems to confirm and quantify a widespread sense about large language model assistance.
Brain rot?
Does it show that AI use rots the brain, as some headlines declared – that, as the researchers more soberly write, using LLMs to help write will lead to “a likely decrease in learning skills”? Maybe. But maybe it’s a snapshot of a moment in the tech-adoption cycle, where AI is the new villain, because we haven’t figured out how to fit it into life.
That’s suggested by this analysis, by Vitomir Kovanovic and Rebecca Marrone, education researchers at the University of South Australia. The MIT results, they write, reflect the state of education in 2025: faced with widespread use of AI, yet still mostly run as if AI did not exist. “Educators,” they write, “still require students to complete the same tasks and expect the same standard of work as they did five years ago.” Assignments designed to teach the unaided mind will not teach as much to a mind that’s using AI assistance.
Imagine you’re in Athens around 420 BCE. Socrates and other philosophers are strolling about, pondering deep questions. They do this using lively, intense conversation. If you have a point to make about the views of someone you spoke with yesterday, you’d better remember what he or she said.
Writing was once a suspect technology
Or, thanks to the technology called writing, you could bring a text, and refer to that. In the 21st century, we regard that as the virtuous basis of intellectual life. But there in Athens, Socrates wants nothing to do with it. Writing, he says, creates
forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. […] they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Second, Socrates says, written words lack the liveliness and specificity of conversation. A speaker can sense how well listeners are getting the point, and adapt; a speaker can answer questions. But in a text, its writer is just saying the same thing, over and over:
writing is unfortunately like painting; for the creations of the painter have the attitude of life, and yet if you ask them a question they preserve a solemn silence. And the same may be said of speeches. You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer.
Now, imagine that you walk up to Socrates and some of his followers, ready for the usual Socratic workout. But you have a piece of sheepskin with your written notes on it. You will blow away the other ephebes, because you won’t struggle to recall your material. You’ve brought a cognitive tool to a situation in which no one is assumed to be using cognitive tools.
Of course, the practice of inquiry and debate adapted to writing (this was already happening when Plato wrote what Socrates allegedly said). For centuries, to supplement remembering and conversing, we’ve used written tests, closed-book exams, term papers, lab reports and other instruments. This tool Socrates disliked is no longer a disrupter of learning, because the mechanics of learning changed.
All of this has happened before
Kovanovic and Marrone offer a non-hypothetical example of this process. It happened a half century ago, after electronic calculators came into wide use. “[T]heir impact was regulated by making exams much harder,” they write. “Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks. […] Effectively, the bar was significantly raised, which made students work equally hard (if not harder) than before calculators were available.”
If this process takes place again as AI gets baked into educational life, then today’s conniptions about its menace to learning are just another story of loss that’s missing the next chapter – the one with the gains. That’s a big “if,” I admit. But there are signs that this kind of adaptation is starting to take place.
Some good adaptations to AI involve getting away from AI
Adaptation, of course, does not mean capitulation. It may not even mean engagement. It sounds paradoxical, but some good adaptations to AI involve getting away from AI. As Jessica Grose wrote the other day, some humanities professors lately have turned to “oral examinations, one-on-one discussions, community engagement and in-class projects” to teach students in ways AI can’t infiltrate. (Maybe Socrates would be pleased, after all these centuries, to see students once again obliged to engage in lively conversation. This AI simulation of the philosopher says it is.)1
Another possible adaptation to AI is to make the work more demanding (as was done in response to the calculators a half century ago). I wonder, though, if that will be future-proof, given AI’s increasing capacities.
A third avenue might be to require students to use AI for activities that actually spur learning. Having AI write for you, as the MIT researchers have documented, does not. But using AI to give you a read on what you wrote can give insights into your writing tics, your arguments’ holes, and your research blind spots. AI can also be effective at teaching specific concepts or skills – as in this example from Matt Beane, whose grad students learned to code by engaging with an LLM while working, under pressure, on an assignment. This approach to the problem would reduce the kind of assignment that practically invites the student to get AI involved (“write a paper on the theme of loyalty”) in favor of the kind of assignment that leads that student to use AI effectively.
I don’t mean to sugar-coat the problem AI poses. Sometimes a narrative of decline is correct! But sometimes the loss of old practices is accompanied by the birth of new capabilities for the human mind. Compare 1825 to 2025, for instance, and you’ll find a way smaller percentage of the modern population can track, kill, skin and cook a rabbit, build a shelter or grow a harvest of beans. But a much larger proportion of humanity can read.
In the long run society will decide the AI trade-offs were worth it, because “society” will consist of people who live in the new dispensation, and can’t imagine another life. Will they be right? In 2025, it is far too soon to tell.
Why I’ve Been Quiet
It has been a long time since my last post. I’ve been at work on a couple of large-ish projects (about which more when they come to fruition) and I am not great at multitasking. But I’m back, and there will be posts and podcasts aplenty in coming weeks.
Paid subscribers: I will figure out how to extend your subscriptions by 8 weeks, to make you whole, as the lawyers say.
AI is Biased Toward AI Work
So says this study, published last week. Walter Lauritzo and co-authors find evidence that AIs will prefer the work of AIs and AI-assisted people over stuff from plain-vanilla humans. (So maybe professors, who want students to get away from using AI to produce their writing, should eschew using AI to grade that writing?)
GPT-5 is Here
Many aren’t all that impressed. Here’s a round-up of readworthy reactions to OpenAI’s latest model: Ethan Mollick on how it guides you better as you use it, Alberto Romero on reductions in hallucinations, Nathan Lambert on how it advances over earlier iterations, Gary Marcus on the reasons it’s far from what it was cracked up to be, Luisa Jarovsky on a troubling privacy aspect.
Illinois Bans AI-Only Therapy
Under a just-signed Illinois law, it’s now illegal for an AI to interact with patients unless a licensed professional mental health provider is in the loop and making decisions on treatment. I’m not sure if this affects popular mental-health apps like Woebot and Ash. Ash’s parent company says, in its EULA, that it “is not a healthcare provider or a provider of mental health services and does not engage in the practice of medicine,” but just “provides a technology platform that supports users’ individual efforts at self-help.” On the other hand, the landing page calls Ash “the first AI designed for therapy,” so I think its situation in Illinois might be legally fuzzy.
In any event, expect more regulations worldwide as people turn in vast numbers to ersatz humans made by AI for human-like companionship. (An analysis last year of one million conversations with ChatGPT found that the second-most frequent use of the bot is sexual role-play.)
Getting Away from Tired Old AI & Robot Imagery
Are you a media person of some sort who sometimes has to come up with an image for writing about AI? Would you like it not to be the same old tired pastiche of a robot head or hand, or a brain made of circuits? After all, these images are boring and generally have little to do with the reality of AI in 2025. If so, then check out Better Images of AI, a site dedicated to improving illustrations of AI. This week’s illustration, “The Two Cultures,” by Zoya Yasmine, is from their library.
My prompt for DeepAI’s simulated Socrates was: “The use of AI in writing has prompted some teachers to move away from assigning written work, in favor of oral reports, small discussions and outreach to community members. As a philosopher who was skeptical of the benefits of writing for wisdom, are you pleased by this recent trend?” To my ear, the answer, too long to quote here, sounded less like Plato’s Socrates and more like standard LLM prose (“We must ask ourselves, what is the purpose of education? Is it merely to impart information, or is it to cultivate wisdom, virtue, and character? And how can we, as educators, create learning experiences that truly foster these qualities?”). But you can, of course, try it yourself.



Thank you so much for shouting out the Better Images of AI library, and using one of the images I contributed!
The education and training of the epheboi is the issue!! In ancient Greece these were the young men trained to be the citizen soldiers. They learned poetry, rhetoric and how to wield the hoplite short sword in man-to-man battle. Neither the short sword nor the memory of a Homeric poet will protect us now. So, the epheboi today had better wield AI with excellence, and leave the books and round shields to historians. Arete remains. Warcraft changes.