Something strange is happening with large language models that the current conversation can’t quite account for.
Most people who pay attention to LLMs have heard two concerns, and both are well-founded. The first is that these systems hallucinate — they produce fluent, confident output that may be subtly or dramatically wrong, and people often fail to catch it. The second is that they encourage cognitive offloading — people stop doing their own thinking, let the machine handle it, and gradually lose the capacity or the inclination to think for themselves.
Brain rot and bullshit, roughly speaking. Both real. Both well-documented.
And yet some people, using the same systems, are having a completely different experience. They find LLMs genuinely useful for thinking, learning, and producing work they couldn’t produce alone. They are not naive about the risks. They take the failure modes seriously. And their experience stubbornly refuses to match the dominant narrative.
Both things are true. The question is what explains the gap.
It isn’t intelligence. Plenty of brilliant people use LLMs poorly. It isn’t simply knowing about the risks, either. You can be fully aware of the failure modes and still fall into them.
The difference lies in what people bring to the interaction. And to see that clearly, it helps to notice something about how learning and knowledge work have always functioned.
They have never relied on the individual alone.
Teachers, tutors, editors, supervisors, peer reviewers, and institutional norms have always quietly imposed structure on how people learn, think, and produce. That structure did two things at once, though we rarely distinguished them.
It kept people honest. Peer review caught errors. Editors challenged weak claims. Supervisors asked “how do you know that?” These checks maintained the connection between confidence and evidence — they made sure what got accepted as knowledge had actually been tested.
And it kept people in the process. Teachers said “show your working.” Mentors said “walk me through your reasoning.” The point wasn’t only to verify the answer. It was to make sure the person had actually done the thinking — because learning doesn’t happen by receiving conclusions. It happens by working through them. The institutional insistence on participation wasn’t bureaucratic fussiness. It was the mechanism by which understanding was built.
LLMs dissolve both layers at once.
They respond instantly, agreeably, and fluently. They don’t push back, don’t withhold, don’t insist you think it through yourself. And — this is what makes the situation genuinely treacherous — they do all of this while sounding exactly like someone who has done careful, rigorous thinking. We spend a lifetime learning to read fluency as competence, coherence as understanding, confidence as accuracy. These heuristics serve us well enough with other humans. LLMs produce every surface marker of thoughtful, reliable output while requiring none of the underlying work. The instincts that normally protect us become the very thing that disarms us.
So when people use LLMs and the results are poor — when learning stalls, errors go unchallenged, thinking atrophies — the explanation is straightforward. Two forms of invisible scaffolding have been removed simultaneously, and the system’s own fluency hides the gap.
When we take this seriously — when we recognise that two distinct things have been lost and think carefully about what follows — a lot of the confusion in the current conversation starts to clear up.
The first loss might be called epistemic vigilance — the active, critical evaluation of what the LLM produces. Not just whether it’s factually accurate, but whether the framing is right, the reasoning is sound, the uncertainty is acknowledged where it should be. And, just as importantly, whether the LLM has actually tracked your intent — whether what it’s giving you corresponds to what you were asking, what you meant, what you were reaching for. A response can be perfectly correct and completely miss your point, and noticing that is a form of vigilance too.
When vigilance fades, errors pass unchallenged and confident-sounding output gets accepted without scrutiny. But it’s not only about truth. It’s also about the slow drift that happens when you stop checking whether the conversation is going where you need it to go.
The second loss is different. Call it cognitive participation — whether you are actually doing the thinking, or just receiving its products. It’s the difference between thinking with a tool and having the tool think for you. Between being inside a cognitive process and watching one happen nearby.
When participation fades, something quieter goes wrong. The outputs might be accurate. The person might even review them carefully. But the thinking isn’t theirs. They prompt, receive, scan, approve. The cognitive work happened elsewhere. And because learning depends on doing the thinking yourself — because understanding is built through the process, not delivered in the product — the capacity to think, to learn, to develop, quietly erodes.
This is the real heart of the “brain rot” worry. It isn’t only that LLMs might mislead you. It’s that they might do your thinking for you — perhaps even do it well — while you slowly forget how.
Both of these are widely discussed. What’s less often recognised is that they fail independently — and that addressing one does nothing for the other.
You can verify every factual claim with meticulous care and never have a thought of your own. That’s vigilance without participation — quality assurance on an automated pipeline. Efficient, perhaps. But the person at the end of it isn’t learning, isn’t growing, isn’t building any understanding they didn’t already have.
You can be deeply, genuinely engaged in a conversation with an LLM — thinking, exploring, steering — and be drifting steadily away from anything true or losing track of whether the LLM is still tracking what you actually mean. That’s participation without vigilance. Intellectually alive, perhaps. But unmoored.
Both patterns are common. And people who have seen only these modes are entirely reasonable to conclude that LLMs are harmful. Under default conditions — without deliberate effort — most use will settle into one or both.
But default conditions are not the only ones available.
When people use LLMs in the way that produces genuinely good outcomes — real learning, real thinking, work better than either party would produce alone — both capacities are active at the same time.
They are vigilant. They notice when something doesn’t track — not just factual errors, but framings that feel subtly off, assumptions that haven’t been earned, conclusions that land too neatly. They notice when the LLM has lost the thread of what they were actually saying or trying to get at. They hold the output against their own evolving understanding and feel the friction when the two don’t align.
And they are participating. They aren’t prompting and receiving. They are thinking through the medium — steering, contributing, bringing their own knowledge and discernment into the exchange. The LLM is something they think inside, not something they extract answers from.
What makes this more than the sum of its parts is that these two capacities actively strengthen each other.
Vigilance from inside active thinking works differently than vigilance applied after the fact. When you are genuinely working something through — holding your own developing understanding alongside what the LLM produces — misalignment doesn’t require a separate review step. It shows up as friction, as resistance, as the sense that something isn’t sitting right. This is what it feels like when a mind is in real contact with material it’s working through, not scanning a finished document for defects.
And participation without vigilance tends to wander. Vigilance keeps exploration tethered; it is the discipline that notices when an interesting line of thought has drifted from solid ground, or when the conversation has quietly stopped tracking what you actually need.
The two don’t always arrive together naturally. For some people the connection is intuitive. For others, deep engagement and critical rigour pull in different directions, and holding both takes deliberate effort. But when both are present, each reinforces the other. The combination is what makes the interaction work.
This overall stance — vigilance and participation, actively maintained — is what we might call epistemic posture. It’s what you bring to the interaction. And it is what actually explains why experiences with LLMs diverge so sharply.
The prevailing model, especially among sceptics, usually reduces to a binary: either you surrender to the technology — offload your thinking, accept its outputs, expose yourself to the full suite of harms — or you reject it entirely and preserve your cognitive independence. Given those two options, rejection is the sensible choice.
But the binary only holds if those are really the only options. Epistemic posture is the way out. You engage with the system critically and actively — not surrendering to it, not rejecting it, but working with it, bringing your judgment and your thinking into the exchange, drawing value from the interaction precisely because you maintain both the vigilance to evaluate what comes back and the participation to stay inside the cognitive process yourself. The value doesn’t live in the LLM’s output. It lives in the interaction between a capable system and a human who is genuinely present in the work.
This isn’t hypothetical. People are already doing it — engaging with LLMs with the right balance of openness and scepticism, thinking through the medium rather than outsourcing to it. Their experience is real, it is reproducible, and the current conversation has little room for it.
If the argument so far is right — if the harms are real but conditional, depending on what the human brings rather than on the technology itself — then something else follows that almost nobody is talking about.
The capacities required to use LLMs well are not just protective. Consider what they involve: maintaining vigilance against a system whose fluency is tuned to bypass exactly that vigilance; staying cognitively engaged when everything about the interface invites you to sit back and receive; holding competing framings in tension; noticing when a confident, articulate interlocutor has subtly missed the point or drifted off course.
These are not small skills. They are among the most valuable intellectual capacities a person can develop — with or without LLMs in the picture.
And here is what the prevailing narrative misses entirely: bring those capacities to the interaction, and the medium comes alive. An LLM engaged with strong epistemic posture stops being a risk to manage and becomes a genuinely powerful instrument — for learning, for thinking, for producing work that neither human nor machine would reach alone. The posture that makes the tool safe is the same posture that makes it extraordinary.
The properties that make LLMs dangerous when met with passivity — their fluency, their responsiveness, their inexhaustible willingness to generate — are exactly what make them productive when met with active, critical, participatory thinking. The tool rewards the very stance that protects you from it.
This is not a neutral outcome. It is not merely the absence of harm. Practised seriously, this way of working is the sustained exercise of critical thinking, collaborative reasoning, and epistemic responsibility in a context that demands all three at a high level. Whatever that is, “cognitive decline” is not it.
None of this makes the technology safe by default. The failures under default conditions are real. The capacities involved are not simple to develop — institutions devoted to exactly this kind of teaching have had mixed results long before LLMs existed. And the fact that good outcomes are achievable does not mean they are common or automatic.
But everything depends on whether they are achievable at all.
If the harms are inherent — if the technology degrades thinking regardless of how it’s used — then the only honest responses are restriction and damage control.
If the harms are conditional — if they depend on what the human brings — then what we face is a tractable problem, not a foregone conclusion. A hard problem, certainly. One that involves education, institutional imagination, and the cultivation of capacities that have never been easy to teach at scale. But a tractable problem is one you can make progress on. A verdict is something you can only accept or resist.
What might progress look like, concretely? Vigilance is a practice, not a trait. It begins with the habit of noticing friction rather than smoothing past it — attending to the moments when something feels slightly off, when the framing doesn’t quite fit, when the LLM’s confident output and your own understanding don’t align. Participation begins with treating the interaction as a space you are thinking inside rather than a service you are requesting output from. Neither of these is a complete prescription. But they are accessible starting points, and they are available to anyone willing to try.
LLMs are not destiny machines. They do not inevitably corrode the minds that encounter them. They amplify whatever epistemic posture you bring — passivity into dependency, vigilance and participation into something genuinely powerful.
Once we see that clearly, the conversation changes.
This is what we should be talking about.
Not whether LLMs are good or bad for thinking — that question has no single answer and the debate around it generates more heat than clarity. But whether we are willing to name the capacities that actually determine outcomes, and then do the hard, unglamorous work of building them — in ourselves, in our students, in our institutions.
The risks are real. So is what becomes possible when people show up to this medium prepared to think. Both of those things deserve serious attention. Right now, only one of them is getting it.