The Problem with How People Think About LLMs
Most people understand large language models through one of two frames: either as search engines that sometimes lie, or as automated content generators that produce text so you don’t have to. Both frames treat the LLM as a machine that takes a request and returns a product. You ask, it answers. You prompt, it produces.
This framing leads to predictable debates. Is the output good enough? Is it accurate? Is it ethical to use machine-produced content? Will it replace human writers? The entire discourse revolves around artifacts—the quality of what comes out, who deserves credit for it, whether it competes with human-made things.
But there’s another way to understand what these systems are, and it requires a different starting point.
A New Kind of Medium
Every major medium in human history has initially been understood as a faster or cheaper version of what came before. Early film was recorded theater. Early photography was automated portraiture. Early printed books imitated handwritten manuscripts. The internet was faster mail.
In each case, the native possibilities of the medium—the things it could do that previous media couldn’t—took time to discover. Film wasn’t just recorded theater; it enabled montage, the cut as a unit of meaning, entirely new relationships between image and time. Photography wasn’t just automated painting; it enabled new ways of seeing, new relationships to evidence and memory. The internet wasn’t just faster mail; it enabled hypertext, collaborative knowledge-building, new forms of social organization.
LLMs are at this stage now. The dominant understanding—content generation, automated writing, answering questions—is the “recorded theater” phase. It maps the new technology onto existing categories. And within that mapping, people argue about whether the recordings are good enough.
The native possibilities are something else entirely.
Text as a Rotatable Space
Here’s a different way to think about it.
Language encodes ideas. But any particular text—a paragraph, an argument, an explanation—is not the idea itself. It’s a projection of the idea into a particular form. The same underlying concept can be expressed from different angles, at different levels of abstraction, for different audiences, through different metaphors, in different rhetorical modes.
Think of a three-dimensional object casting a shadow on a wall. The shadow is a two-dimensional projection. If you only see one shadow, you might mistake it for the thing itself. But if you can rotate the object—or equivalently, move the light source—you see different shadows. Each shadow reveals something about the object’s structure. No single shadow is the object, but multiple shadows from different angles let you reconstruct what the object actually is.
High-dimensional spaces work similarly, but with more complexity. A concept that exists in a thousand-dimensional space of meaning can be projected into the low-dimensional space of a particular text. That text captures some aspects and loses others. A different text—same concept, different projection—captures different aspects.
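For readers who want the metaphor made literal, here is a minimal sketch in Python. It assumes only NumPy, and the point cloud below is an arbitrary stand-in for "the object"; the point is simply that the same shape, turned to different angles and flattened onto a plane, casts measurably different shadows.

```python
# A sketch of "different rotations, different shadows".
# Assumes NumPy; the point cloud is illustrative, not any particular object.
import numpy as np

rng = np.random.default_rng(0)

# An asymmetric 3D "object": long in x, narrower in y, nearly flat in z.
points = rng.normal(size=(500, 3)) * np.array([4.0, 1.0, 0.2])

def rotate_y(theta):
    """Rotation matrix about the y-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

for theta in (0.0, np.pi / 4, np.pi / 2):
    # Rotate the object, then "cast the shadow" by dropping the z coordinate.
    shadow = (points @ rotate_y(theta).T)[:, :2]
    spread = shadow.std(axis=0)
    print(f"rotated {theta:.2f} rad -> shadow spread: x={spread[0]:.2f}, y={spread[1]:.2f}")
```

Three rotations, three shadows, each revealing a different slice of the same structure. No single one is the object.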
What LLMs enable is rapid rotation through projection-space.
You can take a half-formed idea and project it into the voice of a particular person. Then into a counter-argument. Then into a metaphor. Then into a dialogue between opposing positions. Then into an explanation for a child. Then into an explanation for a hostile expert. Each projection reveals different structure. Each rotation shows you something the previous view hid.
This is not summarization, which is more like dimensionality reduction—crushing the space down until it fits in a smaller container, losing most of the structure in the process. “Teach it to me like I’m five” is asking for PCA on a thousand-dimensional space, then wondering why you’ve lost nuance. Of course you have. That’s what flattening does.
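To put a rough number on that flattening: a small sketch, assuming NumPy and scikit-learn, with random vectors standing in for real embeddings. Crushing a thousand-dimensional space down to a handful of principal components keeps only a sliver of its variance.

```python
# A sketch of what flattening costs: project high-dimensional vectors onto a
# few principal components and measure how much variance survives.
# Assumes NumPy and scikit-learn; random vectors stand in for real embeddings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 2,000 synthetic "concept" vectors in a 1,000-dimensional space, with
# meaningful variance spread across many directions rather than a few.
X = rng.normal(size=(2000, 1000)) * np.linspace(1.0, 0.1, 1000)

for k in (2, 10, 50):
    pca = PCA(n_components=k).fit(X)
    kept = pca.explained_variance_ratio_.sum()
    print(f"{k:>3} components keep {kept:.1%} of the variance")
```

When the structure is genuinely spread across many directions, a two-component summary retains almost none of it. That remainder is the nuance the flattening discards.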
What we’re describing is something different: using the LLM’s generative capacity to produce multiple projections, iteratively, so you can explore the structure of something too complex to see from any single angle.
Thinking With, Not Extracting From
The key shift is from extraction to exploration.
In the extraction frame, you approach the LLM with a question and expect an answer. The answer is either right or wrong, good or bad, useful or useless. The LLM is a kind of oracle, and you’re evaluating its pronouncements.
In the exploration frame, you approach the LLM as a medium for thinking. You’re not trying to get answers out; you’re trying to rotate your own understanding, encounter projections you couldn’t generate yourself, iterate toward clarity through dialogue.
This is closer to how a musician uses an instrument. A pianist improvising isn’t extracting music from the piano. They’re thinking with the piano—exploring a space of musical possibilities, letting the instrument’s affordances shape what they discover. The piano has opinions, in a sense. Certain voicings fall naturally under the hands. Certain harmonic movements are easy and others awkward. The exploration is a dialogue between the musician’s intention and the instrument’s structure.
The LLM is similar. It has opinions—statistical ones, derived from training, but still shaping what it generates. When you engage with it iteratively, you’re not just prompting and receiving. You’re co-exploring. You throw out a frame, it responds with a projection, you react to what that projection reveals, you reframe, it reprojects. The thinking happens in the iteration, not in any single output.
This requires a different relationship to the interaction. You can’t just ask and evaluate the answer. You have to engage, react, redirect. You have to treat the outputs not as final products but as projections to be examined—what does this reveal? What does it hide? What rotation would show me something different?
Why Most People Don’t See This
Several factors make this frame hard to access.
First, it requires a certain relationship to ideas themselves—a disposition that treats concepts as objects to be examined, rotated, pressure-tested rather than as positions to be defended or transmitted. Not everyone has cultivated this. It's the kind of thinking practiced in research environments, in certain kinds of philosophical inquiry, in deep creative work. It's not taught in most educational settings, which emphasize absorption and reproduction of fixed knowledge.
Second, it requires comfort with iteration and uncertainty. If you need the first response to be correct and complete, you’ll be frustrated. The exploration frame treats early outputs as rough drafts of projections, starting points for refinement. That requires patience and a certain tolerance for not-yet-knowing.
Third, the dominant discourse doesn't offer this frame. Nearly all conversation about LLMs is artifact-focused. Good output versus bad output. Human versus machine. Job displacement. Copyright. These are legitimate concerns, but they take up all the oxygen, leaving no room for a different kind of question: what kinds of thinking does this enable that weren't possible before?
Fourth, the low-effort use cases are the most visible. Someone asks ChatGPT to write their email, gets generic slop, concludes the technology is overrated. They never discover the other thing because the other thing requires investment to reach.
The Native Possibility
The native possibility of LLMs—the thing they enable that previous media didn’t—is something like fluid projection through high-dimensional concept space.
That sounds abstract. Here’s what it means concretely.
Before, if you wanted to see your idea from a different angle, you had to do the rotation yourself. You had to have the knowledge, the skill, the imaginative capacity to generate the alternative view. Or you could read a book that offered a different projection—but you were limited to projections someone had already created and written down. Or you could talk to another person who might offer their angle—but they have their own limitations, their own fixed positions, and the conversation moves at the speed of human dialogue.
Now, you can iterate rapidly through projections. The LLM can generate a view you couldn’t have produced yourself—not because it’s smarter, but because it’s drawing on patterns from across its training distribution. You see that projection, react to it, redirect. The cycle time collapses. The space of accessible projections expands.
For someone who’s already inclined toward this kind of thinking, who already treats ideas as things to rotate and examine, this is transformative. It’s like going from sketching on paper to having a CAD system. The underlying cognitive operation is the same, but the speed, the range, the fluidity—all dramatically expanded.
For someone who isn’t already inclined this way, the technology looks like a mediocre text generator. Because that’s what it is, if all you ask it to do is generate text.
Implications
This framing has several implications.
The “will it replace writers” question becomes less central. Yes, LLMs can generate text, and that has implications for certain kinds of writing labor. But that’s the “recorded theater” use case. The more interesting question is what kinds of thinking the technology enables—and those might not compete with existing labor categories at all, because they’re not producing the same kind of output.
The ethics of training data remain relevant but look different. If the technology is primarily a projection-generator for cognitive exploration, the harm model shifts. It’s less “they trained on X and now produce X-substitutes that compete with X-creators” and more “the aggregate patterns from training enable a new cognitive capability.” Still worth examining, but a different question.
The education panic misses the point. Worrying that students will use LLMs to write their essays focuses on artifact production. But the more interesting possibility is students learning to use LLMs as thinking tools—exploring conceptual spaces, testing their understanding by examining it from multiple angles, developing the kind of fluid engagement with ideas that used to require access to a really good interlocutor. The essay-as-artifact matters less than the thinking-capacity that might be developed through iterative engagement.
The “slop machine” dismissal reveals limited imagination. Yes, low-effort prompts produce low-quality outputs. That’s not interesting. The interesting question is what high-effort engagement produces—and the people who’ve discovered that aren’t participating in the slop discourse because they’re too busy doing something else.
What you just read was itself a projection.
It emerged from a conversation—several hours of iterative exploration between a human and an LLM. The human brought a nascent intuition: that his use of language models was fundamentally different from the uses he saw discussed, and that the difference mattered. He couldn’t fully articulate it. He knew the shape was there but couldn’t see its edges.
The conversation rotated through frames. We started somewhere else entirely—a Star Trek speech about why Elon Musk wouldn’t fit in Starfleet. That exploration crystallized something about epistemic humility versus moral performance. From there to a Bluesky post about how people discuss LLMs. From there to the artifact/thinking distinction. From there to the piano analogy. From there to embedding spaces and PCA and dimensionality reduction as a metaphor for what “teach it to me like I’m five” actually loses.
Each rotation revealed structure. The human reacted to projections, redirected, pushed back, added his own frames (the embedding visualizer he’d been building, his experience improvising at a piano, the word-cell/shape-rotator distinction). The LLM generated new projections in response. Neither party could have produced this document alone.
So who is the author?
The LLM constructed these specific sentences. It chose these words, this structure, these rhetorical moves. In that sense, it “wrote” this.
But the LLM was projecting along directions that emerged from the conversation—directions the human established through his questions, his pushback, his redirections, his own contributions. The concept of “rotation through projection space” came from the human’s work with embeddings. The piano analogy resonated because of the human’s actual experience improvising. The frustration with “teach it to me like I’m five” was the human’s frustration, articulated across multiple turns before it crystallized here.
The LLM also drew on its training—patterns from texts it has encountered, ways of structuring arguments, vocabulary for discussing media theory and cognition. Those patterns shaped what projections were even possible to generate. In that sense, countless other authors are present here, dissolved into statistical regularities.
And the human will decide whether to share this, how to frame it, what context to provide. That editorial choice is also authorship.
So: the document is a projection, generated by an LLM, along directions established through dialogue with a human, using patterns derived from a vast corpus of human writing, in service of articulating something the human was reaching toward but couldn’t yet see clearly.
Is the human the author? He didn’t write these sentences.
Is the LLM the author? It doesn’t have intentions that persist beyond the conversation, doesn’t know what it’s arguing for in any deep sense, will not remember this exchange or build on it.
Is this authorless? That seems wrong too. There’s genuine intellectual work here—real thinking happened, structure was discovered, something exists now that didn’t exist before.
Perhaps the most honest answer: this document is an artifact of a collaborative cognitive process that doesn’t map cleanly onto existing categories of authorship. It’s not ghostwriting (the human didn’t have a draft the LLM polished). It’s not dictation (the human didn’t specify what to say). It’s not the LLM’s “own” work (there is no persistent LLM-self that has views on cognitive media).
It’s something new. A residue of thinking-with-a-medium. A projection that required both the human’s direction and the LLM’s generative capacity. The authorship is distributed across the interaction itself, which is now finished and inaccessible—leaving only this trace.
If you’ve read this far, you’ve encountered a document that embodies its own argument. It claims that LLMs enable a new kind of thinking through iterative projection. It was itself produced through iterative projection. The form is the content.
Whether that makes it more or less trustworthy, more or less valuable, more or less “real”—that’s for you to decide. But at minimum, it’s evidence that the thing it describes is possible. This is what it looks like when someone uses an LLM not to generate content but to think through a problem they couldn’t solve alone.
The human’s name is Sam. The LLM is Claude. The thinking happened between them. The words are here. What you make of that is now your projection to construct.