Fear Wearing a Lab Coat
There is a certain kind of argument that shows up every time something changes in how people work. It arrives dressed in research, cited with studies, delivered with the solemn gravity of someone who has thought very hard about civilization. And underneath it, almost without exception, is a personal preference that got promoted to a universal law.
Remote work. LLMs. Vibe coding. The specific subject changes. The argument does not.
The Remote Work Version
The case against remote work is familiar. You need human interaction. You cannot stay home five days a week and function properly. Collaboration dies without physical proximity. Culture crumbles. People wither.
What is actually being said, stripped of the rhetoric: some people genuinely need an office to do their best work. That is real and worth acknowledging. The problem is the leap from "I need this" to "everyone needs this." That leap is not logic. It is a failure of imagination dressed up as concern.
Consider what remote work actually looks like for someone who has built their life around it. You wake up, make tea, do not worry about being late. You work at your pace. You prepare lunch. You take breaks that look like tea with family, time with a dog, a trip to buy vegetables at noon. You make it to doctor appointments without burning leave. You get your work done. No nine-to-nine theater. No performing busyness while waiting for the senior to leave so you can leave.
"The midday vegetable run sounds small. It is not. That is your life. And remote work gives it back to you."
The loudest opponents are rarely people whose productivity suffers from working remotely. They are usually managers who feel irrelevant without physical presence, or people who cannot focus at home and assume the inability is universal, or organizations that never had good communication culture to begin with and are now blaming the tool for the pre-existing failure.
The honest version of the argument is "I personally need an office." That is self-knowledge. The dishonest version is "remote work is objectively broken." That is noise.
The LLM Version
The case against LLMs in software development follows the same structure. AI use causes cognitive decline. Developers are atrophying. A generation of engineers is being hollowed out by the comfort of autocomplete. The studies say so.
The studies are real. In a 2026 randomized trial by Shen and Tamkin, 52 professional developers learning a new async library were split into AI-assisted and unassisted groups. The AI group scored 17% lower on conceptual understanding, debugging, and code reading after one hour. The authors named debugging as the sharpest gap, which makes sense given that debugging is exactly the skill you need to catch what AI gets wrong.
The study is valid. But context matters enormously. Fifty-two developers. One hour. A learning task designed to build new knowledge. That is not the same as a senior engineer with a decade of baked-in intuition using AI on a domain they already own. The conclusions are sound for that specific scenario. Extrapolating them to an entire profession is the stretch that the data does not support.
Margaret-Anne Storey gave the underlying phenomenon a precise name: cognitive debt. Technical debt lives in the code. Cognitive debt lives in developers' heads. It is the accumulated loss of understanding that happens when you build fast without comprehending what you built. Rachel Thomas named something related "dark flow," borrowing from gambling research: the trance-like state of prompting and approving that feels like productive deep work but has broken the feedback loop. You feel absorbed. You are not getting better. You are getting dependent.
These are real risks. For a specific kind of usage pattern. The question worth asking is not "does AI cause cognitive decline" but something narrower and more honest: what does a thoughtful senior developer with strong fundamentals look like three years into heavy AI use? Nobody is running that study. It does not get clicks. The alarming short-term finding on 52 people is more fundable than the boring long-term finding on experts.
The Thing LLMs Actually Do
There is a description of what LLMs produce that is more precise than anything else written about them: not correct code, but plausible code.
Correct code is correct because someone reasoned it into existence. They thought about the edge cases, the failure modes, the invariants that must hold throughout the system. Plausible code is plausible because it resembles correct code. It compiles. The types check. The tests pass. It looks right. But the reasoning underneath may be absent entirely.
In a small isolated function, plausible and correct overlap enough that it rarely matters. In a large system with tight constraints, real failure modes, and subtle state, plausible starts diverging from correct in ways that stay invisible until they are catastrophic. The LLM does not know your system. It produces something shaped like "what should go here." Usually that is fine. Until it is not.
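A toy sketch of that gap, with hypothetical helper names not taken from any real codebase: two versions of a percentile function. Both run, both pass a casual spot check, both look right. Only one reasoned about the boundary.

```python
def percentile_plausible(values, p):
    """Looks right: sort, index by fraction of the length.

    Passes the obvious test (p = 0.5 on a small list), but nobody
    reasoned about the invariant -- at p = 1.0 the index equals
    len(values) and it raises IndexError.
    """
    ordered = sorted(values)
    return ordered[int(len(ordered) * p)]


def percentile_correct(values, p):
    """Same shape, but the index is clamped so 0 <= i < len holds
    for every p in [0, 1]. The edge case was thought about, not
    pattern-matched."""
    ordered = sorted(values)
    i = min(int(len(ordered) * p), len(ordered) - 1)
    return ordered[i]


# The casual test both versions pass:
assert percentile_plausible([1, 2, 3, 4], 0.5) == 3
assert percentile_correct([1, 2, 3, 4], 0.5) == 3

# The boundary only the reasoned version survives:
assert percentile_correct([1, 2, 3, 4], 1.0) == 4
```

Nothing about the first version looks wrong in review. The divergence only shows up at an input the author of the code never held in their head, which is precisely the point.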
This is why ten years of intuition is the actual asset. It is what lets you feel when something is merely plausible. The LLM cannot feel that. You can.
The Pattern That Never Changes
Every major tool shift in computing history produced someone mourning the skills it was replacing.
Assembly to C: You do not really understand what the machine is doing anymore. Real programmers write assembly.
Manual memory management to garbage collection: You are outsourcing your thinking. You do not understand what is happening underneath.
IDEs with autocomplete: You are not actually learning. You are just tab-completing your way through code.
Stack Overflow: Developers are just copying answers they do not understand. The craft is dying.
Frameworks and package managers: Nobody builds anything from scratch. No one understands the fundamentals anymore.
LLMs: Cognitive decline. Plausible code. The seniority pipeline is drying up. The studies say so.
Each time the argument was the same. Each time, what actually happened was that the floor raised. The baseline of what a single developer could build, alone, in a day, went up. Problems that required a team in 1995 required one person in 2005. The skills that got "lost" were mostly the ones that were not worth keeping. Nobody today is poorer for not hand-optimizing memory allocation.
"The cognitive skills worth preserving have never been about the typing. They have always been about the thinking above and around the code. Managing complexity. That is what Fred Brooks identified in 1986 in No Silver Bullet and it remains true today."
LLMs can generate code. They cannot hold the full complexity of a large system the way an experienced engineer does. They do not feel the weight of a bad architectural decision made 18 months ago that is now blocking three teams. They do not have the intuition that says this is clean now but it will be a nightmare to scale. That is still a human job. And because the mechanical parts are now cheap, the complexity management is what separates real engineering from AI slop that technically works until it does not.
Who Is Actually at Risk
The cognitive decline concern is not wrong. It is misdirected.
Someone who uses AI as a complete substitute for thinking, who never internalized the why behind what they are building, who prompt-and-pastes without engaging with the output: that person is probably atrophying. The Shen-Tamkin study identified three AI interaction patterns that led to poor learning outcomes: full delegation, progressive reliance, and outsourcing debugging entirely. Those patterns are dangerous. They are also a choice.
Three other patterns preserved learning even with full AI access: asking for explanations, posing conceptual questions, and writing code independently while using AI for clarification. The differentiator was not whether developers used AI. It was whether they stayed cognitively engaged.
For the junior engineer, the concern is more real. The junior-to-senior pipeline was built on years of writing bad code, getting it torn apart in review, building intuition through failure. That struggle is where the learning happens. AI makes it possible to skip it entirely. But it was always possible to skip it. Juniors have always found ways to get answers without building understanding. Stack Overflow copy-paste was the previous version. LLMs are faster and more seamless. The tool did not break the pipeline. The culture that stopped valuing the struggle broke it.
Companies tracking AI usage per engineer will not get better engineering. They will get compliance theater. When a measure becomes a target, it ceases to be a good measure: Goodhart's law. Mandate AI-first policies and you get developers asking bots to examine random directories to hit a metric. You get code reviews described as infinitely harder due to AI slop produced by tech leads who have been off the tools long enough to be dangerous. The problem was never the tools.
What the Panic Is Really About
The people most loudly concerned about cognitive decline are often people who feel their expertise is being devalued. The skills they spent years building feel threatened overnight. That discomfort is real and worth sitting with honestly.
Instead, it gets externalized: cite one controlled study on 52 people learning an async library for an hour, conclude that an entire profession is heading toward cognitive collapse. That is not rigor. That is fear wearing a lab coat.
The remote work discourse works the same way. Real anxiety about relevance, about culture, about what it means to lead a team you cannot see, dressed up as concern for productivity and human flourishing. The tool becomes the villain because the tool is easier to argue with than the actual discomfort underneath.
Both conversations would be more productive if people said the honest thing: "I am anxious about how fast this is moving and I do not yet know where I fit." That is a completely reasonable thing to feel. It is also not a study. It is not a universal law. And it is not the tool's fault.
The right amount of AI is not zero and not maximum. The right amount of remote work is the amount that lets you do your best work and live your actual life. Neither of these is a radical position. But nuance rarely travels. The headline is always cognitive decline, always culture death, always the craft is dying.
The craft is not dying. It is just uncomfortable to watch the floor rise under you.