A woman lies on her back on a CT scanner table. She is forty-three years old and was sent here because her primary care doctor felt something during a routine palpation: a small, vague firmness on the right side of her abdomen, just below the rib line. The order on the requisition reads abdomen and pelvis with contrast, evaluate for renal mass. She has not slept much. The contrast warms her chest in the predictable way she was warned about. She lies still. The scanner takes five hundred and twelve slices of her body in roughly ninety seconds. A technologist tells her she can sit up.

A 43-year-old woman gets a CT scan because her doctor felt something unusual near her kidney. The machine takes 512 detailed images of her body in about 90 seconds.

The woman is a person. The five hundred and twelve slices are also, now, a person. They will be sent to a radiologist who will read them at a workstation in another wing of the hospital, possibly another time zone, possibly another country. The radiologist has thirty-two minutes scheduled for this study. The slices include not only her right kidney, the question the requisition asked about, but also her liver, her pancreas, her spleen, her adrenal glands, her gallbladder, both lungs from the bases up, the visible portions of her colon and small bowel, every vertebra from T11 through L5, the iliac crests, the sacrum, the visible musculature, the visible vasculature, and the soft tissues that surround all of it.

Those 512 images capture not just her kidney, but nearly every organ in her body. A radiologist somewhere else will have about 32 minutes to look at all of it.

Every part of her is now legible. A trained reader, given enough time, can find anything that is wrong with the parts of her body the scan captured. She has been made, in the most literal sense the word can carry, transparent. The information is all there. The question is whether anyone is going to be able to see what is in it.

Everything inside her is now visible on those images. Whether anyone actually catches everything that's there is a different question entirely.

Satisfaction of search

This section is about a well-documented problem: doctors miss things because they stop looking once they find the first problem.

The radiology literature has been studying the failure mode for over fifty years. It is called satisfaction of search. The phenomenon, first described in the 1960s and 1970s by Harold Kundel at the University of Pennsylvania and refined in subsequent decades by other vision-research groups, occurs when a radiologist finds the abnormality the scan was ordered to evaluate and stops looking for the second one [1, 4]. The mind, having satisfied the question it was sent to answer, ends the search. A peer-reviewed 2021 review by Adamo, Gereke, Shomstein, and Schmidt synthesized the literature across both clinical radiology and laboratory cognitive science and concluded that the effect is not a moral failing of individual readers; it is a measurable and reproducible property of human visual attention under multi-target search conditions.

When a radiologist finds the one thing they were looking for, their brain often stops searching for anything else. This isn't carelessness - it's a proven, repeatable quirk of how human attention works.

In clinical practice, the consequence shows up in lawsuits and morbidity reviews. A 2014 categorization of radiologic error by Kim and Mansfield, examining 656 examinations with delayed diagnoses, found that satisfaction-of-search errors and the related satisfaction-of-report errors together accounted for a measurable share of all delayed cancer diagnoses, with one analysis placing satisfaction of report as the fifth most common cause of diagnostic error in the dataset [1, 3]. The canonical example in the teaching literature involves a chest radiograph ordered to confirm a peripherally inserted central catheter line. The line is found. The pulmonary nodule three centimeters above it is not. Two years later, the patient returns with a cancer that is no longer treatable.

This error shows up in real-world harm: patients get cancer diagnoses late because the radiologist found one thing and stopped looking. A classic teaching example involves a missed lung nodule because the doctor was only checking a catheter line placement.

This is not the failure of a careless person. The radiologists in the Kundel eye-tracking studies fixated on the nodule. The fovea passed over it. The duration of the fixation was insufficient for recognition. The eye had already been told what it was looking for. The eye stopped looking once it found it [1, 4].

The radiologist actually looked at the thing they missed. Their eyes landed on it. They just didn't register it, because their brain had already decided it found what it was looking for.

The technical literature names two mechanisms. One is resource depletion: the cognitive work of identifying and characterizing the first abnormality consumes the attentional budget the reader has for the rest of the image. The other is what the cognitive science literature now calls subsequent search misses, or SSM, a renaming of satisfaction of search that emphasizes the structural property: detection of any target in any visual field reduces the probability of detecting subsequent targets, regardless of how visible those subsequent targets are.

There are two reasons this happens: first, finding the first problem mentally exhausts the reader; second, simply detecting any target in a visual field makes the brain less likely to notice the next one, no matter how obvious it is.

The error is not in the eye. The error is in the architecture of attention itself. The reader can be the most experienced person in the building. The error rate persists.

This isn't about being bad at your job. Even the best, most experienced radiologist in the building will still make this error. It's built into how human attention works.

What the woman's scan also showed

This section covers a documented real-world miss of this kind, and then what else was on that woman's scan - something the radiologist didn't catch.

There is a documented case in the trauma radiology literature of a renal mass identified incidentally on a CT of the thoracolumbar spine, ordered for back pain after a mechanical fall. The reading radiologist, looking at axial views of the vertebrae, did not initially register the lesion in the kidney that crossed the bottom edge of the imaging field. The mass was visible. It was less conspicuous on the axial view than it would have been on the sagittal reformats. The reader's attentional template was set for the bones. The kidney was a peripheral observation in a study that had not asked about kidneys. The case was published as a teaching example. The patient, in the published case, was identified in time. In an unknown number of cases that are not published, the patient is not.

There's a documented case where a radiologist looking at spine images missed a kidney tumor at the edge of the scan, because their brain was focused on bones, not kidneys. That patient was found in time. Many others in similar situations probably were not.

The woman on the CT table is now back at her primary care office. The radiology report has come back. The right kidney shows a 1.7-centimeter solid lesion in the inferior pole. The descriptor in the report is suspicious for renal cell carcinoma. She is being referred to urology. She is going to be fine, statistically; small renal cell carcinomas caught at this stage have a five-year survival above ninety percent.

The report does not mention the small density in the tail of her pancreas. The tail of her pancreas was on the same scan. The reader's attentional template, set by the requisition, located the lesion in the kidney with high precision and stopped. The pancreatic finding sits in the substrate of the woman's body, fully imaged, fully captured at sufficient resolution to be evaluated, fully visible. It is not seen. The transparency was complete. The discernment was not. The same architecture is operating, at a different scale, on every screen.

But the scan also showed something on her pancreas that the radiologist didn't mention in the report. It was fully visible. The radiologist's brain had already found its answer in the kidney and stopped. The same problem happening in radiology is happening everywhere else, at a much larger scale.

The infrastructure of seeing

This section is about how we process information at a societal level - and how that infrastructure is failing.

The thesis can be stated this way: in 2026, the human population has access to more information about each other than at any prior moment in recorded history, and the population's collective ability to evaluate any of it has not just failed to scale but is now measurably collapsing.

In 2026, we have more information about each other than ever before in history. But our ability to actually evaluate that information isn't just falling behind - it's actively getting worse.

A 2025 review published in MDPI's Brain Sciences synthesized research from 2023 and 2024 across PubMed, PsycINFO, Scopus, and Web of Science on the cognitive condition that Oxford University Press named its 2024 Word of the Year: brain rot [6, 7]. The review documents emotional desensitization, cognitive overload, executive function impairment, working memory deficits, and a measurable decline in deliberative reasoning across populations exposed to high-volume low-quality digital content. The clinical vocabulary is unsentimental. The pattern is consistent. The substrate is overloaded.

Research confirms that heavy exposure to low-quality digital content is degrading people's ability to think carefully. Memory, focus, and reasoning are all measurably declining across exposed populations.

A separate 2023 cross-national study published in Scientific Reports, conducted across eight countries, found that social media fatigue directly predicts misinformation belief and sharing behavior, mediated by the user's reduced capacity for deliberative processing under sustained cognitive load. The reader who has been exposed to enough signal stops being able to evaluate the next signal. The mechanism is not bias. The mechanism is depletion. The reader's attentional template was set by the algorithm hours earlier; everything that arrives afterward is read against that template, with whatever resources remain.

A large international study found that when people are mentally exhausted from social media, they are more likely to believe and share misinformation - not because they're biased, but because they're simply too depleted to think it through.

This is satisfaction of search, scaled to a population. The radiologist's failure mode has become the civic baseline.

The same mental glitch that makes a radiologist miss a second finding is now happening to entire populations processing the internet.

The political theorist who wrote about radical transparency in the 1990s, the technologist who built the early forums on the assumption that more visibility meant more accountability, and the platform engineer who decided that exposing everything would surface the truth were all working from the same hypothesis. The hypothesis was that the problem with public discourse was insufficient information. Make every position visible, every motive broadcast, every allegiance public, and the population would arrive at better collective judgments because the inputs would be richer.

The people who built our modern information systems believed that giving everyone access to more information would lead to better public thinking. More visibility, more accountability, better decisions.

The hypothesis was a clinical one and it has been tested. The result is in the Brain Sciences review and in the Scientific Reports study and in the lived experience of every person who can no longer remember what they read this morning [7, 8]. The information is all there. The discernment is not.

That assumption has been tested, and the research shows it was wrong. We have all the information. What we've lost is the ability to make sense of it.

The patient on the table

This section is about a real patient - a person, not just a case.

The woman with the kidney lesion has a face. She has a name the article will not use. She has a husband who drove her to the appointment and could not come into the room. She has a daughter who is eleven and who knows that her mother went to a doctor about something but does not know that the something has now produced a 1.7-centimeter lesion that someone is going to remove. She is going to be told, in a few days, that the urologist would like to do a partial nephrectomy and that her prognosis is good. She is also, in the absence of any second reader looking at her scan with a different attentional template, carrying a small density in the tail of her pancreas that nobody has yet seen.

There's a real woman with a kidney problem. Her family doesn't know everything yet. Her doctors are planning to remove part of her kidney and expect her to be okay. But nobody noticed a small suspicious spot on her pancreas because they were only looking for the kidney problem.

She is the readable human at the scale of her own body. Every organ is on the screen. The reader's attention has been spent on the question that was asked. The question that was not asked has produced an image that is not being read.

Her whole body is visible in the scan, but the doctor only focused on what they were asked to look for. The thing nobody asked about didn't get looked at.

Now widen the field. Replace her body with the public-facing identity of a person on the contemporary internet. The posts. The professional history. The statements made in earnest five years ago and now visible to a stranger who is searching. The reactions. The reposts. The deletions, which are themselves visible. The voice memos that became transcripts. The metadata. The mutual friends. The photographs the person did not realize had been taken.

Now imagine the same idea but for a person's online life - their posts, job history, old opinions, deleted content, photos they didn't know were taken. All of it is out there.

Every part is legible. The reader has thirty-two seconds, not thirty-two minutes. The reader's attentional template was set, before the search began, by whatever algorithm delivered them to this page. The reader is going to find what they were looking for. They are not going to see what is also there.

Everything about a person is readable online, but the person looking only has a few seconds, and an algorithm already decided what they'd focus on. They'll find what they expected. They'll miss everything else.

The pancreatic finding, in the social field, is not benign. It is the friend's quiet despair beneath the surface of a curated feed. It is the colleague whose stated allegiance and actual practice have begun to drift apart in ways that would be visible to a reader with the time to look. It is the family member whose pattern of activity has changed in ways that would, to a fully attentive observer, signal something the observer would have wanted to know. The information is being broadcast. The information is being received. The information is not being read, because no human attentional system in a state of resource depletion can read everything that arrives.

Online, the things people miss aren't harmless. A friend's hidden depression, a colleague whose actions no longer match their words, a family member whose behavior has quietly changed - these signals are all being sent but nobody has the time or focus to catch them.

This is what the era of transparency has produced. Not a population that knows more about each other. A population that has been given so much access to each other that the access has overwhelmed the capacity to interpret any of it. The friend whose despair is visible in the metadata of a single photograph posted at three in the morning is also broadcasting twelve other signals at the same time, and the friend doing the looking is in resource depletion, and the despair is in the kidney's neighbor and not in the question the friend is searching for.

The age of everyone being visible online hasn't made us better at understanding each other. It's made it worse - there's so much information coming in that nobody can process all of it, and the most important signals get lost in the noise.

What the radiologists did about it

Here's how radiologists tackled this problem.

The hospitals that have measurably reduced satisfaction-of-search errors are the ones that built protocols. Structured reporting templates that force the radiologist to report on every named anatomical region of the scan, regardless of what they think they have already found. Mandatory checklist passes after the primary finding has been recorded. Second-reader systems for high-stakes studies. The American College of Radiology and equivalent bodies in other jurisdictions have published practice guidelines that require radiologists to consciously disrupt their own attentional templates after the first abnormality has been identified [1, 2, 9]. The guidelines do not work by exhortation. They work because they are mechanical: the report cannot be signed until every checkpoint has been visited.

Hospitals that got better at catching missed findings didn't do it by hoping doctors would try harder. They built checklists and required second opinions. The rules force doctors to go back and look at everything, even after they've already found something.

The radiologists building these systems are not curing the underlying cognitive failure. The underlying cognitive failure is a property of the attentional architecture and cannot be cured. They are routing around it. They are imposing a structural protocol on top of the human, a pre-commitment device that forces the eye back over the territory the eye has already declared scanned. They are accepting that the human cannot, alone, be the discernment instrument the system requires. The instrument has to be the human plus the protocol, with the protocol carrying the part of the work that pure attention cannot.

These systems don't fix the brain's tendency to stop looking after it finds something. They work around it by forcing a second look regardless. The assumption is that no individual doctor, on their own, is a good enough instrument - the doctor plus the checklist together is what works.

The work is unsentimental. It is also some of the most consequential clinical-systems work being done anywhere in medicine. A protocol that catches one additional pancreatic mass per hundred thousand abdominal scans saves lives the radiologist would have, statistically, missed. The radiologist is not less skilled for needing the protocol. The radiologist is the same skilled reader. The protocol is the load the skilled reader's attention cannot, on its own, carry. The internet has no equivalent.

This unglamorous process saves lives. Catching one extra cancer per hundred thousand scans matters. Needing a checklist doesn't make a doctor less skilled - it just means the checklist carries the load that attention alone can't. The internet has no version of this.
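The mechanical character of these protocols can be sketched in a few lines of code. What follows is an illustrative toy, not any hospital's actual system; the region list, the class, and the method names are all invented for the sketch. The point it demonstrates is the pre-commitment device: signing is structurally impossible while any region remains unreviewed, no matter what has already been found.

```python
# Hypothetical sketch of a structured-reporting checkpoint.
# REGIONS and StructuredReport are invented names, not a real system.

REGIONS = [
    "kidneys", "liver", "pancreas", "spleen", "adrenals",
    "gallbladder", "lung bases", "bowel", "spine", "vasculature",
]

class StructuredReport:
    def __init__(self):
        self.findings = {}  # region -> free-text finding

    def record(self, region, finding):
        if region not in REGIONS:
            raise ValueError(f"unknown region: {region}")
        self.findings[region] = finding

    def sign(self):
        # The mechanical checkpoint: signing fails while any region
        # is unvisited, even after a primary finding is recorded.
        missing = [r for r in REGIONS if r not in self.findings]
        if missing:
            raise RuntimeError(f"unreviewed regions: {missing}")
        return "signed"

report = StructuredReport()
report.record("kidneys", "1.7 cm solid lesion, inferior pole")
try:
    report.sign()  # blocked: nine regions are still unread
except RuntimeError as err:
    print(err)
```

The design choice worth noticing is that the constraint lives in the signing step, not in the reader's diligence: the protocol does not ask the reader to remember the pancreas, it refuses to complete until the pancreas has been visited.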

The protocol that was never built

The internet was never given a checklist.

The infrastructure that delivers other humans to the reader has no checklist pass. It has no structured reporting template. It has no requirement that the reader audit the field after the first finding has been recorded. The infrastructure was built on the architectural assumption that the reader's attention was the discernment instrument. The infrastructure scaled the volume of input by approximately five orders of magnitude over the last quarter century. The instrument did not scale.

The platforms that show us other people online were built assuming users would pay full attention. Instead, they've multiplied the volume of information by a factor of roughly 100,000 over 25 years. Our ability to process it hasn't kept up.

The radiology literature has a phrase for the cognitive condition the population is now in: attentional template lock. The reader's template was set by something earlier. Everything that arrives afterward is read against that template. Findings outside the template are not registered as findings; they are registered as background. The cognitive science studies that documented the effect in laboratory conditions, using stylized search arrays, are now describing the same effect in the wild, at the scale of a planet's worth of mutual observation [2, 8].

Radiology has a name for what happens when your brain locks onto what it expects to see and stops noticing everything else. That's what's happening to everyone online, all the time, at a global scale.

The era of transparency was supposed to be the era in which everyone could see each other. The era of transparency turned out to be the era in which the seeing was rationed by the same attentional architecture that has always rationed it, applied to a substrate that delivers vastly more than the architecture was designed to receive. The result is not better-informed collective judgment. The result is a population in which every member is fully imaged and partially read.

The internet was supposed to help us all see and understand each other better. Instead, we're each drowning in more information than we can process. The result is that everyone is fully visible but only partly seen.

The pancreatic finding, in this analogy, is everywhere. It is the slow-motion crisis the friend has been signaling for months. It is the public figure whose stated commitments and revealed behavior have begun to drift in ways the careful reader could see but the depleted reader cannot. It is the institution that has been failing in its obligations for years through signals that the public has been receiving and not reading. It is the disinformation campaign that everyone is technically aware of and nobody is structurally tracking, because the structural tracking is what attentional template lock prevents.

The overlooked pancreatic finding is a stand-in for every slow-building crisis that's visible in the data but nobody is reading - a friend spiraling, a public figure quietly abandoning their stated values, a failing institution, a disinformation operation nobody is tracking end-to-end.

The information is in the substrate. The substrate is now too large for the instrument the substrate was supposed to inform.

The information we need is out there. There's just too much of it for us to take in.

Whose body is on the screen

Back to the patient - and to everyone else in the same position.

The woman with the kidney lesion is going to be fine. Statistically. The pancreatic finding may turn out to be nothing; many small densities are. The next radiologist who looks at her chart, with a different attentional template, may see it. A second reader may be assigned. The protocol may save her, even if the first reader did not. The system, in its better hospitals, has accepted that the first reader is not enough; that the architecture of attention requires a second pass; that the body is too informative for the instrument to read alone.

The kidney patient will probably be okay. The pancreatic spot might be nothing. Another doctor might catch it later. Better hospitals have systems designed so that one doctor missing something isn't the end of the story. The system, at its best, assumes one reader is never enough.

The transparency is real. The discernment is not. The instrument that was supposed to be the discernment instrument has known about its own attentional failure mode since the 1960s, and the medical specialty that depends on the instrument has spent fifty years building protocols to compensate for the failure mode. The infrastructure that delivers human beings to other human beings has built no equivalent.

The internet is genuinely transparent - everything is visible. But visibility isn't the same as understanding. Medicine has known about this attention problem since the 1960s and spent 50 years building systems to work around it. The platforms that show us each other haven't built anything like that.

If the body of every other person is now on a screen that the depleted reader is being asked to evaluate in seconds, and if the discernment instrument has known for half a century that under those conditions the second finding is missed, what does it mean that we have called the resulting condition transparency rather than calling it what the radiologists, already, would have called it?