The exploit surface that vigilance cannot close
Every credential ever issued to a person who stopped showing up is still, architecturally, an open invitation. Shadow accounts, orphaned OAuth tokens, passwords minted before modern security hygiene existed. The infrastructure does not know these doors are unguarded. It does not know the person who held the key is gone. It was never given a way to ask.
Three months after an employee died, attackers found his admin account still active on a corporate network. They used it quietly for a month, moving laterally, stealing domain admin credentials, exfiltrating hundreds of gigabytes of data. On day thirty-one, they deployed Nefilim ransomware across more than one hundred systems. The company had kept the account running because certain services depended on it. The account had high-level access. No alerts fired. No anomaly was flagged. The building had no way to know the person who held those credentials was not the person using them.
The forensic detail is instructive, but the structural observation is more important. The attack did not exploit a software vulnerability. It did not require a zero-day. It walked through a door the system had been told to leave open, using a key the system had no mechanism to revoke on its own. The entire intrusion rested on a single architectural fact: there is no protocol, no API, no standard method by which a network learns that an identity has permanently ceased to operate.
That is not a failure of security hygiene. It is a failure of ontology.
The hallway that never empties
The average enterprise maintains roughly 15,000 inactive user accounts that remain enabled. Varonis's 2025 State of Data Security Report found that 88% of organizations carry stale but enabled ghost users in their environments. These are not accounts waiting for someone to return from vacation. They are accounts whose owners have left the company, retired, transferred, or died, and whose credentials persist because no architectural mechanism distinguishes their silence from the silence of a user who simply has not logged in today. Sit with the weight of those numbers for a moment.
Credential abuse was the initial access vector in 22% of confirmed breaches in 2025, according to the Verizon Data Breach Investigations Report, making it the single most common entry method. Breaches involving stolen credentials took an average of 292 days to identify and contain, the longest of any attack vector. The global average cost of a data breach stood at $4.44 million; in the United States, it reached a record $10.22 million. These are not statistics about sophisticated adversaries outpacing defenders. They are statistics about doors that were never closed, in hallways that were never designed to notice when the occupant stopped walking through them.
The security industry frames this as a problem of access management. Audit your accounts. Revoke credentials promptly. Implement multifactor authentication. Run periodic reviews of Active Directory. All of which is correct, and all of which is manual, human-dependent, and structurally incapable of scaling to the volume of identities modern infrastructure carries.
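The audit step is easy to mechanize, which is part of the point. The sketch below scans a hypothetical directory export for stale-but-enabled accounts; the record fields and the 90-day threshold are invented for illustration, and a real scan would pull from Active Directory or an identity provider's API. The logic is a few lines. It still only runs when a human remembers to run it.

```python
from datetime import datetime, timedelta

# Hypothetical account records; real data would come from a
# directory export (e.g. Active Directory or an IdP API).
accounts = [
    {"user": "alice", "enabled": True,  "last_login": datetime(2025, 11, 1)},
    {"user": "bob",   "enabled": True,  "last_login": datetime(2024, 1, 15)},
    {"user": "carol", "enabled": False, "last_login": datetime(2023, 6, 3)},
]

STALE_AFTER = timedelta(days=90)  # assumed policy threshold


def stale_but_enabled(accounts, now):
    """Accounts still enabled whose owners have gone silent."""
    return [
        a["user"]
        for a in accounts
        if a["enabled"] and now - a["last_login"] > STALE_AFTER
    ]


print(stale_but_enabled(accounts, datetime(2025, 12, 1)))
```

Note what the scan cannot do: it flags silence, but it has no way to say what the silence means.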
The ontological deficit
Consider what a login system actually knows about a user. It knows a credential was presented. It knows the credential matched. It knows a session was established. It records timestamps, IP addresses, device fingerprints. What it does not know, and cannot know under current architecture, is whether the entity presenting the credential is the entity to whom it was issued. It cannot distinguish between a returning user and an attacker who found the key under the mat. It cannot distinguish between a user who stopped logging in because they went on sabbatical and one who stopped logging in because they no longer exist.
The system's model of identity is binary. Authenticated or not. Present or absent. There is no third state. There is no "departed." There is no "permanently gone." There is no "the silence you are observing is not temporary inactivity but irreversible cessation." The infrastructure was never given the vocabulary.
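The missing vocabulary can be written down directly. The sketch below (all names hypothetical) models the terminal state the current architecture lacks, and the check that no production authentication path performs:

```python
from enum import Enum, auto


class IdentityState(Enum):
    # The two states current architecture effectively models:
    ACTIVE = auto()     # credential recently presented
    INACTIVE = auto()   # no recent sessions; assumed temporary
    # The state it lacks: terminal, non-recoverable cessation.
    DEPARTED = auto()   # the holder has permanently ceased to operate


def should_honor_credential(state: IdentityState) -> bool:
    # Today's systems implement only the first two states: any valid
    # credential is honored regardless of what became of its holder.
    return state is not IdentityState.DEPARTED


assert should_honor_credential(IdentityState.INACTIVE)      # silence is ignored
assert not should_honor_credential(IdentityState.DEPARTED)  # the missing check
```

The point of the sketch is not that the check is hard to write. It is that no standard defines who is authorized to set the third state, or how a system would learn it.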
This is what makes the problem architectural rather than operational. You can train every IT team on the planet to decommission accounts within 24 hours of an employee's departure. Even so, research from Grip Security found that 31% of employees retain access to applications from previous jobs. The process fails because it is a process, not a protocol. It depends on human notification chains, HR databases, manager awareness, help desk tickets. Each link in that chain can break. The architecture underneath has no fallback, no self-correcting mechanism, no way to independently infer that an identity has transitioned from dormant to gone.
A lock that cannot tell the difference between its owner sleeping and its owner buried is not a lock. It is a suggestion.
The weapons built from silence
State-sponsored threat actors have understood this vulnerability for years. In late 2023, Russia's Midnight Blizzard group compromised Microsoft's corporate environment by password-spraying a legacy, non-production test tenant account that lacked multifactor authentication. The account was dormant. Possibly forgotten. It carried permissions that had been granted in an earlier configuration and never revised. The attackers used it as a bridge into Microsoft's production environment, accessing email accounts belonging to senior leadership, legal counsel, and cybersecurity staff.
No vulnerability was exploited. No software was breached. The entire operation pivoted on the structural persistence of an identity that nobody was watching because the infrastructure had no concept of "unwatched."
The Salesloft-Drift breach of August 2025 followed the same structural pattern at SaaS-to-SaaS scale. Attackers stole OAuth tokens from a third-party integration and used them to access Salesforce instances across more than 700 organizations. The tokens were long-lived, over-permissioned, and had never been reviewed, rotated, or revoked. They sat in a governance gap where customers assumed vendors managed token lifecycle and vendors assumed customers did. In reality, nobody owned the credentials. They persisted in a gray zone, architecturally alive but operationally orphaned, until an attacker found them useful.
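The review that never happened is not technically difficult, which is what makes the governance gap stark. A sketch, over an invented token inventory with hypothetical field names and thresholds, of flagging tokens that are long-lived, over-permissioned, or ownerless:

```python
from datetime import datetime, timedelta

# Hypothetical token inventory; real data would come from the
# platform's token-management or IdP API.
tokens = [
    {"id": "t1", "scopes": ["read:contacts"],
     "issued": datetime(2025, 5, 1), "owner": "integrations-team"},
    {"id": "t2", "scopes": ["read:all", "write:all", "admin"],
     "issued": datetime(2023, 1, 1), "owner": None},  # orphaned, over-permissioned
]

MAX_AGE = timedelta(days=180)  # assumed rotation policy
MAX_SCOPES = 2                 # assumed breadth threshold


def needs_review(token, now):
    """Return the reasons a token should be rotated or revoked."""
    reasons = []
    if now - token["issued"] > MAX_AGE:
        reasons.append("long-lived")
    if len(token["scopes"]) > MAX_SCOPES:
        reasons.append("over-permissioned")
    if token["owner"] is None:
        reasons.append("ownerless")
    return reasons


now = datetime(2025, 6, 1)
for t in tokens:
    print(t["id"], needs_review(t, now))
```

The check is trivial. What the breach demonstrated is that no party believed the obligation to run it was theirs.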
In June 2025, researchers discovered what may be the largest credential exposure in history: approximately 16 billion login credentials compiled from infostealer malware logs, phishing kits, and prior data breaches. The aggregation was not the product of a single corporate hack. It was an archaeological deposit, layer upon layer of credentials that had never been revoked, passwords that had never expired, tokens that had outlived every context in which they were issued. The average price for stolen credentials on one criminal marketplace in 2025 was ten dollars.
Ten dollars for a key to a door that does not know its owner is gone.
The dead-man's switch, reversed
The concept of a dead-man's switch is straightforward in mechanical engineering. If the operator stops applying pressure, the system triggers a response. Trains brake. Machines halt. The absence of input is itself the signal. It is one of the oldest safety mechanisms in industrial design, predating digital infrastructure by more than a century.
The internet has no dead-man's switch.
Instead, it has the inverse. When a user stops pressing the lever, nothing happens. The session persists. The token renews. The account sits open. The absence of input is interpreted as nothing at all, because "nothing" is not a state the system was designed to recognize. In mechanical engineering, the failure mode is inaction in the presence of danger. In digital infrastructure, the failure mode is inaction in the presence of hermetic silence.
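Inverting that default is mechanically trivial, which sharpens the point that its absence is a design choice rather than a limitation. A minimal dead-man's switch for a credential might look like the sketch below (the class, the heartbeat model, and the 30-day window are all hypothetical): sustained silence, instead of being ignored, becomes the trigger.

```python
from datetime import datetime, timedelta

HEARTBEAT_WINDOW = timedelta(days=30)  # assumed policy; tune per identity class


class Credential:
    def __init__(self, owner, issued_at):
        self.owner = owner
        self.last_heartbeat = issued_at  # any authenticated owner activity
        self.revoked = False

    def heartbeat(self, at):
        """The owner 'presses the lever': activity resets the timer."""
        self.last_heartbeat = at

    def check(self, now):
        """Absence of input is itself the signal: revoke on sustained silence."""
        if now - self.last_heartbeat > HEARTBEAT_WINDOW:
            self.revoked = True
        return self.revoked


cred = Credential("ops-admin", issued_at=datetime(2025, 1, 1))
cred.heartbeat(datetime(2025, 3, 1))
assert cred.check(datetime(2025, 3, 15)) is False  # silence within the window
assert cred.check(datetime(2025, 6, 1)) is True    # silence has become the trigger
```

The hard part is not the timer. It is deciding, per class of identity, what counts as a heartbeat and how long silence may run before the architecture is entitled to act on it.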
Attackers have understood this inversion and operationalized it. The Nefilim operators did not force entry. They inhabited silence. They occupied the negative space around a credential that the system assumed was still being used by its original owner. The Midnight Blizzard campaign did the same. The Salesloft-Drift breach did the same. In each case, the attacker's primary advantage was not technical sophistication. It was patience. The willingness to sit inside an identity-shaped void and wait for the architecture to confirm, by its own inaction, that nobody was looking.
That confirmation arrives every time, because the architecture has no mechanism by which to withhold it.
The regulatory fault line
Regulators have begun to feel the edge of this structural gap without quite naming it. The FTC's 2022 enforcement action against Drizly was remarkable not for the breach itself, which exposed 2.5 million consumers' data through credentials stored on an unsecured GitHub repository, but for the remedy. The FTC required Drizly to destroy personal data it had collected but no longer needed. It required the company's CEO, James Cory Rellas, to implement information security programs at any future company he joined, a stipulation that followed the executive personally for ten years.
The order against Drizly is, structurally, a data minimization mandate. Destroy what you do not need. Do not keep credentials active for users who no longer use the service. Do not retain data whose only function is to accumulate risk. The FTC did not use the language of absence architecture. It did not call for inactivity protocols or posthumous data governance. But the enforcement logic points directly at the void: if you cannot account for whether a user still exists, you should not be holding their data.
Article 14 of the EU AI Act, which begins enforcement in August 2026, will require organizations to prove that every AI-driven action was authorized at the time it occurred, not merely when the credential was issued. The shift from issuance-time to execution-time accountability is a regulatory attempt to force the infrastructure to care about temporal context. Did the person who authorized this action still have standing to authorize it at the moment the action was performed? The architecture, as currently built, cannot answer that question.
That inability carries a maximum penalty of 35 million euros or 7% of global annual revenue, whichever is higher.
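The distinction the regulation draws can be made concrete. A hypothetical sketch of issuance-time versus execution-time authorization, assuming an invented registry recording when each principal's standing ended:

```python
from datetime import datetime

# Hypothetical standing registry: when each principal's authority ended.
# None means authority is still in force.
authority_ended = {
    "svc-report-bot": None,
    "j.doe": datetime(2025, 2, 1),  # left the organization
}


def authorized_at_issuance(principal, registry):
    # Today's effective model: the credential was valid when minted,
    # so it is honored for as long as it exists.
    return principal in registry


def authorized_at_execution(principal, registry, now):
    # Execution-time model: standing must hold at the moment of the action.
    if principal not in registry:
        return False
    ended = registry[principal]
    return ended is None or now < ended


now = datetime(2025, 6, 1)
assert authorized_at_issuance("j.doe", authority_ended)            # passes today
assert not authorized_at_execution("j.doe", authority_ended, now)  # fails under the new rule
```

The sketch presumes the one thing the architecture does not have: an authoritative, queryable record of when a principal's standing ended.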
What the infrastructure cannot see
The security industry's current answer to the ghost account problem is lifecycle management. Automate provisioning and deprovisioning. Tie identity systems to HR databases. Implement just-in-time access. Shorten token lifespans. These are reasonable controls. IBM's 2025 Cost of a Data Breach Report found that organizations using AI and automation extensively throughout their security operations reduced breach lifecycle by an average of 80 days and saved nearly $1.9 million in breach costs.
But lifecycle management assumes a lifecycle. It assumes a beginning, middle, and end that the system can observe. It assumes that departure is an event that gets reported, processed, and acted upon within a defined window. For employees at large organizations with mature HR systems, this assumption holds more often than not. For contractors, partners, third-party integrations, service accounts, OAuth tokens issued to applications that have since been deprecated, and the cascading web of non-human identities that now outnumber human ones in most enterprise environments, the assumption collapses.
The average enterprise now runs over 1,200 unofficial applications creating potential vulnerabilities, according to Kiteworks research, with 86% of organizations completely blind to their AI data flows. Shadow AI was a factor in 20% of breaches studied in IBM's 2025 report, adding $670,000 to average costs. These are not identities with clean lifecycle boundaries. They are identities that materialized through convenience, proliferated through neglect, and persist through the architecture's constitutional inability to distinguish a tool that is being used from one that was abandoned three quarters ago with its permissions intact.
Google began automatically deleting unused OAuth clients after six months of inactivity in June 2025. It is one of the first platform-level acknowledgments that dormancy is a signal worth acting on. But it addresses only one surface of a systemic condition. The broader architecture still has no generalized protocol for detecting, measuring, or responding to sustained absence across federated identity systems. Each platform manages its own inactivity thresholds in isolation, if it manages them at all. There is no cross-platform standard. There is no absence API. There is no RFC for "this entity stopped."
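What an absence API might even carry is easy to imagine, which underlines how strange its nonexistence is. Below is a hypothetical payload for the message no RFC defines; every field name is invented for illustration, and nothing like this structure exists in any current identity standard.

```python
import json
from datetime import datetime, timezone


# A hypothetical 'absence assertion': the message no standard currently
# defines. All field names are invented for illustration.
def cessation_assertion(subject, asserted_by, observed_since):
    return {
        "type": "entity-ceased",            # "this entity stopped"
        "subject": subject,                 # the identity in question
        "asserted_by": asserted_by,         # authoritative source (HR, registrar)
        "silent_since": observed_since.isoformat(),
        "reversible": False,                # the terminal state systems lack
    }


msg = cessation_assertion(
    subject="user:j.doe@example.com",
    asserted_by="idp:hr-system",
    observed_since=datetime(2025, 2, 1, tzinfo=timezone.utc),
)
print(json.dumps(msg, indent=2))
```

The open questions are not format questions. They are trust questions: who may assert cessation, how the assertion is verified, and what every federated system is obliged to do upon receiving it.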
The gurney in the hallway
Return to the hallway. Fifteen thousand enabled accounts. Eighty-eight percent of organizations carrying ghost users. Twenty-two percent of breaches beginning with stolen credentials. Two hundred and ninety-two days to detect the ones that do. The infrastructure was built to open doors. It was built to verify that the key fits the lock. It was built to record the timestamp and move on. It was never built to wonder whether the hand turning the key belongs to the person whose name is on the lease, or whether that person left the building months ago and the hand belongs to someone who found the key in a drawer.
Every solution currently deployed to address this gap is a human-speed patch on a machine-speed problem. Every audit is a snapshot in a system that changes between frames. Every deprovisioning workflow is a notification chain with human-dependent links, each one capable of failing silently, each failure invisible to the architecture that depends on it.
The system cannot tell the difference between sleeping and gone. The attacker, the regulator, and the credential all arrive at the same structural question from different directions, and none of them find an answer waiting.
If the architecture has no concept of departure, who decides when the body stops being a user and starts being a vacancy? And who is pushing the gurney?