The exploit surface that vigilance cannot close

Every credential ever issued to a person who stopped showing up is still, architecturally, an open invitation. Shadow accounts, orphaned OAuth tokens, passwords minted before modern security hygiene existed. The infrastructure does not know these doors are unguarded. It does not know the person who held the key is gone. It was never given a way to ask.

Three months after an employee died, attackers found his admin account still active on a corporate network. They used it quietly for a month, moving laterally, stealing domain admin credentials, exfiltrating hundreds of gigabytes of data. On day thirty-one, they deployed Nefilim ransomware across more than one hundred systems. The company had kept the account running because certain services depended on it. The account had high-level access. No alerts fired. No anomaly was flagged. The building had no way to know the person who held those credentials was not the person using them.

The forensic detail is instructive, but the structural observation is more important. The attack did not exploit a software vulnerability. It did not require a zero-day. It walked through a door the system had been told to leave open, using a key the system had no mechanism to revoke on its own. The entire intrusion rested on a single architectural fact: there is no protocol, no API, no standard method by which a network learns that an identity has permanently ceased to operate.

That is not a failure of security hygiene. It is a failure of ontology.

The hallway that never empties

The average enterprise maintains roughly 15,000 inactive user accounts that remain enabled. Varonis's 2025 State of Data Security Report found that 88% of organizations carry stale but enabled ghost users in their environments. These are not accounts waiting for someone to return from vacation. They are accounts whose owners have left the company, retired, transferred, or died, and whose credentials persist because no architectural mechanism distinguishes their silence from the silence of a user who simply has not logged in today. Sit with the weight of that ratio for a moment.

Credential abuse was the initial access vector in 22% of confirmed breaches in 2025, according to the Verizon Data Breach Investigations Report, making it the single most common entry method. Breaches involving stolen credentials took an average of 292 days to identify and contain, the longest of any attack vector. The global average cost of a data breach stood at $4.44 million; in the United States, it reached a record $10.22 million. These are not statistics about sophisticated adversaries outpacing defenders. They are statistics about doors that were never closed, in hallways that were never designed to notice when the occupant stopped walking through them.

The security industry frames this as a problem of access management. Audit your accounts. Revoke credentials promptly. Implement multifactor authentication. Run periodic reviews of Active Directory. All of which is correct, and all of which is manual, human-dependent, and structurally incapable of scaling to the volume of identities modern infrastructure carries.

The ontological deficit

Consider what a login system actually knows about a user. It knows a credential was presented. It knows the credential matched. It knows a session was established. It records timestamps, IP addresses, device fingerprints. What it does not know, and cannot know under current architecture, is whether the entity presenting the credential is the entity to whom it was issued. It cannot distinguish between a returning user and an attacker who found the key under the mat. It cannot distinguish between a user who stopped logging in because they went on sabbatical and one who stopped logging in because they no longer exist.

The system's model of identity is binary. Authenticated or not. Present or absent. There is no third state. There is no "departed." There is no "permanently gone." There is no "the silence you are observing is not temporary inactivity but irreversible cessation." The infrastructure was never given the vocabulary.
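That binary model can be made concrete. The sketch below is illustrative, not any real system's API; every field and function name is invented. It shows everything a typical session store can express about an identity:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Session:
    """Everything a typical login system records: proof that a credential matched."""
    user_id: str
    authenticated: bool          # the only identity fact the system holds
    issued_at: datetime
    ip_address: str
    device_fingerprint: str

def check_access(session: Session) -> bool:
    # The architecture can ask only one binary question. There is no
    # session.departed, no session.deceased, no way to express
    # "the silence you are observing is irreversible cessation".
    return session.authenticated

s = Session("jdoe", True, datetime.now(timezone.utc), "203.0.113.7", "ab12cd34")
print(check_access(s))  # True, whether or not jdoe still exists
```

Nothing in the data model can represent a third state; any notion of permanent departure would have to live outside the architecture entirely.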

This is what makes the problem architectural rather than operational. You can train every IT team on the planet to decommission accounts within 24 hours of an employee's departure. Research from Grip Security still found that 31% of employees retain access to applications from previous jobs. The process fails because it is a process, not a protocol. It depends on human notification chains, HR databases, manager awareness, help desk tickets. Each link in that chain can break. The architecture underneath has no fallback, no self-correcting mechanism, no way to independently infer that an identity has transitioned from dormant to gone.
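The chain-of-links failure mode can be sketched directly. Assuming a hypothetical three-step offboarding process (all names and fields invented for illustration), revocation requires every human link to fire, and a broken link produces no error, only silence:

```python
# Hypothetical offboarding chain: revocation happens only if every human link fires.
# Any link returning False fails silently -- nothing downstream notices.

def hr_records_departure(event):      # depends on HR data entry
    return event.get("hr_recorded", False)

def manager_files_ticket(event):      # depends on manager awareness
    return event.get("ticket_filed", False)

def helpdesk_revokes(event):          # depends on a human working the queue
    return event.get("ticket_worked", False)

def account_revoked(event):
    # A process, not a protocol: no link can be inferred from the others,
    # and the architecture has no fallback when one silently breaks.
    return (hr_records_departure(event)
            and manager_files_ticket(event)
            and helpdesk_revokes(event))

departure = {"hr_recorded": True, "ticket_filed": False}  # manager never noticed
print(account_revoked(departure))  # False: the credential stays live, no alert
```

The failure is not an exception that gets raised; it is a `False` that nothing is watching for.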

A lock that cannot tell the difference between its owner sleeping and its owner buried is not a lock. It is a suggestion.

The weapons built from silence

State-sponsored threat actors have understood this vulnerability for years. In late 2023, Russia's Midnight Blizzard group compromised Microsoft's corporate environment by password-spraying a legacy, non-production test tenant account that lacked multifactor authentication. The account was dormant. Possibly forgotten. It carried permissions that had been granted in an earlier configuration and never revised. The attackers used it as a bridge into Microsoft's production environment, accessing email accounts belonging to senior leadership, legal counsel, and cybersecurity staff.

No vulnerability was exploited. No software was breached. The entire operation pivoted on the structural persistence of an identity that nobody was watching because the infrastructure had no concept of "unwatched."

The Salesloft-Drift breach of August 2025 followed the same structural pattern at SaaS-to-SaaS scale. Attackers stole OAuth tokens from a third-party integration and used them to access Salesforce instances across more than 700 organizations. The tokens were long-lived, over-permissioned, and had never been reviewed, rotated, or revoked. They sat in a governance gap where customers assumed vendors managed token lifecycle and vendors assumed customers did. In reality, nobody owned the credentials. They persisted in a gray zone, architecturally alive but operationally orphaned, until an attacker found them useful.
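A token-lifecycle audit of the kind that would have surfaced such credentials is straightforward to sketch. All names and thresholds below are illustrative assumptions; the hard part in practice is having the inventory at all:

```python
from datetime import datetime, timedelta, timezone

MAX_TOKEN_AGE = timedelta(days=90)   # assumed rotation policy, not a standard

# Illustrative token inventory; real SaaS estates rarely have one this complete.
tokens = [
    {"id": "tok-legacy-01", "issued": datetime(2023, 1, 5, tzinfo=timezone.utc),
     "scopes": ["full_access"], "owner": None},                 # nobody owns it
    {"id": "tok-crm-sync", "issued": datetime(2025, 6, 1, tzinfo=timezone.utc),
     "scopes": ["read:contacts"], "owner": "integrations-team"},
]

def audit_tokens(tokens, now):
    """Flag the three conditions the breach pattern depends on."""
    findings = []
    for t in tokens:
        if now - t["issued"] > MAX_TOKEN_AGE:
            findings.append((t["id"], "stale: past rotation window"))
        if t["owner"] is None:
            findings.append((t["id"], "orphaned: no accountable owner"))
        if "full_access" in t["scopes"]:
            findings.append((t["id"], "over-permissioned"))
    return findings

for token_id, issue in audit_tokens(tokens, datetime(2025, 8, 26, tzinfo=timezone.utc)):
    print(token_id, issue)
```

The check itself is trivial; the governance gap the article describes is that neither party believed running it was their job.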

In June 2025, researchers discovered what may be the largest credential exposure in history: approximately 16 billion login credentials compiled from infostealer malware logs, phishing kits, and prior data breaches. The aggregation was not the product of a single corporate hack. It was an archaeological deposit, layer upon layer of credentials that had never been revoked, passwords that had never expired, tokens that had outlived every context in which they were issued. The average price for stolen credentials on one criminal marketplace in 2025 was ten dollars.

Ten dollars for a key to a door that does not know its owner is gone.

The dead-man's switch, reversed

The concept of a dead-man's switch is straightforward in mechanical engineering. If the operator stops applying pressure, the system triggers a response. Trains brake. Machines halt. The absence of input is itself the signal. It is one of the oldest safety mechanisms in industrial design, predating digital infrastructure by more than a century.

The internet has no dead-man's switch.

Instead, it has the inverse. When a user stops pressing the lever, nothing happens. The session persists. The token renews. The account sits open. The absence of input is interpreted as nothing at all, because "nothing" is not a state the system was designed to recognize. In mechanical engineering, the failure mode is inaction in the presence of danger. In digital infrastructure, the failure mode is inaction in the presence of hermetic silence.
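The inversion is easiest to see side by side. In this minimal sketch, both behaviors are reduced to toy functions: the mechanical switch treats silence as the trigger, while the digital session has no branch for silence at all:

```python
def mechanical_dead_mans_switch(inputs):
    """Mechanical logic: the absence of input is itself the signal."""
    for operator_pressing in inputs:
        if not operator_pressing:   # operator let go
            return "BRAKE"          # silence triggers the safe action
    return "RUNNING"

def digital_session(inputs):
    """Typical session logic: the absence of input means nothing at all."""
    state = "OPEN"
    for operator_pressing in inputs:
        if operator_pressing:
            pass                    # activity refreshes the session
        # no branch for silence: the token renews, the account sits open
    return state

signals = [True, True, False, False]         # the operator stops at step three
print(mechanical_dead_mans_switch(signals))  # BRAKE
print(digital_session(signals))              # OPEN
```

The two functions receive identical input; only one of them has any code path that silence can reach.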

Attackers have understood this inversion and operationalized it. The Nefilim operators did not force entry. They inhabited silence. They occupied the negative space around a credential that the system assumed was still being used by its original owner. The Midnight Blizzard campaign did the same. The Salesloft-Drift breach did the same. In each case, the attacker's primary advantage was not technical sophistication. It was patience. The willingness to sit inside an identity-shaped void and wait for the architecture to confirm, by its own inaction, that nobody was looking.

That confirmation arrives every time, because the architecture has no mechanism by which to withhold it.

The regulatory fault line

Regulators have begun to feel the edge of this structural gap without quite naming it. The FTC's 2022 enforcement action against Drizly was remarkable not for the breach itself, which exposed 2.5 million consumers' data through credentials stored on an unsecured GitHub repository, but for the remedy. The FTC required Drizly to destroy personal data it had collected but no longer needed. It required the company's CEO, James Cory Rellas, to implement information security programs at any future company he joined, a stipulation that followed the executive personally for ten years.

The order against Drizly is, structurally, a data minimization mandate. Destroy what you do not need. Do not keep credentials active for users who no longer use the service. Do not retain data whose only function is to accumulate risk. The FTC did not use the language of absence architecture. It did not call for inactivity protocols or posthumous data governance. But the enforcement logic points directly at the void: if you cannot account for whether a user still exists, you should not be holding their data.

Article 14 of the EU AI Act, which begins enforcement in August 2026, will require organizations to prove that every AI-driven action was authorized at the time it occurred, not merely when the credential was issued. The shift from issuance-time to execution-time accountability is a regulatory attempt to force the infrastructure to care about temporal context. Did the person who authorized this action still have standing to authorize it at the moment the action was performed? The architecture, as currently built, cannot answer that question.
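The difference between the two accountability models can be sketched in a few lines. Assuming a hypothetical identity directory that records when standing was revoked (no such standard structure exists today, which is precisely the gap), the two checks diverge the moment a revocation date exists:

```python
from datetime import datetime, timezone

# Hypothetical identity directory; names and fields are invented for illustration.
directory = {
    "svc-reporting": {"issued_valid": True, "revoked_at": None},
    "jdoe": {"issued_valid": True,
             "revoked_at": datetime(2025, 3, 1, tzinfo=timezone.utc)},
}

def issuance_time_check(actor):
    """Today's model: was the credential valid when it was issued?"""
    return directory[actor]["issued_valid"]

def execution_time_check(actor, when):
    """Execution-time accountability: did the actor still have standing
    at the moment the action was performed?"""
    entry = directory[actor]
    revoked = entry["revoked_at"]
    return entry["issued_valid"] and (revoked is None or when < revoked)

action_time = datetime(2025, 9, 1, tzinfo=timezone.utc)
print(issuance_time_check("jdoe"))                # True: the key still turns
print(execution_time_check("jdoe", action_time))  # False: standing ended in March
```

The second function is only a few lines longer than the first, but it requires a data source (`revoked_at`) that current infrastructure has no standard way to populate.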

That inability carries a maximum penalty of 35 million euros or 7% of global revenue.

What the infrastructure cannot see

The security industry's current answer to the ghost account problem is lifecycle management. Automate provisioning and deprovisioning. Tie identity systems to HR databases. Implement just-in-time access. Shorten token lifespans. These are reasonable controls. IBM's 2025 Cost of a Data Breach Report found that organizations using AI and automation extensively throughout their security operations reduced breach lifecycle by an average of 80 days and saved nearly $1.9 million in breach costs.

But lifecycle management assumes a lifecycle. It assumes a beginning, middle, and end that the system can observe. It assumes that departure is an event that gets reported, processed, and acted upon within a defined window. For employees at large organizations with mature HR systems, this assumption holds more often than not. For contractors, partners, third-party integrations, service accounts, OAuth tokens issued to applications that have since been deprecated, and the cascading web of non-human identities that now outnumber human ones in most enterprise environments, the assumption collapses.

The average enterprise now runs over 1,200 unofficial applications creating potential vulnerabilities, according to Kiteworks research, with 86% of organizations completely blind to their AI data flows. Shadow AI was a factor in 20% of breaches studied in IBM's 2025 report, adding $670,000 to average costs. These are not identities with clean lifecycle boundaries. They are identities that materialized through convenience, proliferated through neglect, and persist through the architecture's constitutional inability to distinguish a tool that is being used from one that was abandoned three quarters ago with its permissions intact.

Google began automatically deleting unused OAuth clients after six months of inactivity in June 2025. It is one of the first platform-level acknowledgments that dormancy is a signal worth acting on. But it addresses only one surface of a systemic condition. The broader architecture still has no generalized protocol for detecting, measuring, or responding to sustained absence across federated identity systems. Each platform manages its own inactivity thresholds in isolation, if it manages them at all. There is no cross-platform standard. There is no absence API. There is no RFC for "this entity stopped."
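What a generalized dormancy check might look like is not complicated; what is missing is any standard for it. A sketch, with the 180-day threshold borrowed from Google's six-month OAuth window purely as an assumed policy value and the account records invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Threshold mirrors Google's six-month OAuth window; the value is an assumption,
# not a standard -- no cross-platform norm for dormancy exists.
DORMANCY_THRESHOLD = timedelta(days=180)

accounts = [
    {"id": "alice", "last_activity": datetime(2025, 7, 20, tzinfo=timezone.utc)},
    {"id": "svc-legacy", "last_activity": datetime(2023, 2, 11, tzinfo=timezone.utc)},
]

def flag_dormant(accounts, now):
    """Treat sustained silence as a signal worth acting on, not as nothing."""
    return [a["id"] for a in accounts
            if now - a["last_activity"] > DORMANCY_THRESHOLD]

print(flag_dormant(accounts, datetime(2025, 8, 1, tzinfo=timezone.utc)))
# ['svc-legacy']
```

Each platform that implements something like this picks its own threshold in isolation; the article's point is that no protocol lets the result propagate across federated systems.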

The gurney in the hallway

Return to the hallway. Fifteen thousand enabled accounts. Eighty-eight percent of organizations carrying ghost users. Twenty-two percent of breaches beginning with stolen credentials. Two hundred and ninety-two days to detect the ones that do. The infrastructure was built to open doors. It was built to verify that the key fits the lock. It was built to record the timestamp and move on. It was never built to wonder whether the hand turning the key belongs to the person whose name is on the lease, or whether that person left the building months ago and the hand belongs to someone who found the key in a drawer.

Every solution currently deployed to address this gap is a human-speed patch on a machine-speed problem. Every audit is a snapshot in a system that changes between frames. Every deprovisioning workflow is a notification chain with human-dependent links, each one capable of failing silently, each failure invisible to the architecture that depends on it.

The system cannot tell the difference between sleeping and gone. The attacker, the regulator, and the credential all arrive at the same structural question from different directions, and none of them find an answer waiting.

If the architecture has no concept of departure, who decides when the body stops being a user and starts being a vacancy, and who is pushing the gurney?
