The most serious religious traditions in the West spent the better part of two thousand years organizing themselves around a single refusal. The name of the God of Israel was not pronounced aloud. After the Second Temple period, the four Hebrew letters — yod, heh, vav, heh — were written into the scroll, read silently, and replaced with Adonai in the mouth, because the name and its utterance were understood to be different events with different consequences. Christian theologians writing in the fifth and sixth centuries argued that positive statements about God, even true ones, diminished their referent by placing it inside a category the referent exceeded. Maimonides, born in Córdoba and writing in twelfth-century Cairo, formalized the position in Judaism: any affirmative attribute assigned to the divine subtracted from it. And in 1215, the Fourth Lateran Council codified a rule that bound the priest who heard confession to silence under any circumstance, including his own execution, his own excommunication, and the public interest of the state in which he stood [4, 5].

These are not small doctrines. They are load-bearing. The civilizations that carried them produced libraries of commentary and schools of mystical theology dedicated to one question: what speech, even when true, should not be transmitted, and how should the medium itself be built to prevent the transmission?

The current internet inherited none of this.

Not the seal. Not the via negativa. Not the four letters. Not Canon 983.1. None of the architecture that any long-running civilization developed to distinguish what could be transmitted from what should. The protocols that now move nine billion content moderation decisions across the European Union every half-year were engineered by a different community working on a different problem, and the problem was packet reliability. The solution treated every bit as equal to every other bit. That decision has been holding for fifty-six years. It has held under every subsequent layer. The infrastructure knows the difference between a transmitted message and a dropped one. It does not know the difference between a message that should have been transmitted and one that should not.

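The claim about the transport layer can be made concrete. The sketch below, in Python, unpacks the twenty fixed bytes of an IPv4 header as RFC 791 lays them out; the parser itself is illustrative, not drawn from any cited source. Every field the protocol defines concerns delivery mechanics: version, fragmentation, time-to-live, checksum, endpoints. There is no field in which audience scope could even be written.

```python
import struct

# The ten fixed fields of an IPv4 header (RFC 791). All of them concern
# delivery mechanics; none can express anything about whether the
# payload should move, or to whom it is appropriate to show it.
IPV4_FIELDS = [
    "version_ihl", "tos", "total_length", "identification",
    "flags_fragment", "ttl", "protocol", "checksum",
    "src_addr", "dst_addr",
]

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the 20 fixed bytes of an IPv4 header into named fields."""
    values = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return dict(zip(IPV4_FIELDS, values))

# A sample header: version 4, header length 5 words, TTL 64, TCP payload.
sample = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40, 1, 0, 64, 6, 0,
    bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
)

header = parse_ipv4_header(sample)
assert header["ttl"] == 64 and header["protocol"] == 6
assert "scope" not in header   # the absence the essay is describing
```

The same absence repeats at every layer above this one; TCP adds ports and sequence numbers, not audiences.
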
The protocol inherited nothing

The question the apophatic traditions asked was not whether a thing was true. It was whether the act of saying the thing, even accurately, could do damage that the thing on its own did not contain. Pseudo-Dionysius, the anonymous Syrian monk who assembled the most influential framework for Christian negative theology, argued in The Mystical Theology that the statement "God is good" was already a concession to human limitation. Not wrong, but not adequate either. The word good bent God into the shape of a word. Gregory of Nyssa, writing a century earlier, had taken the position further: true knowledge of God consists precisely in the seeker's recognition that God is invisible, that the object of the search lies beyond whatever instrument is searching for it. These were not mystics flinching from rigor. They were arguing, at a level of precision the modern philosophy of language would recognize, that certain speech acts carry architectural costs the speaker cannot pay.

The cost was understood to cut in multiple directions. Saying the wrong thing harmed the speaker; saying the true thing to the wrong audience collapsed the distinction between the sacred and the merely available, and damaged both. The rabbis of the Second Temple period stopped pronouncing the tetragrammaton on the understanding that the act of vocalization was structurally incompatible with what the name referred to. A priest hearing confession under Canon 21 of the Fourth Lateran Council was bound "by word or sign or by any manner whatever" not to speak of what he had learned — not even, the modern canon specifies, to save his own life, refute a false accusation against him, or prevent a public calamity [4, 5]. The sacramental seal is inviolable because the sacrament itself depends on the inviolability. Strip the seal and nothing sacramental is left in the room. A priest and a penitent become a conversation between two people.

Internet infrastructure has no version of this. It has terms of service, which are contracts. It has content policies, which are operational guidelines; they change quarterly. It has legal mandates, which are jurisdictional overlays and which vary by country. None of these touch the transport layer. They ride on top of it. The enforcement overlay is enormous — 165 million content moderation appeals filed through internal EU mechanisms between 2024 and mid-2025, thirty percent of them resulting in reversals — and reactive by construction. It processes speech that has already moved. It does not withhold.

The sacred has no packet header

Imagine trying to build a seal of confession on top of the current stack.

The confession would need to be routed through a system capable of verifying the sacramental standing of both the penitent and the confessor, along with the liturgical context of the exchange. None of those are protocol primitives; they do not exist at the transport layer and there is no mechanism by which they could be added without changing the layer itself. The transmission would need to be sealed cryptographically against any civil subpoena, any government demand, any platform administrator with root access, any future legal regime. The data would need to decay at the moment of absolution, leaving no copy anywhere, including the memory of the intermediate routing equipment. And the seal would have to hold across every hop — because the canonical framework, ancient as it is, already considers indirect betrayal a violation of the same prohibition that covers direct betrayal [5, 8]. The priest cannot tell his housekeeper. The priest cannot tell God in a way someone else might overhear. The ISP cannot log the metadata.

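What the decay requirement would look like, even in the friendliest possible setting, can be sketched. The context manager below is hypothetical, not a real API: it zeroes a buffer the moment the exchange ends. Notice how little that guarantees. The erasure covers one buffer in one process, while copies in kernel buffers, router queues, and the other party's memory are beyond its reach.

```python
from contextlib import contextmanager

@contextmanager
def sealed(content: bytes):
    """Hypothetical decaying buffer: best-effort, local-only erasure."""
    buf = bytearray(content)        # mutable, so it can be overwritten
    try:
        yield buf                   # ...the exchange happens here...
    finally:
        for i in range(len(buf)):   # zero every byte on exit
            buf[i] = 0

with sealed(b"the confession") as buf:
    heard = bytes(buf)              # any copy taken here has already escaped

assert all(b == 0 for b in buf)     # the one buffer we control has decayed
```

The assert holds for the single buffer the abstraction governs; the copy named heard, one line earlier, is the entire problem in miniature.
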
Nothing in the modern stack was designed for this. HTTPS encrypts in transit and promises nothing about persistence. End-to-end encryption protects content from intermediaries while retaining metadata and depending on trust in the client software. The GDPR grants a right to erasure on request, which has nothing to say about speech that should never have been retained to begin with. The closest architectural analog was statelessness: HTTP as originally specified remembered nothing between requests, and that memorylessness was engineered away in the mid-1990s, when the cookie arrived, because a web that retained no traces made commerce impossible. The traces are the economy.

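The metadata point is easy to demonstrate. In the sketch below, the cipher is a toy keystream built from SHA-256, a stand-in so the example stays dependency-free, not real cryptography, and the addresses are invented. The payload becomes unreadable; the envelope, which is what the infrastructure logs and what a social graph is built from, stays in the clear.

```python
import hashlib
import os
import time

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (illustrative only, NOT real cryptography)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = os.urandom(32)
body = b"the message itself"
ciphertext = xor_stream(key, body)      # only this part is protected

# Everything an intermediary needs survives encryption untouched:
envelope = {
    "sender": "alice@example.org",      # invented, illustrative addresses
    "recipient": "bob@example.org",
    "timestamp": time.time(),
    "size": len(ciphertext),
}

assert xor_stream(key, ciphertext) == body   # the recipient can decrypt
```

Who spoke to whom, when, and at what length is exactly the record the essay says the economy runs on, and no end-to-end scheme removes it.
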
Canon 983.1 of the Code of Canon Law sits on the other side of that economy. A confessor who directly violates the seal incurs latae sententiae excommunication — the Latin phrase means automatic, at the instant of the act, no declaration required [5, 8]. The protocol executes on its own. The human has no discretion; the architecture does. The modern internet has no such primitive anywhere. No packet is ever latae sententiae undelivered. No server is latae sententiae forbidden to retain. The closest thing to automatic refusal is the firewall, which drops on policy — and policy is something humans negotiate, not something the medium enforces of its own accord. The sacred framework assumed the medium was a moral agent. The transport layer was built on the assumption that the medium is a stenographer.

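The difference between the two models of refusal can be put in code. Both halves of the sketch below are illustrative, with invented names. The first is the firewall model: a list humans maintain, consulted at runtime, changeable by whoever administers it. The second builds the refusal into the object itself, so that every attempt to copy it fails automatically, with no administrator anywhere in the loop.

```python
import copy
import pickle

# 1. Policy-layer refusal: a rule humans wrote and can rewrite.
BLOCKED_TOPICS = {"seal-of-confession"}

def firewall_allows(topic: str) -> bool:
    return topic not in BLOCKED_TOPICS

# 2. Medium-layer refusal: the object refuses duplication by
#    construction. Overriding __reduce__ makes pickling and copying
#    raise automatically, at the instant of the act, with no
#    declaration required.
class Unsealable:
    def __init__(self, content: str):
        self._content = content      # still readable in-process; Python
                                     # can seal copying, not memory
    def __reduce__(self):
        raise TypeError("this object does not permit copies")

s = Unsealable("heard in confession")
for attempt in (pickle.dumps, copy.copy, copy.deepcopy):
    try:
        attempt(s)
        refused = False
    except TypeError:
        refused = True
    assert refused                   # every duplication path is closed
```

The second half is only a within-process illustration; the essay's point is precisely that nothing below the process, at the transport layer, offers an equivalent.
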
Transparency was not designed

Somewhere in the last twenty years, the word transparency began to drift, and the drift matters. It had meant something specific in liberal political theory: the deliberate opening of institutional process to public observation, chosen by the institution, at a cost the institution was willing to pay, for a public good the institution was trying to serve. Louis Brandeis used the word this way when he wrote, in 1913, that sunlight is the best of disinfectants. Woodrow Wilson used it that way. The Freedom of Information Act used it that way. Transparency was affirmative. Scoped. An institution declared itself observable for particular reasons, within particular limits.

What now travels under the same word is something else. Nobody chose it. It has no defined scope, no institutional subject, no public good it was constructed to serve. It is the ambient condition of an infrastructure that cannot separate the speech that should circulate from the speech that should not, and therefore circulates whatever arrives.

In 2018, a team at the MIT Media Lab — Soroush Vosoughi, Deb Roy, and Sinan Aral — published in Science the largest empirical study of rumor diffusion ever conducted on a single platform [9, 10]. Their dataset covered Twitter's full life from its 2006 launch through 2017: 126,000 verified true and false news cascades, spread by roughly three million people, across approximately 4.5 million tweets. The findings were unambiguous. False news was about seventy percent more likely to be retweeted than true news. The truth took roughly six times as long to reach fifteen hundred people as falsehood did. At a cascade depth of ten — the tenth retweet down the chain — lies arrived approximately twenty times faster than facts. When the researchers removed the bots and re-ran the analysis, the gap held. The drivers were human.

This is what gets called transparency. It is an accident of protocol, named after the fact. The apophatic traditions knew how to distinguish. When they refused to pronounce, they were not performing censorship. They were asserting that the act of pronunciation would create a condition the speaker had no standing to create. A censor believes there is a correct thing to say and a wrong thing to say and claims the authority to separate them. An apophatic theologian believes that certain categories of speech damage whatever they touch, and that the proper response is for the medium to decline. The first stance is about who holds power. The second is about what the medium is, and what it can and cannot be made to carry.

The ecclesiopolitan mistake

Here is the word, because the argument needs it. Ecclesiopolitan. Borrowed from ecclesia, the Greek word for the universal assembly — every citizen in one room, every voice equally audible, every utterance heard by everyone present. The ancient ecclesia was a specific civic context with specific rules for who could speak when, and the rules were architectural: you stood on the platform, or you did not.

The internet is ecclesiopolitan by accident. Its early designers were not making claims about the universal assembly; they were making claims about packet delivery. But the transport layer had no concept of audience scope, and so the message I send to one person became, at the level of the underlying protocol, the same kind of event as a broadcast to two billion. The routing differs. The architecture does not.

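At the developer-facing edge of the transport layer, the sockets API, the absence of scope is visible in the call signature itself. The sketch below is illustrative: delivering to one address and delivering to a million is the same call in a loop, and no parameter anywhere expresses who the speech is for in a sense the medium enforces.

```python
import socket

def deliver(payload: bytes, recipients: list[tuple[str, int]]) -> int:
    """Send the same datagram to each (host, port); return bytes sent.

    A private message and a broadcast differ only in len(recipients).
    Nothing in the API distinguishes the two kinds of speech event.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    try:
        for addr in recipients:
            sent += sock.sendto(payload, addr)  # identical event each time
    finally:
        sock.close()
    return sent

# One recipient or two: the same call, the same event at this layer.
# (Port 9 is the "discard" port; UDP needs no listener to send.)
one = deliver(b"hello", [("127.0.0.1", 9)])
two = deliver(b"hello", [("127.0.0.1", 9), ("127.0.0.1", 9)])
assert two == 2 * one
```
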
The apophatic traditions were, by contrast, rigorously scoped. The priest speaks to the penitent. Nobody else is canonically in the room, which means nobody else is actually in the room either — the confessional was architecturally shaped to enforce what doctrine required. The mystic utters the name only in specific liturgical contexts, and only after years of preparation that the community supervises. The rabbi replaces the four letters with Adonai because the public reading is a public act and the name is not public speech. In each case, the medium was aware of the audience; the sacred was protected not by forbidding the mention but by building the conditions under which the mention could even occur.

Every serious human framework for managing truth has carried a scope primitive of some kind. Scientific peer review has scope: a result is not a result until the journal has taken it, and the taking is gated by named humans with reputations they can lose. Legal testimony has scope: what is said in court carries a protocol weight that what is said at the bar afterward does not. Clinical conversation has scope: the patient's disclosure to the physician is held inside a privilege that does not extend to the hallway. These are not ethical overlays draped on top of free expression. They are the structural conditions the speech depends on. Without them, the same words would mean something else, or would not mean anything at all. The digital commons has no scope primitive.

It has audience settings, which are configurable, revocable, and routinely bypassed by a screenshot. It has private channels, which become public the instant any participant chooses to make them so. It has ephemeral messages, which persist wherever the recipient's device has kept them. The architecture enforces nothing. Whatever scope appears to exist is a social convention running in user-space, and user-space conventions never survive the first participant who doesn't honor them.

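The user-space nature of the convention is easy to simulate. The Device class below is an invented, minimal model of two clients in an "ephemeral" chat: each keeps its own independent store, and the sender's delete is a request made of software the sender does not control on the other end.

```python
class Device:
    """Minimal model of one participant's message store (illustrative)."""

    def __init__(self):
        self.store: dict[str, str] = {}

    def receive(self, msg_id: str, text: str) -> None:
        self.store[msg_id] = text

    def delete(self, msg_id: str) -> None:
        self.store.pop(msg_id, None)

sender, recipient = Device(), Device()
sender.receive("m1", "meant to disappear")
recipient.receive("m1", "meant to disappear")

sender.delete("m1")                  # the "disappearing" the sender sees

assert "m1" not in sender.store
assert recipient.store["m1"] == "meant to disappear"   # the copy survives
```

A screenshot is the same fact in another form: a store the protocol does not govern.
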
The court of permanent record

In June of 2025, the Supreme Court of Brazil ruled that social media platforms were accountable for illegal user-generated content published on their services. Six of eleven justices backed fines for non-removal. The court was attempting, imperfectly, to impose a scope primitive on a medium that has none — to make the platform liable for its architectural inability to distinguish between things it was never designed to distinguish. The European Union's Digital Services Act, which became fully applicable in February 2024 and began enforcing transparency reporting the following year, takes a different angle on the same problem. It forces platforms to document their moderation choices in auditable formats, on the theory that visibility into moderation creates accountability for it [6, 12].

These efforts are earnest and, at a structural level, aiming at the wrong layer. They treat the problem as platform behavior, when the platforms are themselves running on infrastructure that cannot give them the primitives they would need to behave any other way. A platform under the DSA can document its moderation. It cannot refuse in advance to transmit a category of speech it has not yet received, because the transport layer has no way to know what the speech is until the speech has already moved. Moderation arrives after the packet. The confessional seal arrived before the speaking. The gap between those two conditions is not a policy gap. Regulation cannot close it, because regulation sits above the layer where the problem lives.

Somewhere in a contractor's facility — Dublin, Manila, Austin, Warsaw, depending on which platform and which shift — a woman working at a ticket queue marked Priority 1 has forty-five seconds to decide whether a video of a teenager's suicide stays up or comes down. The video has already circulated. Automated systems routed it to devices in numbers she does not know and cannot see; the platform's transparency report, filed next year, will disclose an aggregate. Her job is to make a determination that the architecture would not make, at a speed that matches the architecture's, while the architecture continues to distribute what she has not yet ruled on. Every content moderator is a priest working after the confession has been broadcast.

The 2025 Pew survey of twenty-five nations found that a median of seventy-two percent of respondents considered the spread of false information online a major threat to their country. In twenty-four of the twenty-five countries surveyed, a majority held this view. This is a public that has intuited the architectural condition without being able to name it. The medium is not built for the load. The load is being carried anyway. Nothing structural is in place to stop it.

What the seal protects

The common misreading of the sacramental seal is that it protects the sinner. It does not. It protects the sacrament — the specific structural condition under which a confession can occur at all. If the penitent cannot be certain that the content of the confession will never leave the room, the penitent will not confess. Without confession there is no sacrament; there is only a conversation between two people, one of whom is wearing a stole [5, 8]. The seal is the architectural precondition that makes a particular speech event possible. Remove the precondition and the event stops being that event. Something else happens, wearing the same clothes.

A scope primitive, at the protocol level, is this kind of precondition. The priest-penitent relationship is one. Attorney-client privilege is another. The physician's duty of confidentiality is another. The researcher's obligation to their subject is another. Each creates a protected space in which a class of speech becomes possible that cannot exist outside the space. Remove the protection and the speech does not become more available. It disappears. The conditions under which it would have made any sense are gone.

The current internet is a large-scale demonstration of what happens when the scope primitives are absent. The speech does not become more honest. It becomes impossible to hold. Private disclosure turns into public performance. Confession turns into exposure. Whistleblowing turns into harassment. Testimony turns into content. In the older frameworks, each of these was a different speech event, with a different envelope, carrying a different weight. In the new architecture they are all the same event — a string of characters, some metadata, some routing — and the protocols do not care what the event is or to whom it is addressed. The envelope is gone. What is left is content, and content without the envelope means something other than what the speech act used to mean, and the difference is not a small one.

The philosophical disinheritance

Every civilization that survived more than a few hundred years produced some framework for what should not be said. Most of them went further and encoded the framework into the medium itself, rather than leaving it to culture and exhortation.

The Athenians built courtrooms with procedural silences written into the process. The Romans built advocate-client privileges that bound the orator for the rest of his life. The rabbinic tradition built textual apparatus around the four letters such that a Torah scroll with the name transcribed incorrectly is invalid — the prohibition is enforced by the physical artifact itself. The Catholic Church built a canonical structure around the confessional seal that has held, with its core provisions essentially intact, since 1215 [4, 5]. Each of these was an architectural decision about the medium, not an ethical appeal to the user. The medium was the thing that was built to refuse.

The internet made a different decision. It was made in 1969, when the early ARPANET designs assumed a medium without audience scope, and it was inherited by every layer that followed. TCP did not revisit it. HTTP did not revisit it. OAuth did not revisit it. The platforms that grew on top of those layers could not revisit it, because the layers beneath gave them nothing to work with. Everything since — the GDPR, the DSA, the Brazil ruling, the bills filed in Montana and Washington to compel priests to testify about what they hear in confession — has been a patch applied too far up the stack to change the condition underneath.

The apophatic traditions were not being humble. They had worked out, over centuries, that some speech events require architectural precondition, and that the precondition cannot be faked at runtime. The seal does not work retroactively. The tetragrammaton is not selectively pronounced. The via negativa is not a style. These were structural commitments the medium honored because the speech they enabled could not exist without the medium's cooperation.

The modern internet honors none of them, and the speech those frameworks enabled is becoming harder to find. Private reflection has nowhere to settle that will not be logged, aggregated, and eventually surfaced. Confession has no protected channel. The speech act that required architectural protection in order to exist now exists in public or does not exist, and the version that exists in public is a different speech act, wearing the costume of the one the older architectures protected.

The pattern nobody named

The era being called an era of transparency is, more precisely, an era of architectural indifference. The word transparency was applied to the condition later, because calling it transparency makes the condition sound as though someone chose it. Nobody chose it. Nobody built the opposite, either, and the opposite of transparency at the architectural level is not secrecy. Secrecy is a policy. The opposite of transparency is scope: the structural capacity of the medium to distinguish the audiences for whom a given speech act is appropriate from the audiences for whom it is not.

Scope is what the priest-penitent relationship enforces. It is what the unpronunciation of the four letters enforces. It is what attorney-client privilege enforces, and what the physician's duty enforces, and what every serious speech framework in the history of human institutions has enforced. Each is a scope primitive encoded into the architecture of a particular kind of speech event, and each has held for centuries or millennia because the architecture is what made the speech possible to begin with.
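The difference between secrecy as policy and scope as architecture can be sketched in code. The sketch below is illustrative only: the class names are hypothetical, and the XOR step stands in for real cryptography rather than being a usable cipher. The point is structural. In the first class, the plaintext and the capability to disclose it always exist, and a flag merely discourages disclosure; in the second, the channel never holds a readable message for anyone outside the intended audience.

```python
class PolicyChannel:
    """Secrecy as policy: the channel can always reveal; a rule says it should not."""

    def __init__(self, message: str):
        self._message = message       # plaintext is retained inside the channel
        self.confidential = True      # a policy flag, revocable at runtime

    def read(self, override: bool = False) -> str:
        # A subpoena, a settings change, a new jurisdiction: each is just override=True.
        if self.confidential and not override:
            raise PermissionError("confidential")
        return self._message          # the capability to leak never went away


class ScopedChannel:
    """Scope as architecture: outside the audience, there is no message to reveal."""

    def __init__(self, message: str, audience_key: int):
        # Toy XOR seal; a stand-in for real cryptography, NOT secure.
        self._sealed = bytes(b ^ audience_key for b in message.encode())
        # No plaintext field exists; the channel itself cannot produce one.

    def read(self, audience_key: int) -> str:
        # Only the holder of the audience key recovers the original speech act.
        return bytes(b ^ audience_key for b in self._sealed).decode()
```

On this toy model, the seal of confession behaves like `ScopedChannel`: there is no override parameter to pass, because disclosure is not an operation the structure supports. The data center behaves like `PolicyChannel`: confidentiality is a runtime condition, and the override is whatever the jurisdiction requires.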

The internet has no scope. The word transparency is what gets said to avoid saying that. When the Fourth Lateran Council codified the confessional seal in 1215, it was not writing a rule for an institution. It was encoding a protocol for a civilization. The protocol said: there is a class of speech this community needs to be able to generate, and that class will not be generated unless the medium itself refuses to transmit it. The medium, at that point, was the priest — a human being bound by canon law, supported by the automatic excommunication clause, and defended by a theological framework that made the betrayal architecturally unthinkable [4, 5, 8].

The modern medium is a data center. It is bound by terms of service, supported by content moderation workflows, and defended by a legal framework that is being renegotiated in real time by governments that disagree about what the framework should do [11, 12, 14]. All of this is contingent. None of it is architectural. The priest dies before breaking the seal, because the seal and the sacrament are the same structure. The data center does whatever the jurisdiction requires, because it is equipment. The distance between those two postures is the whole question.

If the medium cannot tell a confession from a broadcast, cannot tell a sacrament from a transmission, and cannot refuse to carry what it was never designed to hold, then which of two things has actually happened: did the old frameworks collapse because the new medium outran them, or were they never compatible with a medium built this way, so that the collapse was present in the design before any user ever sent anything?
