Above eighteen thousand feet, the sky in the United States is Class A airspace. No aircraft enters it without clearance. Every pilot inside it is on an instrument flight plan, in two-way radio contact with a controller, transponding a discrete code that places the aircraft on a screen at a center on the ground. Air traffic control separates every IFR aircraft from every other by a defined distance in three dimensions. There is no see-and-avoid in Class A. The medium itself does the separating.

Drop down through Class B, C, D, E. Each tier loosens. Less radar, fewer mandatory communications, weaker separation guarantees. Below the floor of Class E, in the airspace that fills most of the volume above most of the United States most of the time, control ends entirely. Class G is uncontrolled. No clearance is required, no controller is watching, and the rule that keeps two airplanes from occupying the same cubic meter of sky at the same instant is the human eyeball of whoever happens to be flying through. The FAA's Aeronautical Information Manual states it plainly: pilots in Class G are responsible to see and avoid other aircraft.

The internet was built as a single-class system. There was no Class A. There were just packets, and the packets all moved the same way. For thirty years that worked, because the volume was low enough and the participants homogeneous enough that the see-and-avoid rule held. Then the volume grew. The participants stopped being homogeneous. Different traffic appeared, in different aircraft, flying different procedures, and the medium had no way to separate them.

What's happening now is what happens at any uncontrolled airfield once traffic crosses a threshold nobody planned for. The pilots who can afford to leave the pattern file IFR and climb out. The airspace they leave behind keeps existing. The regulations still apply, the sectional charts still print, but it's no longer where the flying that matters is happening, and the people doing the flying that matters are mostly somewhere else.

The runway is closing

In April 2025, the cybersecurity firm Imperva released its twelfth annual bot traffic report. The headline finding crossed a threshold the industry had been watching for nearly a decade. Automated traffic took the majority share of all web requests in 2024 for the first time in ten years, with bots reaching "51% of all web traffic". Malicious automated traffic, the kind engaged in scraping, fraud, account takeover, and API abuse, made up roughly thirty-seven percent, the sixth consecutive year that figure has climbed. Travel sites took the worst of it. Nearly half the requests landing on the booking surfaces of airlines, hotels, and online travel agencies in 2024 came from bad bots, with human visitors barely outnumbering them and good bots a rounding error against both.

Travel sites are now flown by aircraft they cannot see. Forty-eight percent of the traffic at the gate isn't someone trying to book a flight. It's something pretending to be someone trying to book a flight, doing it at machine cadence, against the same booking surface the actual passenger is using. (Imperva also notes, in passing, that ByteSpider, TikTok's crawler, alone accounted for fifty-four percent of all AI-enabled bot attacks measured in 2024. Hard to know what to do with that fact, but worth marking down somewhere.)

Nearly half the "visitors" to travel booking sites aren't real people - they're bots pretending to be customers, running at computer speed, using the same booking tools real passengers use. One company's AI crawler alone was responsible for more than half of all AI-powered bot attacks measured that year.

What arrives at the gate is also changing in kind, not just in volume. In December 2025, the content firm Kapwing published a year-end analysis that put synthetic, AI-generated material at more than half of all newly published English-language articles, with roughly twenty-one percent of short-form video recommendations on major platforms also machine-made. A separate Graphite analysis from October 2025, drawing on a random sample of 65,000 English-language articles published between January 2020 and May 2025, hit the same crossing at a slightly different threshold and a slightly later month: AI content peaked above fifty percent in late 2024 and has hovered near a fifty-fifty split since. Different sample, different methodology, different exact crossing point. The shape of the curve is the same.

A small newsroom called 404 Media noticed something else in the data. The four reporters who founded the publication after Vice's bankruptcy were watching their stories get scraped, paraphrased by AI text spinners, and republished elsewhere with better search rankings than the originals. They could see, in co-founder Emanuel Maiberg's words to Nieman Lab in early 2024, "how Google is degrading in quality" alongside the parallel decline of their own discoverability against AI-spun derivatives of their work. They responded by paywalling more of their reporting and asking readers to fund it.

Set the data classes side by side. More than half the traffic on the open web is now non-human. More than half the new written content on the open web is now non-human-authored. The sectional chart has been redrawn. The airspace the average reader is flying through is no longer the airspace they think they are in.

The 402 reawakens

On July 1, 2025, the internet infrastructure provider Cloudflare did something that seems, in hindsight, structurally inevitable. The company flipped the default for every new domain on its network from permissive to restrictive: AI crawlers are blocked unless the site owner explicitly grants them passage. Cloudflare powers one of the world's largest networks, managing and protecting traffic for roughly 20% of the web. One-fifth of the public internet stopped being uncontrolled airspace overnight.

Then Cloudflare did something stranger, and more interesting. The company exhumed an HTTP response code that has lived in the protocol stack since 1992 and has been used for almost nothing in the entire history of the web. The code is 402. Its reserved name, since the day it was written into HTTP/1.0, is Payment Required. The early web's authors apparently anticipated that someone would eventually need to charge for a request, reserved the slot, and then nobody used it for thirty-three years. Under Cloudflare's new mechanism, an AI crawler arriving at a participating site receives one of two responses: a 200 OK with the content if it has signaled willingness to pay, or a 402 with the publisher's price if it has not. Cloudflare announced a private beta of pay-per-crawl with itself as merchant of record, allowing publishers to set a per-request rate and collect when an AI crawler accepts the terms. By January 2026 the customizable 402 was generally available across the company's paid customer base.
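
The mechanics of that exchange are simple enough to sketch. What follows is a toy server-side version in Python; the header names ("crawler-max-price", "crawler-price", "crawler-charged") are illustrative stand-ins, not Cloudflare's actual wire format:

```python
# Toy sketch of the pay-per-crawl decision described above.
# Header names are illustrative, not Cloudflare's actual protocol.
from dataclasses import dataclass

@dataclass
class Response:
    status: int
    headers: dict
    body: str

def handle_crawler(request_headers: dict, content: str, price_usd: str) -> Response:
    """Return 200 with content if the crawler agreed to pay, else 402 with the price."""
    offered = request_headers.get("crawler-max-price")
    if offered is not None and float(offered) >= float(price_usd):
        # Crawler signaled willingness to pay at or above the publisher's rate.
        return Response(200, {"crawler-charged": price_usd}, content)
    # No acceptable payment signal: 402 Payment Required, with the price quoted.
    return Response(402, {"crawler-price": price_usd}, "")

print(handle_crawler({}, "<html>page</html>", "0.01").status)                            # 402
print(handle_crawler({"crawler-max-price": "0.05"}, "<html>page</html>", "0.01").status)  # 200
```

A crawler that sends no price signal gets the 402 and the quoted rate; one that offers at or above the publisher's rate gets the content and a record of the charge.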

To understand why this matters, look at what the protocol revealed about the existing economy. Cloudflare's Radar team published the underlying ratios. As of June 2025, OpenAI's crawlers were ingesting roughly seventeen hundred pages for every visit they sent back to a publisher's site. Anthropic's were ingesting roughly seventy-three thousand pages per referral. Seventy-three thousand to one isn't a degraded version of the search-and-traffic deal. It's a different deal entirely, possibly not a deal at all. A small-site operator with a server log open in another tab can run that math: the AI assistant ingests their entire archive in an afternoon and returns, statistically, almost no one. The deal publishers struck with Google in 2002, in which the search engine indexed pages and returned readers in proportion, hasn't survived its descendant. The new crawlers ingest. The traffic that used to come back as consideration in the trade no longer arrives.
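
That back-of-the-envelope math is worth making explicit. Using the Radar ratios quoted above and a hypothetical 5,000-page archive:

```python
# The arithmetic a site operator can run against their own server logs.
def expected_referrals(archive_pages: int, pages_per_referral: float) -> float:
    """Visits a publisher can expect back after a crawler ingests its archive."""
    return archive_pages / pages_per_referral

# Against the June 2025 Cloudflare Radar ratios, a 5,000-page archive yields:
print(round(expected_referrals(5_000, 1_700), 1))    # 2.9 visits back at OpenAI's ratio
print(round(expected_referrals(5_000, 73_000), 2))   # 0.07 visits back at Anthropic's
```

At seventy-three thousand to one, a fully ingested archive returns, in expectation, a fraction of a single reader.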

The 402 sat in the standard for the same reason a toll booth at the edge of a road sits unmanned for years before anyone strings up the gate arm: until the toll is worth collecting, the infrastructure of collection is a costume. The web's underlying exchange held without enforcement until it didn't.

The premium concourse

There's a kind of object that exists inside many large institutions and serves a specific economic function: the parallel facility for people willing to pay for separation. Hospitals have private wings. Universities have donor-only libraries. Airlines have premium concourses, and the concourse is probably the cleanest example because everyone has walked past one. A separate door, a separate line, a lounge with a person at the desk whose job is to know who you are. It's all inside the same building as the regular terminal, on the same runways, but configured so the experience inside the partition is fundamentally different from outside. The cost of admission is a credit card with a particular logo, or a fare class above a particular threshold, or a status tier earned through prior flights. The architecture is identical for everyone outside it. It's different for everyone inside it.

The publishing platform Substack, founded in 2017, closed a Series C in July 2025 led by BOND that valued the company at $1.1 billion. By March 2025 it had crossed five million paid subscriptions, more than doubling its 2023 paid base. According to remarks the company's CEO made at a Los Angeles industry event in June 2025, the platform now hosts more than fifty individual writers earning over a million dollars a year from subscriptions alone — not a million in revenue across some larger media operation, but a million in subscription dollars going to one person with a newsletter. The historian Heather Cox Richardson's Letters from an American clears more than five million dollars annually on its own, paid directly by readers who chose to subscribe.

The reader-supported tech publication 404 Media, four reporters working from home with no office and no investors, was profitable within six months of launching. Their model is uncomplicated: ten dollars a month or a hundred dollars a year, paid by the reader, ad-free. The four cofounders own the company outright. The Financial Times, in a 2024 piece on the layoffs collapsing the rest of the industry, named them as one of the few new media ventures demonstrably working.

There's a temptation to describe what's happening here as a return to subscription journalism, as though the model were a revival of an older economy. The collision is more precise than that. The hospital private wing didn't exist when general hospitals were founded. It came into being because the unmanaged shared ward stopped being able to do what people willing to pay extra wanted, and because there was a price point at which a parallel arrangement could be built to do it instead. Substack, 404 Media, Defector, Hell Gate and the dozens of reporter-owned publications that followed are operating on the same logic. They aren't better journalism in some ethereal sense. They're journalism that has separated itself from the public surface where the bots and the slop are now the dominant traffic.

Members only after the gate

This isn't just "subscriptions are back." The premium layer is forming because the free, open internet has been overrun by bots and low-quality AI content, and people who want something different are willing to pay for separation from it. These new outlets aren't necessarily doing better journalism - they're doing journalism that has moved away from the broken public surface entirely.

The platform that has absorbed the most of what was once public conversation is Discord. The numbers are, by 2025, no longer marginal. The platform crossed two hundred million monthly active users in May 2025, growing by roughly a third from its 2023 base of around one hundred fifty-four million. It now supports approximately thirty-two million servers, the great majority of which are not publicly indexed. According to data the company's co-founder Stanislav Vishnevskiy has discussed publicly, the bulk of activity on the platform happens inside servers of fewer than fifteen people. Roughly nine in ten of Discord's private servers fall below that threshold, and roughly nine in ten of total time-on-server occurs inside them.

Two hundred million people are spending time on a platform whose core unit is a room of fewer than fifteen people that nobody outside the room can see. The room has a door, the door has a guest list, and the guest list is maintained by whoever is hosting. Some of these rooms charge admission. Discord launched paid memberships and channel monetization in 2024; server admins can now set fees for access, typically two to ten dollars per month for premium channels and exclusive content.

This is not a phenomenon adjacent to the larger bifurcation. This is the bifurcation, executed at the user-facing layer, with no architectural pretense that it's anything else. The public web is the apron. The Discord server with the invite link from a paid newsletter is the FBO, the fixed-base operator's terminal across the airport from the commercial gates, where the people who can afford the difference get a different experience of the same physical airfield. Roughly seventeen thousand FBOs serve private aviation worldwide, with about four thousand in the United States. Paris-Le Bourget alone has eight FBOs in competition for the same private traffic. None of this is new. Private terminals at public airfields have been operating, profitably, for the better part of a century, and the bifurcation they represent is a settled fact of how aviation works.

The hagiographic framing of independent media often collapses into a simpler observation: the people who can afford to leave are leaving, the architecture they are leaving for asks something of the visitor at the door, and the door does the sorting that the public web no longer can.

The economics of separation

The bifurcation isn't a conspiracy. There's no committee that decided the public internet should degrade so subscription revenue could flow to walled platforms. The bifurcation is what infrastructure does when its capacity to serve a certain quality of experience falls below the price point at which that quality could be sold separately. Economists who study airline pricing have a name for it: price discrimination, the practice of segmenting customers by what they're willing to spend. So do the people who study cable television, gym memberships, and any other service sold in tiers. The mechanism isn't new. The result is the same. Quality migrates upward into the tier where it can be paid for, and the lower tier persists for the people who cannot or will not pay.

The unusual thing about the current moment is how openly the separation is being narrated by the people performing it. Cloudflare's CEO Matthew Prince has been the most direct. The company's July 2025 announcement, signed by Prince, framed it simply: the previous internet's economic settlement had broken. For two decades the trade was straightforward. Search engines crawled. Traffic returned. Ad revenue and subscriptions funded the original work the search engines were indexing, and the cycle largely held. It didn't survive the transition to AI assistants. The new generation of crawlers ingests content for the purpose of generating answers inside their own products, and the traffic that used to flow back as the consideration in the trade now arrives as a fraction of a percent of what it was. Publishers are left with the operational cost of being scraped and none of the offsetting revenue. Cloudflare's response: install, by default, a policy that one-fifth of the web now enforces. AI crawlers have to opt in, declare their purpose, and may be charged.

Whether or not pay-per-crawl scales, the architectural shift it represents is the relevant story. A network operator with one-fifth of the web's traffic decided the medium itself had to start refusing requests it couldn't adjudicate, and that the refusal had to be the default. Whatever the previous regime was, it isn't the regime any longer.

Where this ends up, if it keeps going, is a web where the meaningful exchanges happen behind authentication, the ungated surface is left to crawlers and to whatever the crawlers' outputs eventually return to publish, and the question of which tier a piece of content lives in is the question that determines whether anyone trustworthy ever reads it.

What's left in the uncontrolled airspace

Class G airspace isn't unsafe in any technical sense. The aviation regulations that govern it are well-defined, the visibility minimums are documented, and the equipment requirements are minimal because the level of service is minimal. It's just airspace where the medium is no longer participating in the management of the traffic. Pilots in Class G are responsible, by federal regulation, to see and avoid every other aircraft sharing the volume of sky they happen to be in. The regulation works because, in most of the rural United States most of the time, the traffic density is low enough that the rule has time to operate.

Apply the same regulation to the airspace above LaGuardia at five p.m. on a Friday and the regulation doesn't work, in the technical sense. It would still be in force, the FAA would still print it. No pilot could actually obey it, because the traffic density has crossed the threshold beyond which the rule's underlying assumption no longer holds. This is roughly what's happened to the public web. The see-and-avoid rule the early protocols were built on assumed the participants were close enough in number, intent, and cadence to perceive each other and adjust. Whatever attention the participants have to spare for noticing each other is now badly outmatched by the volume flying through. More than half of it isn't human, and a growing portion of what is human has been digested or paraphrased through something that isn't.

Reddit's co-founder Alexis Ohanian, speaking in October 2025, described the condition in market terms. In a bot-flooded environment, he argued, anything that offers verifiable "proof of life" becomes commercially valuable, and the live-streamed, real-time, unmistakably human signal becomes the scarcest commodity on the network. NewsGuard's analysts reported in May 2025 that more than a thousand news websites were operating with content generated almost entirely by AI, often presenting themselves as legitimate publications. The Columbia Journalism Review tested eight major AI-search products in March 2025 and found they returned inaccurate or misleading answers more than sixty percent of the time, nearly always without acknowledging any uncertainty. The traffic in Class G is operating, but it no longer has an agreed referent for what counts as the truth of where it is.

There's a question worth setting on the table: is a library that asks you to be quiet the same thing as a club that charges you to enter? The answer the aviation system itself supplies is no. A control tower that asks for your call sign before you enter Class B is not the same kind of thing as an FBO that charges for a hangar. They look similar from outside. The first is a separation rule available, by clearance, to everyone in the regulated category, regardless of cost. The second is a service available, by payment, to whoever can pay. The current internet is producing both at once, and it's getting harder to keep the two categories straight.

The transponder check

The architecture being assembled in 2025 and 2026 isn't the architecture anyone designed at the protocol's birth. The 402 response code that Cloudflare resurrected was written into HTTP in 1992 and left dormant for thirty-three years because no major actor in the network's economy needed it. The subscription tier on Substack and 404 Media isn't a return to print-era publishing economics; it's a different exchange, conducted on infrastructure that prices admission and verifies the payer. The Discord server with the paywalled link from a practitioner's newsletter isn't a return to the bulletin-board systems of the 1980s; it's a private terminal at a public airfield, separated by a different door.

Above eighteen thousand feet, the pilot transmits a discrete transponder code and the controller knows which aircraft is which. The medium can separate them because the medium can identify them, and the identification's encoded in the equipment the aircraft must carry to be in the airspace at all. The internet has begun, slowly, to grow analogous primitives. Verified bot programs. Cryptographic message signatures. Pay-per-crawl headers. None of these is universal, and none has the regulatory weight of a transponder requirement in Class A. The trajectory isn't toward a single shared sky. It's toward tiers. Higher tiers will require the equipment. Lower tiers will operate on see-and-avoid until the traffic density makes the rule unworkable, and then a little longer after that.
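
The transponder analogy can be made concrete. The sketch below is a toy version of the verified-bot idea: a crawler registers a key out of band and signs each request, and the origin checks the signature before deciding what service the visitor gets. Real proposals use asymmetric HTTP message signatures; the HMAC scheme and the registry here are illustrative only.

```python
# Toy "transponder check": a crawler signs its requests with a key it
# registered out of band, and the origin verifies before serving.
# Real verified-bot proposals use asymmetric HTTP message signatures;
# HMAC here just keeps the illustration self-contained.
import hashlib
import hmac

REGISTERED_BOTS = {"examplebot": b"shared-secret-from-registration"}  # hypothetical registry

def sign(bot_id: str, method: str, path: str, key: bytes) -> str:
    """Sign the identifying fields of a request with the bot's registered key."""
    msg = f"{bot_id} {method} {path}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(bot_id: str, method: str, path: str, signature: str) -> bool:
    """Check a request's signature against the registry before serving it."""
    key = REGISTERED_BOTS.get(bot_id)
    if key is None:
        return False  # unknown transponder code: treat as unverified traffic
    expected = sign(bot_id, method, path, key)
    return hmac.compare_digest(expected, signature)

sig = sign("examplebot", "GET", "/article", REGISTERED_BOTS["examplebot"])
print(verify("examplebot", "GET", "/article", sig))   # True: identified traffic
print(verify("unknownbot", "GET", "/article", sig))   # False: no registered key
```

A request that verifies can be routed to the paid or permitted tier; one that doesn't falls back to whatever the unverified tier still offers.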

There are two things an airfield can do when a runway can no longer separate the traffic that needs separation from the traffic that doesn't. It can build a second runway with a different clearance procedure. Or it can let the unmanageable traffic land somewhere else and stop pretending it was ever managing it. The web is doing both at the same time, in real time, on infrastructure that wasn't built to do either, and the people pricing the new tier and the people maintaining the old one are looking at the same control tower screen and seeing different airspace.

If the old runway is the one most people are still landing on, and the new runway is the one most of the journalism and most of the practitioners and most of the conversations worth having have already moved to, what do we call the airspace everyone has agreed to keep flying through together while pretending it is still a single sky?
