It must have sounded very good indeed. How else would most of the states have agreed to give up their long-standing rate-of-return regulation of their local telephone monopolies, if not in exchange for something better?
A so-called alternative form of regulation, also known as rate caps, was offered to them in exchange for upgrading their wireline telephone networks to all-fiber-optics operating at broadband speeds. Sound like recent news? No, it wasn’t. The big Bell bait-and-switch took place in the early 1990s. And the new networks were supposed to have been in place almost everywhere by now.
So what happened? What were these new networks supposed to be, and why weren’t they built?
Bruce Kushnick of the New Networks Institute has written about the regulatory and financial aspects of the deal in his book “The $200 Billion Broadband Scandal.” But the name of his 2005 book is out of date. The current edition is called “The $300 Billion Broadband Scandal.” That’s how much extra money the Bell companies have made by getting rid of rate-of-return regulation, in exchange for promising to bring 45 Mbps wide open common carrier fiber optic networks to every home in their service territories.
Of course, those networks don’t actually exist. They weren’t built for a variety of reasons. The most obvious one, which Kushnick expounds, is that the Bells never really intended to build them; it was regulatory window dressing. For this, telecom attorney Christopher Savage has summarized his observations into Kushnick’s Law:
A regulated company will always renege on promises to provide public benefits tomorrow in exchange for regulatory and financial benefits today.
The past is a foreign country; they do things differently there
Most of the price cap deals were made in the 1992-1993 timeframe. In telecom terms, that was a very different place indeed, like the “foreign country” in the famous quote from L. P. Hartley’s The Go-Between. The most obvious difference is that the Internet was largely a government-controlled research network, not open to the public, and it was nothing like what we see today. The last vestiges of government control were ending and commercial ISPs were just starting up, but they were tiny and speeds were, by today’s standards, glacial. The other obvious difference is that the Telecom Act of 1996 had not yet passed, so the Bell monopolies were still protected by law in most states. Absent a public Internet, cable modems also had not yet gotten past their inauspicious rollout, in a few places, as a “telecommuting” technology.
Yet the telephone carriers were offering 45 megabits to the home. The killer application was not Internet; it was television. They were offering to compete with the cable industry. And if you think that the telephone industry has seen a lot of changes since 1993, you haven’t looked at cable!
Until the Cable Act of 1992, local authorities were permitted to offer exclusive franchises to cable operators. Competition was simply banned in many places. That rule was lifted in 1992 but the economics have not changed much. Some cities did not have exclusive franchises but they also didn’t have competition. What usually happened was that more than one company would stake out ground and build until it ran into plant built by a competitor. Then they’d go elsewhere. It was a race to be the first provider to a given location, which was always a better investment than being the second provider anywhere. This left some cities with patchworks of cable systems, which often resulted in swaps and purchases, as systems then consolidated.
Cable systems in 1992 had very limited capacity. Since they were exclusively run on coaxial cable, whose attenuation rises steeply with frequency, they were limited in bandwidth to the range of about 54 to 350 MHz. This left them room for fewer than 50 channels. (Boston’s high-end system from Cablevision worked around this by using two parallel cables.) And some of the day’s big cable operators were not (to be polite) well-known for fine maintenance policies. So allowing the telephone companies to build new fiber optic networks to compete with cable seemed to be a very pro-consumer idea. And to prevent the phone companies from controlling the programming, these new networks had to be open: Rules dictated that programs could be supplied independently, not exclusively chosen by the network operators.
Cable TV, however, was not the only application. Data was certainly on the table. Not “broadband” as we know it today, and not simply “Internet”, but high-speed common carriage of bits.
Since the first 1968 Computer Inquiry, the FCC had distinguished between “basic” and “enhanced” services, and the Computer III rules in effect in 1992 required all local telephone companies to maintain accounting separations between them. If a telephone company offered an enhanced service, such as any kind of content (which could be voice, video or data such as Internet), then it had to offer the underlying basic services to its competitors on the same terms that they were offered to its affiliate. And under rate-of-return regulation, the price they charged was supposed to provide them with a limited profit margin, such that the rate of return (not margin on sales) was about 12 percent. However, that was an overall rate of return. Since they claimed to lose money on basic residential and rural telephone service, they were entitled to make up for it with higher margins elsewhere. This resulted in all sorts of pricing shenanigans, many of which persist to this day.
So that explains the telephone companies’ promises. All of their networks had to be “open”; television was the only broadband application that people understood. What people don’t recall is what the technology of the day looked like, or how the networks were meant to be built.
ATM wasn’t where you got your money from the bank
The key high-speed network technology of the early 1990s was formally called Asynchronous Transfer Mode, or ATM for short. Not many people remember it now, but there was a bit of an investment bubble in ATM companies right around then. ATM had been proposed a few years earlier as the key technology for an international program called Broadband ISDN (B-ISDN). The common copper-wire-based ISDN technologies that actually did roll out in the 1990s were formally called Narrowband ISDN, while B-ISDN was designed for all-optical networks — fiber to the home (FTTH).
B-ISDN standards specified two interface speeds, 155 Mbps and 622 Mbps. When these were being firmed up around 1986, they appeared to be rocket science, but Moore’s Law was in full effect, and it was assumed that the technology to mass-produce B-ISDN would be available by the time the old copper wire telephone networks were replaced with glass. That was anticipated to be the distant future, the late 1990s.
The first ATM switching products came to market around 1990. They didn’t conform to B-ISDN specifications. Fore Systems’ pioneering product ran at 100 Mbps. While B-ISDN was being designed by the International Telecommunications Union (ITU) and numerous national standards bodies, a new, faster-moving group, the ATM Forum, became more prominent. And soon the name B-ISDN was forgotten. Many ATM Forum members were promoting the technology as a local area network, as an upgrade from 10 Mbps Ethernet. Bit rates in the DS3 (44.736 Mbps) and OC-1 (51.84 Mbps) range were proposed as a lower-cost stopgap, but still sufficient to carry switched broadcast-grade video.
Such systems lacked the connection-control signaling that the ITU had planned to develop (but never completed). And when Fast Ethernet came to market, promising a lower price point than ATM for the high-volume LAN market, the bubble was popped. ATM did not actually die: Most ADSL systems are based on ATM, and to support it, ATM sales rose, not fell, over the following decade.
So by 1992, when the Bells were asking to be released from rate-of-return regulation, ATM systems existed, but they were a long way from being mass-market priced. B-ISDN would be a few years away from reality even if it were being developed. But it wasn’t. A tidal wave swept over the industry and B-ISDN’s Achilles heel was revealed.
Fear of eating your own cash cow
B-ISDN’s fatal flaw was that the telephone companies had no way to price it. This was, in large part, because B-ISDN promised to be too good. It was designed to simultaneously replace the telephone network, compete with cable TV, and provide high-speed data transmission.
Telephone calls and some kinds of data applications require low-delay low-loss transmission at a steady, predictable rate. Most data applications can tolerate greater amounts of delay, jitter and loss, because they can use retransmission to fill in for gaps, but their demand is unpredictable, and they want higher peak data rates. The Internet was designed for this latter type. A network optimized for one is not ideal for the other.
ATM thus offers a range of Quality of Service (QoS) options. A Constant Bit Rate service carries a given speed at very high priority and low loss, suitable for telephone calls. (Verizon has, in fact, recently replaced many of its local network’s tandem switches, the ones that interconnect calls to other switches and carriers, with ATM-based systems. The change is imperceptible to callers; it even carries modem calls at high bit rates.) A couple of Variable Bit Rate options were designed for somewhat predictable flows such as video, while an Unspecified Bit Rate service carried information on a low-priority basis. This latter one, by far the easiest to provision, predominates on ATM networks today.
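One rough way to picture how those service classes coexist on a single link is a strict-priority cell scheduler. The sketch below is a deliberate simplification (real ATM traffic management also involves shaping, policing, and cell-loss priorities), with hypothetical queue contents for illustration:

```python
# Toy strict-priority scheduler over ATM-like service classes.
# Lower number = served first, so CBR voice never waits behind UBR data.
PRIORITY = {"CBR": 0, "VBR": 1, "UBR": 2}

def drain(queues: dict) -> list:
    """Emit one cell per time slot, always taking from the
    highest-priority non-empty queue."""
    order = sorted(queues, key=PRIORITY.get)
    out = []
    while any(queues.values()):
        for cls in order:
            if queues[cls]:
                out.append(queues[cls].pop(0))
                break
    return out

cells = drain({
    "CBR": ["voice1", "voice2"],   # steady, delay-sensitive traffic
    "VBR": ["video1"],             # somewhat predictable flows
    "UBR": ["data1", "data2"],     # best-effort, easiest to provision
})
print(cells)  # voice drains first, then video, then best-effort data
```

The point of the sketch is simply that the low-priority UBR class gets whatever capacity is left over, which is why it was by far the cheapest to provision.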
For the telephone companies to sell high-rate services capable of carrying video, they would have to be at a low enough price to attract consumers. This posed a conundrum: How could a high-speed service be open (and restricting its usage to video would have violated this rule) and affordable, yet not cannibalize revenues of high-priced telephone calling services? When local calls (64000 bits per second) are sold at about two cents a minute and toll calls at a much higher price, how can a multi-megabit video-access service be sold at the lower price needed to be competitive?
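The arithmetic behind that conundrum is stark. A back-of-the-envelope calculation (illustrative numbers only, using the figures above) shows what voice-style per-bit pricing would imply for a 45 Mbps service:

```python
# Back-of-the-envelope: apply telephone per-bit pricing to 45 Mbps video.
VOICE_RATE_BPS = 64_000        # one digital voice channel (DS0)
VOICE_PRICE_PER_MIN = 0.02     # ~2 cents/minute for a local call

# Dollars per bit implied by voice pricing.
price_per_bit = VOICE_PRICE_PER_MIN / (VOICE_RATE_BPS * 60)

VIDEO_RATE_BPS = 45_000_000    # the promised 45 Mbps access speed
bits_per_hour = VIDEO_RATE_BPS * 3600

cost_per_hour = bits_per_hour * price_per_bit
print(f"${cost_per_hour:.2f} per hour of 45 Mbps at voice per-bit rates")
# -> $843.75 per hour: obviously unsellable, yet pricing it at a
#    consumer-friendly rate would undercut the voice cash cow.
```

Either the video service would be absurdly expensive, or voice would look absurdly overpriced by comparison; there was no stable middle ground.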
The Bell company and Bellcore engineers who helped invent ATM were quite competent at engineering, but questions about product pricing were not their job; indeed they were not supposed to even think about such things. At a Bell company, the people who handled those marketing and pricing questions were the lawyers. This led to an impasse; the promised high-speed ATM services simply could not be brought to market to the satisfaction of the Bells.
Internet economics: No other price beats free
And then the Internet happened. Between 1993 and 1996, thousands of ISPs set up shop, taking advantage of common carriage rules that allowed them to set up dial-up modem pools with local phone numbers. The Bells protested and tried to get the rules changed to have such calls treated as toll, not local, but as with the failed 1987 attempt to impose this so-called “modem tax”, they were rejected.
Early ISPs often charged for connect time, but their price model was otherwise simple – everything was included in one price. By 1997, flat monthly rates were more common, and broadband Internet services – ADSL and cable modems – were starting to take significant market share.
So while the Internet was becoming a household fixture, it operated on its traditional “best effort” basis, with no particular method of rationing bandwidth except for packet drops. And drop they must, because the TCP/IP protocol stack depends upon packet drops for its speed control. That’s why the 100 Mbps Ethernet port on your laptop doesn’t blast a file at 100 Mbps into your 3 Mbps DSL modem. It bursts at that speed, but slows down as it notices, and retransmits, dropped packets. So long as everyone runs TCP and follows the rules, it is reasonably fair, too, thanks to this “slow-start” congestion management procedure added to TCP in the late 1980s. And applications that don’t use TCP retransmission just get whatever they get. No guarantees, and no charge. This worked well enough – users and applications adapted to the typical capacity of the network.
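The dynamic described above can be illustrated with a toy model of TCP-style congestion control (a simplified AIMD sketch, not a faithful TCP implementation; the link capacity is an arbitrary assumption): the sender ramps up exponentially, treats a drop as congestion, and cuts its window, which is how a fast Ethernet port settles down to a slow DSL line's speed.

```python
LINK_CAPACITY = 30  # packets per round trip the bottleneck can carry

def simulate(rounds: int) -> list:
    """Trace a sender's congestion window over successive round trips."""
    cwnd = 1                 # congestion window, in packets
    ssthresh = 1 << 30       # slow-start threshold, initially "infinite"
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > LINK_CAPACITY:      # drops detected: halve the window
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:         # slow start: exponential growth
            cwnd *= 2
        else:                         # congestion avoidance: +1 per RTT
            cwnd += 1
    return history

rates = simulate(20)
print(rates)
# The window doubles (1, 2, 4, 8, ...), overshoots the link, halves,
# then probes upward one packet at a time: the classic TCP sawtooth.
```

As long as every sender backs off this way, capacity gets shared roughly fairly with no pricing mechanism at all, which is exactly the "best effort" regime the column describes.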
Internet access becomes “broadband”
As “broadband” consumer access spread, speeds rose and packet drop rates fell. So the probability that voice over IP would work tolerably well, across any given public Internet path, rose. And today even video, potentially consuming very high capacity indeed, is flowing across the Internet, as Web services like Hulu gain viewers. Not that ISPs are happy about it. Internet services had been based on sharing available resources for a fixed price, not traffic-engineering resources to meet demand for billable services. These new services take advantage of the retail price of Internet usage – zero.
But some activists, skilled in manipulating sound bites and the politicians who live by them, are treating this as an entitlement of nearly constitutional proportions. “Ban the Cap” movements protest ISPs, especially cable operators, who dare suggest that 100 or 250 gigabytes/month (enough for more than a movie a night) is a reasonable limit for a fixed-rate consumer service. And the FCC is suggesting explicit regulation of ISP behavior, on behalf of “neutrality,” which could explicitly prevent ISPs from managing bandwidth in ways that, heavens to Betsy, might favor TCP-conformant data applications over streaming video, or disfavor applications (again, mostly used for video distribution) that game the Slow Start algorithm by launching many parallel TCP sessions at once. And there is natural suspicion that cable ISPs do not want “free” Internet video to compete with the channels that their parent companies sell as part of their cable services.
In the B-ISDN model, these dissimilar types of applications had no technical conflict with one another; they could all be accommodated via separate traffic classes. And because the rules of common carriage were clear, there was no conflict between content and carriage, cable vs. Internet. Of course pricing would have been more explicit: ISPs could have used it for their last-mile access, but the pricing of other services might have been itemized and tied to their QoS.
We can only speculate how such networks would have worked out. We paid for them, but will never see them. The Internet we have is a good prototype, too, but it was not designed to do half of what the lost generation of broadband common carriage was meant for. Trying to stretch it into what we imagine it should be will lead to more conflict, as regulators try to bottle up a fast-moving target. And thanks to the switcheroos that the Bells pulled on us, we’re essentially paying for it twice.
Fred Goldstein, principal of Ionary Consulting, writes the Telecom Policy column for TMCnet. To read more of Fred’s articles, please visit his columnist page.
Edited by Marisa Torrieri