@internet - Multicast Your Fate to the Wind



I have a soft spot in my head for Novell Corporation.

I cut my networking teeth with a Novell Gold reseller, back in the days when nobody outside the Novell channel had ever heard of CNEs. I'm still a member of the San Francisco Novell User Group and I occasionally attend Novell's local dog-and-pony shows.

That's how I found myself chatting with Eric Schmidt, Novell's still-relatively-new Chairman and CEO, yesterday afternoon. Schmidt came to Novell from Sun Microsystems, where he was chief technology officer. Before he joined Sun, he was a researcher at Xerox's PARC facility--the cradle from which sprang the whole microcomputer revolution. The man has credibility with a capital C. At Sun, he was the guy who financed and protected the band of believers that created Java and thereby sparked an entire industry. He's as Internet-savvy as any software executive on the planet and he's been paying a lot of attention to network bandwidth issues lately.

Schmidt was in town to show Novell's latest dog and pony to important customers and I wormed my way into his talk and the wine-and-cheese mixer that followed because I wanted to brace him up about Novell's hopelessly inept marketing strategies. We got around to talking about the explosion in Internet traffic and then Schmidt casually dropped a statistic that absolutely floored me.

He pointed out that, in February of this year, the PointCast Network passed the one-terabyte-of-data-per-day milestone.

Just think about that figure for a moment. That's one trillion bytes of Internet traffic generated in a single day BY A SINGLE SITE.

I'm usually a pretty tolerant guy, but I think that's utterly obscene.

The reason that PointCast is such a giant, snorking bandwidth pig (and it's a problem that virtually all current consumer-oriented "push" services share) is that its content distribution is based on a unicast connection model. No matter how much of PointCast's content is redundant to any given subset of its subscribers, each of them must make a separate IP socket connection in order to receive it. And that's an incredibly extravagant use of Internet bandwidth.

Suppose two of your subscribers have the exact same PointCast preferences. For instance, they might both want CNN, the San Jose Mercury News, sports and their horoscopes and California Lotto results. Even though the content they've subscribed to is identical in every respect, they each have to initiate separate TCP/IP socket sessions in order to receive it. That's a 50% waste of bandwidth.

Now multiply those two subscribers' redundant data by two, three, five or perhaps 10 million PointCast subscribers and you begin to see the dimensions of the problem. The unicast model is just sinfully wasteful of bandwidth for highly-redundant content, such as PointCast's.
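
To put rough numbers on that waste, here's a quick back-of-the-envelope sketch in Python (the per-subscriber figure is my own arbitrary stand-in, not a PointCast statistic): under unicast, total traffic grows linearly with the subscriber count, while a single multicast stream would stay constant.

    # Hypothetical figures for illustration only -- not PointCast's.
    content_mb = 10  # daily content volume per subscriber, in megabytes

    for subscribers in (2, 1_000, 10_000_000):
        unicast_total = content_mb * subscribers   # one stream per subscriber
        multicast_total = content_mb               # one shared stream
        print(f"{subscribers:>10,} subscribers: unicast {unicast_total:,} MB/day"
              f" vs. multicast {multicast_total} MB/day")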

Which is why I immediately thought of IP multicasting.

Don't Know Much About History

IP multicasting is based on the fundamental ideas put forward by Steve Deering (who, as it happens, is also largely responsible for shepherding the IPv6 development process through to a completed standard). In December, 1985, while Deering was still at Stanford University, he and David Cheriton co-authored RFC 966, "Host Groups: A Multicast Extension to the Internet Protocol", which first proposed a multicast extension to the Internet Protocol, described Class D addressing (the set of addresses reserved for IP multicasting) and laid the foundation of today's IP multicasting technology. The following year, Deering refined that proposal in RFC 988, "Host Extensions for IP Multicasting", which specified version 0 of the Internet Group Management Protocol, the extension that permits IP multicasting to work its magic.

Deering's 1988 RFC 1054, "Host Extensions for IP Multicasting", and RFC 1075, "Distance Vector Multicast Routing Protocol", which he co-authored with Craig Partridge and David Waitzman, incrementally advanced the state of the multicast art. Then, in 1989, Deering authored yet another version of "Host Extensions for IP Multicasting," RFC 1112, which, as of this writing, is still the definitive document on the subject.

In a nutshell, Deering's brainstorm was to define a set of IP addresses (which range from 224.0.0.0 to 239.255.255.255) that could be assigned to multiple hosts concurrently, unlike standard Internet addresses, which must be globally unique. A multicast host is assigned not only its "standard," exclusive IP address, but also one or more Class D addresses. One of those must be the all-hosts address, 224.0.0.1, while the rest may be any addresses from the range 224.0.0.2 through 239.255.255.255, each of which represents a distinct multicast group. A host may be a member of any number of multicast groups simultaneously and may dynamically join or leave groups (adding Class D addresses to itself or dropping them) via IGMP messaging.
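
To make those mechanics concrete, here's a minimal sketch of a receiver joining a group through the standard sockets API (my own illustration, not anything from the RFCs; the group address 224.1.1.1 and port 5007 are arbitrary). Setting IP_ADD_MEMBERSHIP is what prompts the host to send an IGMP membership report.

    import socket
    import struct

    GROUP = "224.1.1.1"  # arbitrary Class D address, for illustration
    PORT = 5007          # arbitrary UDP port

    # An ordinary UDP socket, bound to the multicast port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group: the kernel issues an IGMP membership report and
    # the host starts accepting datagrams addressed to GROUP.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(1500)
    print(f"got {len(data)} bytes from {sender}")

    # Leaving is just as dynamic: dropping membership tells the kernel
    # to stop accepting the group's traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)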

Of course, this means that any multicast-capable host must support multiple virtual IP interfaces on the same physical network interface. Since all new Ethernet cards do so (and so do the Windows 95 and Windows NT 4.0 dialers), that's only an issue for older network cards.

The beauty of IP multicasting is that, like IP broadcasting, it allows an arbitrary number of hosts to receive a single data stream, while, unlike broadcasting, it permits those hosts that do not wish to receive the data stream to ignore it. That means, for example, that a virtually unlimited number of IP multicast users can "tune in" to a single video data stream, instead of the video server needing to make a separate socket connection and send a separate data stream to each subscriber.

Oh, but wait, there's more: although most industry and general press stories about multicast present it as the Internet equivalent of television or radio broadcasting (i.e. a one-to-many data distribution system), by default, any member of an IP multicast group can transmit to the group as a whole. That changes the distribution paradigm from a one-to-many model to a many-to-many model. In other words, native IP multicast more nearly resembles CB radio than TV or radio. Multicast groups are roughly equivalent to CB radio channels and, like CB, all multicast group members can originate, as well as receive, transmissions.
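
The transmit side is just as simple--again, a hedged sketch of my own, using the same arbitrary group as above. Any host can send to the group address without even joining it, and the IP_MULTICAST_TTL option governs how far multicast routers will forward the datagram.

    import socket

    GROUP, PORT = "224.1.1.1", 5007  # same arbitrary group as above

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # A TTL of 1 confines the datagram to the local segment; larger
    # values let multicast routers forward it farther afield.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    # No connection and no per-receiver state: one sendto() reaches
    # every group member, which is what makes the CB-radio analogy apt.
    sock.sendto(b"breaker one-nine", (GROUP, PORT))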

And that many-to-many, bandwidth-efficient model is a compelling one for folks who work in highly collaborative environments.

The Mbone's Connected to the Backbone

Like the World Wide Web, the Mbone (or Multicast Backbone) is essentially a creation of the high-energy particle physics community. The particle physics boys get to play with really expensive toys--colliders, like the Stanford Linear Accelerator Center (SLAC), to help them search for ever-more-elusive new particles, terawatt lasers for controlled nuclear fusion ignition research and Class C rooms crammed to the walls with supercomputers and terabyte disk farms. They also tend to deal in pretty serious imaging applications and, in order to support collaborations between research facilities, they need serious bandwidth for the insane quantities of data with which they deal.

Even so, their pet computer scientists are always looking for ways to foster better collaboration and more efficiently use the considerable data networking resources with which your tax dollars endow them. The need to provide a friendlier and more transparent data interface drove Tim Berners-Lee to develop the World Wide Web for the folks at CERN--the European Laboratory for Particle Physics. Likewise, the desire to enhance collaborative tools, such as real-time video conferencing over existing data networks, led to the implementation of basic Mbone technology by Steve Casner, then at USC's Information Sciences Institute, of an assortment of basic Mbone collaborative tools by Van Jacobson's Network Research Group at Lawrence Berkeley National Laboratory, and of multicasting kernel patches and software by researchers around the world.

The demonstration platform they created rapidly evolved from a curiosity to a heavily-used collaborative environment. It's been used for everything from monitoring space shuttle launches to providing real-time "over-the-shoulder" consultation by U.S. heart specialists to Russian cardiac surgeons working on live patients in the operating room. In the process, the Mbone community has uncovered a host of unforeseen complications and an equally-large array of equally-unanticipated applications for IP multicast technologies. And, naturally, they've been hard at work trying to solve the knottier problems--with a fair degree of success.

Do You Know the Way to San Jose?

The most complex stumbling blocks in the multicast sphere lie in routing.

The original model assumed that every multicast-enabled network would be lousy with subscribing hosts and that it had bandwidth to burn. This so-called "dense-mode" assumption in turn resulted in the original multicast routing model, which used a technique called "flood and prune" to build a spanning tree. Announcements of available groups (propagated via the mandatory-subscription all-hosts 224.0.0.1 address), and the data streams for those groups, would simply flood the network, much like an actual broadcast transmission. Then, as subscription responses were received by the originating host, those network segments that had no subscribers would be pruned--which is to say the routers that connected those segments to the larger network would be told not to transmit data streams for multicast groups with no subscribers on that segment.
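
A toy model may help fix the idea--this is my own drastic simplification, not DVMRP itself, which layers timers, grafts and reverse-path checks on top of the same skeleton:

    # Flood-and-prune in miniature: every segment gets the stream at
    # first, then segments that report no subscribers are pruned away.
    segments = {"eng-lan": 3, "sales-lan": 0, "lab-lan": 1, "lobby-lan": 0}

    # Phase 1: flood -- forward to every attached segment.
    forwarding = set(segments)

    # Phase 2: prune -- stop forwarding where nobody subscribed.
    for segment, subscribers in segments.items():
        if subscribers == 0:
            forwarding.discard(segment)

    print("still forwarding to:", sorted(forwarding))
    # -> still forwarding to: ['eng-lan', 'lab-lan']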

That model worked fairly well in the hothouse climate of the Mbone, and various routing protocols--beginning with Distance Vector Multicast Routing Protocol (DVMRP), continuing with Multicast Open Shortest Path First (MOSPF) and culminating in the more recent Protocol Independent Multicast-Dense Mode (PIM-DM)--were developed based on the dense-mode paradigm. Unfortunately, out in the real world of the Internet, bandwidth is often seriously constrained (even a fully-bonded, dual B-channel ISDN connection is an anorectically-thin pipe by Mbone standards) and, at least for the moment, the population of multicast-enabled clients is anything but dense. Another approach to the routing problem was needed if IP multicast was ever going to be able to scale to fit the Internet environment.

The first alternative multicast routing protocol to be predicated on more realistic assumptions about user density and available bandwidth was known as Core Based Trees (CBT). It changed the model from the original negative-option, data-driven one (basically, "You get this data stream unless you tell me you don't want it"), to a subscriber-initiated one ("You can have this data stream if you tell me you want it"). It also allows any CBT-based router between a multicast source and a new group member to acknowledge a join request--which cuts way down on superfluous ACKs. A still newer approach--Protocol Independent Multicast-Sparse Mode (PIM-SM)--offers both increased flexibility and the promise of a single, interoperable standard for both dense and sparse modalities.
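
Inverting that logic gives the sparse-mode picture. In the same toy terms (and with the same caveat that real PIM-SM adds rendezvous points and hop-by-hop join state), forwarding state starts empty and grows only when an explicit join arrives:

    # Sparse mode in miniature: no traffic flows where nobody asked.
    forwarding = set()

    def join(segment):
        # In PIM-SM, a Join message travels hop-by-hop toward the
        # rendezvous point, building forwarding state as it goes.
        forwarding.add(segment)

    def leave(segment):
        # A Prune tears that branch of the tree back down.
        forwarding.discard(segment)

    join("eng-lan")
    join("lab-lan")
    leave("lab-lan")
    print("forwarding to:", sorted(forwarding))  # -> ['eng-lan']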

All of these protocols are steaming down standards tracks, but PIM looks like the horse to beat.

And, Of Course, Harry the Horse Dances the Waltz!

The Second Annual IP Multicast Summit was held February 8-10 in San Jose, California. You can always tell when a technology is about to take off like a bat out of Bosnia, because the geeks give way to the snake-oil salesmen. That's exactly what happened at the Doubletree Hotel this February. Last year, every panelist and every presenter had his or her propeller whirling at turbofan speeds. (Well, okay, the Microsoft presenter gave a pure product marketing talk--but he was pretty much the only exception to the rule.) The acronyms flew like volleys of birdshot and the attendees' standard of sartorial elegance was running shoes and pocket protectors.

This year, on the other hand, it was hype that filled the air and the dress code ran to tasseled loafers and power ties. None of the marketeers brought handouts, of course, and it seemed to me that everyone got into the marketeering act.

The UUNET presenter, who was scheduled to talk about implementing IP multicast in a real-world environment, instead pitched the multicast tunneling service for which UUNET charges its customers outlandish prices. His presentation wasn't about technology; it was about a product.

The Bobbsey Twins from RealNetworks and Microsoft were even less shy about grinding out propaganda for their respective enterprises.

Forty-five minutes of content-free pitches for Real servers and NetShow was thirty minutes too much for me. I had to escape and seek the company of engineers.

Luckily, I stumbled over the very anodyne I was after, as the engineers in question abandoned the same sorry presentation I'd just quit. In short order, I learned that every internal router in MCI's network runs PIM-SM and that MCI is far from alone in quietly implementing IP multicast in the real world of commercial ISPs. At least four other network engineers were party to that ad-hoc discussion, all of them employees of ISPs of various sizes. They're all running multicast and they're all running PIM-SM.

That's not all. The Multicast Summit featured an actual exhibit floor--an attraction it was unable to muster last year. It was small, as were all the booths, but at least it was there, and it featured real vendors with real multicast-based products for sale. Now, it's true that nearly half of them were streaming video products--a market niche that I suspect will turn out to be too small to let all those folks become millionaires--but I saw some really innovative and useful non-video services, too. And all of them were based around reliable multicast.

Hey, Bulldog!

Reliable multicast is darned important because IP multicast is not reliable by default. Much like UDP-based applications, standard multicast drops packets on the floor, instead of suffering the performance decrement that negotiating retransmission would incur. That makes it unsuitable for content-sensitive applications, such as file distribution.
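
The common cure--sketched here in my own simplified terms, not as any one of the protocols discussed next--is to sequence-number the datagrams so a receiver can spot the holes and ask for retransmission of only what it missed:

    # Receiver-side gap detection, the kernel of most reliable
    # multicast schemes: note which sequence numbers arrived and
    # NAK the ones that didn't.
    received = [0, 1, 2, 5, 6]  # sequence numbers that made it here
    expected_through = 6        # highest sequence number seen so far

    missing = sorted(set(range(expected_through + 1)) - set(received))
    print("NAK for sequences:", missing)  # -> NAK for sequences: [3, 4]
    # The sender then re-multicasts (or unicasts) just those datagrams.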

At one end of the spectrum, there's GlobalCast Communications, which is hoping to use its partnership with Cisco to leverage a hitherto-proprietary suite of protocols for reliable multicast--Reliable Multicast Protocol (RMP), Scalable Reliable Multicast (SRM) and Reliable Multicast Transport Protocol (RMTP)--into Internet standards. Other than those protocols and their common API, it has no products, per se, but GlobalCast plans to make money licensing its protocol suite to developers for use in their own offerings.

At the other end of the spectrum, StarBurst Communications uses its patented (and proprietary) Multicast File Transfer Protocol (MFTP) in a family of Sun Solaris, DEC Unix, SCO UnixWare, OS/2 and Microsoft Windows-based server and client products to enable its customers to distribute software and data to a large number of hosts simultaneously. Its customers include both GM and Ford, who use StarBurst's products to send software updates to 8,500 General Motors dealerships and to more than 6,000 Ford dealerships in North America, respectively.

Do the math.

If 6,000 Ford dealerships, for instance, need 10 megabytes worth of data updates per day--not an unreasonable figure if you think about inventory, availability and pricing changes for that many cars--a unicast distribution system would need to move 60 gigabytes of data every day. Because Ford uses IP multicast via a satellite distribution system, it only needs to transmit 10 megabytes of data per day.

I thought that might get your attention.

Now, of course, Ford doesn't really get away with a mere 10 megabytes a day. Because it has to retransmit dropped packets (and only dropped packets) to individual dealerships after the general multicast distribution concludes, it actually winds up sending on the order of 60 megabytes per day, instead. That still saves Ford three full orders of magnitude worth of data transmission.
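
Run those numbers yourself--a quick sketch with the figures above:

    dealerships = 6_000
    payload_mb = 10  # daily update per dealership, in megabytes

    unicast_mb = dealerships * payload_mb  # 60,000 MB -- 60 GB per day
    multicast_mb = 60                      # multicast plus retransmissions

    print(f"unicast:   {unicast_mb:,} MB/day")
    print(f"multicast: {multicast_mb} MB/day (including retransmissions)")
    print(f"savings:   {unicast_mb // multicast_mb:,}x -- three orders of magnitude")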

I'd Love to Change the World

Now let's return to the subject of the PointCast Network and its obscene waste of Internet bandwidth. If Ford's experience is any guide, adopting a multicast distribution model could potentially slash PointCast's bandwidth requirements from a terabyte-per-day to no more than a gigabyte-per-day. And PointCast has nearly three orders of magnitude more subscribers to service than does the Ford dealership network, so the actual reduction in PointCast's traffic could well be as much as four orders of magnitude.

Maybe more.

So, what's the holdup? How come PointCast didn't swap over to multicast distribution months ago?

Chances are, it's partly your fault.

Until IP multicast is ubiquitous throughout the Internet (and, mind you, I'm not talking about UUNET's brain-damaged multicast tunneling here--I'm talking about the real McCoy), PointCast can't switch to multicast distribution. Given the multipath nature of the Internet, and the hit-or-miss implementation of IP multicast, there'd be no guarantee that PointCast's subscribers would get their sports scores and CNN headlines, period.

So, here's what you can do to help alleviate the Internet's PointCast-induced suffering and, in the process, provide your customers with access to another world's worth of cool, useful and entertaining applications:

Enable IP multicast on your routers and servers.

Between the IP Multicast Initiative web site and the Mbone web site, there's more than enough information out there for you to upgrade your systems to handle multicasting. Use PIM-SM and you'll help create a de-facto standard in the process. You'll discover that Cisco's IOS version 11.3.x can handle multicasting and it's a safe bet there's an OS patch that will let your servers understand multicast, too.

If your upstream provider doesn't support IP multicasting, lean on it to start. Offer to accept a tunneled feed until your provider is confident enough to implement native multicasting.

Your users will thank you. Eventually, you'll congratulate yourself on your wisdom and foresight. And the Internet as a whole will benefit, not just because we've all joined together to help slay the PointCast bandwidth dragon, but because the solution to that problem will simultaneously solve a host of similar problems and enable an even larger collection of new technologies and products that we can't yet even imagine because they need ubiquitous multicast.

The revolution will not be televised. But it just might be multicast.

(Copyright © 1998 by Thom Stark--all rights reserved)