More Peer-to-Peer (and Peering Too)
I want to return to my prior post from Tuesday questioning the utility of peer-to-peer file distribution. That post has spurred a number of responsive posts (from my colleague Lior Strahilevitz here, from Ed Felten here and from Brett Frischmann here), plus extensive comments from Tim Wu and others. (Tim and I co-taught an innovation policy seminar in the Spring, so this is part of a continuing conversation.)
Both Lior and Ed focus on the question of control. I think that is exactly the right issue, but we should figure out what to make of it. Unfortunately, I think we need to start with questions of telecommunications and computer engineering before we can turn to law and economics. I say unfortunately because I suspect that I'm at a comparative disadvantage relative to Ed (who, as you may know, is a Princeton computer science prof) and Tim, who spent a number of years in the Valley at a network equipment firm. Nonetheless, in the great tradition of lunch at the University of Chicago Law School, I will plunge ahead fearlessly.
Ed Felten focuses on the question of whether even “centralized” sites such as Google are really centralized. He notes that Google’s site probably uses a distributed computing architecture. By that he means that Google is not just one giant hard disk somewhere. Instead, Google has racks and racks of servers, and those servers are at locations throughout the country. This is a distributed architecture, even though, from the user’s standpoint, it acts as if the site is run by one giant computer.
How Google is organized is an interesting question of computer engineering and manufacturing costs. Think of this as the question: how should we organize computer storage? What size should a component be, given how it can be produced most efficiently and given the cost of communicating among the components? Is it more expensive to communicate within a component or across components? All interesting questions, but I don’t think any of them is really the issue for our discussion of the uses of peer-to-peer file distribution.
Instead, and Ed and Lior both make this point, the real issue is control. However Google is engineered, we have a single point of control. If the boys at Google—Larry Page and Sergey Brin—get up one day and decide to flick a switch, they can turn off Google. This is one of the key differences between a product and a service. Even Microsoft, with all of its vaunted power, can’t turn off the already-distributed copies of Windows.
We saw an interesting example of this control over the last few days, and this takes us from peer-to-peer to peering. The backbone of the Internet is a series of interconnected networks. Packets move about the network, just as cars do on the interstate, but the interconnections between the networks are done through contracts. Two contract types are important: peering and transit. In a peering contract, no money changes hands between the networks. Instead, the deal is: “You take my packets, I will take yours, and we will call it a wash.” Peering avoids the transaction costs of metering. The other arrangement is transit: count the number of packets exchanged and assess a fee.
For peering to work, the traffic flows between the contracting networks need to be relatively symmetric. If that symmetry is broken, problems may result, and this is what we saw this week. As described by c|net news.com (“Blackout shows Net’s fragility”), Level 3 Communications had a peering arrangement in place with Cogent Communications. The peering became unbalanced and Level 3 told Cogent that it needed to start paying. When no agreement was reached, Level 3 turned off the connection, and Cogent’s customers, including the Museum of Fine Arts in Boston, couldn’t get to parts of the Internet.
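For readers who like to see the arithmetic, here is a toy sketch in Python, entirely my own invention: the 2-to-1 balance threshold and the per-gigabyte fee are made-up numbers for illustration, not terms drawn from any actual interconnection agreement.

```python
# A toy model of the peering-vs-transit distinction sketched above.
# Everything here is hypothetical: the balance threshold and the
# per-gigabyte fee are invented for illustration, not terms of any
# real interconnection agreement.

def settle(a_to_b_gb, b_to_a_gb, balance_ratio=2.0, fee_per_gb=0.05):
    """Decide how two networks settle up for the traffic they exchange.

    Roughly symmetric flows are treated as peering: call it a wash, and
    no money changes hands. If one side sends much more than it receives
    (more than balance_ratio times as much), the arrangement looks like
    transit, and the heavier sender pays per gigabyte on the excess.
    """
    heavier = max(a_to_b_gb, b_to_a_gb)
    lighter = min(a_to_b_gb, b_to_a_gb)
    if heavier <= balance_ratio * lighter:
        return ("peering: call it a wash", 0.0)
    excess = heavier - balance_ratio * lighter
    return ("transit: heavier sender pays", excess * fee_per_gb)

print(settle(100, 90))   # balanced flows -> ('peering: call it a wash', 0.0)
print(settle(500, 50))   # lopsided flows -> a fee owed on the excess traffic
```

The point of the toy is only that the economics turn on symmetry: once the flows become lopsided, the network carrying the excess wants to be paid, which is roughly the dispute Level 3 and Cogent were having.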
Control is key, and as both Ed and Lior note, peer-to-peer means decentralized control. As the Level 3 example suggests, even peer-to-peer will be dependent on the underlying rules for organizing the Internet, but with true p2p, we will avoid another single switch that can be turned on or off. So, as Lior notes, Ian Clarke’s freenet software is designed with the idea of avoiding centralized control. The fear is China, where governmental authorities seek to exercise broad control over the distribution of ideas and content. If that is the fear, then we want to spread control, and peer-to-peer software is a good approach to that. But as Bruce Boyden notes in his comment on Lior’s post, the U.S. isn’t China.
So I circle back to my original question: what content should we distribute p2p, and why? In that context, I should say a few words about BitTorrent, raised by Tim Wu in his comments. This takes us from organizing computer storage to organizing bandwidth.
The idea behind BitTorrent is simple, at least if I understand it. For consumers, think of broadband as two different pipes running into your home. One is a downloading pipe, the other an uploading pipe. For most consumers, the downloading pipe is much larger than the uploading pipe. That creates a problem for peer-to-peer distribution. Say, to make the numbers simple, the downloading pipe is 10 times the size of the uploading pipe. If I am at home downloading a song from Lior’s home computer, 90% of my downloading pipe sits idle. BitTorrent recognizes that, so instead of downloading the entire song from Lior’s computer, it finds 10 computers with the song and downloads 1/10th of the song from each of the computers. I use their entire uploading bandwidth, while using all of my downloading bandwidth.
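To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The file size and the 10-to-1 speed asymmetry are made-up round numbers chosen only to mirror the example; real connections and real BitTorrent clients are messier than this.

```python
# Back-of-the-envelope arithmetic for the example above. The numbers are
# invented round figures, not measurements of any real connection.

def download_time_seconds(file_mb, my_down_mbps, peer_up_mbps, n_peers=1):
    """Time to fetch a file split evenly across n_peers uploaders.

    Each peer can push at most peer_up_mbps; I can pull at most
    my_down_mbps in total. The effective rate is whichever limit binds.
    """
    effective_mbps = min(my_down_mbps, n_peers * peer_up_mbps)
    return (file_mb * 8) / effective_mbps  # megabits / (megabits per second)

FILE_MB = 5            # a typical song
DOWN, UP = 10.0, 1.0   # my downloading pipe is 10x a peer's uploading pipe

# The whole song from one computer: that peer's uploading pipe is the limit.
print(download_time_seconds(FILE_MB, DOWN, UP, n_peers=1))    # 40.0 seconds
# A tenth of the song from each of ten computers: my downloading pipe is the limit.
print(download_time_seconds(FILE_MB, DOWN, UP, n_peers=10))   # 4.0 seconds
```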
The BitTorrent software obviously needs to make sure that it is not downloading the same snippet of the song from each computer, so that I get the full song rather than 10 copies of one part of it. (In truth, for most top 40 music, it isn’t obvious it would make a difference.) (Further parenthetical: that is just a line, really, as I listen to nothing but top 40 (at least of various decades) and its ilk, unlike most of my colleagues, who seem to see each other constantly at the opera.)
But as I hope that discussion makes clear, the utility of BitTorrent rests on the artifact of the uploading/downloading asymmetry of consumer broadband. I assume—but don’t know for sure and would be delighted to learn more—that this asymmetry is not an important organizing principle for centralized download sites, such as iTunes. This is not to say that I can’t imagine commercial companies using BitTorrent for distribution: if they can economize on their own bandwidth costs by taking advantage of consumer bandwidth, they will do so. I take it this is exactly Tim’s point when he suggests that a peer-to-peer infrastructure democratizes distribution costs.
But that takes me back to my original point, namely, that I don’t see us doing that currently for for-fee copyrighted content, and I was surprised to see that we really aren’t doing it for public domain content or even for photographs. Maybe we will do it for fat files, such as our home movies, but then we have to switch from questions of technology and supply to the issue of demand. There is no easier way to clear a room than to break out the home movies, and I suspect that is true whether we are in one physical room or one giant virtual room.