Network model. In a peer-to-peer network, all nodes are equal: every user or machine acts at the same time as a client and as a server. Strictly speaking, they are neither clients nor servers; they are simply peers in a network.
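To make that concrete, here is a minimal sketch (not tied to any particular protocol) of two peers running in one Python process: each listens for connections like a server while also querying the other like a client. The port numbers and the greeting message are invented for this example.

```python
# A minimal sketch of a node that is client and server at once. Ports and the
# message format are invented for this example; real P2P protocols are richer.
import socket
import threading
import time

def serve(port):
    """Server half: answer the first peer that connects to us."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(f"hello from the peer on port {port}".encode())

def fetch(port):
    """Client half: connect to another peer and read its reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        return c.recv(1024).decode()

# Two peers on one machine, each running a listener and querying the other.
threading.Thread(target=serve, args=(9001,), daemon=True).start()
threading.Thread(target=serve, args=(9002,), daemon=True).start()
time.sleep(0.2)            # give the listeners a moment to start
print(fetch(9002))         # the peer on 9001 acting as a client
print(fetch(9001))         # the peer on 9002 acting as a client
```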

P2P, peer-to-peer: the same thing.
Though it has received a great deal of press recently, peer-to-peer file sharing is not at all a new idea. Peer-to-peer file sharing has been a feature of desktop operating systems since System 7 on the Macintosh and since Windows for Workgroups on the Windows platform. In each of these systems, a computer may participate in a network of peers -- called a "zone" or "workgroup" -- and exchange files with the other computers in the network.

The Macintosh peer-to-peer file sharing protocol is called AppleShare, which is confusingly also the name of the Macintosh file server application (which pre-dates the peer-to-peer version). The Windows protocol is known as SMB to its friends, and by vague names such as "Microsoft Networking" to everyone else. Both can be run on GNU/Linux and other Unix-like systems: AppleShare through netatalk and SMB through samba.


One of the chief problems of a peer-to-peer system is that each peer must somehow discover the other peers in the network. In both Apple's and Microsoft's original systems, discovery took place via broadcast traffic. This has two limitations: it is wasteful of bandwidth, and is not feasible for use on networks larger than a small LAN. Among modern, popular Internet-based peer-to-peer systems, the Gnutella model resembles this method, but substitutes relayed information for broadcast announcements. Still, this does not entirely solve the scalability problem.
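As an illustration of the broadcast approach, a LAN-only discovery scheme can be sketched in a few lines of Python. The port number and the "HELLO" announcement are made up here; real protocols are considerably more elaborate, but the shape is the same, and every peer runs both halves.

```python
# A hedged sketch of broadcast-based peer discovery on a LAN. The port and the
# "HELLO" message format are invented for illustration.
import socket

DISCOVERY_PORT = 50000      # arbitrary port chosen for this example

def announce(name):
    """Shout our presence to every host on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(f"HELLO {name}".encode(), ("255.255.255.255", DISCOVERY_PORT))

def listen():
    """Collect the announcements of other peers as they arrive."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", DISCOVERY_PORT))
        while True:
            data, (addr, _) = s.recvfrom(1024)
            print(f"{data.decode().split(maxsplit=1)[1]} is at {addr}")
```

Every announcement reaches every host on the subnet whether it cares or not, which is exactly the bandwidth problem described above, and since broadcasts do not cross routers, the scheme stops at the edge of the LAN.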

Later versions of Microsoft's SMB addressed the broadcast problem in several (somewhat mutually incompatible) ways: master browser capability, WINS, and Windows NT Domains. Each of these abandons the pure peer-to-peer system in favor of a greater or lesser degree of centralization. Napster's semi-peer-to-peer system structurally resembles the NT Domains system, in which a central server provides not only discovery of peers but also authentication of user accounts.
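A centralized directory removes the broadcasts at the cost of a single well-known server. The toy sketch below is only an illustration of that shape, not the WINS or Napster wire protocol; the data structures and the credential check are invented for the example.

```python
# A toy, in-memory stand-in for a central registry of the WINS / Napster /
# NT Domain flavour: peers register with it and look each other up, and the
# same server can also authenticate user accounts.
peers = {}                      # peer name -> (host, port)
accounts = {"alice": "s3cret"}  # user name -> password (never store these in the clear)

def register(user, password, peer_name, host, port):
    """Accept a peer into the directory only if the account checks out."""
    if accounts.get(user) != password:
        raise PermissionError("bad credentials")
    peers[peer_name] = (host, port)

def lookup(peer_name):
    """One round trip to the server replaces a subnet-wide broadcast."""
    return peers.get(peer_name)

register("alice", "s3cret", "alice-laptop", "192.0.2.10", 4000)
print(lookup("alice-laptop"))   # -> ('192.0.2.10', 4000)
```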

What is new about popular peer-to-peer and semi-P2P systems such as Napster, Gnutella, and Freenet is that they offer searching facilities as well as simple file sharing. On an AppleShare or SMB network, you must know which peer holds the file you want before you can retrieve it. Under the currently popular systems, one form or another of file-searching facility is built into the protocol.
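The sketch below shows, under heavy simplification, how a Gnutella-style flooded search can be built into the protocol itself: a query carries an identifier and a TTL, each peer checks its own shared files, and then relays the query to its neighbours. The Peer class, its field names, and the TTL value are all invented for the illustration.

```python
# A rough sketch of Gnutella-style query flooding. Overlay construction and
# result routing are simplified away; only the flooding idea is shown.
class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbours = []
        self.seen = set()       # query ids already handled, to stop loops

    def search(self, query_id, keyword, ttl, results):
        if query_id in self.seen or ttl == 0:
            return
        self.seen.add(query_id)
        if any(keyword in f for f in self.files):
            results.append(self.name)           # a "hit" reported to the origin
        for n in self.neighbours:               # relay with a decremented TTL
            n.search(query_id, keyword, ttl - 1, results)

a, b, c = Peer("a", ["song.mp3"]), Peer("b", []), Peer("c", ["song.ogg"])
a.neighbours, b.neighbours = [b], [a, c]
hits = []
a.search("q1", "song", ttl=3, results=hits)
print(hits)                                     # -> ['a', 'c']
```

The duplicate-suppression set and the TTL keep the flood from circulating forever, but every query still touches many peers, which is why this approach inherits the scalability worries mentioned earlier.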

An O'Reilly book published in March 2001 (Peer-to-Peer: Harnessing the Power of Disruptive Technologies, edited by Andy Oram). It covers the context and an overview of P2P issues, as well as individual projects and technical topics. The chapters and their contributors are listed below; I am adding notes on individual chapters as I read the book.
A Network of Peers: Peer-to-peer models through the history of the Internet (Nelson Minar and Marc Hedlund)
This chapter discusses the evolution of the Net, tracing it from an open, trusted, but limited society to the widespread, security-conscious creature it is today. It notes that asymmetric bandwidth, firewalls, dynamic addressing, and NAT all handicap peer-to-peer technologies, and argues that they should therefore go away to make room for the inevitable success of P2P. Mostly sounds whiny, though.
Listening to Napster (Clay Shirky)
Remaking the Peer-to-Peer Meme (Tim O'Reilly)
The Cornucopia of the Commons (Dan Bricklin)
SETI@home (David Anderson)
Jabber: Conversational Technologies (Jeremie Miller)
Mixmaster Remailers (Adam Langley)
Gnutella (Gene Kan)
Freenet (Adam Langley)
Red Rover (Alan Brown)
Publius (Marc Waldman, Lorrie Faith Cranor, and Avi Rubin)
Metadata (Rael Dornfest and Dan Brickley)
Performance (Theodore Hong)
Trust (Marc Waldman, Lorrie Faith Cranor, and Avi Rubin)
Accountability (Roger Dingledine, Michael J. Freedman, and David Molnar)
Reputation (Richard Lethin)
Security (Jon Udell, Nimisha Asthagiri, and Walter Tuvell)
Interoperability Through Gateways (Brandon Wiley)
Afterword (Andy Oram, the book's editor)

One powerful application for peer-to-peer systems is cryptanalysis.

Back in 1993, a team of four people started work on a method to crack a message that had been encrypted using RSA with a key 129 decimal digits long (the famous RSA-129 challenge). The four were Derek Atkins, a student at MIT; Michael Graff, a student at Iowa State University; Paul Leyland at Oxford University; and Arjen Lenstra, a renowned mathematician at Bellcore.

They realized that they would need help to muster enough computing power to crack the code. To solve this problem, they called on volunteers to run a piece of software on their PCs that worked on part of the problem and fed its results to a central server for further analysis. More than 1600 machines from every continent except Antarctica worked on the problem. The code was cracked after 7 months: revolutionary at the time!
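The division of labour can be mimicked in a few lines: a coordinator splits a search space into independent work units, hands them out, and collects the small results. In the sketch below, a brute-force key-range search stands in for the real number-theoretic work, and local processes stand in for the volunteers' PCs; the names and numbers are invented for the illustration.

```python
# A toy sketch of the work-unit model used by volunteer-computing projects:
# independent chunks go out, only small results come back.
from multiprocessing import Pool

SECRET_KEY = 123_456_789        # pretend this is what the search is looking for

def work_unit(chunk):
    """Volunteer side: exhaustively test one slice of the key space."""
    start, end = chunk
    for key in range(start, end):
        if key == SECRET_KEY:   # stand-in for "does this key decrypt the message?"
            return key
    return None                 # nothing in this slice; report back empty-handed

if __name__ == "__main__":
    chunk_size = 10_000_000
    chunks = [(i, i + chunk_size) for i in range(0, 200_000_000, chunk_size)]
    with Pool() as volunteers:                  # local processes stand in for PCs
        for result in volunteers.imap_unordered(work_unit, chunks):
            if result is not None:
                print("coordinator: key found:", result)
                volunteers.terminate()
                break
```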

This concept has matured over the years and can today be found in services such as distributed.net. These services, however, use only a small fraction of the PCs available on the Internet...

Imagine the computing power if you bundled code-breaking software with each Kazaa download. The Kazaa client has been downloaded more than 300 million times since it was launched, and at any point in time there are approximately 3 million users on the Kazaa network. In other words, there are 3 million PCs that could be used for ‘real-time’ cryptanalysis!

According to United Devices, if you assume that each PC on the Kazaa network is equipped with a 1 GHz Intel Pentium III processor that is on average 50% utilized, the potential computing power of this network is 1500 teraflops, equivalent to 17,361 Sun Fire 15K servers.
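As a back-of-the-envelope check on that figure, the arithmetic works out if each machine sustains roughly one floating-point operation per clock cycle; that per-cycle assumption is mine, chosen so the numbers line up, not a Pentium III benchmark.

```python
# Rough sanity check of the 1500-teraflop figure, under the stated assumptions.
pcs = 3_000_000            # simultaneous Kazaa users
flops_per_pc = 1e9         # 1 GHz x ~1 floating-point operation per cycle (assumed)
spare_fraction = 0.5       # half the cycles assumed available
total = pcs * flops_per_pc * spare_fraction
print(total / 1e12, "teraflops")   # -> 1500.0
```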

I am no cryptography expert but I think this amount of computing power is sufficient to take ‘a crack’ at some of the modern encryption technologies.

Peer-to-peer is the way everything will work in the future. Every single device will be able to perform the tasks that every other machine can perform. Computing power and memory are so cheap that it is cost-effective to make every node in a system an independent device able to handle any aspect of the network infrastructure. The only differences between devices will be speed (due to the quality of the processor) and user interface. A smart refrigerator can have roughly the same computing power as a TV set, the two devices just interact with the user in different ways.

This is also a core philosophy in "smart dust" distributed-sensor networks. For example, Crossbow Technologies has developed "Self-Forming Mesh Networks", a way for a large group of small, low-power sensors to form a wireless peer-to-peer network and create a huge sensor web able to report on anything happening in its coverage area. I was recently at the Embedded Systems Conference, where they ran a demo in which they dropped a handful of quarter-sized sensors across the exhibition hall and showed the real-time monitoring in their booth. Since signals are relayed from sensor to sensor, power levels are kept to a minimum, and operating lifetimes are increased from days and weeks to months and years.
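One reason relaying saves power: with free-space path loss, transmit energy grows roughly with the square of the distance, so several short hops cost less radio energy than one long hop. The quadratic model and the constant below are simplifications chosen purely for illustration, and they ignore the per-hop receive and processing overhead that real sensor nodes pay.

```python
# An idealized illustration of why multi-hop relaying can save transmit power.
def transmit_energy(distance_m, k=1e-9):
    """Energy per bit to cover one hop, with an arbitrary constant k."""
    return k * distance_m ** 2

one_long_hop = transmit_energy(100)         # a sensor talks 100 m directly
ten_short_hops = 10 * transmit_energy(10)   # the same span relayed through peers
print(one_long_hop, ten_short_hops)         # -> 1e-05 versus 1e-06 joules per bit
```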

The entire concept of ubiquitous computing works as long as every device involved is powerful enough to provide infrastructure support at every level, creating a flexible communications net that is resilient enough to survive crashes within it as well as fast enough to provide near-real-time results.
