One thing many people complain about is the destructiveness of peer-to-peer clients on a local network. Some people say "I don't care, let it rip," while others are no longer allowed to use them at all, but the problem still exists. The real problem is that the programmers seem to misunderstand this threat to networks. But first, one thing I will show...


File sharing as a concept is a great idea. Napster executed the idea perfectly with music and it appeared to work; after Napster was shut down, many broader systems popped up and caused what we have now: factioning. There are those who call all file sharing with spyware bad, those who call anyone using the Morpheus idea bad, and some who complain that Gnutella networks are too slow. The system admins CAN'T stop people from using these systems until they get smarter; some are getting smarter, and some have stopped caring. The systems themselves are also legal, even if much of what goes on over them is not. But these are just complaints; the real problems are still not discussed.

Here are the facts as found on my computer and others. Peer-to-peer programs as a whole send packets to find other clients and files, and larger networks send more packets for searches and so on. Sounds harmless, right? Not really, when my Kazaa took down a cable router that 12 people had been using for years of easy sharing and enjoyment. And this was AFTER I stopped leeching 50 files at a time and was down to 10.

The real problem with these systems is not that they take up too much bandwidth, but that the number of packets they produce is mind-blowing. Within one minute, 500 packets are easily created, and the sheer number effectively shuts down the router, stranding those who need HTTP access, including myself. Many people blame other parts of the peer-to-peer software for sucking up the speed, and they have a point, but when you can't pull up a webpage it's not the transfer speed of Kazaa that is killing it, it's the number of packets themselves. I was just one user; think what 10 users will do to a good network.
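To put rough numbers on that claim: the 500 packets per minute is the figure observed above, but the user count, the NAT table size, and the fraction of packets opening new connections are all assumed values for the sake of a back-of-the-envelope sketch:

```python
# Back-of-the-envelope math for the claim above. Only packets_per_min
# comes from the observation; everything else is an assumption chosen
# to resemble an early consumer cable router.
packets_per_min = 500          # observed from one Kazaa client
users = 10                     # assumed number of clients on the LAN
table_size = 2048              # assumed NAT/connection table capacity

per_second = packets_per_min * users / 60
print(round(per_second, 1))    # ~83.3 packets/sec across the whole LAN

# If even half of those packets open new NAT entries that linger,
# the table fills in well under a minute:
new_entries_per_sec = per_second * 0.5   # assume half start new connections
seconds_to_fill = table_size / new_entries_per_sec
print(round(seconds_to_fill))  # ~49 seconds before the router starts dropping
```

The point of the sketch is that the router dies from table exhaustion and per-packet overhead, not from raw bandwidth: 83 small packets a second is almost nothing in bytes.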

The solution? I really don't know. All I know is that every time I run a software package that does "file sharing," I am hammering the resources I finally got by moving off the college campus. And that's not right.

Just some food for thought with a little technology spin on it. If anyone knows of a good peer-to-peer client that doesn't do this, let me know; I'm still searching.

Want proof yourself? On Windows, run Show Traffic alongside Kazaa; Linux users can do the same with a program called Trafshow. Both are VERY cool to look at and mess with, and completely legal, since they only show the packets, they don't sniff their contents.

The most significant problem that I've found with peer-to-peer networks is not the actual file transfer or data transfer algorithms. Those work, if not efficiently, without completely destroying the network. The really bad thing is the search algorithm.

Although Napster did have a significant impact on networks, it wasn't all that bad compared to distributed, serverless, peer-to-peer filesharing networks. Why? Napster used a client-server model: every client posted its list of available files to a central server, searches were handled by that server, and the server could return the results to the client over a single ordinary data transfer.
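The client-server model above can be sketched in a few lines. This is an illustration, not Napster's actual protocol; the class and method names are made up:

```python
# Minimal sketch of a Napster-style central index server.
# All names here are illustrative, not Napster's real protocol.
from collections import defaultdict

class IndexServer:
    """Central server holding every client's shared-file list."""

    def __init__(self):
        self.index = defaultdict(set)  # filename -> set of host addresses

    def post_files(self, host, filenames):
        # Each client posts its list of available files when it connects.
        for name in filenames:
            self.index[name].add(host)

    def search(self, query):
        # One request, one response: the server answers from its own
        # index, so the search never generates traffic between peers.
        return sorted(host
                      for name, hosts in self.index.items()
                      if query.lower() in name.lower()
                      for host in hosts)

server = IndexServer()
server.post_files("10.0.0.5", ["song_a.mp3", "song_b.mp3"])
server.post_files("10.0.0.9", ["song_b.mp3"])
print(server.search("song_b"))  # -> ['10.0.0.5', '10.0.0.9']
```

Only the actual file transfer happens peer-to-peer; search costs the network exactly one query packet and one reply, no matter how many clients are connected.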

How do distributed peer-to-peer networks handle searching? Take Gnutella as an example. Every host keeps a list of a few peers. When a search request comes in, a host checks its own collection, responds, and forwards the request to each of its peers. The design counts on the number of hosts reached growing exponentially. This has an upside: many hosts become available and accessible. It also has a downside: the queries lead to an exponential packet implosion on the searching host. So many clients send responses (and forwarded queries) back to the host (for looping connections are more likely than not) that a network can often be brought down under the load.
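The flooding behavior described above can be simulated. The sketch below uses an invented loop-free topology where every host forwards to three peers; real Gnutella overlays contain loops, so duplicate deliveries make the real numbers worse than this best case:

```python
# Toy simulation of Gnutella-style flooding search. The topology and
# function name are made up for illustration; real Gnutella limits the
# flood with a TTL field on each query, modeled here as hop count.
def flood_search(peers, start, ttl):
    """Count hosts reached and query messages sent by one search.

    peers: dict mapping each host to the list of its neighbors.
    Returns (hosts_reached, messages_sent).
    """
    seen = {start}
    frontier = [start]
    messages = 0
    for _ in range(ttl):
        next_frontier = []
        for host in frontier:
            for peer in peers[host]:
                messages += 1            # every forward is one more packet
                if peer not in seen:
                    seen.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return len(seen), messages

# A loop-free network of 40 hosts where host i forwards to hosts
# 3i+1, 3i+2, 3i+3 (a ternary tree): traffic triples every hop.
n = 40
peers = {i: [c for c in (3*i + 1, 3*i + 2, 3*i + 3) if c < n]
         for i in range(n)}
hosts, messages = flood_search(peers, start=0, ttl=4)
print(hosts, messages)  # -> 40 39 (per-hop message counts: 3, 9, 27)
```

Contrast that with the Napster model: here reaching 40 hosts already costs 39 query packets on the wire, and every responding host then fires its reply back toward the single searching host, which is the implosion described above.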
