The most significant problem I've found with peer-to-peer networks is not the actual file transfer or data transfer algorithms. Those work, if not efficiently, at least without completely destroying the network. The really bad part is the search algorithm.
Although Napster did have a significant impact on networks, it wasn't all that bad compared to distributed, serverless, peer-to-peer filesharing networks. Why? Napster used a client-server model: every client posted its list of available files to a central server, and searches were handled by that server, which could return result lists to the client over an ordinary data transfer.
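The central-index model described above can be sketched in a few lines. This is an illustrative toy, not Napster's actual protocol; the class and method names are hypothetical:

```python
# Toy sketch of a Napster-style central index (names are illustrative,
# not the real Napster protocol).
class IndexServer:
    def __init__(self):
        # filename -> set of peer addresses that share it
        self.index = {}

    def register(self, peer, files):
        # Each client posts its list of shared files to the server.
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def search(self, query):
        # Searching is a local lookup on the server -- no network flooding.
        return {f: peers for f, peers in self.index.items() if query in f}

server = IndexServer()
server.register("peer-a", ["song.mp3", "talk.ogg"])
server.register("peer-b", ["song.mp3"])
print(sorted(server.search("song")["song.mp3"]))  # → ['peer-a', 'peer-b']
```

The point is that search cost is borne by one well-provisioned server doing dictionary lookups, while the network only carries the query and its answer.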
How do distributed peer-to-peer networks handle searching? Take Gnutella as an example. Every host keeps a list of a few peers. When a search request arrives, the host checks its own collection, responds if it has a match, and forwards the request to each of its peers. The idea is that the number of hosts reached grows exponentially with each hop. This has an upside: many hosts become reachable and accessible. It also has a downside: the queries produce an exponential flood of packets converging on the searching host. So many clients send responses (and forwarded queries) back toward it, amplified by the loops that almost inevitably exist in the peer graph, that a network can often be brought down under the load.
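The blow-up is easy to see in a toy simulation. The sketch below is illustrative, not the real Gnutella protocol (which also carries message IDs for duplicate suppression); the four-node topology and the `flood` function are assumptions made up for the example:

```python
# Toy simulation of Gnutella-style query flooding (illustrative only).
# Each node forwards a query to all of its peers; a TTL bounds the hop count.
# The topology below is a small made-up peer graph that contains loops.
peers = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def flood(start, ttl):
    """Count query messages sent when `start` floods the graph with `ttl` hops."""
    messages = 0
    frontier = [(start, ttl)]
    while frontier:
        node, t = frontier.pop()
        if t == 0:
            continue  # TTL exhausted; query is dropped
        for peer in peers[node]:
            messages += 1               # every forward is one more packet
            frontier.append((peer, t - 1))
    return messages

# With no duplicate suppression, loops make the message count grow roughly
# as (average degree) ** ttl, even though there are only four hosts.
for ttl in range(1, 5):
    print(ttl, flood("a", ttl))
```

Each extra hop multiplies traffic by the average peer degree, and because the same hosts are revisited through loops, a handful of machines can end up relaying (and answering) the same query many times over.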