A design principle first explicitly put forth by Jerome H. Saltzer, David P. Reed, and David D. Clark in their paper "End-to-End Arguments in System Design" (ACM Transactions on Computer Systems, November 1984). The end-to-end argument is about where to put functionality: at which layer in a networking stack, or in which component of a distributed system.
Boiled down to its essence, the argument says that some properties (like reliability) can only be correctly and completely implemented at the endpoints of communication, usually the application layer. Thus, providing that function at a lower layer is at best an optimization.
The canonical example provided in the paper is file transfer. A checksum on individual packets of data can't protect the file's integrity, since the data might still be corrupted on its way to disk. The only way to get true reliability is to read the file back off disk, checksum it, and compare the result against the sender's checksum; if they don't match, resend the entire file. Transport-layer checksums are just an optimization that lets you resend smaller chunks of data.
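A minimal Python sketch of that end-to-end check, under stated assumptions: `send_file` and `sender_checksum` are hypothetical stand-ins for whatever transfer mechanism and out-of-band checksum exchange the application actually uses.

```python
import hashlib
from pathlib import Path


def file_checksum(path: Path) -> str:
    """Compute a SHA-256 digest of the file as it exists on disk."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def transfer_until_verified(send_file, local_path: Path,
                            sender_checksum: str, max_attempts: int = 3) -> bool:
    """End-to-end check: after each transfer, read the file back off disk,
    checksum it, and compare against the sender's checksum. Corruption at any
    lower layer (link, transport, or disk) shows up here and forces a full resend.
    `send_file` is a hypothetical callable that performs the actual transfer."""
    for _ in range(max_attempts):
        send_file(local_path)  # the transfer mechanism itself is opaque to this check
        if file_checksum(local_path) == sender_checksum:
            return True        # end-to-end integrity confirmed
    return False
```

The point of the sketch is that the verification happens only at the endpoints: nothing the network does in the middle can substitute for reading the bytes back off disk and comparing them with what the sender intended.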
Sometimes the argument is characterized as "dumb network, smart edges" to contrast with the telephone network, where all the functionality resides in the switches. But the end-to-end argument doesn't say the network must be dumb, just that it shouldn't interfere with higher-layer properties. A function can still be placed in the network when the performance case for it is compelling enough.
However, the Worse Is Better argument says more about how the Internet actually works. End-to-end solutions may be technically superior, but in the real world most applications treat TCP's reliability as good enough, most network engineers view firewalls as a useful security tool, and NAT is far more widely deployed than IPv6.