I know these nodes have huge potential for GTKY-ness, but I've actually had this idea bouncin' around for a while and it could be useful.

Everything2 works pretty damn well. It's a little slow, but it works. Gnutella and other P2P systems don't work, at least not in the sense of the 100% data availability that E2 demands. Distributing updates and maintaining disparate servers seems like a recipe for disaster (as Gartogg points out). The database must remain centralized, and I for one am willing to pay for that luxury. But I think the situation could be improved by providing a barebones interface to the database, a web service if you will. I know this is not a new idea, but I think it bears repeating in the context of server load.

How much server capacity could be saved depends entirely on how much processor time is spent actually pulling data from the database versus how much is spent formatting the pages. As a full-time Web developer, I gotta imagine that a significant amount of time is spent on each page just stitching together the nodelets. Optimizing performance without decentralizing the database is essentially a caching problem. While I presume E2 has extensive internal caching, why can't the client do the caching? Obviously Web browsers don't provide any kind of sophisticated caching, but a custom client could do it quite handily. If it were well-written, it could absorb a huge chunk of page-processing load as well as reduce page loads overall. Some features are more conducive to this approach than others; fortunately, the focal point of E2 (the writeup) is very cacheable.
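To make the caching policy concrete, here's a minimal sketch in Python, purely for illustration. Nothing here is real E2 code; the TTLCache name and its interface are mine. The idea is just that every logical unit is stored with a timestamp and refetched only when it goes stale or the user forces a reload.

```python
import time

class TTLCache:
    """A dead-simple time-to-live cache: key -> (fetched_at, payload)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, key, fetch):
        """Return the cached payload, hitting the server only when stale."""
        entry = self.store.get(key)
        if entry is not None:
            fetched_at, payload = entry
            if time.time() - fetched_at < self.ttl:
                return payload            # still fresh: zero server load
        payload = fetch(key)              # miss or stale: one real request
        self.store[key] = (time.time(), payload)
        return payload

    def invalidate(self, key):
        """The 'explicitly reload this piece' button."""
        self.store.pop(key, None)
```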

Here's how it could work:

Create XML tickers for every possible logical unit on the site. XML is a bit text-heavy, and might not be the most efficient format if we're really trying to shave bandwidth, but let's go with what we have for now. The client would build a page by loading several tickers (or a meta-ticker that would initialize the session with the user's choice of page and nodelets). The client would then cache everything. Anything cached would not be reloaded until a set amount of time had passed or the user explicitly reloaded that piece. The slight inconvenience of reloading individual nodelets would easily be offset by the Everything addiction factor. Having a caching client would be nice because you could easily move between anything you had previously loaded and only reload the exact piece you wanted (catbox anyone?).
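Sketched out, page assembly might look something like the following, reusing the hypothetical TTLCache above. The base URL, ticker paths, and XML handling are invented for illustration; the real endpoints would be whatever the E2 devs actually publish.

```python
import urllib.request
import xml.etree.ElementTree as ET

E2 = "https://everything2.com"        # assumed base URL
cache = TTLCache(ttl_seconds=300)     # from the sketch above

def fetch_ticker(path):
    """Pull one XML ticker from the central server and parse it."""
    with urllib.request.urlopen(E2 + path) as resp:
        return ET.fromstring(resp.read())

def render_page(writeup_path, nodelet_paths):
    """Build a page from cached pieces; only stale ones hit the server."""
    writeup = cache.get(writeup_path, fetch_ticker)
    # Each nodelet is its own logical unit with its own lifetime, so the
    # catbox can be refreshed without refetching the writeup around it.
    nodelets = [cache.get(p, fetch_ticker) for p in nodelet_paths]
    return writeup, nodelets

# Forcing a fresh catbox (hypothetical path) without touching anything else:
# cache.invalidate("/ticker/chatterbox")
```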

Add a few slick interactive features (local bookmarks, scratchpad) and suddenly you'd have a tool that sold itself to the experienced Everythingian. Use Mozilla for emulation of the Web interface and cross-platform goodness. This project would have to be supervised by the E2 developers, of course. But if the caching mechanism were robust and clearly defined, this tool would cut bandwidth and improve the user experience. Whether true distribution could provide better bandwidth/processor savings depends on a lot of factors, such as distributed server availability and the mechanics of synchronization, but developing a robust client is a sure bet.

These ideas could also be used to implement a pseudo-distributed E2 server: specifically, a Web application that performed these caching functions for a bunch of users at once rather than one at a time. This could further reduce the central server load, but it would depend on having a sufficient number of these middleware servers, and it would provide none of the cool instant interactivity that the individual client could provide.
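A sketch of that middleware flavor, again hypothetical and built on Python's standard http.server just for brevity: a tiny proxy where every reader shares one cache entry per ticker, so N users cost the central server roughly one fetch per TTL window instead of N fetches.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

E2 = "https://everything2.com"        # assumed base URL
shared = TTLCache(ttl_seconds=300)    # one cache shared by all users

def fetch_raw(path):
    """One real request to the central server per stale entry."""
    with urllib.request.urlopen(E2 + path) as resp:
        return resp.read()

class TickerProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every user asking for this ticker shares the same cache entry.
        body = shared.get(self.path, fetch_raw)
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), TickerProxy).serve_forever()
```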