You know what I hate? When the May issue of your favorite magazine arrives on your doorstep on April 21st.

That having been said: I'm going to be tooting my own horn here a little bit, so if you're not down with that, just hit me with the minus sign and we'll be squaresies. Otherwise ...

A Frank Talk About Pageload Times

I've been adding a bunch of pageload timing code to the site, both for testing purposes and to look for places where pageloads can be improved on the code side. In particular, I've decided to focus on the nodelets first because of their ubiquity: not only are they on every page you load, they're on every page every other user loads, so they have a big impact on pageloads and site speed in general.

As you can imagine, E2 load times vary with many things, but the single factor that most affects E2 page loads is the time of day you're viewing the site. If you view the site during the "daytime hours" of North America, chances are your page loads are a lot slower than during the "nighttime hours." I don't know if this is more a function of traffic across the University of Michigan network (being on a university pipe in the age of BitTorrent seems exceedingly foolhardy!), across all of North America, or simply within the E2 site itself. There are some ways to test that, but I'm not interested in all of that right now.

In general, the time breakdowns for loading every single nodelet are as follows:

  • Server Time 05:00 - 13:00 (1AM - 9AM EST): 1.4 seconds - 2.8 seconds, with occasional bumps up to 3.6-4.5 seconds (usually when New Writeups dumps the cache and fetches an update)
  • Server Time 13:00 - 21:00 (9AM - 5PM EST): 2.1 seconds - 4.5 seconds, with occasional bumps up to 6.0-8.0 seconds (and some (RARE) instances of 9.0-10.0 seconds - 10 seconds just to load nodelets!)
  • Server Time 21:00 - 05:00 (5PM - 1AM EST): 1.9 - 4.4 seconds, with occasional bumps up to 5.8-7.6 seconds (and, again, some RARE instances of 8, 9, even 10 seconds)

The reason I'm giving all this data (besides appeasing the number-crunchers) is so you can understand that if I reduce a pageload by .1 seconds, that's great, but if I can reduce it by 10% every time, that's even better on average. Anyway, for the remainder of my discussion, I'm going to use my 21:00 - 05:00 data (5PM - 1AM), because that's when I ran most of my tests of old nodelet speed vs. new nodelet speed.

I Feel The Need - The Need For Nodelet Speed!

(This is gonna involve some technical discussion, partly so I can remember exactly what I did (in case everything breaks and we need to undo it), and partly to log some of the changes that still need to be made.)

What I did to make E2 faster was speed up all of the nodelets as best I could. Some of the nodelets are pretty much as fast as they're gonna get:

  • Things like Notelet, Recent Nodes and Personal Links are stored in your user VARS and are loaded on every page - plopping them down on your pages takes maybe 1/50 of a second each, even on the slowest load.
  • UPDATE: I sped Recent Nodes up by as much as 75% by removing a somewhat arbitrary piece of code that redirected recently visited writeups to their parent node. An admirable touch, but when it's the difference between a .2 second load and a .05 second load, you can just click the "see all writeups" link yourself if you so choose.
  • Other nodelets, such as Quick Reference and Vitals, are just static pages of links, so they also load pretty much as fast as Perl can parse them.
  • Random Nodes is a special nodelet, because it's generated pre-compile and cached for a long time (I think it's 5 minutes now, not 10 like in dem bones's node on the subject), so 99.9% of the time it's as fast as the aforementioned nodelets, and once in a while it's 0.1 seconds slower. No big deal.
  • Statistics is just like Random Nodes, only it updates even less. Again, no big deal.
  • Current User Poll just pulls the text from an existing node, which is a pretty straightforward procedure. This one takes the least amount of time after all the nodelets already listed. Improving this one is possible, but not a huge concern.
  • Surprisingly, Chatterbox is the next fastest nodelet. There is definitely a big jump in load times between Chatterbox and the Current User Poll and the rest, but it still rarely takes up more than 0.25 seconds of load time, and usually leans towards 0.1 or so seconds. More importantly, though, while the chatter code could probably be improved, it is pretty good as it stands, and is certainly not the bottleneck I thought it would be.
  • ReadThis comes in next, and while it's not the quickest code, it's not the biggest problem. I'll probably tackle this next.

This leaves four nodelets: Epicenter, New Writeups, Other Users, and Everything Developer. These are the problem children - on a slow page load, each of these nodelets could by itself contribute 1, 2, even 3 seconds of load time. They would singlehandedly whomp pageloads into the ground on a consistent basis, especially when the site is busy - as you can imagine, a busy site means more activity on Other Users, New Writeups, and updates to Epicenter. (Everything Developer was a bit more mysterious ...)

Now what you have to understand is that nodelet loads do not exist in a vacuum. If the entire page delivery system is slow, each nodelet is slowed down. What is true, however, is that each nodelet - almost without exception - takes up roughly the same proportion of the total load. Here's a general key for each nodelet's proportional share:

Nodelet               Time
Epicenter             10-20%
New Writeups          20-30%
Chatterbox            5-15%
Notelet               negligible
Quick Reference       negligible
ReadThis              2-3%
Random Nodes          negligible
Everything Developer  10-20%
Other Users           10-20%
Vitals                3-5%
Statistics            3-5%
Recent Nodes          2-3%
Current User Poll     3-5%
Personal Links        2-3%

Now this is based on loading every nodelet - if you removed Everything Developer, for example, all the other times would go up proportionally, but your entire nodelet load would go down, because you're removing about a 15% chunk of your average nodelet load time - something like half a second on every pageload. It's not going to make every page burn, but it's a start.
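To make the arithmetic concrete, here's a quick back-of-the-envelope check. The numbers are illustrative (a typical busy-hour total from the ranges above, not a measurement):

```python
# Back-of-the-envelope: what dropping a ~15% nodelet saves per pageload.
# Figures are illustrative, taken from the proportion table above.

total_nodelet_load = 3.0   # seconds: a typical busy-hour nodelet total
developer_share = 0.15     # Everything Developer's rough share of that

savings = total_nodelet_load * developer_share
remaining = total_nodelet_load - savings

print(f"saved {savings:.2f}s, nodelets now take {remaining:.2f}s")
```

So at a 3-second nodelet load, dropping a 15% nodelet saves about half a second, just as described.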

So from now on, when I talk about reducing "to a 10% load", I'm talking in reference to this list. Our goal is to get the entire load down by making every load as close to "negligible" as possible : )

(For editors and admins, the admin nodelet is about the equivalent of the Chatterbox in terms of pageload times.)

Everything Developer Nodelet Acquires Ring of Agility +5

My first surprise was how long Everything Developer took to load compared to the other nodelets. It pulls most of its data from pre-compiled variables before the page even loads, so it should have been pretty fast. Breaking down the load time of each section of Developer, I finally found the bottleneck: the Patch Manager code.

What was happening was that the code was set to find the next 7 patches that weren't implemented or closed. The problem was that its query pulled down every single patch, found the first 7 using a hard-coded comparison against the status (not 'closed' and not 'implemented', essentially) - and then used a "next if count==7" statement to skip every remaining patch by hand, one iteration at a time.

Solution? Add the closed-or-implemented check to the query itself, limit it to 7 rows, and just print everything it returns. Developer's pageloads were reduced pretty significantly after that.
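The before/after pattern looks roughly like this. This is an illustrative Python/SQLite sketch, not the actual Perl - the table and column names are hypothetical stand-ins for E2's schema:

```python
import sqlite3

# Sketch of the Patch Manager fix. Before: fetch every patch and filter/count
# in application code. After: push the status filter and the limit into SQL.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patch (patch_id INTEGER, status TEXT)")
db.executemany("INSERT INTO patch VALUES (?, ?)",
               [(i, ("closed", "implemented", "open")[i % 3]) for i in range(60)])

# Before (slow): pull down every row, then skip most of them by hand.
open_patches = []
for patch_id, status in db.execute("SELECT patch_id, status FROM patch"):
    if status in ("closed", "implemented"):
        continue
    if len(open_patches) == 7:
        continue            # the old code kept iterating even after 7 hits
    open_patches.append(patch_id)

# After (fast): let the database filter and limit in one pass.
fast = [row[0] for row in db.execute(
    "SELECT patch_id FROM patch "
    "WHERE status NOT IN ('closed', 'implemented') "
    "ORDER BY patch_id LIMIT 7")]

assert open_patches == fast   # same 7 patches, far less data shipped around
```

The win isn't just fewer loop iterations: the old version shipped every patch row out of the database only to throw most of them away.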

I also fixed a couple of broken links in there - the nodelet is still pretty load-heavy, so I would recommend it only for people who use it with some frequency (heck, even I don't use it that much.) I would also especially recommend that you collapse any of the sections you don't read or use, as a collapsed section gets skipped over more quickly in the code. If you collapse everything, it turns a 12% load into a less than 3% load.

Total Speed Gains: 0.3-0.5 seconds per load.

Other Users Nodelet Loses Weight, Feels Great

The Other Users nodelet is a curious beast: while it's fairly straightforward, there's a lot of calculation behind what it does. One of the hidden calculations (for eds and admins) is a number indicating whether a user has been with us less than 31 days. Lord Brawl had replaced some older code with a method that broke a user's createtime string down into Perl integers and checked them against the current time to determine the number. Luckily for us, MySQL is much more proficient at date math than Perl, and by swapping in some query math for Brawl's (excellently accurate) math, Other Users was cut down (for eds and admins!) from a 10-20% load to a less than 8% load.
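The shape of that change: instead of parsing the createtime string in Perl and doing integer date math, let the database answer "is this account younger than 31 days?" directly. A sketch using Python and SQLite's date functions (the real code is Perl against MySQL, where something like DATEDIFF(NOW(), createtime) < 31 would do the job; the table and dates here are made up):

```python
import sqlite3

# Sketch of the "newer than 31 days" check done in SQL instead of by
# hand-parsing the createtime string in application code.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user (title TEXT, createtime TEXT)")
db.executemany("INSERT INTO user VALUES (?, ?)", [
    ("oldbie", "2005-01-01 12:00:00"),
    ("newbie", "2006-04-20 12:00:00"),
])

# The database computes the account age in days for every row in one pass.
# (A fixed "now" is used here so the result is deterministic.)
rows = db.execute(
    "SELECT title, julianday('2006-05-01') - julianday(createtime) < 31 "
    "FROM user").fetchall()

print(rows)   # only "newbie" gets flagged as under 31 days old
```

One query replaces a per-user parse-and-compare loop, which is exactly where the 10-20% → under-8% drop came from.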

For non-eds and non-admins, this nodelet is pretty much optimized (although it contains some "fluffy" things, like the edev badge and the random actions code.)

Total Speed Gains: 0.4-0.8 seconds per page load.

Epicenter Nodelet - Awesomeified!

One of the neat, underrated E2 features is our weblog abilities. Any usergroup can start a weblog and get a link in their Epicenter that they can click while at any writeup to have it added to that group's weblog, so everyone can enjoy it. It's a great way to find and organize content without forcing that kind of philosophy on everyone. A real win-win.

The major problem with the weblog code (at least for admins) was that it ran the same weblog code on every document page (including restricted_superdocs, blech) for *every* weblog (because admins are omnipotent, naturally.) Even if you used the "hide" function, it would still load up the -ify htmlcode for every weblog - a guaranteed 1 second pageload on every writeup, superdoc, Scratch Pad or home page visit. In short, majorly annoying.

Luckily, the hidden weblogs are listed in VARS that load with every page, so I just added one line of code to skip the -ify! code entirely if the weblog is hidden. This check used to live inside the -ify! code itself, but the pre-emptive strike paid off: Epicenter loads lost that 1 second pageload for admins (and for people who are members of a lot of usergroups with weblogs!)
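The one-line fix amounts to checking the hide flag *before* invoking the expensive rendering code, not inside it. A minimal sketch (hypothetical names throughout - the real thing is a Perl htmlcode reading the user's VARS):

```python
# Sketch of the Epicenter weblog fix: consult the hidden-weblog list from
# the user's VARS first, so the costly "-ify" code never runs for hidden ones.

def render_weblog_links(user_vars, weblogs, ify):
    hidden = set(user_vars.get("hidden_weblogs", "").split(","))
    links = []
    for weblog in weblogs:
        if weblog in hidden:
            continue          # the one-line pre-emptive skip
        links.append(ify(weblog))
    return links

# Track how often the expensive call actually fires.
calls = []
def ify(weblog):
    calls.append(weblog)      # stands in for the slow -ify htmlcode
    return f"[{weblog}-ify!]"

links = render_weblog_links({"hidden_weblogs": "edev,news"},
                            ["edev", "news", "e2science"], ify)
assert links == ["[e2science-ify!]"]
assert calls == ["e2science"]   # hidden weblogs never touched the slow path
```

Moving a cheap check ahead of an expensive call is the whole trick; the old code paid the -ify cost first and checked "hidden" afterward.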

Note: You have to hit the "hide" button by the "-ify!" link to have it skipped. So only do this if you don't use the usergroup weblogs that often. You can always un-hide weblog links later.

Total Speed Gains: At least 1 second for admins.

New Writeups Nodelet - The Holy Grail of Pageload Madness

Finally we come to the real bad boy, New Writeups. New Writeups is, again, fairly straightforward for most people, but the machinations to create it are fascinating and troubling.

Basically, new writeups is real simple - pull the 50 newest writeups, and then spit out the author, a link to the full e2node and the writeup itself, rinse, repeat, exeunt. But there are a number of things mucking up the works:

  • the writeup database only records the parent e2node's id, not its name, so that has to be pulled from the database every time, too;
  • the writeup database also doesn't record the author, so that has to be pulled using the node table and the writeup_id;
  • that link to the specific writeup? The writeup database only records the writeuptype's id, so the actual name ('idea', 'person', etc.) has to be looked up as well;
  • For editors and admins, the code to show whether something is hidden (or not) and nuked (or not) has to be taken into account, too.

Needless to say, running this code against one particular writeup isn't so bad - it can be as fast as .006 seconds, sometimes as slow as .06 - but when you're pulling down 50 writeups, that's a range of .3 seconds to 3 seconds per pageload - and it's not always consistent from writeup to writeup when doing the voodoo it does so well.

So, the major fix that has been applied is moving a lot of the check-database, get-data, check-database-based-on-data steps into one big SQL statement. Now the author, writeuptype title, and e2node title get pulled down and cached, instead of having to be redetermined with every pageload. Interestingly, the SQL statement itself actually runs a little faster, too, because it doesn't have to join the writeup and node tables any more - it pulls two things from node in subselect queries, and everything else comes from writeup.
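The subselect shape looks roughly like this. Again an illustrative Python/SQLite sketch with a made-up miniature schema, not E2's actual node/writeup tables:

```python
import sqlite3

# Sketch of the consolidated New Writeups query: instead of one lookup per
# writeup for the parent e2node title and the writeuptype name, subselects
# against the node table pull everything in a single statement.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (node_id INTEGER, title TEXT)")
db.execute("CREATE TABLE writeup "
           "(writeup_id INTEGER, parent_e2node INTEGER, wrtype INTEGER)")
db.executemany("INSERT INTO node VALUES (?, ?)",
               [(1, "butterfly"), (2, "idea"), (10, "a writeup")])
db.execute("INSERT INTO writeup VALUES (10, 1, 2)")

rows = db.execute("""
    SELECT w.writeup_id,
           (SELECT title FROM node WHERE node_id = w.parent_e2node) AS parent_title,
           (SELECT title FROM node WHERE node_id = w.wrtype)        AS type_title
    FROM writeup w
    ORDER BY w.writeup_id DESC
    LIMIT 50""").fetchall()

assert rows == [(10, "butterfly", "idea")]
```

One statement replaces up to 50 × 3 follow-up queries, and the result can then be cached rather than redetermined on every pageload.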

In general, this has a positive effect on the pageloads of New Writeups. Usually, it runs somewhere in the neighborhood of 20-30% faster (i.e. reducing its share of the pageload from 20-30% to about 15-25%), and when it's cached, it's pretty blazing, relatively speaking.

Total Speed Gains: 0.5 - 1.5 seconds.

Conclusion: Things Can Get Better

Well, it looks like, all told, there was a gain of about 1.2-2 seconds per pageload during the nighttime - somehow that number seems a little high, but the numbers pretty much bear it out. Again, that's just time shaved off the nodelets, so all the other aspects of E2 that make it slow are still making it slow.

There are still some things that need improving in the nodelets. One resource we are not utilizing enough is the speed of SQL queries: the database is much better at filtering data than an endless series of unless and if statements.

As I mentioned earlier, the best thing you can do to reduce pageload is to:

a) turn off nodelets you don't use.
b) collapse sections of nodelets you don't use.
c) hide -ify! links you don't use.
d) turn your "show New Writeups" down as far as your mind will let you - 10, if you can help it.

I hope we can make some more speed improvements to the nodelets and the rest of the site - it's taken quite a bit of work just to make these meager changes - ironically because of the long pageloads. Things aren't perfect yet, but there's no reason E2 can't be as fast as most of the other major sites on the Internet.

Happy noding!
