Optimizing page load time

We launched the new walkerart.org late on December 1, and it’s been a great ride. The month leading up to launch (and especially the preceding week starting Thanksgiving Day, when I was physically moving servers and virtualizing old machines) was incredibly intense and really brought out the best in our awesome team. I would be remiss if I didn’t start this post by thanking Eric & Chris for their long hours and commitment to the site, Robin for guiding when needed and deflecting everything else so we could do what we do, and Andrew and Emmet for whispering into Eric’s ear and steering the front-end towards the visual delight we ended up with. And obviously Paul and everyone writing for the site, because without content it’s all just bling and glitz.

Gushy thanks out of the way, the launch gave us a chance to notice the site was a little … slow. Ok, a lot, depending on your device and connection, etc. Not the universally fast experience we were hoping for. The previous Walker site packed all the overhead into the page rendering, so with the HTML cached the rest would load in under a second, easy. The new site is heavy even if the HTML is cached. Just plain old heavy: custom fonts, tons of images popping and rotating, JavaScript widgets willy-nilly, third-party API calls…

Here’s the dirty truth of the homepage when we kicked it out the door December 1:

12/1: 2.6 MB over 164 requests.

Load times are pretty subjective depending on a lot of things, but we had good evidence of the page taking 4+ seconds from click to being usable — and MUCH longer in some cases. Everyone was clearly willing to cut us some slack with a shiny new site, but once the honeymoon is over we need to be usable every day — and that means fast. This issue pretty quickly moved to the top of our priority list the Monday after launch, December 5.

The first thing to tackle was the size: 2.6 MB is just way too big. Eric noticed our default image scaling routine was somehow not compressing jpgs (I know, duh), so that was an easy first step and made a huge difference in download size.

12/5: 1.9 MB.
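For the curious, the fix is basically a one-liner. Here’s a rough sketch of that kind of resize-and-save step using Pillow; the function name, paths, and quality setting are illustrative, not our actual scaling code:

```python
# Hypothetical sketch of a scale-and-save routine. The easy win is passing an
# explicit JPEG quality (and optimize=True) when saving, instead of writing
# the resized image out at effectively full quality.
from PIL import Image

def scale_image(src_path, dest_path, max_width):
    im = Image.open(src_path)
    if im.width > max_width:
        new_height = int(im.height * max_width / im.width)
        im = im.resize((max_width, new_height), Image.LANCZOS)
    im = im.convert("RGB")  # JPEG has no alpha channel
    # Before: something like im.save(dest_path, "JPEG", quality=100)
    im.save(dest_path, "JPEG", quality=80, optimize=True)

scale_image("hero.jpg", "hero-960.jpg", 960)  # example usage
```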

On the 6th we discovered (again, duh) lossless JPEG and PNG compression and immediately applied it to all the static assets on the site, but not yet to the dynamically generated versions. Down to 1.8 MB. We also set up a fake Content Delivery Network (CDN) to help “parallelize” our image downloads. Modern browsers allow six simultaneous connections to a single domain, so by hosting all our images at www.walkerart.org we were essentially trying to send all our content through one tiny straw. Chris was able to modify our image generator code to spread requests across three new domains: cdn0.walkerart.org, cdn1, etc. This forgoes the geography and fat pipe of a real CDN, but it does give the end user a few more straws to suck content through.

Requests per Domain

www.walkerart.org 26
cdn1.walkerart.org 24
cdn0.walkerart.org 24
cdn2.walkerart.org 21
f.fontdeck.com 4
other 7
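If you’re wondering how the requests spread that evenly: the trick is to make the shard deterministic, so a given image always maps to the same hostname and stays cacheable. Here’s a sketch of the idea using a hash of the path; the cdn0-cdn2 hostnames are ours, but the hashing scheme shown is just an illustration, not necessarily what Chris wrote:

```python
# Deterministic domain sharding: hash the image path so the same image is
# always requested from the same cdnN host (keeps browser caches warm),
# while different images spread roughly evenly across the three hosts.
import zlib

CDN_HOSTS = ["cdn0.walkerart.org", "cdn1.walkerart.org", "cdn2.walkerart.org"]

def cdn_url(image_path):
    shard = zlib.crc32(image_path.encode("utf-8")) % len(CDN_HOSTS)
    return f"http://{CDN_HOSTS[shard]}{image_path}"

print(cdn_url("/media/cache/homepage/hero.jpg"))  # always the same host for this path
```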


By the 8th we were ready to push out global image optimization and blow away the cache of too-big images we’d generated. I’m kind of astounded I’d never done this on previous sites, considering what an easy change it was and what a difference it made. We’re using jpegoptim and optipng, and it’s fantastic: probably 30% lossless savings on already-compressed JPEGs and PNGs. No-brainer.

12/8: 1.4 MB, almost half of what we launched with.
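The batch pass itself is nothing fancy. It looks roughly like this, assuming jpegoptim and optipng are installed (a sketch, not the exact script we ran):

```python
# Walk an image directory and run the lossless optimizers over everything.
# jpegoptim only rewrites a file if the result is smaller; --strip-all also
# drops metadata. Both tools work in place. The path below is hypothetical.
import os
import subprocess

def optimize_tree(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            lower = name.lower()
            if lower.endswith((".jpg", ".jpeg")):
                subprocess.call(["jpegoptim", "--strip-all", path])
            elif lower.endswith(".png"):
                subprocess.call(["optipng", "-quiet", path])

optimize_tree("/var/www/media/cache")  # e.g. the generated-image cache
```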

Next we needed to reduce the number of requests. We pushed into the second weekend with a big effort to optimize the JavaScript and CSS. Earlier attempts using minify had blown up and were abandoned. Eric and Chris really stepped up to find a load order that worked and a safe way to combine and compress files without corrupting the contents. Most of the work was done Friday, but we opted to wait for Monday to push it out.
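The combining half is conceptually simple, even if getting it right took some care. Here’s a bare-bones sketch with a hand-maintained load order, using a separator that keeps one file’s missing trailing semicolon from corrupting whatever gets appended next (file names are hypothetical, and the actual minification step isn’t shown):

```python
# Concatenate JS files in an explicit load order. Joining with ";\n" guards
# against a file that ends without a semicolon swallowing the start of the
# next one. CSS can be combined the same way with a plain "\n" separator.
JS_LOAD_ORDER = ["jquery.js", "plugins.js", "walker.js"]  # hypothetical names

def combine(paths, out_path, separator=";\n"):
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as f:
                out.write(f.read().rstrip())
            out.write(separator)

combine(JS_LOAD_ORDER, "combined.js")
```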

Meanwhile, I spent the weekend pulling work from the client’s browser back to the server, where we could cache it site-wide. This doesn’t really impact bytes transferred, but it does remove a remote API call, which could take anywhere from a fraction of a second (unnoticeable) to several seconds in a worst-case scenario (unusable). This primarily meant writing code to regularly call and cache all of our various Twitter feeds and the main weather widget. These are now served in the cached HTML at negligible cost to load time, instead of 200+ ms per call on average. It all adds up!
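The pattern is simple: a cron job fetches each remote feed every few minutes and writes it to disk, and the page templates read the cached copy instead of hitting the API from the browser. Something like this sketch, where the URLs and cache path are placeholders rather than our actual feeds:

```python
# Cron-driven fetch-and-cache for third-party feeds. If a fetch fails or
# times out, we just keep serving the last good copy from disk.
import json
import os
import urllib.request

FEEDS = {
    "twitter_main": "https://example.com/twitter/walkerartcenter.json",  # placeholder
    "weather": "https://example.com/weather/minneapolis.json",           # placeholder
}

def refresh_feeds(cache_dir="/var/cache/walker-feeds"):
    for name, url in FEEDS.items():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                data = json.load(resp)
        except Exception:
            continue  # leave the previous cached file in place
        with open(os.path.join(cache_dir, f"{name}.json"), "w") as f:
            json.dump(data, f)

if __name__ == "__main__":  # run from cron every few minutes
    refresh_feeds()
```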

CSS Sprite for Header and Footer nav images (it has a transparent background, so it’s supposed to look like that):

So Monday, 12/12, we pushed out our first big changes to address the number of requests. Eric had combined most of the static PNGs into a CSS sprite, the JavaScript and CSS were reduced to fewer files, and the third-party APIs were no longer called in the browser. Really getting there now.

12/12: 1.37 MB, and 125 requests
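For anyone who hasn’t built a sprite before, the sheet itself can be stitched together with a few lines of Pillow, something like this sketch (the icon file names are made up, and this isn’t the actual build step Eric used):

```python
# Stack a handful of nav icons into one vertical sprite sheet and print the
# CSS background-position offset to use for each one.
from PIL import Image

icon_paths = ["nav-home.png", "nav-search.png", "nav-calendar.png"]  # hypothetical
icons = [Image.open(p).convert("RGBA") for p in icon_paths]

sheet_w = max(im.width for im in icons)
sheet_h = sum(im.height for im in icons)
sheet = Image.new("RGBA", (sheet_w, sheet_h), (0, 0, 0, 0))  # transparent background

y = 0
for path, im in zip(icon_paths, icons):
    sheet.paste(im, (0, y))
    print(f"{path}: background-position: 0 -{y}px")
    y += im.height

sheet.save("nav-sprite.png")
```

One request for the whole sheet instead of one per icon is where the request count drops.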

Happily (as I was writing this) Eric just pushed out the last (for now) CSS sprite, giving us these final numbers:

12/13: 1.37 MB, and 110 requests! (down to 53% and 67% of our launch numbers, respectively)

This isn’t over, but it’s gotten to the point of markedly diminishing returns. We’re fast enough to be pretty competitive and no longer embarrassing on an iPad, but there are a few more things to pick off around the edges. We’re heavier and slower than most of our museum peers, but lighter and faster than a lot of similar news sites. Which one are we? Depends which stats I want to compare. :)

We used the following tools to help diagnose and prioritize page loading time issues:
http://tools.pingdom.com/fpt/
https://developers.google.com/pagespeed/
http://developer.yahoo.com/yslow/