Over the past couple of years the study of Web page performance optimization has matured, and various rules have emerged. Steve Souders kicked it off with his original 14 Rules. Then his former team at Yahoo! expanded this to 34 Best Practices (yikes!). Microsoft has 12 Steps of their own.
Some of these optimizations are for working around browser behaviors (e.g. ordering page content in certain ways). These are certainly important, but I think the most impactful performance optimizations fall into two categories:
1 – Make things smaller – this means both making the size of the page smaller in terms of bytes (e.g. via compression), as well as making the page smaller in terms of the number of objects that comprise it (e.g. combine 10 individual images into one sprite file).
2 – Get things closer to the user – this means distributing content to servers that are physically closer to the user (e.g. via CDN usage), as well as ensuring static content is appropriately cached within the user’s browser.
But where should you start? What are the most important (biggest bang for the buck) optimization techniques to improve performance? Well, it depends 🙂
One commonality for all interactions between a user and a Web site is the network connecting them. The characteristics of this network will help you determine where to start your optimization work. Does it have high or low bandwidth? Does it have high or low latency (i.e. round trip time)? Why does this matter?
The formula below (simplified for illustrative purposes) shows the rough relationship between network and page characteristics, and page load performance.
The first term shows the impact of network latency and the number of objects on the page, while the second term shows the impact of page size and network bandwidth.
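The formula itself appeared as an image in the original post. Reconstructed from the description above (the exact form is my assumption, simplified the same way the post describes), it was roughly:

```latex
\text{Load Time} \;\approx\;
\underbrace{N_{\text{objects}} \times RTT}_{\text{latency term}}
\;+\;
\underbrace{\frac{\text{Page Size}}{\text{Bandwidth}}}_{\text{transfer term}}
```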
For example, if you have a very low bandwidth connection (think dial-up), that second term can become very large and be the primary driver of page load time. In this situation, a good way to improve load time is to reduce page size.
On the other hand, if your network bandwidth is very high (think Cable Modem or FiOS), the second term approaches zero (meaning page size has little impact on load time). In this case, the best way to improve load time is to reduce the number of page objects.
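To see which term dominates in each scenario, here's a tiny Python sketch of the two-term relationship described above (the function name and exact formula are my assumptions, for illustration only):

```python
# Rough model of the two-term load-time relationship described above.
# This is a sketch; the exact form of the original formula is an assumption.
def load_time(num_objects, rtt_s, page_bytes, bandwidth_bps):
    """Approximate page load time: latency term + transfer term."""
    latency_term = num_objects * rtt_s                # round trips for objects
    transfer_term = (page_bytes * 8) / bandwidth_bps  # bits / bits-per-second
    return latency_term + transfer_term

# A 500 KB page with 50 objects over a 100 ms RTT link:
dialup = load_time(50, 0.100, 500_000, 56_000)      # ≈ 5 s latency + 71 s transfer
cable = load_time(50, 0.100, 500_000, 50_000_000)   # ≈ 5 s latency + 0.08 s transfer
```

On dial-up the transfer term swamps everything (shrink the page); on cable it nearly vanishes, leaving the latency term (reduce object count).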
So, to know where to start your optimization efforts, you really need to understand the network connectivity of your users.
Looking back at the two main optimization categories described above, the discussion thus far has concentrated on the 1st category. Or has it?
Using a CDN, as suggested in the 2nd category, to get content closer to users is really about reducing network latency (see the formula above).
And having proper cache-control headers on page objects is certainly important. But don’t fall into the trap of ignoring First View performance (when no page objects are in the browser cache) on the assumption that it’s only a one-time hit and subsequent views (Repeat Views) will load most things from the cache. In reality, a surprising number of Repeat Views behave more like First Views (Browser Cache Usage – Exposed).
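As a sketch of what "proper cache-control headers" might look like in practice, here's a hypothetical helper that picks a Cache-Control value by file type (the function name and specific policy values are my assumptions, not recommendations from the post):

```python
# Hypothetical helper: choose a Cache-Control header value for a response.
# The policy below is illustrative only: long max-age for versioned static
# assets, revalidation for everything else.
def cache_control_for(path: str) -> str:
    static_extensions = (".css", ".js", ".png", ".jpg", ".gif", ".woff2")
    if path.endswith(static_extensions):
        # Long-lived static assets: let browsers cache for up to a year.
        return "public, max-age=31536000"
    # HTML and other dynamic content: revalidate on every request.
    return "no-cache"

print(cache_control_for("/img/logo.png"))  # public, max-age=31536000
print(cache_control_for("/index.html"))    # no-cache
```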
Last, but not least, something that doesn’t fit neatly into either of the two categories above: persistent connections (a.k.a. keep-alives). Without these, the client must establish a server connection for every object on the page. That’s an extra round-trip (well, technically 1.5 round-trips) for each HTTP connection, and many more for each HTTPS (SSL) connection.
It’s always helpful to see real numbers to assess the impact of various optimizations. Pat Meenan and Ryan Doherty have applied some of the techniques in a step-wise fashion and measured the impact.