I haven’t posted anything here for a while. Because: reasons.
Leaving that old post on the front page was a little embarrassing, so here’s something *new* :-).
But seriously, if I find the time and subjects of interest to begin blogging again, this is where you’ll find it. Until then, see you in the funny papers.
P.S. As always, my contact info is here.
One of the features of HTML 5 is called localStorage.
localStorage was originally intended to overcome many of the limitations of cookies for storing context information on a user’s computer. But, like everything else on the Web, folks are starting to use it in new and interesting ways.
In a recent article, Steve Souders discusses his discovery that the Bing and Google mobile sites are using localStorage instead of the browser's object cache to store JS, CSS, and even images.
One of the problems folks developing mobile Web sites have discovered is that mobile browser caches are very small, so objects get aged out of the cache quickly, diminishing caching benefits. The resulting additional object traffic can be especially painful on the typically higher latency, lower bandwidth mobile networks.
It seems that localStorage is becoming the ‘new caching’ in the mobile world. But will it also make an appearance on the desktop?
By default, each origin receives 5 MB of space on the user’s computer. It’s easy to see the attraction of having a dedicated 5 MB of storage, instead of dealing with a shared cache, and various browser cache management mechanisms.
Is this the panacea that it seems? Some folks are citing performance issues with some browsers due to the synchronization needed to access localStorage.
Nevertheless, it’s a very interesting alternative (or complement?) to object caching. And time will tell if it has legs.
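The localStorage-as-cache idea is simple enough to sketch in a few lines. This is a minimal illustration, not Bing's or Google's actual scheme — the versioned key format and function names here are my own invention, and the in-memory fallback exists only so the sketch runs outside a browser:

```javascript
// Feature-detect localStorage; fall back to an in-memory store so the
// sketch is runnable outside the browser too (e.g. in Node).
const store = (typeof localStorage !== "undefined")
  ? localStorage
  : (() => {
      const m = new Map();
      return {
        getItem: (k) => (m.has(k) ? m.get(k) : null),
        setItem: (k, v) => { m.set(k, String(v)); },
      };
    })();

// Cache a script's source text under a versioned key (hypothetical
// scheme) so stale copies are simply ignored after a new release.
function cacheScript(name, version, source) {
  try {
    store.setItem(`js:${name}:${version}`, source);
  } catch (e) {
    // setItem throws when the per-origin quota (~5 MB) is exceeded;
    // a real implementation would evict old entries here.
  }
}

function loadScript(name, version) {
  return store.getItem(`js:${name}:${version}`); // null on a cache miss
}

cacheScript("app", "1.0.0", "console.log('hello');");
console.log(loadScript("app", "1.0.0")); // cache hit: the source text
console.log(loadScript("app", "2.0.0")); // new version: cache miss (null)
```

On a hit, the page can inject the cached source via an inline `<script>` element and skip the network round trip entirely — which is exactly why this is attractive on high-latency mobile connections.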
Velocity, THE Web Performance and Operations Conference, was a runaway success again this year. It was sold out with 1,200 attendees – making it larger than the previous two years, combined.
As usual, the technical talks were outstanding. But I found some of the non-technical talks of particular interest.
So many of the challenges we face (performance-related, or otherwise) often have roots in cultural or organizational issues rather than technical issues. Several presentations explored this, and offered suggestions. These were my favorites:
– Creating Cultural Change by John Rauser (Amazon), Video
– Excerpts from Choose your own adventure by Adam Jacob (OpsCode), Video 1, Video 2
– Moving Fast by Bobby Johnson (Facebook), Video
You can watch them all in less than an hour. Go get some popcorn…
At last year’s Velocity Conference, many folks shared evidence of the impact of page speed on Web site usage.
Since then, others have provided even more evidence. Like Every Millisecond Counts and Making Facebook 2x Faster, from the engineers at Facebook. And Proof that speeding up websites improves online business, from the guys at Strangeloop Networks and Watching Websites.
More recently, the folks at Mozilla shared interesting results on the impact of page load speed and conversion rates in a series of posts (part I and part II).
Oh, and the groundbreaking news that Google will now include page speed as a factor in search ranking. 😉
Many Web pages are composed of content from a variety of sources. The base page (i.e. the HTML of the main page) may draw in content from one or more Content Delivery Network (CDN) providers, multiple ad providers, widget providers, partners, etc. When a page shows degraded performance, how do you quickly identify who is responsible?
Stoyan Stefanov came up with the idea of a Performance Advent Calendar, and is posting an article each day on Web performance. I have the honor of contributing this guest post, repeated below.
When we try to quantify the performance of a Web site, we most commonly mean its response time. The two most common ways to gather response time data are field metrics and synthetic measurement.
To quantify the performance of Web sites, there are lots of metrics:
And metrics provide a great deal of insight. But sometimes we get lost in the metrics. Sometimes metrics alone don’t tell the whole story. What we need is a picture.
If you’ve spent any time making Web sites faster, you’ve undoubtedly come across the work of Steve Souders. In addition to working at Google, speaking about Web performance across the world, and teaching a class at Stanford, he finds time to write books!
Steve’s first book, High Performance Web Sites, was one of the first of its kind to assemble and codify the best-practices for improving Web site front-end performance. His latest book, Even Faster Web Sites, picks up where the first left off, and dives even deeper into performance optimization techniques.
Again this year, the Velocity Conference was a huge success. Exceeding last year’s attendance, more than 700 people came together in San Jose to geek out on Web Operations and Performance.
A focus this year was the impact of Web site performance on business metrics, such as revenue – putting some teeth into why performance matters.
We know the various rules for improving Web site performance. But are sites employing them? Which rules are used most? What impact are they having?
The webpagetest.org site (powered by AOL PageTest) has been active for nearly a year now, and has collected performance metrics for more than 20,000 Web pages.
Pat Meenan recently reviewed those results, looking at the cumulative distributions of various metrics and optimization scores. The results are pretty eye-opening.