More on Build Processes for the Web
My last screencast was about iteration and build processes for web development. Here’s a bit more on the build process.
The big 4 for web performance are (in this order):
- Smart caching. The fastest resource request is the one that never happens.
- Gzip compression. Text resources should be compressed before they leave the server.
- Image optimization. Images are still the biggest pipe suck for the majority of sites.
- Concatenation and minification. Fewer HTTP requests and smaller file sizes are better.
Chris Coyier gave one of his usual amazing talks on just this subject.
The first two problems are part of your server configuration. If you're on Apache, grab the .htaccess file from HTML5 Boilerplate and drop it into your web root (or fold its directives into your httpd.conf). It'll fix the first two problems and a bunch of others. The last two, ye developer, are in your court.
Image optimization can save a remarkable amount of bandwidth and cut page load time significantly. The best tool I've used is Trimage (Linux), which is lossless, handles PNG and JPG, and can be automated via the command line as part of your build process. You only have to run it on an image once; running it again on the same image won't do anything (hence lossless). As such, I'll usually comment it out of my build process once the application is mature. Trimage was inspired by ImageOptim, which is the tool to use on OS X; I haven't found anything I like on Windows yet.
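As a rough sketch, the automated step might look like the following (the `img/` directory name and the `-f` flag are assumptions; check `trimage --help` for the exact options on your system):

```shell
#!/bin/sh
# Hypothetical build step: losslessly optimize every PNG and JPG under img/.
# Safe to re-run; an already-optimized image won't shrink further.
IMG_DIR="img"
mkdir -p "$IMG_DIR"   # no-op if the directory already exists
if command -v trimage >/dev/null 2>&1; then
  find "$IMG_DIR" -type f \( -name '*.png' -o -name '*.jpg' \) -exec trimage -f {} \;
else
  echo "trimage not found; skipping image optimization"
fi
```

Because the step is idempotent, it's cheap to leave in the script until the image set stabilizes, then comment it out.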
Concatenation and Minification
You’re doing two things here. The first is reducing the number of HTTP requests. The number of simultaneous requests a browser will make to a single server varies by browser, and currently ranges from 2 to 8. Once you hit that limit, further requests queue up, which slows page load. The second thing you are doing is minifying the concatenated file: removing unnecessary characters and whitespace from the code, which shrinks the file that travels over the network.
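The concatenation half is nothing fancier than `cat`; a minimal sketch with made-up file names (a real build would then run a minifier such as UglifyJS or YUI Compressor over the combined file):

```shell
#!/bin/sh
# Concatenate the source scripts into a single file to cut HTTP requests.
# File names here are illustrative.
mkdir -p src build
printf 'var a = 1;\n' > src/a.js
printf 'var b = 2;\n' > src/b.js
cat src/a.js src/b.js > build/app.js
wc -c build/app.js   # size of the combined bundle
```

The HTML then links only `build/app.js`, so the browser makes one request instead of one per source file.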
A common misunderstanding is that you don’t have to minify code if it is getting gzip’d before it leaves the server, as that will smush it up for you anyway. That is not the case; you should do both.
Original: 28,781 bytes
Minified: 22,242 bytes
Gzipped: 6,969 bytes
Min+Gzip: 5,990 bytes
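You can reproduce the comparison with nothing but `gzip` and `wc`. Here's a throwaway demo on a generated stylesheet (the "minification" is a crude whitespace strip, just to show each step shaving bytes; your numbers will differ from the table above):

```shell
#!/bin/sh
# Generate a repetitive stylesheet, then compare raw, minified, and gzipped sizes.
: > demo.css
i=0
while [ "$i" -lt 50 ]; do
  printf '.rule%d {\n    color: red;\n}\n\n' "$i" >> demo.css
  i=$((i + 1))
done
# Crude "minification" for the demo: strip newlines and indentation.
tr -d '\n' < demo.css | sed 's/    //g' > demo.min.css
echo "original: $(wc -c < demo.css) bytes"
echo "minified: $(wc -c < demo.min.css) bytes"
echo "gzipped:  $(gzip -c demo.css | wc -c) bytes"
echo "min+gzip: $(gzip -c demo.min.css | wc -c) bytes"
```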
Side Note: Invalidating Cache on Individual Files
There are various ways to deal with this. My preferred method is adding a phony GET parameter to the request.
<link rel="stylesheet" href="css/main.css?foo=2735">
How I Do It
I do this in two stages. Concatenation happens as part of my iteration process, so I can point the HTML links at what will be their final locations. I use Guard for this, which also preprocesses my Sass into a single CSS file.
The second stage is the build script: Windows gets a PowerShell script and Linux gets a bash script.
Both scripts minify the concatenated files and handle cache busting by generating a random number and writing it into index.html via some regex text replacement. The only difference is that the bash script also optimizes images. Once the build script is run, the application is ready for deployment on a web server.
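The regex replacement is a one-liner. A bash sketch (the `foo` parameter name matches the side note above; `sed -i.bak` is used so it works on both GNU and BSD sed):

```shell
#!/bin/bash
# Hypothetical cache-busting step: swap in a fresh random number so
# browsers refetch the stylesheet after each deploy.
printf '<link rel="stylesheet" href="css/main.css?foo=2735">\n' > index.html
BUST=$RANDOM   # bash builtin; use $(date +%s) on plain sh
sed -i.bak "s/main\.css?foo=[0-9]*/main.css?foo=$BUST/" index.html
cat index.html
```

The PowerShell version would do the same thing with `Get-Random` and `-replace`.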
Easier Than It Looks
This looks like a lot of work, and setting it up the first time can be. After that it’s a piece of cake. When I want to work on an app I fire up Guard, which handles my livereload and preprocessing/concatenation. When I’m ready to deploy, I hit CTRL-B in Sublime Text and it minifies, invalidates the cache, and optimizes images. Getting the iteration and build processes set up for a new project takes minutes. And the most important thing: it’s automated. For a busy dev, no matter how compelling the benefits, if you can’t automate it, it probably isn’t going to happen.