How to Make Screencast Tutorials

NOBODY READS YOUR HELP DOCUMENTATION

Don’t believe me? If you have online help, knock it offline. If you have desktop software with a .chm or whatnot, rename the file or replace it with an empty one. See how long it takes before someone notices. You’ll be waiting a long time. You might be waiting forever.

I have to admit I’m guilty of the help-doc illiteracy that is running rampant. This is only true of visual, GUI/mouse-type interfaces - I have no problem reading a man page for a Linux command. I think there’s a gremlin in my brain that believes a GUI is supposed to be intuitive, and that if I have to read docs on a GUI then it must be a bad GUI. There are all kinds of reasons why this thinking is wrong, but my gremlin is persistent. When I ask some of my users whether they read the help documentation I painstakingly made for them, the vacuous looks I get in return lead me to believe lots of people have a similar gremlin.

What people will do, however, is watch a quick video. Intelligent people will drop every scrap of written documentation they have to watch a YouTube video of somebody finding a creative way to bash themselves in the privates. So here’s what you need to make your own screencast tutorials.

First, you need something to capture a video of a part of your screen. On Windows I’d recommend CamStudio, and on Linux I like recordMyDesktop (running all the words in your software title together must be hip now). Both of these screen recorders are free and open source.

Both packages are very easy to use. They let you define an area of the screen to record and output a video file (AVI for CamStudio, OGG for recordMyDesktop). The two ways to annotate your tutorial are by voice (microphone input) or by text. If you choose text, remember to make it big enough to stay readable if you later resize the video, and learn the shortcut for pausing the recording - you’ll need to pause to put different text on the screen. You can also dub in audio later, using something like Audacity.

If you have something you like and you’re making a desktop application, consider yourself done. But if you want to show the video on the web, you’ll need to convert it to something Flash-friendly. You could try uploading it to YouTube/Google Video, but I’ve found their video quality too rough for tutorial purposes. So let’s look at embedding it in our own site.

We need to convert the video twice. First, we’ll convert it to Flash video format (FLV), which anybody with a Flash plugin can view. Next, we’ll encode it with the higher-quality H.264 codec, which requires Flash 9.0.115 or later. We do both so we can support users who haven’t upgraded their Flash plugin, say, ever.

There are lots of tools that support audio and video format conversion, some of which will cost serious coin. Personally, I like FFMPEG, which is a FOSS command line tool available for Linux or Windows (Windows binaries here). If you’re using Ubuntu, enable the medibuntu repositories and update FFMPEG - the version that ships with Ubuntu has its h.264 codecs borked.
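Before converting anything, it’s worth checking that your FFMPEG build actually has H.264 encoding support (the encoder is called libx264). The exact flag varies by version - newer builds list codecs under -codecs, older ones under -formats:

```shell
# Look for the libx264 encoder in FFMPEG's codec list.
# Older FFMPEG builds list codecs under -formats instead of -codecs.
ffmpeg -codecs 2>/dev/null | grep -i 'x264'
```

If nothing comes back, your build can’t encode H.264 and you’ll need one compiled with libx264 support.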

To convert from AVI to FLV and h.264, use commands like these (note that FFMPEG’s H.264 encoder is called libx264):

ffmpeg -i input.avi -vcodec flv output.flv
ffmpeg -i input.avi -vcodec libx264 output.mov

There are a bazillion FFMPEG options you can set to alter video and audio quality. If you need to resize the video while converting (say, to 320x240), add this option to the command:

-s 320x240

If you want to trade some quality for a smaller file, raise the quantizer limits (higher values mean more compression and lower quality):

-qmin 35 -qmax 40
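Putting those together, here’s a single pass that converts, resizes, and compresses harder (input.avi and output.flv are placeholder names):

```shell
# One pass: convert to FLV, resize, and compress more aggressively.
# input.avi / output.flv are placeholder names; adjust to taste.
size="320x240"
ffmpeg -i input.avi -vcodec flv -s "$size" -qmin 35 -qmax 40 output.flv
```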

For our high-quality h.264 video, we need to do one more thing. For Flash to progressively stream the video (start playing as it loads rather than waiting for the whole thing to download), we need to rearrange its internals - basically, move the file’s metadata (the “moov atom” or some such nonsense) to the beginning of the file. We do this with qt-faststart, which you’ll have on Ubuntu and can get compiled for Windows here. It’s a simple command line tool, taking just your input MOV and your output MOV:

qt-faststart input.mov output.mov

You can also get an Adobe AIR version of qt-faststart here.

Now we have our video files ready for streaming, which means we need a Flash player. I recommend Flowplayer, which, as you have probably guessed, is FOSS. Flowplayer is dead simple to use, and will natively stream both FLV and h.264 files.

The Flowplayer download includes lots of examples, and you can look at the source of the page I wrote for one of my sites here (it normally comes up in a lightbox in Geospatial Portal). The only tricky bit is the url variable - that’s where it detects whether the user’s Flash plugin supports h.264 and picks the video file accordingly. Other than that, it’s just a tiny smattering of JavaScript.

You will always have some customers who refuse to look at any documentation, whether it be written, video, or live performance art. But having a video will make your application more accessible to most of your users, which in turn helps keep your users out of your office. I have another gremlin in my head that is extremely pleased with that prospect.