Video Compression Secrets: Smaller Files, Better Quality

How you compress your video, and what format you compress it into, makes a difference. But what do we know about compressing video beyond using the Adobe Media Encoder or a standalone program like Sorenson Squeeze (or DivX, Microsoft, Apple, etc.) and several other video compressors? Does the file type you create matter? Yes. Does the file size matter? Yes. Let's talk about file size first.

What affects file size?

Any timeline-based project (Figure 1) must be compressed in order to put it on YouTube, Vimeo, your servers, or wherever you need to store and access it. While you can stream an uncompressed video file (.avi), it's not easy. The bandwidth requirements are such that your IT department will not have anything nice to say about it.

The video you upload to YouTube gets compressed the way YouTube wants to compress it, no matter how you compressed it originally. Although the server sizes seemingly approach infinity at YouTube, when it comes to uploading, the only worry about file size is how long it might take to upload a large file. It’s the streaming part that matters. YouTube, and a few of the other video services, also automatically deliver different video codecs to different platforms like iOS or Windows. If you’re delivering to multiple platforms, it’s worth a look.

Video file size depends on many variables: HD (High Definition) vs. SD (Standard Definition), frame rate, color depth, even the amount of movement in the video. There are three measurements that count for a lot in determining the final file size.

First, the actual pixel dimensions of the finished product (Figure 2). Standard video (SD) is generally 720 X 480 pixels, which equals 345,600 total pixels per frame. If you can do native HD video, the dimensions are 1,920 X 1,080, which equals 2,073,600 pixels. That's a lot more pixels per frame to stream.
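To put those pixel counts in perspective, here is a rough sketch of the raw, uncompressed data rate each frame size implies. The 24-bit color depth and 30 frames per second are assumptions for illustration, not figures from the article; real projects vary.

```python
# Ballpark uncompressed data-rate math for SD vs. HD video.
# Assumes 24-bit color (3 bytes per pixel) and 30 frames per second.

def uncompressed_rate_mbps(width, height, fps=30, bytes_per_pixel=3):
    """Raw video data rate in megabits per second, before any compression."""
    bytes_per_frame = width * height * bytes_per_pixel
    return bytes_per_frame * fps * 8 / 1_000_000

print(720 * 480)       # 345,600 pixels per SD frame
print(1920 * 1080)     # 2,073,600 pixels per HD frame
print(round(uncompressed_rate_mbps(720, 480)))     # roughly 249 Mbps raw
print(round(uncompressed_rate_mbps(1920, 1080)))   # roughly 1493 Mbps raw
```

Numbers like these are why nobody streams raw video: even SD at these assumptions is far beyond a typical connection, and compression has to close that gap.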

An explanation of containers and codecs

When it comes to your rendered video, there are two kinds of software in play. The first is the format, or container, that the server sends to the browser. The viewer's browser needs the correct decoder add-on installed in order to see the video. There are several of these containers, but most of the time we use the Flash (.flv or .swf), QuickTime (.mov), or Windows Media Video (.wmv) containers. To be sure, there are more, but I'm going to keep this explanation simple.

The second part of a video is the codec inside the container. A codec consists of two parts: the encoder, which is what you use while you're encoding the video from your timeline or file, and the decoder, which resides on the viewer's computer as an add-in to decode the video inside the container. The codec is what actually encodes and compresses the video when you're rendering it.

A very common codec is H.264. It’s called a block codec because it looks for differences in blocks of video as it encodes and plays back. H.264 is used for HD video compression. Now, there is no reason H.264 can’t compress standard-definition video—note that this is true for all codecs. Most of my clients work in SD video or NTSC to keep streaming rates lower. You should also note that if your finished video is going into an Articulate Studio or Storyline project, Articulate will not show HD video … yet. In fact, the dimensions for Articulate have been 640 X 480, an unusual size that fills the template screen.

A second widely used codec is Windows Media Video (also known as VC-1). Along with H.264, it is one of the standard codecs for Blu-ray discs. Really. The .wmv container can hold other codecs as well, but for purposes of this article I'm going to use .wmv because it is easy to encode from the timeline in Premiere Pro.

Conceptually speaking, compressing video is actually pretty simple. If you look at a video from one frame to the next, some pixels change and some are exactly the same as in the frame before. Why encode a pixel that's exactly the same as a pixel in the frame before? That's a waste of bytes. So the software essentially ignores the pixels that don't change and encodes only the ones that do. If only it were that simple. There are very complex algorithms that go along with this, but the essence is the same.
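The idea above can be sketched in a few lines of code. This is a toy illustration only: real codecs like H.264 work on two-dimensional blocks with motion estimation and entropy coding, while this sketch just stores a full first frame and then, for each later frame, only the pixels that changed.

```python
# Toy interframe compression: keep a full keyframe, then record
# only the changed pixels for each subsequent frame.

def delta_encode(frames):
    """Return the first frame plus, per later frame, a dict of changed pixels."""
    keyframe = list(frames[0])
    deltas = []
    prev = keyframe
    for frame in frames[1:]:
        changed = {i: v for i, v in enumerate(frame) if v != prev[i]}
        deltas.append(changed)
        prev = frame
    return keyframe, deltas

def delta_decode(keyframe, deltas):
    """Rebuild every frame from the keyframe plus the per-frame changes."""
    current = list(keyframe)
    frames = [list(current)]
    for changed in deltas:
        for i, v in changed.items():
            current[i] = v
        frames.append(list(current))
    return frames

# A "video" of three 8-pixel frames where two bright pixels drift right.
video = [
    [0, 0, 0, 9, 9, 0, 0, 0],
    [0, 0, 0, 0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0, 9, 9, 0],
]
key, deltas = delta_encode(video)
assert delta_decode(key, deltas) == video
print(deltas)  # each delta holds just two changed pixels, not eight
```

Note how each delta stores two pixels instead of eight; that gap between "everything" and "what changed" is where the compression comes from, and the complex algorithms in real codecs are ways of finding and encoding that gap efficiently.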

By Stephen Haskin

