The Ultimate Video Recording, Encoding and Streaming Guide

Your choices and my recommendations

A – 3 video codecs

Decent reference here:…Debate-Googles-VP9-Vs.-HEVC-H.265-103577.aspx
And here:
Netflix announcement here:

Option 1 – H264:

H264 is widely used and gained most of its following as the successor to H263 and through its relation to XviD and DivX. At the time of writing, H264 is the industry standard and is used on Blu-ray discs. Of the three options here it offers the least compression (with some room for wiggle), meaning that for a given quality you use the most data. However, it is compatible with YouTube's standards, and YouTube will accept the video without necessarily recoding it into VP9, so your final product online hasn't lost a bit more quality by the time the viewer sees it.

As for which profile to use, High 5.1 is pretty solid. Most modern hardware decoders handle High 5.1; some don't, but can manage it in software, and when you watch on YouTube the app itself or Chrome/Firefox/Edge decodes in software anyway, so as long as your CPU can keep up it's fine. High 5.2 is the latest level and would get you a tiny bit more compression plus higher frame rates (4K @ 50fps compared to 5.1's 4K @ 25fps), but again that only matters for hardware decoders, and oddly enough a surprisingly TINY number of hardware devices decode 5.2, while 5.1 is pretty stock standard these days.

Pros: Fastest to encode, lots of free software for tinkering, YouTube doesn’t always recode, most compatible on devices.

Cons: Uses the most data.

Recommendation: Currently most people should use this for 1080p game footage. The other codecs get better quality when bitrates are tight, or save much more data when the footage is larger (4K 60FPS) but at 1080p 30fps this competes very well.
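As a concrete starting point, here's a minimal sketch of that recommendation using ffmpeg with libx264. The filenames are placeholders, and the flags are assumptions on my part rather than the guide's exact settings: -crf 20 matches the RF 20 used elsewhere in this guide, and High 5.1 keeps hardware decoders happy.

```shell
# Sketch: constant-quality H264 encode of 1080p game footage
# (placeholder filenames). -crf 20 matches Handbrake's RF 20,
# High profile level 5.1 for broad hardware decoder support,
# and the audio stream is copied through untouched.
ffmpeg -i gameplay.mkv \
  -c:v libx264 -crf 20 -preset veryslow \
  -profile:v high -level:v 5.1 \
  -c:a copy \
  gameplay_h264.mkv
```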

Option 2 – H265:

H265 is less widely used because it's new. It is capable of much better compression, so the data rates can be great, but the processing power this requires means encoding times are much longer, and some smaller or older devices can't properly play the video back because even the decoding power required is substantial. Finally, because YouTube doesn't serve H265, any H265 upload will be converted into H264 and VP9 by YouTube, so you lose a little quality there for the final viewing. Some hardware encoders will "encode" H265, but they are pathetic at it and not worth looking into. Regarding profile, at the time of writing it's basically Main or Main10. Main is just fine; you can use Main10 if you like, but in my opinion it doesn't make a difference. There isn't much above that since the format is still pretty new, and since we're lucky enough to already have hardware designed for playback, it's a no-brainer.

Pros: Small data rate, handled by Handbrake well.

Cons: Plays back poorly on old or weak devices, loses quality when uploaded to YouTube due to second conversion.

Recommendation: Use for backing up your DVDs. Over 25% disk space saving on Ghost Whisperer Season 5 Disk 6 at RF20 compared to H264. 1080p gets you similar data rates for transparent quality.
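For the DVD-backup use case, a hedged ffmpeg sketch looks like this. The filename is a placeholder and the preset is my assumption; -crf 20 mirrors the RF 20 figure quoted above, and Main profile is libx265's default so no profile flag is needed.

```shell
# Sketch: H265 archive encode of a ripped DVD episode
# (placeholder filename). CRF 20 to match the RF 20 comparison,
# audio copied through so only the video is re-encoded.
ffmpeg -i episode.mkv \
  -c:v libx265 -crf 20 -preset slow \
  -c:a copy \
  episode_h265.mkv
```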

Option 3 – VP9:

Google created VP9 as the successor to VP8 and a royalty-free alternative to H265. It's free, so no licensing fees for hardware or software companies. Slowly but surely, YouTube is converting all of its videos into VP9, leaving only the most compatible H264 ones until last. On the plus side, compatible VP9 videos are not currently re-converted when uploaded to YouTube. Data rates are just as good as H265, but for some reason I don't fully understand, the quality and stream rate are a little more reliable. Decoding also takes less processing power, so it works on more devices, and it's already built into Google Chrome and Mozilla Firefox as well as every YouTube app. Profile-wise, not many devices support hardware decoding of VP9, but that's pretty irrelevant because the decoding is so efficient that even weak CPUs can do it in software.

The main drawback is the difficulty of creating these videos. Handbrake can do it, but the libraries it uses only run a single thread: H265 will use all 12 threads on my CPU, taking it to 100%, while VP9 tops out at about 10% total CPU usage and takes about 5-8 times longer. This may change with future releases, but for now we'll focus on FFMPEG use through the command line. Not all the facilities this codec has to offer are unlocked yet, so the data rate isn't as good as it COULD be, but it's on the way. Ultimately, the difficulty of getting smaller files at the same quality is too great for me to recommend this. If you're doing 4K@60fps, or lower resolutions at TEENY TINY bitrates, then by all means use it and you may see PLENTY of benefits. For the purposes of this tutorial, though, I recommend you don't bother (see section E for why).

Pros: Good quality and small data rate, widely compatible on devices, least complex to decode, currently doesn’t get converted on YouTube.

Cons: Underdeveloped, so it takes much longer to encode; difficult to use and can't yet make use of its full compression facilities, so files made on home PCs often aren't even smaller than H264.

Recommendation: If you felt the need to read a guide about this, then it's probably beyond you. VP9 was made by Google to run single threaded. YouTube encodes thousands of videos simultaneously, so while we want to use, say, 8 threads on encoding a single video, they would use 8 threads on 8 videos. That improves speed for them because they don't have to divide, conquer and reconstruct each video. The end result is that it's not designed for the home user, and it shows. If you're a fanatic about quality, then by all means use it, as I've included an FFMPEG part. Otherwise, leave it alone.
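For those fanatics, the usual FFMPEG invocation is a two-pass, constant-quality encode. This is a sketch with placeholder filenames, not the guide's exact command: `-b:v 0` combined with `-crf` is libvpx-vp9's constant-quality mode, and pairing VP9 with Opus keeps everything royalty-free for a valid WebM.

```shell
# Pass 1 analyses the video only: audio disabled, output discarded.
ffmpeg -i input.mkv -c:v libvpx-vp9 -b:v 0 -crf 31 \
  -pass 1 -an -f null /dev/null
# Pass 2 does the real encode using the stats from pass 1,
# with Opus audio so the result is a fully free-codec WebM.
ffmpeg -i input.mkv -c:v libvpx-vp9 -b:v 0 -crf 31 \
  -pass 2 -c:a libopus output.webm
```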

B – MKV vs MP4 vs WebM

The reason I say use MKV is because frankly, it's the future. MP4 containers can hold a certain amount of stuff; MKVs can hold more. They can have a menu (apparently), multiple camera angles that can be displayed simultaneously or switched between with the viewer's remote control, audio encoded in Opus… the list goes on. The feature set of MKV is so much greater than any other container that it will ultimately outlast everything else that currently exists. That's why we should start using it now. If you have a gameplay video you want to keep, is there really an advantage in using MKV over MP4? No. But why build a habit now that you'll one day have to change?

As for WebM containers, they are basically just a cut-down version of MKV. Take a free video stream like VP8 or VP9 and a free audio stream like Opus or Vorbis, and put them into an MKV container. That is the effective definition of WebM: if you rename the file extension to WebM, it's totally valid. It's just an MKV file where all the contained streams are royalty-free. Ideally all files would be WebM, but people use royalty-encumbered codecs, so just use MKV for consistency, because it accommodates both.

So I say use MKV, because why use any other container?
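If you already have an MP4 and just want the MKV container, no re-encode is needed. A remux copies the existing streams bit for bit into the new container; filenames here are placeholders.

```shell
# Rewrap the streams into MKV without touching the video or audio data;
# -c copy means no re-encode, so zero quality is lost and it runs in seconds.
ffmpeg -i video.mp4 -c copy video.mkv
```

The same trick produces a valid WebM (`-c copy output.webm`) if, and only if, the contained streams are already free codecs like VP9 and Opus.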

C – Handbrake and AviDemux

I love Handbrake. It works on Windows, Mac and Linux. It offers very up-to-date options, including one of the most recent: Opus audio in MKV containers. Since this tutorial focuses on getting great quality, it makes sense to use Handbrake, because each user can tweak the settings to really push their individual computer to get the best videos they can.

Regarding Premiere: down below in section 4 of this guide, I've taken a video and encoded it a bunch of different ways to compare encoding settings. You'll notice there's only one setting listed for Premiere. It was the BEST compression my version of Premiere could do. Every setting was turned up to the top, and it just couldn't compare to the Handbrake encodes.

Sadly, Handbrake is not a video editor. To do anything remotely fancy, you really need to consider something like Premiere. But if you care about your quality, export your work as lossless, then encode that lossless video into your final file format using Handbrake. What I do is use AviDemux to join the "uA Intro" movie onto the start of my game footage and to cut out any unwanted parts of the video (like pauses or gaps between matches). AviDemux can clip, append and edit video without losing quality, because it literally copies the streams bit by bit into new files. Premiere can't do this: by its nature of allowing complicated editing, it recodes the entire video, which loses quality every time.
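For readers who prefer the command line, the same lossless join can be sketched with ffmpeg's concat demuxer. This is an assumed equivalent, not the guide's workflow, and it only works when both clips share identical codec settings; filenames are placeholders.

```shell
# List the clips in playback order for the concat demuxer.
printf "file 'uA_intro.mkv'\nfile 'match_footage.mkv'\n" > clips.txt
# -c copy joins them by copying the streams, so no quality is lost.
ffmpeg -f concat -safe 0 -i clips.txt -c copy joined.mkv
```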

The main drawback to Handbrake, in my opinion, is if you want to go down the VP9 route. The libraries for VP9 are still in development, and their official website states that you should expect slow encoding times for the near future. The Handbrake devs aren't fans of this, so they have put only a small amount of effort into the VP9 side of things. You can still do it, of course, but it's extraordinarily slow and the options aren't very customisable. So for this guide, at the time of writing, I'll be showing you VP9 through the command prompt with FFMPEG.

D – Constant Quality vs Constant Bitrate vs Variable Bitrate

This guy provides an excellent explanation:

TLDR? Well, here’s the scoop…

When you use a Constant Bitrate or “CBR” you apply a certain number of bits per frame. Frames can be grouped to provide better compression, but each group will have the same amount of data. This means that when the video doesn’t need many bits for a while it will still use them and have fantastic quality, but when it needs extra bits (like in fast moving scenes) it won’t get them and you’ll see a quality drop.

When you use a Variable Bitrate or "VBR", you're specifying an average like in CBR, but there's room to move. Some VBR codecs will let you specify a target average of, say, 8000kbit/sec PLUS OR MINUS 1000kbit/sec. In these cases the codec will use less data when it doesn't need it, but allow some extra bits when the video really does. This causes less fluctuation in quality while still maintaining the target file size.

Both of the above are examples of Average Bitrate or "ABR", which is sometimes called "Target Bitrate".

ABR gets huge benefits from using 2-pass encoding. The first pass analyses the video to find all the places where it needs the most data and where it needs very little. Then on the second pass, it remembers these details and applies a Variable Bitrate more effectively because it “knows the future”.
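In ffmpeg terms, those two passes look something like the sketch below. The filenames and the 8000k target are illustrative assumptions, not settings from the guide.

```shell
# Pass 1: analyse only. Video stats go to a log file; the encoded
# output is discarded (-f null) and audio is skipped (-an).
ffmpeg -y -i input.mkv -c:v libx264 -b:v 8000k -pass 1 -an -f null /dev/null
# Pass 2: encode for real, distributing bits using the pass-1 stats.
ffmpeg -i input.mkv -c:v libx264 -b:v 8000k -pass 2 -c:a copy output.mkv
```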

Constant Quality or "CQ" is different. You specify the QUALITY you're after, and the encoder uses however many bits it needs to achieve it. The preset you choose (or individual parameters) determines how hard the encoder can try: for example, it might search for same-colour pixels within a 16-pixel radius, or a 24-pixel radius. If the encoder reaches the target quality, it can keep looking for ways to achieve it with less data until it hits the parameter/preset limit. If it can't compress the video well for a while, it simply spends more bits to maintain the quality. It doesn't need 2 passes to do this; 1-pass always works.

The link above contains the following example for a TV series:

  • Constant Quality RF22: Episode 1 = 278MiB, 2 = 349MiB, 3 = 363MiB, 4 = 304MiB
  • Average Bitrate 798kbps: Episode 1 = 323.5MiB, 2 = 323.5MiB, 3 = 323.5MiB, 4 = 323.5MiB

Both methods can achieve a total size of 1294MiB but in the ABR method, Episode 1 got more bits than it needed, 2 & 3 didn’t get enough and 4 was probably pretty close to right.

So with ABR, Episode 1 looks great! 2 & 3 however are below the standard we want.

With Constant Quality, we always get the quality we planned for!!!
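The totals behind that comparison are easy to check with a little shell arithmetic (sizes in MiB; the ABR figure is scaled by 10 to stay in integers, since 323.5 × 4 = 1294):

```shell
# Constant Quality: per-episode sizes vary with content.
cq_total=$((278 + 349 + 363 + 304))
# ABR: every episode lands on the same 323.5 MiB (x10 = 3235).
abr_total=$((3235 * 4 / 10))
# Both methods spend the same total budget.
echo "CQ: ${cq_total} MiB, ABR: ${abr_total} MiB"
```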

There’s more information on CQ in another post by the same guy:

TLDR? What he explains here is that when you use a fast preset on your encoder, you wind up with a large file size; slower presets get you smaller files. The preset tells your encoder how hard to try, so even once it reaches the right quality, if it still has time to search it will try to find ways to decrease the data required even further.

But there's an anomaly: sometimes slower presets (or larger parameters) produce a LARGER file! Why? Because sometimes with CQ the encoder searches within the parameters you allowed and still can't find enough quality, so it just saves the frame as best it can. But if your parameters let it search far and wide, it can locate tiny pieces of extra quality to fit in, and once it finds them it will use however many bits it needs to hit the target.

In the end, he says that the "Placebo" preset in Handbrake x264 is actually about 0.25 RF better quality than its nearest setting, "Very Slow". In my examples in the next chapter, I can see this happening occasionally. I also use a setting called "Max", which isn't an x264 preset but a customised list of parameters pushed to their highest values, with maybe one or two exceptions for sanity. Almost every video I have ever tested with "Max" settings looks unbelievably good at RF20; sometimes the file is larger than Placebo to allow this, and sometimes it's smaller because it found a better way to compress.

You can see this guy’s results at a few RF levels in the video below:

There's an additional detail. With a constant Quantization Parameter or "QP", every frame gets the same quality, while Constant Rate Factor or "CRF" applies lower quality to fast motion and higher quality to low motion. The strategy is to suit the human eye, which sees fast motion as a blur (so CRF allows blurring) but focuses on still images. Most people consider x264's CRF values to be about 2 levels of quality better than QP, but that's largely down to the calculation difference. The end result is that a CRF video can look the same to humans as a QP video but with less data. To a computer they don't look the same, but to a human they do. This isn't like the audio purist argument of "lossy sound is still lossy": CRF makes videos look MORE like real life than QP does. Compared to QP, CRF performs better on focused, slow motion where humans will notice an improvement, yet allows blur where humans expect it, so in the moments where you are watching and CAN see a difference, CRF is better. There's a good explanation here:

My recommendation: Constant Quality, specifically CRF. We want the video to look up to standard; bitrate is a secondary priority. And because it's only 1-pass, we also save encoding time. The rest of this guide will usually assume this (with the exception of the Premiere encode, which didn't have this facility, and streaming, where it's not appropriate).
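The two modes map to different x264 rate-control flags; here's a minimal sketch with placeholder filenames so you can compare them yourself.

```shell
# QP: every frame quantised identically, regardless of motion.
ffmpeg -i in.mkv -c:v libx264 -qp 22 -c:a copy out_qp.mkv
# CRF: same nominal level, but bits shift away from fast motion,
# so the file is usually smaller for the same perceived quality.
ffmpeg -i in.mkv -c:v libx264 -crf 22 -c:a copy out_crf.mkv
```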

E – Considerations for the future

Sadly, nothing lasts forever. The Alliance for Open Media (AOM) consists of Google, Amazon, Cisco, Intel, Microsoft, Mozilla and Netflix, and it wants to finalise a bitstream format called AV1 by the end of March 2017. Within a few months after that we will see new codecs released, and YouTube will proceed to convert its videos to the new format. Looking ahead, it will allow higher resolutions at greater colour depth (12-bit last time I checked, for 4-16 times as many colours) and require considerably less data than VP9/H265 at 4K@60fps (they're hoping for 50% less). For today's videos the savings might not be as large, but they will be there to some degree. It will also be compatible with HTML5, WebM and Opus, so those things will not change.

Here’s my prediction… H264 video will be the first to go. YouTube will convert all their H264 video straight to AV1 and they will start doing so within 3 months of the bitstream being finalised. Then, they will start with the top resolution popular VP9 videos. Anything over 1080p will go first, then they will start climbing down the quality ladder. By the time they get to 720p most of your videos will be out of date and we will all be encoding greater than 1080p in AV1 anyway so no biggie. But bear in mind, if you’re uploading at H264 now, your videos will be the first to get recoded and lose some quality.

Stay tuned here for updates regarding AV1 🙂

My final recommendation is to encode H264 RF 20 at Very Slow for uploads to YouTube and if you wish to keep a copy on your own computer, make a separate H265 for your own collection. It’s a lot of encoding, but it will save you Hard Disk space. If you’re not worried, then just use the H264 for both.


