Final Cut and Compressor Encode and Decode

A few people who do video editing have mentioned running into issues where encode/decode tries to use a GPU. This is not a guide, just some info.

You should always start with a source format the hardware decoder can handle; if it can't, decoding falls back to the CPU. Audio settings don't matter at all, you can use whatever format you like, since audio is handled by the CPU anyway. I used a 35-second clip in Final Cut and exported it through Compressor. All settings were kept the same except the encoder type. I record my gameplay in Windows with XSplit using CQP 21, 60 fps, x264 in an MP4 container, which gives me videos with a bitrate of roughly 150 Mb/s.
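As a rough sanity check on that bitrate figure, a clip's average bitrate can be estimated from its file size and duration. A quick sketch (the file size below is a hypothetical number chosen to match ~150 Mb/s, not from the actual clip):

```python
def avg_bitrate_mbps(file_size_bytes: float, duration_s: float) -> float:
    """Average bitrate in megabits per second: bits on disk divided by runtime."""
    return file_size_bytes * 8 / duration_s / 1_000_000

# A 35-second clip at ~150 Mb/s works out to roughly 656 MB on disk:
size_bytes = 150_000_000 / 8 * 35   # about 656,250,000 bytes
print(round(avg_bitrate_mbps(size_bytes, 35)))  # 150
```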

I started the first run with the IGPU enabled for decoding. For the second run, I rebooted with a shikigva boot flag set so the hardware decoder would fail.

Encoder | Method | VDA | Container | Time
--- | --- | --- | --- | ---
x264 | IGPU, RX580 | Pass | .mov | 32s
x265 | IGPU, RX580 | Pass | .mov | 67s
x264 | CPU, RX580 | Fail | .mov | 229s
x265 | IGPU, RX580 | Fail | .mov | 65s
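The cost of losing hardware decode on the x264 run is easy to quantify from the times above (a small sketch using the table's numbers; the labels are mine):

```python
# Export times in seconds, taken from the table above.
times = {
    ("x264", "IGPU decode"): 32,
    ("x265", "IGPU decode"): 67,
    ("x264", "CPU decode"): 229,
    ("x265", "decoder reported failed"): 65,
}

# The x264 export took about 7.2x longer once decode fell back to the CPU:
slowdown = times[("x264", "CPU decode")] / times[("x264", "IGPU decode")]
print(f"{slowdown:.1f}x")  # 7.2x
```

Note that the two x265 runs barely differ (67s vs 65s), which is one hint that the "failed" run was not actually decoding on the CPU.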

Interestingly, the final run still used the IGPU even though it reported that the decoder had failed. I verified this by running other tests that depend on the IGPU, and those failed in every respect, so I genuinely wasn't expecting it. That file says "single GPU" when it was in fact using both.

Using Final Cut with the IGPU decoder disabled, I noticed no lag, delay, or choppy playback while skimming. Quick Look worked just fine as well, since the RX580 took over. Quality was better when the IGPU wasn't used, with acceptable export times. Ideally you want the CPU doing some of the work, as that yields better results; letting both GPUs handle encode/decode makes the output look messy. This is why pros use CPUs with a lot of cores. x264 with dual-GPU decode looks terribly blocky.

Are you having issues using both GPUs for the job?

submitted by /u/cbabbx