
Video Encoding Tested: AMD GPUs Still Lag Behind Nvidia, Intel


The best graphics cards aren’t just for playing games. Artificial intelligence training and inference, professional applications, and video encoding and decoding can all benefit from a better GPU. Yes, games still get the most attention, but we like to look at the other aspects as well. Here we’re going to focus specifically on the video encoding performance and quality you can expect from various generations of GPUs.

Generally speaking, the video encoding/decoding blocks within a given GPU generation all perform the same, with minor variance depending on the video block’s clock speeds. As an example, we checked the RTX 3090 Ti and RTX 3050 (the fastest and slowest GPUs from Nvidia’s Ampere RTX 30-series generation) and found effectively no difference. Thankfully, that leaves us with fewer GPUs to test than would otherwise be required.

We’ll test the RTX 4090, RTX 3090, and GTX 1650 from team green, which cover the Ada Lovelace, Turing/Ampere (functionally identical), and Pascal-era video encoders. For Intel, we’re looking at the Arc A770 discrete GPU as well as the integrated UHD Graphics 770. AMD has the widest spread, at least in terms of speeds, so we tested the RX 7900 XTX, RX 6900 XT, RX 5700 XT, RX Vega 56, and RX 590. We also wanted to check how the GPU encoders fare against CPU-based software encoding, and for that we used the Core i9-12900K and Core i9-13900K.
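The exact tool chain isn’t spelled out in this section, but as a rough sketch of how those encoder blocks are typically exercised, the hypothetical mapping below pairs each vendor’s hardware with the ffmpeg encoder names commonly used to drive it, alongside the CPU software encoders. The AV1 notes reflect which generations added hardware AV1 encoding and are illustrative rather than part of the test description.

```python
# Hypothetical mapping from the tested hardware to ffmpeg encoder names.
# Illustrative only; the article's exact tool chain isn't specified in this section.
FFMPEG_ENCODERS = {
    "nvidia": {"h264": "h264_nvenc", "hevc": "hevc_nvenc", "av1": "av1_nvenc"},  # AV1 encode: Ada (RTX 40-series) only
    "amd":    {"h264": "h264_amf",   "hevc": "hevc_amf",   "av1": "av1_amf"},    # AV1 encode: RDNA 3 (RX 7000-series) only
    "intel":  {"h264": "h264_qsv",   "hevc": "hevc_qsv",   "av1": "av1_qsv"},    # AV1 encode: Arc GPUs; the UHD 770 lacks it
    "cpu":    {"h264": "libx264",    "hevc": "libx265",    "av1": "libsvtav1"},  # software encoders for the 12900K/13900K
}
```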

Video Encoding Test Setup

Most of our testing was done on the same hardware we use for our latest graphics card reviews, but the CPU encoding tests ran on the 12900K PC that powers our 2022 GPU benchmarks hierarchy. As a more strenuous CPU encoding test, we also ran the 13900K with a higher-quality encoding preset, but more on that in a moment.
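For a concrete sense of what one of these runs looks like, here’s a minimal sketch of a hardware-versus-software encode timing comparison, driving ffmpeg from Python. The file names, bitrate, and preset are placeholders, and this is not the actual benchmark script used for the article; it only assumes an ffmpeg build with NVENC and x264 support is available on the test machine.

```python
import subprocess
import time

def encode(source: str, output: str, encoder: str, bitrate: str, extra: tuple[str, ...] = ()) -> float:
    """Run one ffmpeg encode and return the wall-clock time in seconds."""
    cmd = [
        "ffmpeg", "-y", "-i", source,
        "-c:v", encoder, "-b:v", bitrate,
        *extra,
        "-an",          # drop audio so only video encoding is measured
        output,
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# GPU encode (NVENC) vs. CPU encode (libx264 at a slower, higher-quality preset),
# loosely mirroring the higher-quality-preset CPU runs described above.
gpu_seconds = encode("input_4k.mp4", "out_nvenc.mp4", "h264_nvenc", "8M")
cpu_seconds = encode("input_4k.mp4", "out_x264.mp4", "libx264", "8M", ("-preset", "slow"))
print(f"h264_nvenc: {gpu_seconds:.1f}s, libx264 (slow): {cpu_seconds:.1f}s")
```

Throughput in frames per second then falls out of the clip’s frame count divided by those times, while output quality can be compared separately with a metric such as VMAF.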
