CPU vs GPU 2: Render Harder
It’s been quite a while since we discussed this topic.1 This time I wanted to try something a little different. Let’s start with a simple question: what causes faster render times? The answer, essentially, is to solve this equation, as fast as possible, over and over again:
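The equation in question is presumably Kajiya's rendering equation, which describes the light leaving any point in a scene:2

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,(\omega_i \cdot n)\, d\omega_i
```

Here L_o is the light leaving point x in direction ω_o, L_e is light the surface emits itself, and the integral sums the light L_i arriving from every direction ω_i, weighted by the surface's material response f_r and the angle of incidence. A renderer evaluates this (or an approximation of it) for every visible point, which is why the workload is so enormous.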
Rendering is all about calculating simulated particles of light. The faster you can perform those calculations, the lower your render time. Because CPUs were originally designed for general-purpose serial computation (i.e., one instruction at a time), they were never meant to excel at this sort of task. GPUs, on the other hand, were built to perform these calculations in parallel.3 While CPUs have a handful of high-frequency cores that process instructions like the one shown above sequentially, GPUs have thousands of tiny, low-frequency cores that can run many of these calculations at the same time.
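The key property that makes rendering such a good fit for GPUs is that each pixel's color can be computed independently of every other pixel. Here's a minimal sketch of that idea in Python; the `shade` function and its arithmetic are placeholders standing in for a real per-ray lighting calculation, and the thread pool stands in for the thousands of GPU cores:

```python
# Toy per-pixel shading: each pixel is computed independently,
# so the same work maps naturally onto many parallel cores.
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Placeholder lighting math; a real renderer traces rays here.
    x, y = pixel
    return (x * 31 + y * 17) % 256

pixels = [(x, y) for y in range(4) for x in range(4)]

# CPU-style: one core walks the pixels one at a time.
serial = [shade(p) for p in pixels]

# GPU-style: the same independent work fanned out across many workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(shade, pixels))

assert serial == parallel  # same image, different execution model
```

Because no pixel depends on any other, the parallel version produces an identical image; the only thing that changes is how many calculations are in flight at once.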
Although it is true that multithreading can provide a boost for CPU rendering, if you want the lowest possible render times, you need a high-end GPU. But there’s another factor to consider: how well your rendering software takes advantage of all your hardware. For example, if the software developer didn’t specifically write code to tell the GPU to help with those calculations, it won’t matter how many video cards you have in your system. That’s why it’s important to know beforehand if your chosen software will benefit from a beefier GPU. Octane, Redshift, and V-Ray are all examples of software that does, in fact, take advantage of all those CUDA cores.
Why has GPU acceleration become so common? Well, as per-core frequency rose on CPUs, power consumption and heat generation rose with it. Without a new method to mitigate those factors, innovators were forced to look elsewhere to increase overall performance. It soon became all about parallel processing (i.e., the thing GPUs are good at). With the release of NVIDIA’s CUDA platform, GPUs became even more efficient than CPUs for some general-purpose computing tasks. And with the new Turing architecture, NVIDIA’s RTX™ cards house dedicated RT cores that enable real-time ray tracing, as well as Tensor Cores that accelerate OptiX AI denoising and deliver 3x the performance of previous-generation GPUs.4 Once the hardware was available, software developers optimized further for GPU acceleration. For example, V-Ray switched from a mega-kernel to a multi-kernel architecture last year and made GPU rendering nearly twice as fast.5
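The mega-kernel vs. multi-kernel distinction is worth a quick sketch. In a mega-kernel design, one giant GPU program runs every rendering stage back-to-back for each sample; a multi-kernel (sometimes called wavefront) design splits the pipeline into small stages and runs each stage over a whole batch of rays, which keeps GPU threads doing similar work at the same time. The stage names and trivial arithmetic below are illustrative assumptions, not V-Ray's actual internals:

```python
# Sketch of "mega-kernel" vs. "multi-kernel" scheduling.
# The two stages here are stand-ins for real rendering work.

def intersect(ray):
    # Stage 1: find what the ray hits (placeholder math).
    return ray * 2

def shade_hit(hit):
    # Stage 2: evaluate the material at the hit point (placeholder math).
    return hit + 1

def megakernel(rays):
    # One big kernel: every thread runs all stages back-to-back.
    return [shade_hit(intersect(r)) for r in rays]

def multi_kernel(rays):
    # Split pipeline: each stage runs as its own batched pass.
    hits = [intersect(r) for r in rays]       # kernel 1
    return [shade_hit(h) for h in hits]       # kernel 2

rays = list(range(8))
assert megakernel(rays) == multi_kernel(rays)  # identical results
```

Both versions compute the same image; the multi-kernel layout wins on real GPUs because smaller, specialized kernels suffer less from divergent branches and register pressure than one monolithic program.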
Using a hybrid renderer like V-Ray is great, as it allows you to utilize the hardware of your entire system to its fullest potential, which translates to faster rendering. In the table below, you’ll see that, while high-core-count processors are indeed fast, they’re even faster when combined with a high-end GPU. With a video card like the NVIDIA® GeForce RTX™ 2080 Ti, you can often cut rendering times in half. In short, the competition between CPU and GPU is over, and the two must now work together toward a brighter, rapidly rendered tomorrow.
That said, there are some limiting factors to keep in mind. For example, the memory bandwidth of a modern GPU can potentially slow things down when rendering very complex scenes, or when the same video card is also driving your displays (often several of them, which is very common for professional workloads these days). This is where a multi-GPU setup would provide a substantial benefit. The new APEXX T4 from BOXX, with a liquid-cooled 2nd Gen AMD Ryzen™ Threadripper™ processor and room for four full-size dual-slot GPUs, would be a great workstation for that type of workflow.
V-Ray 3 User-Submitted Benchmarks 6
Assuming the software is designed for GPU acceleration (e.g., Octane, Redshift, V-Ray), render times are often much shorter with a high-end GPU.
1 2014, to be exact. Those were simpler times.
2 It may look intimidating, but it’s actually very simple...I assume.
3 This is known as parallel computing. Not to be confused with concurrent computing, though it frequently is.