We have all faced the limitations of running complex Python scripts locally that take hours or days to complete. But what if we could take that work off our own machines and run it at scale?
While the speed and processing power of computers have increased exponentially over the past few decades, it seems we never have enough to complete our computing tasks. One form of computing that is as hungry as ever is physically based rendering (also known as ray tracing or path tracing).
While the original demands of video rendering were met as compute power and speed increased, artists simply took the opportunity to increase the complexity of their compositions. In effect, this continual growth in complexity is outpacing the growth in compute power: more complexity requires more processing power and memory.
What is a Render Farm?
A render farm is a set of networked high-performance computers used to render high-fidelity computer graphics (CGI), typically for film and television. In recent years, we have seen an increasing number of render-farm services becoming available. But what if we could build our own version and have tighter control over our workflows and budgets? More importantly, what if we could customize and improve the solution to meet our own needs? Of course, building your own solution from the ground up can be a time-consuming and costly exercise. Imagine the work that goes into just building and maintaining the servers needed for such a render farm.
Luckily, with the maturity of cloud and bare-metal providers such as AWS, Packet, and Google Cloud, we now have instant, easy access to scalable compute resources. However, setting up and automating these services is still challenging, given the hundreds of tools available and the amount of documentation to read before getting started.
Throughout this multi-part blog series, we will demonstrate how you can build a render farm with only two tools: Dis.co and Packet. To execute the rendering, we'll be using Blender, but you could easily replace it with your favorite 3D tool, such as Autodesk Maya or ZBrush. In the end, our goal is to create a workflow that works across any application that can benefit from distributed computing.
But first, let’s look at some of the current challenges of rendering computer graphics.
Introduction to Physically Based Rendering
Physically-based rendering (PBR) is an approach in computer graphics that seeks to render graphics in a way that more accurately models the flow of light in the real world. Think animated films such as Cars, Toy Story 3, and other 3D movies of the modern era. Here's an example of how physically based rendering was used in the Web Summit 2019 presentation "Your Home in 2025" by Samsung NEXT's CIO David Eun. The virtual scene shown below was constructed using global illumination to create realistic lighting and color.
This 26-second photo-realistic living-room scene took a week to render on a laptop at 4K resolution.
Physically-based rendering uses realistic shading and lighting models together with measured surface values to accurately represent real-world materials such as metal. For example, in the movie Cars, this approach created the shine on the cars and the reflections of other cars in the metal.
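To give a flavor of what a physically motivated shading model looks like, here is a minimal sketch of Lambertian diffuse reflection, one of the simplest such models. This is purely illustrative; the function and its parameters are our own naming, not the implementation used by Blender or any production renderer.

```python
# A minimal sketch of Lambertian diffuse shading -- one of the simplest
# physically based reflectance models. Names and values are illustrative,
# not taken from any particular renderer.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert_diffuse(normal, light_dir, albedo, light_intensity=1.0):
    """Reflected light for a perfectly diffuse (Lambertian) surface.

    normal, light_dir: 3-component direction vectors
    albedo: per-channel surface reflectance in [0, 1], e.g. (r, g, b)
    """
    n = normalize(normal)
    l = normalize(light_dir)
    # Light arriving from behind the surface contributes nothing.
    cos_theta = max(0.0, dot(n, l))
    return tuple(a * light_intensity * cos_theta for a in albedo)

# A grey surface lit head-on reflects at its full albedo:
print(lambert_diffuse((0, 0, 1), (0, 0, 1), (0.5, 0.5, 0.5)))  # → (0.5, 0.5, 0.5)
```

Real PBR engines combine many such terms (diffuse, specular, Fresnel, and so on) and trace light paths through the whole scene, which is where the enormous compute cost comes from.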
Advances in computing power and memory have increased what is possible with PBR; however, challenges remain. A new approach to solving the immense power and memory requirements of PBR holds the potential to transform the process.
Why is Rendering So Slow?
If we look at the graphics rendering industry, it can seem there is a curse on graphics designers, animators, and engineers: no matter how hard they try, the project never delivers on time. And the computers are often blamed for the delays.
One well-known observation made by Dr. Jim Blinn is that the render time of computer graphics remains roughly constant despite increases in available processing power. What that means for most projects is that rendering will still take hundreds of hours, even decades later.
But the real questions may be: which specific tasks consume all the processing resources, and how can one break this paradox (if that is even possible)? Let's take a deeper look at the rendering pipelines and techniques developed over the past decades and see how they are evolving.
Image source: Boxx Blogs
As mentioned earlier, ray tracing (physically based path tracing) was introduced back in the early 1980s. Rendering a single frame can take hundreds of hours, so rendering an entire movie on one machine could take hundreds of years.
For example, ray tracing enables a technique called global illumination, which brings a subject (a person, an object, or both) to life with true-to-life lighting, completely changing how we perceive the subjects in a scene.
Another use of ray tracing is creating art. Instead of using brushes, an artist can think in terms of light (rays of light, that is) to create a scene. The scene below was created entirely with ray tracing in Blender's Cycles engine.
One famous example is how Pixar pushed the boundary of what was possible by building a computing infrastructure (a render farm) to enable its vision. Monsters University was the first movie to fully utilize global illumination. Even with "the supercomputer" of its day, a network of over 2,000 machines, the project still took two full years to render. To put this in perspective, with over 24,000 CPU cores, that network would have ranked among the top 25 supercomputers of the time. That's impressive!
Today, both the algorithms and the supporting hardware have improved significantly, reducing the rendering time of this technique. In particular, new graphics processors such as NVIDIA RTX now support real-time ray tracing. However, once we put animation and complex scenes into play, rendering quickly adds up to weeks or months unless we find better ways to scale our solution.
For example, if a frame takes half an hour to render, a mere one-minute video at 60 frames per second will take 0.5 × 60 × 60 = 1,800 hours, or roughly 10 weeks, to complete. That is unacceptable for meeting any deadline, and it does not even include the time needed for design iterations. However, if we divide the work across 200 machines, the job completes in under 10 hours, giving the designer time to review the content daily.
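The arithmetic above can be sketched in a few lines; the numbers are the same illustrative ones used in the paragraph, not measurements.

```python
# Back-of-the-envelope render-time math from the example above.
HOURS_PER_FRAME = 0.5   # render time for one frame
FPS = 60                # frames per second of the final video
VIDEO_SECONDS = 60      # a one-minute video

total_frames = FPS * VIDEO_SECONDS            # 3600 frames
total_hours = total_frames * HOURS_PER_FRAME  # 1800 hours on one machine
weeks = total_hours / (7 * 24)                # ~10.7 calendar weeks

machines = 200
parallel_hours = total_hours / machines       # 9 hours across the farm

print(total_hours, round(weeks, 1), parallel_hours)  # → 1800.0 10.7 9.0
```

This assumes perfect parallel scaling, with no per-job scheduling or data-transfer overhead; real farms lose some efficiency to both.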
So one may wonder, what can we build today and what state-of-the-art solutions exist in 2020?
The Trilogy: Dis.co + Packet + Blender
With over half a million downloads per month, Blender is one of the most popular free and open-source 3D creation tools on the market today. While Blender's Cycles engine enables designers to create photorealistic results, that quality comes at a cost in performance and time. To address that, I want to show you how to combine Dis.co and Packet to speed up the process.
Sometimes, building the right tools at the right time, for the right demand, can disrupt an industry. Although it is hard to predict what the future holds, it is apparent that the capital and engineering effort required to create a far more capable render farm is much lower than it would have been 10 to 15 years ago. So we need a better way to scale Blender rendering, and that brings us to Dis.co, a serverless parallelization platform.
With Dis.co handling the logistics of our server farm, we have an easy, flexible way to scale any compute workload automatically by offloading tasks to on-demand servers. For this example, Dis.co will manage all the resources needed to scale our virtualized render farm to match our rendering needs.
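The underlying pattern is simple: split the frame range into chunks and hand each chunk to a worker. The sketch below illustrates that pattern only; Dis.co's real API and the actual Blender invocation are covered in part 2, and `render_chunk` here is a hypothetical stand-in (threads simulate the remote machines).

```python
# Illustrative only: splitting a frame range into chunks and dispatching
# each chunk to a worker. render_chunk is a stand-in for the real render
# job; a local thread pool simulates the farm's machines.
from concurrent.futures import ThreadPoolExecutor

def chunk_frames(first, last, n_workers):
    """Split frames [first, last] into at most n_workers contiguous chunks."""
    frames = list(range(first, last + 1))
    size = -(-len(frames) // n_workers)  # ceiling division
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def render_chunk(frames):
    # On a real farm this would invoke the renderer (e.g. Blender) for
    # each frame in the chunk. Here we just report what would be rendered.
    return f"rendered frames {frames[0]}-{frames[-1]}"

chunks = chunk_frames(1, 3600, 200)  # 200 chunks of 18 frames each
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(render_chunk, chunks))
print(len(results))  # → 200
```

Because each frame renders independently, this workload is embarrassingly parallel, which is exactly why a render farm scales so well.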
Finally, the workhorse of this solution is Packet, a bare-metal server provider that gives access to high-performance machines in minutes. By combining Dis.co and Packet, we get massive parallelization on top of some of the best bare-metal server configurations around.
In part 2 of this series, we'll walk through all the technical steps to set up the render farm and measure the system's performance. If you are interested in learning more about PBR and current advances in ray-traced rendering, be sure to check out these additional resources:
- Physically Based Rendering
- Ray Tracing for the Movie ‘Cars’
- The Path to Path-Traced Movies
- NVIDIA OptiX Ray Tracing Powered by RTX
To see how Dis.co can accelerate video rendering for your agency or company, please visit try.dis.co/render.