What's the future of 3D rendering?
Rendering is changing: the traditional CPU render model is being challenged, and Mike Griggs suggests that it’s time to re-examine your render solutions.
The traditional workflow of 3D creation is a complicated bespoke recipe in which an image or animation is created in 3D software, and then - after extensive previewing - rendered and finished in a compositing program. This can cause all manner of headaches if the ingredients in the recipe don't mix well together, or if you don't cook them for long enough.
You see, the traditional CG workflow has a flaw: the most CPU-intensive element - rendering - often happens in the middle of a project's lifespan. This means either that rendering is split into multiple elements to allow for change control, creating much more work at the final compositing stage; or that render quality is sacrificed (killing reflections on certain materials, for example) to get the job out on time.
The traditional method also depends on a render process that throws the entire resource-hungry workload at the computer's CPU, leaving large parts of the system that were used to create the scene - the graphics card in particular - sitting idle.
However, this traditional way of working is starting to be challenged on a number of fronts, thanks to advances in software development, increases in hardware power and faster broadband. These enable artists to move the 'render squeeze' to a point of their choosing in the workflow, depending on the job. Two disruptive technologies in particular are driving this change: GPU rendering and cloud computing.
Rendering on the GPU
To most 3D artists, having a powerful graphics card is a must, but typically it has only been put to work during the asset-creation phase. While the idea of GPU rendering - using the computational power of the graphics card for final rendering - isn't exactly new, there's still a perception that it isn't ready to compete with the traditional CPU model.
Chris Ford, business director of RenderMan, details a couple of the key issues. "To date, the biggest issue with GPU rendering has been constrained memory and I/O, making it difficult to handle scenes referencing the huge amounts of geometry and texture data that is typical for RenderMan," he says.
"However Nvidia has recently announced a GPU capable of supporting 24GB VRAM [the GK110], which is quite respectable, and we do see a lot of promise. There still remain significant challenges such as the lack of standardisation around CUDA, OpenCL and so on, or that code needs frequent updating for specific hardware revisions, but we are evaluating GPU rendering closely over different parts of the render pipeline."
This split between GPU compute standards is a key reason that many software developers have yet to pick sides. Developers such as Autodesk, however, are leveraging the GPU throughout the entire creative process. Marc Stevens, Autodesk's media and entertainment vice president and senior technical lead, lays out the company's approach.
"There are two sides of GPU rendering we're looking at," he says. "We're investing a lot in our interactive viewports to try to give users the most interactive and representative context to make creative decisions. The other side is GPU-accelerated final frame rendering. GPU definitely offers one big benefit here: speed - the sooner you get your result, the sooner you can make your decisions.
"But there are some challenges still out there - scalability, the cost of doing GI/reflections/refraction, accuracy of the image, render-time procedurals and so on. For a lot of these areas, CPU rendering is still the go-to solution today."
Most render providers are adding GPU solutions to complement their existing CPU render output: mental ray has iray, and V-Ray has the V-Ray RT renderer to give user feedback in the creation process. "V-Ray RT GPU is an important part of our development roadmap," says Lon Grohs, vice president of business development at Chaos Group.
"The feature set has grown to include motion blur, instancing, automatic texture resizing, and in V-Ray 3.0, Render Elements. We view the introduction of Render Elements as the last piece of the puzzle needed to make final-frame rendering on the GPU a reality. We're working with customers now to bring this to the big screen."
Build it and they will come
So developers are getting interested in GPU rendering, but are the artists following? "There is a strong interest from the user community as well as our software partners," notes Greg Estes, industry executive for media and entertainment at Nvidia.
"Good examples of GPU rendering solutions include Chaos V-Ray RT, Otoy Octane Render, cebas finalRender, Art & Animation Furry Ball and our own Nvidia iray. There is also an emerging GPU renderer called Redshift [which supports Maya and Softimage] that looks very interesting."
Although a 12GB Nvidia Kepler card may still be out of the reach of most 3D artists, GPU rendering solutions are quickly catching up to the output of traditional CPU rendering. The GPU also offers a cheaper upgrade path: to double your workstation's render output, you can simply add a second graphics card, improving both scene-creation speed and final rendering in one purchase.
Rendering in the cloud
'The cloud' has become such a ubiquitous term that it's difficult to know what 'rendering in the cloud' is supposed to mean. At the moment there are two specific types of integration. The first is to use online services as render nodes for existing rendering solutions, via either a dedicated cloud rendering company or a bespoke solution. Meanwhile, more intrepid users are building their own render solutions on the same cloud infrastructures exploited by companies such as Netflix and NASA.
Aside from data security concerns, cloud rendering has always been hampered by two things. The most important is transmission speed, both to and from the cloud. Online render farms need all of a scene's assets housed on their servers - high-resolution textures, master geometry and so on - but most ISPs cap upload speeds at a fraction of download speeds, making it very time-consuming to get those assets onto the farm (the sketch below puts numbers on this). The second issue is complexity: network rendering solutions are often complex pieces of software that connect multiple computers.
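The bandwidth asymmetry is easy to quantify (the connection speeds and asset size here are illustrative assumptions, not figures from any particular ISP):

```python
def transfer_hours(gigabytes: float, megabits_per_second: float) -> float:
    # Convert gigabytes of assets to megabits, then divide by link speed.
    megabits = gigabytes * 8 * 1024
    return megabits / megabits_per_second / 3600

assets_gb = 20.0                  # assumed textures + geometry for one job
down_mbps, up_mbps = 40.0, 4.0    # assumed asymmetric consumer connection

print(f"download: {transfer_hours(assets_gb, down_mbps):.1f} h")  # ~1.1 h
print(f"upload:   {transfer_hours(assets_gb, up_mbps):.1f} h")    # ~11.4 h
```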
To help with the complexity problem, network farm RebusFarm has come up with a solution, as marketing and communications manager Margarete Kitel explains. "We have developed a submit plug-in called Farminizer," she says. "It checks the scene for any kind of wrong set-up and, in case of errors, advises the user what to change.
"Afterwards it prepares the scene for the farm and exports it to the Rebus manager. There the user can upload to the farm, then start the render job and see the progress. When the render is finished the user can download the rendered files on their computer."
Blurring the lines
Zync is a tool developed to offer another method for rendering in the cloud, one that blurs the previously hard boundary between local and online rendering, as CMO Todd Prives explains. "With more traditional off-site farms, you have to work to a certain file structure and package your stuff. At its core, Zync is designed to be an extension of your local farm, so that you don't have to change the way you work at all - everything is just automatic in terms of being able to model, animate and comp, just as if you had a local farm."
Many render developers have demonstrated versions of GPU renderers running in the cloud, but most have been no more than technical demos. Otoy's Octane Render Cloud Edition, due by the end of 2013, will be among the first to come to market; Otoy CEO Jules Urbach outlines the benefits of cloud rendering for its users. "With the cloud the rendering is basically instantaneous, and that's what we were showing off at the Nvidia conference in March. Sure it's using 100 GPUs, but that's no big deal, as [the cost of] a GPU is maybe a dollar an hour."
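Urbach's economics are worth spelling out: massive parallelism stays cheap because you only rent it for minutes at a time. A back-of-envelope sketch using his quoted rate (the job duration is assumed for illustration):

```python
def burst_cost(gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    # Cloud billing is per GPU-hour, so cost scales with gpus * time.
    return gpus * hours * rate_per_gpu_hour

RATE = 1.00  # Urbach's figure: roughly a dollar per GPU-hour

# A frame that occupies 100 GPUs for five minutes costs about $8 -
# the parallelism buys near-instant results, not a bigger bill.
print(f"${burst_cost(100, 5 / 60, RATE):.2f}")  # ≈ $8.33
```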
With Render Cloud Edition, Otoy is introducing support for Alembic, the geometry format that is becoming a default interchange standard between 3D applications. The key thing about Alembic, aside from its ability to store animated 3D geometry, is that it's an open standard, created by ILM and Sony Pictures Imageworks. Unlike formats owned by specific developers, it is being developed to accommodate workflows rather than profit margins.
Urbach explains what happens when you move to the cloud. "Alembic allows us to take the data from any of the Octane apps into the cloud, and then render – whether it's 4K, 8K, stereo or holographic, it all gets done in a matter of seconds, versus minutes or hours or who knows what on the desktop CPU."
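For the curious, reading an Alembic file from Python looks roughly like the sketch below. It assumes the PyAlembic bindings that ship with the open-source Alembic distribution (module layout can vary between builds), and 'scene.abc' is a placeholder path:

```python
# A minimal sketch of walking an Alembic archive's object hierarchy,
# assuming the PyAlembic bindings (an assumption - check your build).
from alembic.Abc import IArchive

def walk(obj, depth=0):
    # Transforms, meshes and cameras all live in one hierarchy, which
    # is what makes .abc a convenient hand-off between applications.
    print("  " * depth + obj.getName())
    for i in range(obj.getNumChildren()):
        walk(obj.getChild(i), depth + 1)

archive = IArchive("scene.abc")  # placeholder path to an exported cache
walk(archive.getTop())
```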
Get in the cloud?
With Octane now offering a GUI-driven, browser-based solution, could we see renderers based purely in the browser? That's exactly what Lagoa offers: an unbiased rendering engine that runs in the cloud and is driven entirely from the browser. Lagoa also builds in social collaboration, with comment systems and the ability for users to interact directly with a scene's lighting, materials and geometry. It's powered by WebGL, and with that technology increasingly being adopted by mobile browsers, 3D artists could soon be lighting scenes on their tablets or even smartphones.
Long-established render companies are also entering this market. Chaos Group's Lon Grohs mentions a new product when discussing V-Ray in relation to cloud rendering. "At V-Ray we've developed a solution called V-Ray Cloud, which is a web- and mobile-ready platform for instantly deploying cloud rendering."
V-Ray Cloud is scheduled to appear as a browser-based render solution, as an addition to the online 3D application Clara.io. This offers a full 3D app in the cloud, using V-Ray's standard .vrmesh and .vrscene formats so that users can move files between the desktop and the cloud. It's a potential landmark in cloud rendering: a production-proven render engine with years of history integrating with a new platform. The message for 3D artists is clear: now is the time to explore cloud rendering.
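That .vrscene portability is what makes desktop-to-cloud mixing plausible: the same exported file can be fed to V-Ray's standalone renderer wherever it runs. A minimal sketch, assuming the standalone binary is installed and on the PATH (the flag spellings follow V-Ray's documented command line, but treat them as indicative):

```python
import subprocess

# Render an exported .vrscene with V-Ray standalone - the same file a
# desktop app writes out is what a cloud render node would consume.
scene = "shot010.vrscene"  # placeholder path to an exported scene

subprocess.run(
    [
        "vray",                  # assumes the standalone binary on PATH
        f"-sceneFile={scene}",
        "-imgFile=shot010.exr",  # where to write the rendered frame
        "-display=0",            # no preview window on a headless node
    ],
    check=True,
)
```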
Mike Griggs is a freelance concept, 3D, VFX and motion graphics artist working across TV, exhibition and digital design.
This article originally appeared in 3D World issue 175.