Gooseberry Cycles render times

So what do you guys think of the current Gooseberry Cycles render times, and how would they pertain to your projects of a similar nature? I think 2 hours is a bit much for somewhat simplistic character scenes in open environments.

I think there is a reason why most gallery entries are still images :).

I can’t quite hear what Ton says; that’s too much, I suppose? And two hours on which hardware?
Anyway, if I were him I’d set up a task force with at least two experienced devs full time on Cycles, working on both missing features and speed-ups. Sergey is doing great but has many other top projects to handle.

My 2c

Yes, it very much depends on many variables, but for that kind of scene in the other render engines I’ve used professionally (Mental Ray, V-Ray, Arnold, Mantra), for a 2K frame I would expect 30 minutes at most on CPU. I would love to hear other people’s experiences on similar projects with Cycles (full-frame CGI with characters).

I read that Rango render times were 8 hours per frame, so I think 2 hours per frame for a smaller image on non-top-of-the-line render blades is perfectly OK.

I’m no stranger to frames taking several hours to render; the first real render I did was in '89 on an Amiga 500 with Real 3D, raytracing and all. It took 2 weeks to complete at 640×512 in 16 colors. And I’m aware of animation feature film render times, but for a simplistic character with no displacement and no background to take 2 hours? If they were talking about the foggy island scenes with the volumetric atmosphere in 4K, then OK, 2 hours is pretty much on the low end.

My question is more along the lines of: is Cycles even usable for work like commercials and animated series without unlimited* render farm resources? We used to render 30-second commercials with full CGI characters and environments on our 20 studio computers with V-Ray/Mental Ray without a problem in a day. I don’t think that is doable in Cycles as it stands right now. Please feel free to share experiences.

*unlimited meaning scalable to a delivery timeframe without budget concerns

Just curious, I haven’t followed the Gooseberry project much… could those times be because they are rendering at 4K resolution?

Well, motion blur is killing it. They did say they were going to take a look at it to see why motion blur is so slow.

Deformation blur is the biggest bottleneck in the production workflow right now, followed by hair, and to a lesser extent volumes (since they’re needed far less often). Aside from those things, and massive data set handling, Cycles is very much on par, or close to on par, with the commercial path tracers out there. Completely missing features like render-time displacement not considered, of course.

I wonder if they used adaptive sampling to reduce the time (any tile that finishes early is a time win).
For the figure shot at 1:09:48, I know adaptive sampling would speed it up dramatically (it’s a simple background), and also the clothing and skin; those seem like simple diffuse shaders (though I’m not sure; I’ve seen them make crazy complex materials for tiny subtle effects, where it would still resolve, but the win might be less noticeable there). I don’t know whether they hold a release until a certain noise level, or want to be completely noise-free… (even then I’m unable to spot adaptive sampling at 80% and find the noise)… but it’s a matter of purity and quality they might want to achieve. I don’t know.
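
For anyone wondering what adaptive sampling actually buys, here is a rough, self-contained Python sketch of the per-tile idea (the concept only, not Cycles code; the noise estimate is just the standard error of the accumulated samples): simple regions converge early and stop burning samples.

```python
import random
import statistics

# Toy sketch of per-tile adaptive sampling. A "tile" here is just a noisy
# estimator we keep averaging; real renderers estimate noise from the
# variance of the accumulated pixel samples.

def sample_pixel():
    # Stand-in for tracing one path: a noisy measurement around the true value.
    return 0.5 + random.gauss(0.0, 0.2)

def render_tile_adaptive(max_samples=1000, noise_threshold=0.005,
                         check_interval=32):
    samples = []
    while len(samples) < max_samples:
        samples.extend(sample_pixel() for _ in range(check_interval))
        # Standard error of the mean as a crude noise estimate.
        noise = statistics.stdev(samples) / len(samples) ** 0.5
        if noise < noise_threshold:
            break  # tile converged early: every finished tile is a time win
    return len(samples), sum(samples) / len(samples)

used, value = render_tile_adaptive()
print(f"converged after {used} samples, value ~ {value:.3f}")
```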

It would really depend on financial constraints though.

The BF has a lot more funds for paid development work than it used to have, but it’s still not even close to the point where they can hire the number of developers needed to work on every major area of Blender at once (especially full-time).

That’s the thing with FOSS: in most cases you simply can’t hire a major team of developers at the drop of a hat like Autodesk and friends can.

Can’t say exactly in terms of Rango… but large studios traditionally used one core for each render instance and ran many instances per machine. This way anything that is single-threaded still takes advantage of the multithreaded nature of the machine (in Cycles, loading into memory, spatial-split BVH building, some compositing steps, and saving out are all single-threaded). More than likely the Gooseberry artists were giving the entire 8 threads / 4 cores to a single renderer.
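
To make that older one-instance-per-core workflow concrete, here is a minimal Python sketch using Blender’s actual command-line flags (-b for background, -t 1 for one render thread, -f for the frame); the blend file name and frame range are made up for illustration.

```python
import multiprocessing
import subprocess

# Sketch of the old "one render instance per core" farm trick: N
# single-threaded Blender processes instead of one multithreaded one.
# The file path and frame range are hypothetical.
BLEND_FILE = "shot.blend"
FRAMES = range(1, 9)  # frames 1..8

def render_frame(frame):
    # -b: background, -t 1: one render thread, -f: render this frame
    # (flag order matters: -t must come before -f)
    subprocess.run(["blender", "-b", BLEND_FILE, "-t", "1", "-f", str(frame)],
                   check=True)

if __name__ == "__main__":
    # One worker per core; each worker runs a single-threaded render.
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        pool.map(render_frame, FRAMES)
```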

This hasn’t been the case for a while now, especially for any studio using Arnold, which is designed from the ground up to be massively parallel. The new RenderMan is the same way. The old way was only a necessity because the threading of RenderMan, until very recently, was god-awful. Most new movies render off on banks of 16- to 32-core machines.

I guess my information must be old! Thanks for clarifying! (Though Rango production was 2010, IIRC… it could still have been using the old method? More than likely they used RenderMan back then.)

2 hours per frame really isn’t too bad compared to the major CGI houses (ILM, etc.). On a Blender Guru podcast, Mike Farnsworth (Arnold developer, previously with Tippett Studio) said they only started to worry when a render exceeded four hours or so.

Is there a good reason that they need to use motion blur in Cycles? Most studios do it in post because of the render time increase. I presume they want to do so in order to improve its built-in motion blur (efficiency-wise)? But in all seriousness, getting nice motion blur in post usually takes far less time than getting any engine to render it.

Until you need motion blur behind transparency or in reflections, or with something like hair. Then only deep compositing might help you, and studios that have a pipeline set up for that also have the muscle to handle motion blur at render time. So I think the statement “most studios blur in post” should be “most small studios blur in post”. But comped motion blur is nice until you’ve nailed the look down for the final render. Also, doing both motion blur and DOF in post adds even more trouble.

Huh, I dunno, I’ve seen some larger studios just add motion blur in post. Generally you can get a vector motion pass (I don’t think that’s the exact name; I’m a bit too lazy to check, though) to help the motion blur become more accurate. At least I know this is possible in V-Ray. But anyway, thanks for explaining; it does make sense in those cases, I presume.
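
For what it’s worth, in Blender that pass is the Vector (a.k.a. Speed) pass, and the compositor’s Vector Blur node consumes it. A minimal bpy sketch of the post-blur setup follows (pass and socket names differ slightly between Blender versions, so treat this as a starting point, not gospel):

```python
import bpy

# Rough sketch of post motion blur via Blender's Vector Blur compositor
# node. The Vector/Speed pass must be enabled on the render layer first.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new(type="CompositorNodeRLayers")
blur = tree.nodes.new(type="CompositorNodeVecBlur")
comp = tree.nodes.new(type="CompositorNodeComposite")

blur.samples = 16   # blur steps along the motion vectors
blur.factor = 1.0   # roughly a shutter-time scale

tree.links.new(rl.outputs["Image"], blur.inputs["Image"])
tree.links.new(rl.outputs["Z"], blur.inputs["Z"])         # "Depth" in newer builds
tree.links.new(rl.outputs["Vector"], blur.inputs["Speed"])
tree.links.new(blur.outputs["Image"], comp.inputs["Image"])
```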

2 hours per frame is not that long? For a simple shot of a “talking head”? Come on, guys! Without a render farm you can forget about rendering your animation at home, and believe me, the prices of professional render farm services are quite steep. Sure, it’s not that long compared to the major CGI houses, but I don’t think they are interested in using Blender at the moment anyway. And imagine how these render times will skyrocket once there’s a much more complex scene on the way.

@RealityFox A vector pass is also only 2D, so it helps, but it is not a magic bullet. Imagine a semi-transparent pixel at the edge of an object that is blurred by DOF: what z-depth does it have? You cannot assign a single value, because it is a lens artifact, a mixture of different points. That’s one reason why high-quality DOF/motion blur can only be done at render time. But you are correct in that post work in this area is good enough for the majority of smaller TV productions and advertising, and pretty much everything in the scope of the average Blender user.

@mookie3d Who ever rendered any “talking head” animation with the features Gooseberry uses (SSS, hair, path tracing, motion blur) on a single workstation? State-of-the-art animation rendering on a single computer in a decent timeframe never was possible, and never will be until we get quantum computing, maybe.

hey!
Just to make this a little clearer: since that weekly I’ve managed to get this shot to render in under 16 minutes on my machine (12-thread i7 with 16 GB of DDR4 RAM), meaning noise-free at roughly 1000 samples. Most of the cost comes from using ‘expensive’ shaders like hair and SSS with 2 bounces, which make the noise clear up more slowly. So this can and will definitely be optimized. I’d say it’s a totally normal render time considering the stuff going on in the frame.

In my other experiences with Blender animation projects so far, we went from BI-rendered shots in Elephants Dream (14 minutes for a character shot) and BBB (grass shots took up to 1.5 hours) to Cycles in ToS and Caminandes 2 (20 minutes to 2 hours max). For this you should take into account that Cycles is a path tracer while BI is just a good old hybrid scanline renderer. I’d say the ‘slowdown’ is justified, since it gives a more accurate rendition of your light setup.

If you cannot afford a render farm, you simply have to turn down the bounces, exclude ‘expensive’ shaders, and buy two GPUs, and there you go. But even if you do have a render farm, you want your previews to render fast, so we’re definitely not going to go overboard with the… err… expensiveness. I really like fast renders.
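
(For reference, the knobs meant here live on the scene’s Cycles settings; a quick bpy sketch of that kind of trimming, with purely illustrative numbers rather than our actual shot settings:)

```python
import bpy

# Illustrative "turn it down" settings on the scene's Cycles properties;
# the numbers are examples, not a recommendation for any particular shot.
cycles = bpy.context.scene.cycles

cycles.samples = 500              # path-tracing samples per pixel
cycles.max_bounces = 4            # overall bounce cap
cycles.diffuse_bounces = 2
cycles.glossy_bounces = 2
cycles.transmission_bounces = 4
cycles.volume_bounces = 0         # volumes are costly; keep at 0 if unused

# GPU rendering also needs a compute device configured in the user
# preferences (CUDA/OpenCL, depending on the hardware).
cycles.device = 'GPU'
```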

Anyway, as far as I know Sergey is working on improving motion blur speeds, etc. As a last resort we will (reluctantly) go back to rendering with vector blur. (Reluctantly because you tend to split your shot into too many passes, and it takes an enormous comp just to assemble it again. Splitting scenes up like in BI isn’t one of Cycles’ strengths, and shouldn’t be.)

.andy
(the guy without hair in the video who pretends to know stuff about rendering)