Will Cycles ever support combined CPU and GPU rendering?

Surely there are some parts of a render the CPU could pitch in on that would be faster than if they were rendered on the GPU, or at the very least the CPU could get to them before the GPU and be faster anyway. I know it’s possible, though I’d guess it’s a very challenging feat that would take more time to program than it’s worth at the moment.

Do you think it will ever be so?

If the cost of the overhead of swapping back and forth between the CPU and GPU isn’t too high, maybe. Though I think that’s a pretty big hurdle to leap.

It is too high; that’s the issue. Keeping both in sync means constantly sending checks back and forth, which has enormous overhead for any scene worth rendering. You can do little things like check intersections on the GPU and feed that info to the CPU for shading, but I doubt there will be a CPU + GPU renderer that shows a linear speed increase as long as current motherboard configurations exist. Current computers simply aren’t designed for this task.

Well, that’s true if you think of it as them sharing one render… but you could indeed do two separate renders, and as long as everything is in the scene the result should be the same on the CPU or GPU.

Think of it in terms of render layers: send one off to the GPU and one off to the CPU, which shouldn’t cause any performance hit (not much, anyway; if your CPU is being maxed out, everything will be a little slower).

Here is a scene that could use that:


If we take the cubes that use SSS and mask them out, we can have the CPU render the floor and the GPU render the cubes at a much higher sample count.


We can do that now if we want, but it would be nice if there were a system to set it up for us, i.e. a drop-down box in the render layers to run multiple renders at the same time and pick which device and tile size each one uses.
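For the curious, here is roughly what that drop-down would have to do behind the scenes. This is only a sketch using Blender’s Python API (newer builds call these view layers; older ones expose them as scene.render.layers), and the layer names here are made up. Note it still renders the passes one after another rather than concurrently:

```python
import bpy

scene = bpy.context.scene

# Hypothetical mapping of layer name -> render device; replace the
# names with whatever your scene actually uses.
jobs = {
    "floor": "CPU",
    "sss_cubes": "GPU",
}

for layer_name, device in jobs.items():
    # Enable only the layer belonging to this pass.
    for layer in scene.view_layers:
        layer.use = (layer.name == layer_name)

    scene.cycles.device = device  # 'CPU' or 'GPU'
    scene.render.filepath = "//%s_%s" % (layer_name, device)
    bpy.ops.render.render(write_still=True)
```

A real version would also want per-layer sample counts and tile sizes, as suggested above.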

That’s crazy brilliant! Have a set of nodes made for a “CPU/GPU split” in which you’d only have to specify what you want the CPU to do to save some time. Blender would need a mode built in for this sort of thing; that way you could actually use both the CPU and GPU seamlessly.

The best way to do this right now would probably be mask layers… but this won’t respect any sort of material transparency going on.

They probably won’t build a mode in for this, as it’s quite a hacky workaround…

Well, my image is using mask layers. The only problem is we can’t render two things at the same time, so it’s not “sharing the load”; it’s just rendering one on the GPU and then one on the CPU.

Simple: open two instances of Blender with the same file that’s already been set up. Set one to GPU, the other to CPU.
(This works, right? Can’t test right now.)
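If juggling two open windows is a pain, the same idea can be scripted headless. A minimal sketch, assuming your file uses mask layers named like the ones below and that a compute device is already ticked in the preferences; each background process enables one layer, forces a device, and renders:

```python
import subprocess

BLEND = "scene.blend"  # the pre-set-up file mentioned above

def render_layer(layer_name, device):
    # Enable one view layer, force a device, then render frame 1
    # in the background. Layer names are assumptions.
    expr = (
        "import bpy;"
        "s = bpy.context.scene;"
        "[setattr(l, 'use', l.name == %r) for l in s.view_layers];"
        "s.cycles.device = %r" % (layer_name, device)
    )
    return subprocess.Popen(
        ["blender", "-b", BLEND, "--python-expr", expr, "-f", "1"]
    )

# Both processes run at the same time, one per device.
jobs = [render_layer("sss_cubes", "GPU"), render_layer("floor", "CPU")]
for job in jobs:
    job.wait()
```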

Yes, it should work, but his point is that it’s a mess: transparency is a problem, and you’ll also need to tweak render settings separately for the CPU and GPU.

And if you’re talking about rendering a whole frame on both the CPU and GPU, that may not work so well: every now and then there are major differences in how GPU and CPU renders look, which will cause flickering between frames (at least in past experience; they may have fixed it all by now).

But GPU volumes are nowhere near CPU volumes, so don’t even try that.

I did the same thing about two years ago when SSS and the hair shader had just come out. Most of the scene was on one layer and I rendered it with the GPU. Then the character’s skin and hair were rendered on a second render layer with a much higher sample count. Really, the only thing we need is a way to tell a render layer to render with either the CPU or GPU, and to render concurrently. :wink:

Yeah, yeah, I know. That last part is the catch. :smiley:

http://www.indigorenderer.com/node/1950

Why is it so difficult for the Blender devs to provide OpenCL support in Cycles when other renderers can do so? Is it only that funding is limited, or are there constraints set by the existing code framework?

OpenCL Cycles works fine, just not on AMD cards. It’s disabled by default because it is somewhat slower than CUDA on Nvidia cards, or the native x86 backend on CPUs, and can’t be compiled on AMD cards, so there’s not much reason to use it. You should (at least in theory) be able to use the OpenCL backend to render with CPU and GPU together, by essentially using your CPU as though it was another graphics card.
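For anyone who wants to try that, here is a minimal sketch of enabling both device types, assuming a build that still ships the OpenCL backend. The attribute paths are from newer bpy versions (older builds kept this under user_preferences.system), so treat them as assumptions:

```python
import bpy

# The Cycles add-on preferences hold the compute device list.
cprefs = bpy.context.preferences.addons["cycles"].preferences
cprefs.compute_device_type = "OPENCL"

# Refresh the device list, then enable everything detected, CPU
# included, so Cycles treats the CPU as one more OpenCL device.
cprefs.get_devices()
for dev in cprefs.devices:
    dev.use = True

# 'GPU' here just means 'use the ticked compute devices'.
bpy.context.scene.cycles.device = "GPU"
```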

As for why it doesn’t work on AMD cards, the subject has been beaten to death here and on the Blender wiki. The answers are there if you’re interested, but the tl;dr is that AMD’s driver sucks.

Other renderers don’t offer a full production feature set, and AMD’s drivers/OpenCL compiler suck.

Additionally, that Arnold test had nothing to do with Arnold aside from the developer who created it.

I would be more interested in having the GPU leverage system RAM when it runs out of on-board memory. That would let even a moderate GPU render complex scenes and save time over a CPU render of the same scene.

Again, motherboard architecture would be the limiting factor in that arrangement. It takes a lot more time to reach system RAM than video RAM. The tight-knit architecture of a video card is a large part of the GPU performance boost.

It’s not as simple as flipping a switch that says ‘use system RAM for the GPU’. The reason it hasn’t been done is that no one has found a way to make it boost performance. Do you want your GPU to render slower than your CPU but have the same amount of available memory? I didn’t think so.

I own an octo-core with an older GeForce 630M.
I’m not even sure if eight cores is still a lot these days,
but I think the days of single cores are numbered.

How about animation rendering?
What if the next frame went to whichever device is ready, be it the CPU or the GPU (and if both are ready, favor one side)?
They can both work on different frames, and if one is faster it wouldn’t even matter.
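That kind of scheduling is easy enough to sketch outside Blender: a shared frame queue with one worker per device, each spawning background renders. `--cycles-device` is a real Cycles command-line override; the file name, frame range, and device names below are placeholders:

```python
import queue
import subprocess
import threading

BLEND = "anim.blend"  # hypothetical animation file

frames = queue.Queue()
for f in range(1, 251):
    frames.put(f)

def worker(device):
    # Whichever device finishes first grabs the next frame, so the
    # faster one naturally ends up rendering more frames.
    while True:
        try:
            f = frames.get_nowait()
        except queue.Empty:
            return
        subprocess.run(
            ["blender", "-b", BLEND, "-f", str(f),
             "--", "--cycles-device", device],
            check=True,
        )

threads = [threading.Thread(target=worker, args=(d,))
           for d in ("CUDA", "CPU")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```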

  • Also, for animation rendering I’ve often wondered: might the rendering math speed up if it knew the previous frame?
    I mean, a lot is already solved in the previous frame. Maybe the camera moves a bit, but for large parts the subject usually stays the same.
    Well, maybe not for smoke sims, but think of a character walking down a street; things don’t change much then.
    But even smoke usually doesn’t move that fast, so most pixels would stay within 80% of their previous color range.

I have done concurrent renders on the CPU and GPU for animation with two running instances of Blender. It works pretty well, especially if you’re going to run it overnight. It certainly feels like it’s using 100% of your system resources when running that way, though, so multitasking while multi-rendering is out the window.

I have also done this, but it’s not the speed-up everyone here is looking for… it’s a workaround that takes more time to set up than most of my renders. xD

It looks like the people who made the bake tool have already made the exact changes to the render layers that we were talking about, except theirs is for baking.

Look at 1:03.

I think this method could be adapted and used across multiple instances of Blender, one using the GPU and one using the CPU.