2.72 CUDA Problems

I am not sure what is going on, but tonight I have been experiencing unusual problems with my 2.72 installation of Blender. First I noticed it was taking longer than usual to open texture images. Then tonight I tried to render a fairly simple scene: 500K vertices, with 1000 particles at 400 children each. Blender shuts down the render complaining of running out of CUDA memory, which I am watching, and it does run out.

I have done much more complex scenes than this with no problems before. Does anyone have any ideas?

8 GB RAM
GTX 560 Ti 2 GB
Core i5 2400 CPU

Do some basic troubleshooting.
Find out what in your scene is causing the lack of memory:
Simplify your scene - render and adjust, repeat
Reduce texture sizes - render and retest
Reduce total particles - render and adjust, repeat
etc.
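One quick way to follow the texture-size step above is to estimate how much VRAM each image will take before re-rendering (width x height x channels x bytes per channel). This is a back-of-the-envelope sketch, not Cycles' exact allocator behavior, and the texture list here is entirely made up for illustration:

```python
# Rough VRAM estimate per texture: width * height * channels * bytes per channel.
# This is an approximation only; Cycles' actual allocation may differ.

def texture_vram_mb(width, height, channels=4, bytes_per_channel=1):
    """Approximate VRAM footprint of one texture, in megabytes."""
    return width * height * channels * bytes_per_channel / (1024 * 1024)

# Hypothetical texture inventory: (name, width, height)
textures = [
    ("rug_diffuse", 4096, 4096),
    ("plant_leaf", 2048, 2048),
    ("wall", 1024, 1024),
]

for name, w, h in textures:
    print(f"{name}: {texture_vram_mb(w, h):.1f} MB")

total = sum(texture_vram_mb(w, h) for _, w, h in textures)
print(f"total: {total:.1f} MB")
```

Halving a 4K texture to 2K cuts its footprint to a quarter, which is why the texture-size step often frees the most memory.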

I have done that; the two things causing the problems are a rug with particles, and a plant. That still does not make any sense, given that I have done much more complex scenes before: more vertices, more particles, a lot more of both.

Didn't realize that I did not have 2.72b. Trying that one now.

That did not work, still the exact same problem.

Blender is probably telling you the truth: you are out of memory on your 2 GB card. Let’s do the math… 1000 * 400 = 400,000 particles. That seems like a lot to me. And what are your particles? Are they geometry with textures?
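The arithmetic behind that guess can be sketched quickly. The particle counts come from the scene described above; the per-instance vertex count and per-vertex byte size are made-up assumptions purely for illustration:

```python
# 1000 emitted particles, each with 400 children, as in the scene described.
emitted = 1000
children = 400
total_particles = emitted * children
print(total_particles)  # 400000

# If each particle instances geometry, memory grows with every instance.
# verts_per_instance and bytes_per_vert are hypothetical figures.
verts_per_instance = 100
bytes_per_vert = 48  # assumed: position + normal + UV, roughly
approx_mb = total_particles * verts_per_instance * bytes_per_vert / (1024 * 1024)
print(f"~{approx_mb:.0f} MB just for particle geometry")
```

Even with modest assumptions per particle, 400,000 instances can plausibly swamp a 2 GB card once textures and the kernel itself are added.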

If you have rendered this exact scene on the same computer with a previous version of Blender and it worked, then just use that version of Blender, and submit your scene as an example for a bug report.

But my guess is you may have actually changed the scene, ever so slightly, and that change caused more memory usage, which bottoms out your low-end GPU.

Have you tried rendering with the CPU?

Hi,
Maybe your textures are bigger than usual and fill your memory.
Just try CPU rendering and see how much memory is used then.

Rendering with the CPU on my MacBook now. Somewhere I must have made a change, because the memory used is a little over two and a half gigs, 2664.07 MB to be exact.

As the comment above says, a little over two and a half gigs.

Hi, they added some new features for Cycles and GPU rendering in 2.72.
This makes the CUDA kernel bigger and the VRAM footprint higher, even if you don’t use these features.
So it is possible for a scene to render with 2.71 but not with 2.72.
I can’t even render the default cube with the Experimental kernel enabled for SSS on GPU on my GTX 560 Ti 448 with 1.28 GB. :frowning:

Cheers, mib

Thank you for that info; I knew there was more to it than the possibility of having added or changed something in my scenes. It looks like I am going to have to upgrade my card, or at the very least get a second 560 Ti 2 GB card. I hope the devs figure out a way to not make the CUDA kernel bigger when those new options are not even enabled.

It is hard to do that right now, as Cycles currently uses a megakernel: toggling features on or off means recompiling Cycles each time, so shipping precompiled builds would mean creating one version of Cycles for each permutation of features, which would make the download size ridiculously big and the compile times ridiculously long.

Right now, they are compiling two kernels, one supported and one experimental, and even that takes ages to compile.

They are looking at splitting the kernel up into microkernels, but that refactor is not something that happens overnight.

Feel free to use older versions of Blender to render or work with. 2.70 / 2.71 brought some memory usage reductions; 2.72 brought some memory usage increases. It will fluctuate; nothing is static.

You are welcome. Just as a heads-up, VRAM is not added together with 2 or more cards:
the whole scene has to fit into each card.
I would go for a 4 GB card; the GTX 670 4 GB is cheap and on par with a GTX 760.
You could use the 4 GB card for display and have all of your 2 GB card free for Cycles.

http://www.blenderartists.org/forum/showthread.php?350975-The-new-Cycles-GPU-2-72-Benchmark&p=2751984&viewfull=1#post2751984

Cheers, mib

I have the same problem. GTX 480 with 1.25 GB (or something), and I can’t render anything with Experimental enabled. With Supported it’s fine.

I don’t know the cause. Is this really it? Is the kernel just too big?

With every feature, the kernel size increases. Bigger features (such as SSS) chew up more memory, even if the scene does not use that feature.

The experimental kernel has a few more features enabled for the GPU compared to the supported one, which is the reason why it doesn’t work on smaller cards.

Splitting up the kernel is something that may happen, but it won’t happen overnight.

How big is the experimental kernel now? It’s pretty amazing that I can’t render a single thing even with over 1 GB. I accept that this is true; I’m just wondering if there is anything I can do to help the situation. I would like to be able to use SSS, but my CPU is horribly slow in comparison…