2.68a Error - CUDA kernel compilation failed, see console for details

When I attempt to render my current scene via GPU, it stops after 0.02 seconds, leaving the render completely blank. When I enable rendered viewport shading it renders nothing and gives me the error, “CUDA kernel compilation failed, see console for details”. I can render it via CPU just fine, but that is unacceptably slow. I can render other scenes using the GPU, however. Where can I access Blender’s console log? I’m running Blender 2.68a under OS X, and I have already reinstalled the CUDA toolkit just to be sure.

Thanks for the help,
Matt VG

What is your GPU hardware? I’m on a MacBook Pro 6,1 and I haven’t been able to do GPU rendering since 2.65. The Nvidia 330M got left behind; shader model 1.2 is seemingly too ancient for the devs to optimise for. For a while on 2.65-2.66 I was able to compile successfully by changing the cmake config line where it said

sm_21,sm_22,sm_31

Deleting all that and putting in sm_12 worked for a while, but not anymore. Repeat: not anymore.

You will have to look closely at which shader model your specific video card supports. I hate to bring bad news, but it sounds like your card has fallen into obsolescence.
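
If you want to check it programmatically, here is a rough sketch using PyCUDA (a separate install, not something Blender ships with) that prints each card’s compute capability; sm_XY corresponds to capability X.Y:

# Rough sketch, assuming PyCUDA is installed alongside the CUDA toolkit.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    # e.g. "GeForce GTX 570: sm_20"
    print("%s: sm_%d%d" % (dev.name(), major, minor))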

Thanks for the response, Dustractor, but I can render other scenes on my GPU just fine, so I suspect something about my texturing is crashing the GPU. I was hoping to find out more about the error so I could track down the problem, but no luck thus far.

My rig:
OS X 10.8.4
3.49 GHz Intel Core i7
16 GB RAM
2x GeForce GTX 570, 1280 MB each

Hi, you may be running out of memory (VRAM); 1280 MB is not much if you use big textures, for example.
Try to render your scene with the card that is not connected to the display; this saves 300-400 MB.
I am not an OS X user, but you could start the Blender app from a terminal/xterm to see the console output.
Since 2.67 the CUDA kernels are precompiled, so there is nothing to compile on the user’s system.
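
If it helps, here is a rough sketch of selecting the non-display card from Blender’s Python console, assuming the 2.6x user-preferences API; the device name 'CUDA_1' is only a placeholder, yours may differ:

import bpy

# Sketch against the 2.6x API (User Preferences > System > Compute Device).
system = bpy.context.user_preferences.system
system.compute_device_type = 'CUDA'
# Placeholder identifier: pick the CUDA device that is not driving the display.
system.compute_device = 'CUDA_1'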

Cheers, mib.

Thank you Mib2berlin. I figured since I have two 1.28GB GTX 570 cards that textures wouldn’t be an issue. I only have a couple of textures that are 4096; the rest are smaller. I will decrease the texture sizes and see if that works. The thing is, it was rendering mostly fine up until yesterday. It had started crashing, seemingly at random, on render a few days ago, but up until yesterday I could eventually make it render.
My materials are rather complex and probably need to be simplified as well.

What does starting Blender from the terminal accomplish? Would that give me an error log?

UPDATE:
Resized all the textures and Blender still crashes on render. The preview render, however, now works and I no longer receive the CUDA kernel compilation error.

I ran Blender from the Terminal and found the crash log. Here’s what little it says:

Blender 2.68 (sub 0), Revision: 58536

backtrace

0 blender 0x0000000100138884 blender_crash_handler_backtrace + 70
1 blender 0x0000000100138abb blender_crash_handler + 451
2 libsystem_c.dylib 0x00007fff9594094a _sigtramp + 26
3 blender 0x0000000100000000 __dso_handle + 0

I think I’ll boot into Windows and see if I have the same issue.

The VRAM of two cards is not added together; you only have 1.28 GB.
Your system uses about 300-400 MB of VRAM for the display, and if you use both cards to render you only have ~800 MB for Cycles.
If you use only the non-display card you have the full 1.28 GB for Cycles, but slower render performance.

Cheers, mib.

Ah, I see! Then I need to figure out which card is the one I’m using for my display.

UPDATE:
I used the trial of iStat Pro to watch GPU memory usage as Blender renders the preview. You can also find which GPU is being used under OS X by opening the System Information app (Applications > Utilities).

Very little memory is used for the preview render and Blender still crashes on “full” render. The only difference I can tell between the preview and the full render is the number of particles. But I turned the particles all the way up for the preview render and it still renders just fine.

OK, I just switched to Progressive Render and the full render works now, so something’s screwy with tiled rendering. I had changed my tile size to 512, since there’s evidence that size renders fastest; when I changed it to 256, the tiled render works. So that apparently was the issue. I’m guessing it’s an out-of-memory problem, but neither RAM nor VRAM.
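
For reference, the tile setting I changed can also be set from the Python console; a small sketch against the 2.6x API:

import bpy

# Smaller tiles worked here; 512x512 was what triggered the crash on this setup.
scene = bpy.context.scene
scene.render.tile_x = 256
scene.render.tile_y = 256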

Oh, man; 27 seconds is sooo much nicer than 5+ minutes :)

Thank you everyone for your help.

UPDATE:
I spoke too soon. Sigh…

Does this mean anything to anyone?

Thread 1 Crashed:
0 libsystem_kernel.dylib 0x00007fff8cc560fa __psynch_cvwait + 10
1 libsystem_c.dylib 0x00007fff95956fe9 _pthread_cond_wait + 869
2 org.blenderfoundation.blender 0x00000001017091f1 IlmThread_2_0::Semaphore::wait() + 163
3 org.blenderfoundation.blender 0x0000000101705321 IlmThread_2_0::(anonymous namespace)::WorkerThread::run() + 57
4 org.blenderfoundation.blender 0x0000000101708774 IlmThread_2_0::(anonymous namespace)::threadLoop(void*) + 36
5 libsystem_c.dylib 0x00007fff959527a2 _pthread_start + 327
6 libsystem_c.dylib 0x00007fff9593f1e1 thread_start + 13

UPDATE 2:
Well, it looks like I was dealing with two separate issues.
The first was an out-of-memory error with CUDA when trying to preview render; I can preview render if I switch to the GPU that isn’t running the monitor.
The second issue was an apparently corrupted particle system that was causing the full render to crash. I deleted and recreated the particle system and I can now render.
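
In case anyone hits the same thing, the delete-and-recreate step can also be done from the Python console (a rough sketch; the particle settings still have to be redone by hand):

import bpy

# Operates on the active object's active particle system.
bpy.ops.object.particle_system_remove()
bpy.ops.object.particle_system_add()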

Is there any way we can get better error reporting in Blender?