How to turn a two-hour render into a ten-minute render

A little trick I discovered for Cycles (although it might work in BI too). I tried it on this scene from The Architecture Academy:

500 samples, 1080p
Time before: 1:45 (h:mm)
Time after: 0:12

A huge difference. I will try to explain what I did (please bear with me, English is my second language).
Here’s the idea:
Before rendering, Cycles collects all the information it needs about the scene. If the required memory exceeds your GPU (or CPU) limit, the render fails with “CUDA error: out of memory”.

Luckily for me, my GPU didn’t have enough memory: the textures were very big, and my card has only 2 GB. So I tried resizing the textures to use less space. For example, every book on the shelf had an HD-resolution texture, but its area in the frame was very small, so I took the book textures one by one and resized them. At that point I was able to render the scene in one hour and 45 minutes. Still a lot! So I kept resizing all the textures aggressively, and after all that hard work the render came down to 12 minutes. Such a huge difference!
So I thought: why doesn’t Cycles do this automatically? Here is what it would need to do in a pre-render pass:

1. Copy all the textures.
2. Check how much area each object takes up in the frame.
3. Resize each object’s textures according to that size.
4. Render the scene using the small textures.
5. Delete the small textures and restore the originals.

As you can see, this could save more than an hour per render. I don’t think it would be that hard to program, and the computer shouldn’t need more than a few seconds to do it. Unfortunately, I am not a programmer, but it would be awesome if some of you could implement it. A similar thing could be done with the Subsurf level, to reduce the polygon count.
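In the meantime, the manual resize pass can at least be scripted. Here’s a minimal sketch using Blender’s Python API, assuming a single global size cap instead of the per-object area check from step 2 (the `MAX_SIZE` value is arbitrary, and `Image.scale()` modifies the image in place, so work on a copy of your .blend):

```python
import bpy

# Cap every image at this resolution; anything larger is scaled down.
# 512 is an arbitrary choice -- tune it per scene.
MAX_SIZE = 512

for image in bpy.data.images:
    w, h = image.size
    if max(w, h) > MAX_SIZE:
        factor = MAX_SIZE / max(w, h)
        # Image.scale() resizes destructively, in place.
        image.scale(int(w * factor), int(h * factor))
        print(f"Resized {image.name}: {w}x{h} -> {image.size[0]}x{image.size[1]}")
```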

Cycles needs an open intermediate texture format that supports multiple resolutions. All texture formats should be converted to this intermediate mipmapped format at render time. Cycles could then pull the proper resolution depending on the object/camera distance and screen coverage.
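For what it’s worth, OpenImageIO already ships a command-line tool for exactly this conversion. A rough example of calling it from Python to produce a tiled, mipmapped .tx file (the paths are made up, and you should check `maketx --help` for the exact flags in your build):

```python
import subprocess

# maketx ships with OpenImageIO; it converts an image into a tiled,
# mipmapped .tx file that renderers like Arnold can stream on demand.
subprocess.run([
    "maketx",
    "textures/book_cover.png",      # hypothetical input path
    "-o", "textures/book_cover.tx",
    "--tile", "64", "64",           # tile size; 64x64 is a common choice
], check=True)
```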

Could someone explain to me why smaller textures are rendered faster?

This +1

You can also get similar massive speed ups by using vertex colors instead of textures when it makes sense to do so.

You could also get a speed-up if you combined all the book covers into one texture, but I am not sure it would be noticeable.
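For anyone who wants to try it, here’s a toy sketch of that idea using Pillow, stitching covers into a simple grid atlas (the file names, sizes, and grid width are invented, and the books’ UVs would still have to be remapped by hand):

```python
from PIL import Image

COVER_SIZE = 256   # each cover is resized to this, arbitrary
COLUMNS = 4        # grid width, arbitrary
covers = [f"covers/book_{i}.png" for i in range(16)]  # hypothetical paths

rows = (len(covers) + COLUMNS - 1) // COLUMNS
atlas = Image.new("RGBA", (COLUMNS * COVER_SIZE, rows * COVER_SIZE))

for i, path in enumerate(covers):
    img = Image.open(path).resize((COVER_SIZE, COVER_SIZE))
    # Place each cover in its grid cell, left to right, top to bottom.
    x = (i % COLUMNS) * COVER_SIZE
    y = (i // COLUMNS) * COVER_SIZE
    atlas.paste(img, (x, y))

atlas.save("covers/atlas.png")
```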

Great post!
I’m currently having crashes when rendering scenes with the Ocean modifier on GPU. I wonder if it could be related to this… Will do some tests. Thanks a lot!

+1. Cycles is abysmal at handling heavy textures. Part of the work is already built into OpenImageIO too (.tx files and the like). Why Cycles still has no mipmap/caching support is something of a mystery to me; it’s pretty much a standard renderer feature. It’s listed on the optimization ideas page, but that’s all anyone has said about it in ages.

+1 to mipmapped tiled maps using OpenImageIO. As can be seen here https://support.solidangle.com/display/ARP/Textures it makes a huge difference in Arnold too. I wonder, though, whether it’s usable on the GPU.

Because the GPU needs less memory for textures, it has more memory left for the calculations.

> Could someone explain to me why smaller textures are rendered faster?

Even nowadays it simply makes a difference whether you have to handle 10 MB of data or 100 KB :slight_smile:

In general, images modified internally in one part of Blender are not packed or reusable elsewhere in Blender. I guess this prevents ghost or vapourware-type data from existing: only external data or generated textures exist. The only way I have worked around this is by using the output from another scene after the modification is performed there. Unfortunately, that only applies in limited situations, like the node editor or the VSE.

@ShacharHarshuv

Good idea!

It’s a “must have” for Blender! :wink:

Kind regards
Alain

Does anyone know if any other render engine has this?
I am sure I’m not the first one to think of this idea.
Is there a programmer here who wants to work on it?

It would be a lot of work for the user, but a computer could combine all the textures into one in seconds. However, I think it would be more difficult to program, and I’m not sure it would give good results.

This add-on would be most powerful in a scene with a lot of textures. If the problem is the polygon count, it could be solved in a similar way, but I’m not sure how easy and effective that would be to program.

Arnold, V-Ray, and Mental Ray all do. It’s known as “mipmapping and caching” or other similar names. The texture is pre-converted to a special format with mip maps, pre-baked gamma correction, and some other things. The renderer only loads the largest mip level it needs, and it can flush data if texture memory usage gets too high. There’s also a related feature called tiled textures (or other names like that), where the renderer loads a texture in sections, so it can avoid loading the whole image and just load the part it needs.

You’re right, it is an obvious feature. It’s actually almost a standard feature in high-end renderers, and it is already on the Cycles roadmap. Some docs for Arnold’s texture cache feature (which is insanely effective) were posted earlier in this thread.

I knew I wasn’t the first to think of it. Do you know whether this is on the Cycles roadmap, and when it will be available to users?
I never really understood how mip maps work. What do they do exactly, and why are they effective?

It’s listed here, under Shaders > SVM: http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Optimization

I also asked DingTo about it in IRC a few weeks ago and he said something like “it’s on the roadmap”. I don’t think anyone really has a timetable for when it’s going to happen, but the devs are aware it’s something that should be done eventually.

Mipmapping is where you store multiple scaled-down copies of a texture inside the image file. For example, if you have a 2048x2048 texture, you also include smaller versions like 1024x1024, 512x512, and so on. The renderer only loads the largest one it needs. So if you made a 2k texture and you only need 512x512 for that particular frame, the renderer will just load the 512x512 version. It’s an automated version of what you’re trying to do in the OP. (In fact, your whole original post could be boiled down to “optimize your texture sizes!”.)
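To make that concrete, here is a toy sketch (not Cycles code, just a made-up function) of how a renderer might pick the mip level from the resolution it actually needs:

```python
import math

def pick_mip_level(full_size: int, needed_size: int) -> int:
    """Return the mip level to load: 0 is the full-resolution image,
    and each level above it halves the size (2048 -> 1024 -> 512 ...)."""
    if needed_size >= full_size:
        return 0
    # Each level halves the resolution, so the level is log2 of the ratio.
    return int(math.floor(math.log2(full_size / needed_size)))

# A 2048px texture that only covers about 512px on screen:
level = pick_mip_level(2048, 512)
print(level, 2048 >> level)  # -> 2 512
```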

Optimizing ahead of time isn’t always doable, though. If you are rendering the scene from multiple angles, your original textures need to be big enough to hold up at all angles. So let’s say you are rendering an image of a room: one shot is in the doorway, one is at the couch. You have a detailed coffee table by the couch, so you give it a high-res texture so it looks pretty in the couch shot. In the doorway shot, though, the high-res coffee table texture is a waste of memory and CPU time. You can’t hand-optimize it by downscaling, because you DO need it for the couch angle! Mipmapping takes care of that: it loads a low-res copy of the coffee table texture for the doorway render and the hi-res one for the couch render, all automatically, without you having to do anything.

Tiling is an extension of the same idea. In addition to having multiple resolution levels, you break each level into chunks of a certain size (like 64x64 pixels), and the renderer only loads the chunks it’s actually using at the moment. If the renderer is passing over an object whose UVs sit in the top-left of a texture atlas, it can load just the top-left tile of the atlas and skip loading and looking up all the other regions of the texture that aren’t being used.
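And a toy illustration of the tile lookup, again in Python with invented paths and sizes (a real renderer does this inside its texture cache, and needs a genuinely tiled file format for it to actually save I/O):

```python
from PIL import Image

TILE = 64  # tile edge length in pixels; 64 is an arbitrary choice

def load_tile(path: str, u: float, v: float) -> Image.Image:
    """Load only the tile that contains UV coordinate (u, v).

    Note: with a plain PNG, Pillow still decodes the whole file under
    the hood; a tiled format (TIFF, .tx) lets the reader decode just
    this region. This only demonstrates the indexing.
    """
    img = Image.open(path)
    w, h = img.size
    # UV (0, 0) is bottom-left by convention; image rows run top-down.
    x = min(int(u * w) // TILE * TILE, w - TILE)
    y = min(int((1.0 - v) * h) // TILE * TILE, h - TILE)
    return img.crop((x, y, x + TILE, y + TILE))

# An object whose UVs sit in the top-left corner of an atlas:
tile = load_tile("textures/atlas.png", 0.05, 0.95)
```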

Thanks for the explanation. But instead of storing several versions of the texture in the file, why can’t the engine resize the texture and create those versions itself?