Unlimited layers?

Are you not a programmer or just a poorly educated programmer? Of course there are cases where you don't notice the difference between an indeterminate number of string comparisons and what amounts to a single processor instruction. There are also cases where it absolutely will make a difference, like when you're doing it millions of times during rendering or physics simulation (as Blender does). Whether it matters or not, using strings is guaranteed to be slower, use more memory and scale worse. Also, don't even think of doing it "your way" on a GPU.

The reason why Blender uses bitmasking for layers is that someone made a mistake. It happens.

Sure, whatever. Just stay out of software development wherever performance matters, and we’ll all be fine.

Perhaps not a mistake but a case of premature optimisation. It happens to a lot of programmers, and Blender is far from the first or last application to have its feature set suffer from it.

The thing is, the bitfield is optimal for "inner loop" type operations where one is unable to determine up front which objects are going to be needed for visibility determination, collision testing, and whatnot. However, it is trivial to determine "sets" of objects up front and then only apply operations (be they rendering to the screen or collision in a hair sim) to the objects in that set. In many cases this is what the bitfield is used for in the first place (creating these sets), as any programmer will tell you that looping through all objects in an inner loop is going to be inefficient regardless of whether the inclusion test is string matching, a bitfield, or membership in a hash map.
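
To make the "sets up front" idea concrete, here is a minimal sketch in C (toy types and names, not taken from Blender's actual code):

typedef struct ToyObject {
    unsigned int layers;   /* bitfield: bit i set means "on layer i" */
} ToyObject;

/* Build the set of objects on a given layer once, outside the hot loop.
 * The render or physics inner loop then iterates only over out_set. */
static int build_layer_set(ToyObject **all, int count, unsigned int layer_bit,
                           ToyObject **out_set)
{
    int n = 0;
    for (int i = 0; i < count; i++) {
        if (all[i]->layers & layer_bit)   /* membership test, done once per object */
            out_set[n++] = all[i];
    }
    return n;
}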

And, of course, this is assuming the programmer is testing membership within a layer by something as inefficient as string matching in the first place. Objects belonging to a bunch of layers can indicate their membership by having an array of pointers to the layers they belong to. Any test for inclusion will be a loop over integer comparisons. Assuming objects don't belong to more than a handful of layers, we're talking a single bit comparison against (let's say) around five equality comparisons. Cost difference compared to rendering, physics, simulation, etc. - negligible.
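
For illustration, a rough sketch of the two membership tests being compared (toy types, not Blender's actual structs):

#include <stdbool.h>

typedef struct Layer { int id; } Layer;

typedef struct ObjBits { unsigned int layer_bits; } ObjBits;        /* bitfield style */
typedef struct ObjPtrs { Layer **layers; int num_layers; } ObjPtrs; /* pointer-array style */

/* Bitfield: a single AND plus a compare. */
static bool on_layer_bits(const ObjBits *ob, unsigned int layer_bit)
{
    return (ob->layer_bits & layer_bit) != 0;
}

/* Pointer array: a short loop of pointer (i.e. integer) comparisons. */
static bool on_layer_ptrs(const ObjPtrs *ob, const Layer *layer)
{
    for (int i = 0; i < ob->num_layers; i++) {
        if (ob->layers[i] == layer)
            return true;
    }
    return false;
}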

I assume some people will take umbrage at the above (it seems impossible to point out anything less than perfect about Blender without it these days), but it's a perspective from someone making their living as a developer who happens to have several applications using an unbounded number of layers in their rendering loops. The input files identify the layers by name (i.e. strings). The internal code uses pointers (i.e. integers). I just might know what I'm talking about :wink:

I might as well be talking to a wall, but…

However, it is trivial to determine “sets” of objects up front and then only apply operations (be they rendering to the screen or collision in a hair sim) to the objects in that set.

It’s not so trivial to store, maintain and use such sets for each of the use-cases for layer masks in Blender (there are many!). For physics, storing which objects affect each other would lead to a combinatorial explosion. For rendering, storing it per-pixel (for masking) would consume a lot of memory.

And, of course, this is assuming the programmer is testing membership within a layer by something as inefficient as string matching in the first place. Objects belonging to a bunch of layers can indicate their membership by having an array of pointers to the layers they belong to.

Maybe so, but in terms of complexity (in the CS sense), there is no difference.

Any test for inclusion will be a loop over integer comparisons. Assuming objects don't belong to more than a handful of layers, we're talking a single bit comparison against (let's say) around five equality comparisons. Cost difference compared to rendering, physics, simulation, etc. - negligible.

Don’t forget the cost of an additional pointer dereference (cache miss!) and the added complexity of allocating a small dynamically-sized array. And again, trying to determine if two objects share a layer becomes much more expensive - bitfields really are the way to go here.
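
To spell out the "do two objects share a layer" case with toy types (an illustration, not Blender code) - bitfields need one AND, pointer arrays a nested loop:

#include <stdbool.h>

typedef struct Layer { int id; } Layer;
typedef struct ObjBits { unsigned int layer_bits; } ObjBits;
typedef struct ObjPtrs { Layer **layers; int num_layers; } ObjPtrs;

/* Bitfields: a single AND, no matter how many layers exist. */
static bool share_layer_bits(const ObjBits *a, const ObjBits *b)
{
    return (a->layer_bits & b->layer_bits) != 0;
}

/* Pointer arrays: up to num_layers * num_layers pointer comparisons. */
static bool share_layer_ptrs(const ObjPtrs *a, const ObjPtrs *b)
{
    for (int i = 0; i < a->num_layers; i++)
        for (int j = 0; j < b->num_layers; j++)
            if (a->layers[i] == b->layers[j])
                return true;
    return false;
}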

I assume some people will take umbrage at the above (it seems impossible to point out anything less than perfect about Blender without it these days), but it's a perspective from someone making their living as a developer who happens to have several applications using an unbounded number of layers in their rendering loops. The input files identify the layers by name (i.e. strings). The internal code uses pointers (i.e. integers). I just might know what I'm talking about :wink:

You are talking about one particular use-case in your particular codebase. You're obviously not talking about Blender. So, no, you don't really know what you are talking about. At least take the time to do a half-assed study of the source code, like I did :wink:

I’m not a real programmer, so this may be stupid, but can’t Blender just use bigger or more bitfields to represent more layers as needed?

Yes, with the same issues as now - the number is limited by the size of the bitfield. Unless one goes redonkulous in redundancy (i.e. wastes some 32 bytes for a max of 256 layers), you're still going to bump into the limit. More importantly, once you go outside what fits into a register (32 bits on some older machines, 64 bits on most) you are doing multiple bit compares already.
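
A rough sketch of what a wider mask would look like (hypothetical, e.g. 256 layers across four 64-bit words) - the test is still cheap, but it is no longer a single register compare:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical 256-layer mask spread over four 64-bit words. */
typedef struct LayerMask256 {
    uint64_t words[4];
} LayerMask256;

/* Overlap test: up to four AND/compare pairs instead of one. */
static bool masks_overlap(const LayerMask256 *a, const LayerMask256 *b)
{
    for (int i = 0; i < 4; i++) {
        if (a->words[i] & b->words[i])
            return true;
    }
    return false;
}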

There really isn't a need to do it with bitfields though, especially given the large (backward compatibility breaking) changes that are already planned for Blender 2.8. With cache prefetches, auto unrolling of loops, dual pipeline comparing of integers (i.e. layer pointers), etc. from today's compilers & CPUs - there is very little benefit to be gained from the bitfields.

Take a look at the Cycles code & interface, it's not looping through every object every pixel. The difference between a loop of pointer comparisons & a bitfield compare is not even going to register as a blip on the overall render.

Take a look at the Bullet physics engine used for Blender's physics simulation - it doesn't even know about Blender's bitfields; the objects are loaded into it at the beginning of the sim (once).

Take a look at the Hair simulation code & interface - the performance heavy features don’t care about layers at all.

And so on.

The issues with replacing bitfields with an arbitrary number of layers are primarily compatibility (something we’ve already been warned isn’t guaranteed with the move to 2.8), user interface (the ubiquitous layer buttons will need to be replaced), and developer time (it’s a boring thing to alter and no developer has stepped forward volunteering to do so). It isn’t about performance because, frankly, the difference between a single bitfield comparison and twenty pointer comparisons is so negligible it wouldn’t be noticed by the user until other, far worse, problems with Blender performance make the application unusable.

It looks like you are making poor use of linking.
Honestly, to build a city, you should create a .blend per building, tree or animated character and link them as groups into your city blend.
That means 20 layers per building, tree or character. It is really easy to manage with the Edit Linked Library add-on.
The ability to use local view on a selection and Mask modifiers on objects also helps to focus the viewport on what you are working on.

Be more self-confident in your abilities as a modeler. You don't need to keep a join operation in your history to be able to do a separate one.
You only need to keep a version for irreversible modifications.
And here, too, the better level at which to keep versions is a .blend saved as a copy.

Dear BeerBaron,
the seemingly unfathomable substances that scientists qualify as "dark" and that make up 96% of the known universe are actually my ego.
You have to try way harder than that to even dent it.
To answer your very presumptuously put question, I am not a programmer, not even a poorly educated one.
If I had to think of a way to "provably" say what I am, in the context of a discussion about the way Blender is coded, I'd say that in one of my local forks of Blender, Python is potentially just one of a plethora of languages I can use to write "scripts", and that is a consequence of actually reducing both the complexity and the size of the existing codebase.
I say potentially because, at this very moment, I'm moving the implementation from the "hack & slash" repository to the "properly written" one, and I decided that CMake sucks too much so I'm designing and implementing a better build system. It's the luxury of not being a programmer: I can take as much time as I like, to do what I want, the way I want it.
But I still have the original code and the working executable to prove the thing.
I’d call that my current threshold in knowledge and skill regarding the source code, its design and the possible useful changes.
I can zip the entire repository and send it to you so that you can verify it and deliver your verdict. My ego won't care an iota; it just doesn't stop repeating "I'm a frigging god". And expanding.
Oh, by the way, I’m pretty sure I could also design and implement a change to the codebase that will offer “unlimited” layers, comparable performance and backward compatibility. And I don’t even know C++, isn’t that crazy?
And read “Object Oriented Programming in the BETA Programming Language” if you want to understand why that bitfield is a mistake.

BeerBaron's comments remind me of a conversation I overheard many years ago, in an age when almost everybody was programming at least in C: it was an erudite comparison of the clock cycles of JMP vs. CALL (x86 code) in one place in a program…

Hello Zeauro,

thanks for the tips; there is a reason why I would like to do things 'my way', or rather in the 'usual' way.
I would like to contact you in a few weeks anyway about this.

So, my opinion:

Organizing huge projects can involve different methods.
The most widespread ones are using layers, using links (xrefs, etc.) and combinations of the two (I'm not talking about grouping now).

20 layers are simply not enough for CAD-like workflows, especially with the lack of 'official' colored wireframes, etc.
So it is a weak point for Blender that must be solved in the near future.
It does not mean that there are no workarounds; it just means that people like me (and former users of other 3D apps) hate workarounds once we are used to doing things in a better way (or we simply like having the choice of whether or not to use layers instead of linking).
But that is a long story and I really appreciate your help, so thanks again.

'Be more self-confident in your abilities as a modeler. You don't need to keep a join operation in your history to be able to do a separate one.
You only need to keep a version for irreversible modifications.
And here, too, the better level at which to keep versions is a .blend saved as a copy.'

There is no problem with my self-confidence; I've been a 3D guy since 1995.
The problem is more about possible mistakes and other factors that mean a model should stay 'editable' as much as possible. And yes, backups are important, but they are also a very nasty approach to project organization. Possibly collecting 'earlier' versions in a different scene could work; I will test it soon.

So again, thanks.

So what I get from this is that no one knows when the 20-layer limitation will be resolved? I would assume this would be one of the biggest requests from Blender users. Layers are a central part of any production workflow, so I don't understand why this still hasn't been looked at. I would still be happy with a 256 or even 128 layer limit. I don't think I've ever needed more than 60-70 myself. I also use nested layers in Max quite extensively. That would be quite awesome too.

256/128 should be enough for me, too.

Simply raising the number of layers is not a solution. How would you manage 128 layers with the current system? A completely new system would be needed and you would probably lose the ability to have a single object on multiple layers. Such a small limit would then still be very limiting (imagine linking 100 assets - each with 5 layers - and you are over quota).
IMHO a much better solution would be better asset management and, most importantly, IN PLACE linked group editing. This way you can easily link and edit the groups and each group can have its own set of 20 unique layers. You can have a nice hierarchy with as many layers as you want - you are only limited to 20 layers per scene. The only problem with this workflow is that it's a PITA to set up and edit, and it is quite buggy - mainly due to the fact that only a few people actually use and test it.

I think the whole point is here. Blender is not a CAD app.
So using it, with the idea to keep a complete editability of an enormous scene seems to be a mistake.
Maybe thinking of a split between a scene edited in FreeCAD and then animated and rendered in Blender would be less frustrating.

If you're working with CAD, do you have some sort of naming convention in your files?
Hiding by name might be easier than adding extra object properties.
e.g. drop a few Suzannes and type:
bpy.ops.object.select_pattern(pattern="Suzanne*")

Note it supports wildcards,
and press H to hide or Alt+H to unhide.

It could, but that’s a lot of code to change and it would break compatibility. Instead, there could just be a different layer management system for organizational purposes.

Also, from a conceptual standpoint, do you actually want an organizational layer system where one object can exist on multiple layers? I'd find that confusing, considering how layers work in many other applications.

Wasn't your solution to store a pointer (8 bytes on 64-bit) to a dynamically-sized array of pointers (8 bytes per element plus 4 bytes for length, plus overhead due to heap fragmentation)? Not saying it's a "wrong" solution, but the overhead is not much lower in the best case and much worse in the comparable limit case (e.g. if an object is on all of the 256 layers that you could have had with a 32-byte bitfield).
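
Roughly, the two layouts being weighed up look something like this (typical sizes for a 64-bit build, an illustration rather than anything measured from Blender):

#include <stdint.h>

/* Fixed-size mask: 32 bytes stored inline in the object, enough for 256
 * layers, no heap allocation and no extra dereference on access. */
typedef struct MaskLayers {
    uint64_t bits[4];
} MaskLayers;

/* Pointer-array scheme: a pointer plus a count in the object, and a
 * separately heap-allocated array of layer pointers (allocator overhead,
 * plus an extra dereference - a potential cache miss - on every test). */
typedef struct PtrLayers {
    struct Layer **layers;   /* 8 bytes */
    int num_layers;          /* 4 bytes, plus padding */
} PtrLayers;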

With cache prefetches, auto unrolling of loops, dual pipeline comparing of integers (i.e. layer pointers), etc. from today's compilers & CPUs - there is very little benefit to be gained from the bitfields.

You cannot prefetch the array pointer dereference in this case. The memory access really makes all the difference, because in your “favorable” case (“test layer membership of one object that is only on a handful of layers”) that’s what’s going to cause all the base overhead. Loop unrolling and integer compares don’t matter at that point.

Take a look at the Cycles code & interface, it's not looping through every object every pixel. The difference between a loop of pointer comparisons & a bitfield compare is not even going to register as a blip on the overall render.

Cycles isn’t using layers for (per-pixel) shading calculations, but Blender Internal is. This isn’t just a “loop over all objects to determine visibility” situation. I’m not gonna claim I understand all the use-cases for layers in Blender, but you apparently are just ignoring a lot of them.

Take a look at the Bullet physics engine used for Blender's physics simulation - it doesn't even know about Blender's bitfields; the objects are loaded into it at the beginning of the sim (once).

I’m not sure if Blender scene layers interact here, but collision groups are also 20-element bitfields, just like scene layers. Bullet is using bitfields, because that’s the way to do collision masking.
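
For reference, collision masking in physics engines usually boils down to a group/mask test along these lines (sketched generically, not Bullet's actual API):

#include <stdbool.h>

typedef struct CollisionFilter {
    short group;   /* bitfield: what this body is */
    short mask;    /* bitfield: what this body collides with */
} CollisionFilter;

/* Two bodies collide only if each one's group is accepted by the other's mask. */
static bool should_collide(const CollisionFilter *a, const CollisionFilter *b)
{
    return (a->group & b->mask) && (b->group & a->mask);
}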

It isn’t about performance because, frankly, the difference between a single bitfield comparison and twenty pointer comparisons is so negligible it wouldn’t be noticed by the user until other, far worse, problems with Blender performance make the application unusable.

You don’t know that. Either way, if I had to argue why this shouldn’t be changed, the amount of work required (compared to adding another system) would be reason enough. Just look at all the places in the code where the existing layers are used.

Uh… ok.

Would you like to take a bet?

Let me get this straight, I’m supposed to read a 22-year-old book on a misguided programming paradigm, about a language that went nowhere, to understand how using bitfields was a mistake in a real codebase for a particular problem domain? I’ll pass, but let it be known that I do have my own share of poor education on “Object Oriented Programming”.

Oh, it does? Well in the case of Blender, it's about a thousand places, and a difference between a single cycle and several hundred cycles (for BTolputt's solution) in the best case, but an essentially unbounded amount for the general case. Does the difference matter? Probably not, in 98% of cases. That doesn't mean the other 2% don't matter. Either solution has its merits - not just in terms of performance, but also in terms of simplicity.

Of course it is, not even just for per-pixel ops, but actually inside the innermost loop possible (node intersection in the (Q)BVH traversal).
The relevant source code is intern/cycles/kernel/geom/geom_bvh_traversal.h (all the VISIBILITY_FLAG areas) and intern/cycles/kernel/kernel_path_state.h (path_state_ray_visibility). According to a comment in there, even the simple bitfield logical test gives a 5% performance hit (!).
Actually, the visibility test there is not just about layers, but also the Diffuse/Glossy/… ray visibility options, so layers can't even be extended to the full 32 bits with the current code.
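
The shape of that test is roughly the following (names and bit positions made up for illustration, not copied from the Cycles sources):

#include <stdint.h>

enum {
    VIS_RAY_CAMERA  = 1 << 0,    /* ray-type visibility flags in the low bits... */
    VIS_RAY_DIFFUSE = 1 << 1,
    VIS_LAYER_0     = 1 << 20    /* ...with layer bits packed into the upper bits */
};

/* A single bitwise AND inside the innermost BVH traversal loop. */
static int node_visible(uint32_t node_visibility, uint32_t ray_visibility)
{
    return (node_visibility & ray_visibility) != 0;
}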

This is just to give some info, I’m not particularly in favor/against any solution.

You're right. I took only a cursory look and decided that "visibility" must be for ray visibility only, overlooking the fact that the layers are actually stored in the upper bits.

Well I never wrote an addon… but ehm, if someone did, maybe here's a code hint to hide/unhide many Suzanne* objects by name with a wildcard. Code:


import bpy

# Select every object whose name matches the wildcard pattern.
bpy.ops.object.select_pattern(pattern="Suz*")

# Toggle viewport visibility for each selected object.
for obj in bpy.context.selected_objects:
    obj.hide = not obj.hide


I kinda wonder if such mini scripts could be macros for the Python console; people who are into CAD might like that.
Then have “Suz*” as an argument string for a command.

Is the layer management system of 3ds max generally considered a good one? Could I consider that a good model or does anyone have a more suitable example of the set of features required by such a system?

@pgi: Whilst I got a laugh out of your “dark matter ego” comment, it’s honestly not worth engaging BB in these kinds of discussions. Assuming that history serves correctly and your quote is representative of the rest of what BB is posting, it’s just meant to rattle your cage and aggravate. He had the same MO in his previous account which is why some of us have blocked both.

FWIW, I took a look over the Blender code last night to refresh my memory. Some of this I've mentioned before, but I repeat it for completeness given it's no longer from recall; I can point to code lines and UI.

  • Bullet simulation of physics does not use the bit flags for the layers. Instead it has its own bit-flags for the purposes of collision groups. Whilst it is pretty simple to put in a broadphase filter callback (which excludes collision code pretty high up the Bullet simulation loop, i.e. not in the tight "inner loop"), it is not needed and (more importantly) demonstrates a clean separation between render layers and simulation groups.

  • Particles & force fields on the other hand do use the scene layers. It would be trivial to change the code to use their own bitfields, like Bullet, in the upcoming recode (do remember particles & hair are up for major changes in 2.8) and this would make using them intuitive for anyone already using rigid body physics (i.e. with clean separation between render layers and physics/particle groups).

  • Vertex Groups: Layers of additional data in a mesh object, identified by name (not bitfield), looped over per frame by deformers. The performance problems of this loop as opposed to a bitfield optimisation? So negligible no-one has even noticed. :wink:

So, in other words, we already have separate bitfields for the performance intensive areas. We are replacing code of an area that doesn’t (and which should be using similar concepts to physics in the first place!). We are already running searches over external arrays of arbitrary size per rendered frame using strings to identify which to select & which to ignore without any complaints about performance.

Technically speaking, there isn’t a good reason not to change the layer system, at least in regards to performance or memory. The actual problems with implementation are the usual suspects: the change is boring without much glitz/glamour, it is unlikely any of the developers would volunteer for the task because of this, and it hasn’t actually got the BF/Ton tick of approval (& therefore no-one is going to be tasked with the change).