OSL: what does a closure actually do?

I have a question about what a closure actually is and how it is used by the renderer. From my background in quantum mechanics, I can guess that a closure is something that should add up to one, but other than that, I don’t understand it.

The reason I’m asking is that I would like to write a shader that simulates the behaviour of a diffraction grating, or actually the look of an opal, which is a naturally occurring photonic material. The problem is that grating diffraction requires the use of Bragg’s law, which contains the angle of the incident ray with the normal and the angle of the outgoing ray with the normal. In the old-fashioned case of a loop over light sources, it would be easy to compute those angles for all light sources, but as I understand from the OSL documentation, an OSL shader does not contain a light loop. So how can one go about finding the angles? I experimented a little bit with geometry nodes, but I just don’t understand what I’m doing, so it is getting me nowhere. Is there some OSL guru out there who can give me some pointers? I’m quite a lousy programmer, but a rather good physicist, so I should be able to understand this stuff…

Thanks in advance, guys! And happy new year.

The term “closure” in this case has a programming language background. In some languages, functions can be passed around as function objects into other functions, to be evaluated at a later time. Some languages also allow nested functions with access to data defined in outer scopes.

An example in JavaScript:

function adder(x) {
    // the inner function "closes over" x from the enclosing scope
    function closure(y) {
        return x + y;
    }
    return closure;
}

// add_one is a function object
const add_one = adder(1);
// evaluates to 3
add_one(2);
// also evaluates to 3
adder(1)(2);

In the above example, the function declared inside “adder” captures (or encloses) the value of x.

With OSL, this term is used because OSL scripts are supposed to capture all the relevant data and return a function that can be evaluated like a BSDF, likely using Monte Carlo integration. BSDFs themselves are not evaluated inside OSL scripts; that’s the job of the renderer. What you want to do is implement a BSDF, which requires modifying Cycles itself.
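To make the analogy concrete, here is a hedged sketch in plain JavaScript (not real OSL — the names `diffuseShader` and `bsdf` are made up for illustration): the shader run only builds a closure that captures its parameters, and the renderer evaluates that closure later, for directions the shader never sees.

```javascript
// The "shader": its only job is to construct and return a closure.
function diffuseShader(albedo) {
  // This returned function is the closure: it encloses `albedo`,
  // but nothing is evaluated at shader-execution time.
  return function bsdf(cosThetaIn) {
    // Lambertian term: albedo / pi, weighted by the cosine of the
    // incident angle (a stand-in for a real BSDF).
    return (albedo / Math.PI) * cosThetaIn;
  };
}

// Renderer side: shader execution produces the closure...
const bsdf = diffuseShader(0.8);
// ...which the renderer then evaluates for directions it chooses.
const value = bsdf(1.0); // 0.8 / pi ≈ 0.2546
```

The split is the point: the OSL script corresponds to `diffuseShader`, while everything involving actual ray directions happens inside the renderer’s call to `bsdf`.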

The (admittedly cumbersome) alternative is to provide the light data itself to the OSL script somehow, do the classic light loop, and then pass that result into an emission closure.
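As a rough sketch of what that manual light loop would have to compute (again plain JavaScript rather than OSL, with hypothetical names like `shadePoint`; in OSL the light positions would have to arrive as shader parameters, which is the cumbersome part):

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function normalize(v) {
  const len = Math.sqrt(dot(v, v));
  return [v[0]/len, v[1]/len, v[2]/len];
}

// P: shading point, N: unit surface normal, lights: [{pos, intensity}]
function shadePoint(P, N, lights) {
  let result = 0;
  for (const light of lights) {
    // direction from the shading point towards the light
    const L = normalize([light.pos[0]-P[0], light.pos[1]-P[1], light.pos[2]-P[2]]);
    // cosine of the angle between the light direction and the normal --
    // this is where an angle-dependent law like Bragg's would plug in
    const cosTheta = Math.max(0, dot(N, L));
    result += light.intensity * cosTheta;
  }
  // in OSL, this scalar would then be fed into an emission closure
  return result;
}
```

For example, a light directly above the point (`shadePoint([0,0,0], [0,0,1], [{pos: [0,0,2], intensity: 2}])`) gives 2, since the cosine term is 1.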

Ah, ok. So a closure is just like a callback function in a GUI.

As for the rest of your answer: is what I want really that different from a normal reflection shader? That also needs to find the angle of incidence and the angle of reflection to see if there is a light source there, right? Or is that also handled by the renderer in a BSDF? Do you have a good reference on how the shaders are called from the renderer? Thanks

I’m not that into OSL, etc., but I believe you can get the incident direction of the ray (and info about where it came from, e.g. the camera or whether it bounced off a surface).

However, you don’t control the other (“reflection”) direction from OSL, because the integrator can have the path continue into any direction it wants (which might be the direction to a light, the environment, another surface or just a random direction…).

You only know the actual directions while evaluating the BSDF, which is the responsibility of the renderer. You’ll have to look at the source where the BSDFs are implemented (and from where they are called) in “./blender/intern/cycles/kernel/closure”.
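To sketch what that renderer-side evaluation looks like (plain JavaScript, only loosely mirroring those C++ closure implementations; `evalBsdf` and its arguments are hypothetical names): a BSDF eval function receives both directions at once, so this is the one place where a law depending on both angles, like Bragg’s condition, could actually be applied.

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// wi: incident direction, wo: outgoing direction, N: surface normal
// (all unit vectors pointing away from the surface)
function evalBsdf(wi, wo, N) {
  const cosThetaI = dot(wi, N); // cosine of the incident angle
  const cosThetaO = dot(wo, N); // cosine of the outgoing angle
  if (cosThetaI <= 0 || cosThetaO <= 0) return 0; // below the surface
  // placeholder: a simple diffuse lobe; a grating model would combine
  // cosThetaI and cosThetaO (e.g. via Bragg's condition) here instead
  return cosThetaO / Math.PI;
}
```

Note that both `wi` and `wo` are simply handed in by the integrator; the BSDF never chooses them, which is exactly why this logic cannot live in the OSL script.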

This framework exists specifically to enable stochastic (Monte Carlo) rendering, so you’ll likely want to understand that first.
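The Monte Carlo idea itself fits in a few lines. A hedged sketch (plain JavaScript, with a made-up name `estimateHemisphereIntegral`): estimate the integral of cos(theta) over the hemisphere, whose exact value is pi. A path tracer does essentially this per bounce, with the BSDF sitting inside the integrand.

```javascript
function estimateHemisphereIntegral(samples) {
  let sum = 0;
  for (let i = 0; i < samples; i++) {
    // for directions uniform over the hemisphere, cos(theta) is
    // itself uniformly distributed on [0, 1]
    const cosTheta = Math.random();
    sum += cosTheta;
  }
  // the uniform hemisphere pdf is 1/(2*pi), so the estimator is the
  // sample average scaled by 2*pi
  return (sum / samples) * 2 * Math.PI;
}
// converges to pi ≈ 3.14159 as the sample count grows
```

This also shows why the integrator, not the shader, owns the directions: each sample is a randomly chosen direction, and the BSDF is just evaluated wherever the sampler lands.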