A recent paper demonstrates that bidirectional path tracing is feasible on the GPU

http://www.cescg.org/CESCG-2015/papers/Otte-Efficient_Implementation_of_Bi-directional_Path_Tracer_on_GPU.pdf

There’s some very nice information on bidirectional tracing in general here as well (such as how to sharply reduce the performance hit compared with regular path tracing). Now that a way to do such sampling on the GPU has been shown, perhaps it can become a higher-priority item for Cycles development.

So, is bidirectional path tracing research now at the point where it’s practical to add to Cycles, or is more work still needed?

Would be great to have a bi-dir option in Cycles.

Bidirectional path tracing is old news. Double the rays, the same noise.

That’s not really true though - especially for more difficult lighting situations.

hum… unless we are talking about different algorithms, bidirectional has the very same disadvantages as path tracing: variance shows up as high-frequency noise that is usually very difficult to sample in shadows and dark corners. If the visibility rays which connect eye-ray hits with light-ray hits carry a lot of variance, you again have a very big computational problem: halving the noise means four times more samples. Visibility-ray variance can reach several f-stops in many cases, for instance anytime you have visible light sources in indoor scenes. Basically, anytime you have to simulate overexposure in your renders, these brute-force algorithms stop making sense.
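
The “halving noise means four times more samples” remark is just the 1/√N convergence of Monte Carlo estimators. A quick, purely illustrative Python sketch (a toy noisy integrand, nothing renderer-specific):

```python
import random
import statistics

def estimator_noise(n_samples, trials=2000):
    """Standard deviation, over many trials, of a Monte Carlo mean
    of a noisy integrand (exponential samples stand in for a
    high-variance visibility term)."""
    random.seed(42)
    estimates = []
    for _ in range(trials):
        total = sum(random.expovariate(1.0) for _ in range(n_samples))
        estimates.append(total / n_samples)
    return statistics.stdev(estimates)

err_n = estimator_noise(64)
err_4n = estimator_noise(256)
# Quadrupling the sample count only roughly halves the noise.
print(err_n / err_4n)
```

Quadrupling the sample count only halves the standard error, which is why high-variance visibility rays get so expensive.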

Besides, bidirectional was meant to solve that little component nobody seems to care about, which is caustics, but there are better ways to render that component without resorting to the complexities of the bidirectional algorithm.

I don’t know why people are so afraid of implementing caustic photon maps in their render engines. They are fast, directional, high-resolution maps, quite compatible with frame rendering in animations (at least more so than path-traced caustics), and you can build them in several passes with SPPM. Unless you have issues with precomputation and RAM storage of data structures before the eye-ray pass, it’s not that bad a solution.
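
For readers unfamiliar with the idea, here is a minimal, purely illustrative sketch of the density-estimation step at the heart of a photon map; the scene and numbers are made up, and real SPPM wraps this in progressive passes with a shrinking search radius:

```python
import math
import random

random.seed(1)
# Photon pass (toy version): pretend each photon landed somewhere
# on a unit-square floor with uniform probability.
photons = [(random.random(), random.random()) for _ in range(10_000)]

def density_estimate(x, y, k=50):
    """Photon density at (x, y): take the k nearest photons and divide
    by the area of the disc that just contains them (k-NN estimate)."""
    d2 = sorted((px - x) ** 2 + (py - y) ** 2 for px, py in photons)
    radius2 = d2[k - 1]
    return k / (math.pi * radius2)

# 10,000 uniform photons on a unit square => density near the centre
# should come out around 10,000 (up to estimator noise).
print(density_estimate(0.5, 0.5))
```

A real implementation would store the photons in a kd-tree instead of sorting all distances per query, but the radiance estimate is the same idea.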

If it were about MLT then this could be very interesting, but plain bidirectional path tracing is such old news!

From what I read, bidirectional tracing excels at cases where there’s a lot of indirect lighting and where the light sources are small (because the sampler will always know where the light is).

Also, it’s not old news when you note that this is an implementation that works on the GPU (which would now make such a system a good fit for Cycles).

The sampler always knows where the lights are. What it doesn’t know is how else to get there if there’s something in the way. The idea for bidirectional is to start paths also from the lights and then connect those with the camera paths. That means tracing twice the path segments plus the rays connecting those segments, so bidirectional causes considerable overhead. It pays off best when there are light sources which are somewhat occluded but still contributing significantly, such as a lamp behind a lampshade.
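
As a back-of-the-envelope illustration of that overhead, here is a toy ray-count model (the depths and the one-shadow-ray-per-vertex assumption are illustrative, not Cycles internals):

```python
# Toy cost model for one bidirectional sample (not a real renderer):
# trace a camera subpath and a light subpath, then test visibility
# between every pair of vertices to form complete transport paths.

def bdpt_ray_count(camera_depth, light_depth):
    # Path segments traced for the two subpaths...
    subpath_rays = camera_depth + light_depth
    # ...plus one visibility ray per (camera vertex, light vertex) pair.
    connection_rays = camera_depth * light_depth
    return subpath_rays + connection_rays

def pt_ray_count(camera_depth):
    # Unidirectional: camera segments plus one next-event-estimation
    # shadow ray per vertex.
    return camera_depth + camera_depth

print(pt_ray_count(4))       # 8 rays
print(bdpt_ray_count(4, 4))  # 24 rays: the "considerable overhead"
```

The quadratic connection term is why the paper linked above tries to minimize the number of path segments and connections.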

“Also, it’s not old news when you note that this is an implementation that works on the GPU (which would now make such a system a good fit for Cycles).”

Bidirectional implementations on the GPU have existed for years, see https://www.youtube.com/user/Dietger86
It shouldn’t be a big deal to implement on modern GPUs, but keeping all those path segments in GPU local memory is likely to limit parallelism. The paper you posted also tries to minimize the path segments and connections. You’d also have to give up the simplifying assumption that all rays start from the camera, which means current materials won’t work anymore, at least in general.

For those interested in comparing Bidir vs. Unidir, I suggest you play around with Mitsuba a bit, to see in what scenes the bidirectional integrator really pays off.

“https://www.youtube.com/user/Dietger86” Yeah, I need me some of that xD

There is also the Metropolis branch (there must still be some links here on the BA forum), or https://developer.blender.org/T38401
It’s a form of optimized bidirectional path rendering; revision 20 also worked on the GPU.
It’s still in development and has been paused for a while (because Lucas has been working on lots of other areas in Blender).

Reading the comments on it, Ace, I see you wrote several suggestions there too.
Oh well, perhaps a wrong forum thread title; I too sometimes find it difficult to summarize things into a one-liner.

Anyway, let’s hope the next step for those researchers is extending Cycles :slight_smile:

The MLT patch has no bidirectional path tracing. It is only a basic Metropolis sampler for unidirectional path tracing. LOTS of Cycles would have to be rewritten/retooled to get everything ready to play nicely with a bidirectional integrator.

Hm, I haven’t fully read the paper yet, but currently I fail to find the new contribution. It seems like the author just describes BPT and then presents his timing results.
Generally, BiDir on GPUs itself is not new. The current reference probably is http://cgg.mff.cuni.cz/~jaroslav/papers/2014-gpult/2014-gpult-paper.pdf (even including VCM, which is again a lot more advanced than BPT), but that stuff has been done for quite some years by now.
As BeerBaron said, the main problem is the memory required for subpath storage, but that can easily be traded off against speed (see Table II in the linked paper). Also, this storage of intermediate path data already applies to the current AMD Kernel Split AFAIK since it has to store all path states between kernel calls.

The way I see it (I don’t have much insight into production workflows, so correct me if I’m wrong), the problem with BPT isn’t speed (it’s quite fast with some tricks), the number of parameters (that’s one of its advantages, along with pure path tracing, over other methods like photon mapping) or implementing it on the GPU (the way I see it, some of the advanced Cycles features are considerably more complex than a simple GPU BPT).

The main problem for the huge gap between research and production systems (I mean, come on, BPT is 22 years old) is that the more advanced an algorithm gets, the more it relies on physical laws and principles. However, these are usually seen as a limit for creative use, so production systems try to overcome this limit with tons of tricks like NPR (the name says it all), Ray Visibility, the lightpath node, layer control etc.
For Path Tracing, this is fine. PT is extremely robust, and it handles these examples pretty well. BPT, however, is fundamentally based on the concept of “Helmholtz reciprocity”, which means that light rays can be reversed with no effect. This is what allows it to create paths both from the camera and from the light, because in the end the result will be the same. However, it won’t be the same once you use the flexibility that Cycles gives you. The result is weird artifacts, for example because subpaths from the light have hit an object that the camera rays didn’t hit (say the path is “Light <-> Glossy <-> Object <-> Diffuse <-> Camera” and the object has visibility for diffuse rays disabled). Another example is shaders that don’t obey energy conservation (aka “reflection higher than 100%”): PT usually gets along with this stuff, but some more modern algorithms just go berserk on your image in these cases.

The way I see it, that’s the main reason why renderers that focus on design and architecture (like VRay or Indigo) start to go into the BPT direction nowadays, while renderers that focus on VFX and movies (like Arnold or Appleseed), where you need these unphysical tricks way more, tend to stay with unidirectional methods. Cycles, according to its design documents, is focused on movie production, so that’d explain why BPT was and is no big target.

So, as a TL;DR: Adding BPT to Cycles, even on GPUs, should be possible today. However, it would probably conflict with many features that Cycles has, which would IMO raise the question why both PT and BPT have been stuffed into the same engine if the integration was done badly. So, it’s mostly design questions, not technical difficulties limiting Cycles+BPT. It is most likely possible to find a nice integration, but definitely not easy.

Again, this is of course my personal opinion and I’m not really an expert in the rendering industry, so I might be very wrong with my conclusions.

Hi, there are simple methods to get better performance with a GPU path tracer.
Octane uses coherent path tracing: http://graphics.ucsd.edu/~iman/coherent_path_tracing.php
and “path term power” (whatever that means).
You can test it in your browser http://home.otoy.com/render/octane-render/purchase/
Play with settings http://render.otoy.com/universe.php#51Render%20Kernel%20Settings
These have drawbacks, of course, but they are easier to implement, if I understand the papers correctly with my noob knowledge.

Cheers, mib