Will LuxRender soon have the ultimate lighting algorithm?

Well, after looking at this 2-3 months back (as shown in my post above), I also came up with a few possibilities where bidirectional sampling could work (though I have to admit it was worked out on a napkin after a good few beers). I’ve been working on a screen-space ray-traced reflection system that uses multiple cameras to provide higher-angle reflectance correction based on the main viewpoint, using angle/view importance sampling. I think there’s a good chance that mixing screen-based methods with this could work, but I need to put some more thought into it, and into whether it would still be faster by a decent amount after adding the extra complexity. I’ve also been looking at a screen-space filter to convert per-pixel data to voxels, with each voxel encoding 360-degree relevance. To be continued…
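In the meantime, to give a rough idea of the screen-space part (just a toy Python sketch, not my actual system; the buffers, step counts and thresholds are made-up placeholders), the core of any screen-space reflection is a ray march against the depth buffer, and the multi-camera/ghost-view idea is essentially running this per view and merging the hits:

```python
import numpy as np

def ssr_march(depth, color, origin_px, step_px, origin_z, step_z,
              max_steps=64, thickness=0.05):
    """March a reflected ray in screen space; return a hit colour or None."""
    h, w = depth.shape
    px, py, z = float(origin_px[0]), float(origin_px[1]), origin_z
    for _ in range(max_steps):
        px, py, z = px + step_px[0], py + step_px[1], z + step_z
        x, y = int(px), int(py)
        if not (0 <= x < w and 0 <= y < h):
            return None            # ray left the screen: the classic SSR failure case
        scene_z = depth[y, x]
        if scene_z < z <= scene_z + thickness:
            return color[y, x]     # ray dipped behind visible geometry: call it a hit
    return None

# Toy buffers: a flat wall at depth 1.0 with a bright strip for the ray to find.
depth = np.full((64, 64), 1.0)
color = np.zeros((64, 64, 3))
color[:, 50:60] = [1.0, 0.8, 0.2]
print(ssr_march(depth, color, (5, 32), (1.0, 0.0), 0.5, 0.01))
```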

Screen space ray tracing has been shown to have some big limitations. Check out some of the post-mortem data from Killzone Shadow Fall.

I know it does; it’s always going to be second best to a full-on ray tracing/path tracing engine. But for realtime graphics, multi-view importance sampling is a shed-load faster than most solutions at gaining the extra z-depth info needed for a decent screen-space reflection system.

We’re off topic though, as the point I was making is relevant to Dade’s new system (which I still say I came up with first, lol): bidirectional tile-based importance sampling is possible when using similar techniques.

The only reason bidirectional is unachievable is the single camera view used for noise evaluation and reduction; add extra offset (ghost view) cameras plus multi-camera importance sampling and there’s no reason it couldn’t be adapted for bidirectional. Doing this segment in screen space should be accurate enough without hitting performance negatively.

Again, like I said, I need to take a closer look, but think of this as just detecting the noise level in any given area (a tile in this case) with a screen-space quad filter that feeds back to the render engine where sampling is needed. Adding multi-view importance sampling then allows bidirectional evaluation, and if each screen-space pixel can be converted to a voxel that encodes 360-degree variance sampling in screen space, this COULD become really fast.
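Something like this is what I mean by the feedback part (a rough Python sketch only; the noise metric, tile size and threshold are stand-ins, and the actual sampling here is faked with random numbers just so it runs):

```python
import numpy as np

def tiles_needing_samples(buf_a, buf_b, tile=32, threshold=0.01):
    """Return (x, y, w, h) for tiles whose noise estimate exceeds the threshold."""
    h, w = buf_a.shape[:2]
    todo = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            a = buf_a[y:y + tile, x:x + tile]
            b = buf_b[y:y + tile, x:x + tile]
            # Mean absolute difference of two independent half-buffers is a
            # cheap stand-in for a per-tile variance estimate.
            if np.mean(np.abs(a - b)) > threshold:
                todo.append((x, y, a.shape[1], a.shape[0]))
    return todo

# Toy feedback loop: keep "sampling" until every tile is under the cap.
rng = np.random.default_rng(0)
buf_a, buf_b = rng.random((128, 128, 3)), rng.random((128, 128, 3))
passes = 0
while (todo := tiles_needing_samples(buf_a, buf_b)):
    passes += 1
    for x, y, tw, th in todo:
        # Stand-in for rendering more samples: the two half-buffers converge.
        m = 0.5 * (buf_a[y:y+th, x:x+tw] + buf_b[y:y+th, x:x+tw])
        buf_a[y:y+th, x:x+tw] = m + rng.normal(0, 0.05 / passes, (th, tw, 3))
        buf_b[y:y+th, x:x+tw] = m + rng.normal(0, 0.05 / passes, (th, tw, 3))
print("all tiles under the noise cap after", passes, "passes")
```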

3DLuver: One needs to note that Dade is no dummy when it comes to ray tracing; he is actually one of the most experienced render coders you can find right now (even when you include people in the commercial realm).

He actually did try to make adaptive sampling work with bidirectional sampling before the LuxCore rewrite began, but it was found that the connections that need to happen can occur pretty much anywhere, and there was no known method to restrict the connections to only the places flagged as needing more samples.
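A toy illustration of why that is hard (my own sketch, not anything from Dade’s code): when light subpaths are connected back to the camera, each connection splats to whatever pixel the vertex happens to project onto, so samples requested for one noisy tile mostly land everywhere else on the frame.

```python
import random

WIDTH, HEIGHT, TILE = 256, 256, 32

def project_to_pixel(vertex):
    # Stand-in for a real camera projection of a world-space vertex.
    x, y, _ = vertex
    return int((x * 0.5 + 0.5) * (WIDTH - 1)), int((y * 0.5 + 0.5) * (HEIGHT - 1))

random.seed(1)
tx, ty = 0, 0                      # the one tile the adaptive pass asked to refine
inside = 0
for _ in range(1000):
    # One light-subpath vertex somewhere in the scene (a [-1, 1]^3 box here).
    vertex = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
    px, py = project_to_pixel(vertex)   # connect the vertex to the camera
    if tx <= px < tx + TILE and ty <= py < ty + TILE:
        inside += 1

print(f"{inside} of 1000 light connections landed in the requested tile")
```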

How would your multi-camera method work exactly? Can it work automatically for scenes of any layout and complexity (including those with dozens of lights)? Are there any lighting cases where it would fail? And do you already have a proof of concept with comparisons to images rendered with bidirectional Metropolis sampling and vertex merging?

Well, a good old paper that I feel never got enough respect was this one, aimed at realtime ambient occlusion, which used multi-view techniques and importance sampling to pump up the screen-space accuracy.

Multi-view Ambient Occlusion with Importance Sampling

Read the paper well and you’ll understand why I’m using similar techniques for screen-space ray-traced reflections. As the paper shows, by taking into account the viewpoint of the camera (and ghost cameras based on the view and scene collisions with cameras, or placed for the scene’s lights where their importance is more relevant), screen-space per-pixel issues can be solved to very near ground truth. Also bear in mind this is now quite an old paper and more advances in terms of performance have been found since then. In my view the GPU and screen space are ignored to their detriment even for offline rendering; like all things in life, mixing strategies is the way forward. Offline and realtime should never be a completely separate line in the sand; like two opposing views on any important issue in life, the best result is usually an even mix.
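The gist as I read it (a loose Python sketch; the confidence weighting below is my own simplification, not the paper’s exact formulation): each view, main camera or ghost, contributes an occlusion estimate for the shading point, and the estimates are blended with weights that favour the views seeing the surface most directly, recovering occlusion the main camera alone would miss.

```python
import numpy as np

def combined_ao(point_normal, views):
    """views: list of (direction_from_view_to_point, ao_estimate) pairs."""
    n = np.asarray(point_normal, dtype=float)
    weight_sum, ao_sum = 0.0, 0.0
    for view_dir, ao in views:
        d = np.asarray(view_dir, dtype=float)
        # A view looking at the surface head-on gets the highest confidence.
        w = max(0.0, float(np.dot(n, -d)))
        weight_sum += w
        ao_sum += w * ao
    return ao_sum / weight_sum if weight_sum > 0.0 else 1.0

# Main camera grazes the surface (unreliable estimate), ghost view sees it head-on.
main_view  = ((0.0, -0.2, -0.98), 0.9)   # (direction toward the point, AO estimate)
ghost_view = ((0.0, -1.0, 0.0), 0.4)
print(combined_ao((0.0, 1.0, 0.0), [main_view, ghost_view]))
```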

Some goodies from the next SIGGRAPH (no papers yet, just abstracts). Dropping them here since this is a render tech thread.

Fast Tile-Based Adaptive Sampling with User-Specified Fourier Spectra

Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation

This one sounds really interesting; he’s the guy behind Progressive Photon Mapping and Stochastic Progressive Photon Mapping (scroll down the page):
Multiplexed Metropolis Light Transport

Fusion of Markov chain Monte Carlo and multiple importance sampling.

And three papers from Wenzel Jakob (Mitsuba) with Marschner (and others).

  1. A Comprehensive Framework for Rendering Layered Materials
  2. Discrete Stochastic Microfacet Models
  3. Rendering Glints on High-Resolution Normal-Mapped Specular Surfaces

Does anyone know when this will be available as a regular download?

Here you go.
http://www.luxrender.net/forum/download/file.php?id=20312&t=1

Hard to judge at this low resolution, but I can’t see any seams. Just like m9105826 said, there should not be any issue.

@theoldghost - not sure, but it’s quite random every year; I don’t know the criteria, maybe it’s when the technical papers committee accepts them.

@Fishlike - a test covering the same doubt I had has been posted, and there are slight seams indeed. But raising the quality makes them almost disappear:

http://www.luxrender.net/forum/viewtopic.php?f=8&t=10955&start=30#p104856

Some seams are slightly noticeable; however, that is a side effect of using a large threshold value. The algorithm only guarantees a cap on the noise of each single tile; under the cap, anything can happen.
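To put numbers on it (invented values, just to show the effect): two adjacent tiles can both satisfy the same cap while ending up with quite different residual noise, and it is that difference across the boundary that reads as a seam.

```python
CAP = 0.02               # per-tile noise threshold (made-up value)
tile_a_noise = 0.019     # stopped the moment it dipped under the cap
tile_b_noise = 0.004     # happened to converge much further before its check
assert tile_a_noise < CAP and tile_b_noise < CAP   # both tiles "pass"
print("residual noise ratio across the boundary:", tile_a_noise / tile_b_noise)
```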

I’d really be curious to test such a technique in Cycles.

I honestly still can’t see any seams in Dade’s extreme example of producing seams :smiley: