Will Luxrender soon have the ultimate lighting algorithm?

http://www.luxrender.net/forum/viewtopic.php?f=8&t=10947

Look at the images produced with a new algorithm in LuxCore. Could this bidirectional Metropolis sampling scheme with vertex merging be the algorithm to end all algorithms, as in the only advances possible past that point being optimizations?

Now if Lukasstockner or someone else can get this into Cycles, the results look so free of noise that it could actually become usable for animation. Imagine all of the ridiculous scenes that the Project Gooseberry artists could make then.

So if this were in Cycles, would this be the final solution that would allow one to literally render anything, or is there still a ways to go?

I love Luxrender. The work they are achieving is unbelievable. Go Luxrender!

I would say LuxRender is back in the game?

Would be nice if there were some render times…

Shh secret.

One algorithm for this, on CPU not GPU?

OMG… mind blowing

It’s funny how recently people in this very forum were talking about how Lux needs to change their development structure or be relegated to the has-been category. Looks like they’re doing what they do very well indeed.

Here’s another cool feature:


Green tiles are done (I presume a convergence value is set)
Yellow tiles are in progress (being rendered)
Red tiles are incomplete and still need more rendering.

The number in each tile says how many samples that tile has. Will be interesting to test it on more demanding scenes.
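The colour scheme above can be sketched in a few lines. This is a hypothetical illustration of per-tile adaptive sampling, not LuxCore's actual code: the tile states, the 1/sqrt(samples) noise model, and all names are assumptions.

```python
# Hypothetical sketch of per-tile adaptive sampling: tiles keep receiving
# samples until their noise estimate drops below a convergence threshold.
# The names and the variance model are illustrative, not LuxCore's API.

def tile_state(noise_estimate, threshold, in_progress=False):
    """Classify a tile the way the screenshot colours them."""
    if noise_estimate <= threshold:
        return "green"   # converged: no more samples needed
    return "yellow" if in_progress else "red"

def adaptive_pass(tiles, threshold, samples_per_pass=8):
    """One scheduling pass: only unconverged tiles get more samples."""
    for tile in tiles:
        if tile["noise"] > threshold:
            tile["samples"] += samples_per_pass
            # Monte Carlo noise falls off roughly as 1/sqrt(samples)
            tile["noise"] = tile["base_noise"] / (tile["samples"] ** 0.5)

tiles = [{"samples": 0, "noise": 1.0, "base_noise": 1.0},
         {"samples": 0, "noise": 1.0, "base_noise": 4.0}]  # a harder tile
for _ in range(10):
    adaptive_pass(tiles, threshold=0.2)
print([t["samples"] for t in tiles])  # → [32, 80]: the noisy tile got more
```

The point is that easy tiles stop early while difficult ones (caustics, fireflies) keep accumulating samples, which is exactly what the per-tile sample counts in the screenshot show.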

From this page here: http://www.luxrender.net/forum/viewtopic.php?f=36&t=6485&start=20#p104761

Another feature is volume precedence: intersecting volumes are rendered depending on their ‘importance’ value, so boundaries between volumes are easy to set up (or don’t really need to be set up at all).

http://www.luxrender.net/forum/viewtopic.php?f=8&t=10907
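The precedence rule can be sketched roughly like this. The `Volume` class, the `importance` field, and `active_volume` are all assumptions made up for illustration; they are not LuxCore's actual API.

```python
# Illustrative sketch of volume precedence: where volumes overlap, the one
# with the highest 'importance' value wins, so overlap boundaries don't
# need to be modelled explicitly. Names are hypothetical, not LuxCore's.

class Volume:
    def __init__(self, name, importance, contains):
        self.name = name
        self.importance = importance
        self.contains = contains  # point-membership test

def active_volume(volumes, point):
    """Return the volume that governs shading at `point`, if any."""
    inside = [v for v in volumes if v.contains(point)]
    if not inside:
        return None
    return max(inside, key=lambda v: v.importance)

# A glass of water: the water volume overrides the glass where they overlap,
# so the modeller never has to carve the water out of the glass mesh.
glass = Volume("glass", importance=1, contains=lambda p: 0.0 <= p <= 2.0)
water = Volume("water", importance=2, contains=lambda p: 0.5 <= p <= 1.5)
print(active_volume([glass, water], 1.0).name)  # → water (overlap region)
print(active_volume([glass, water], 1.8).name)  # → glass (no overlap there)
```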

Have a look at this thread here on the new LuxCore adaptive rendering:

http://www.luxrender.net/forum/viewtopic.php?f=8&t=10955&sid=8774701c5a8832e330716af5a8f74863

The author, Dade, said that it’s quite easy to plug in over an existing tile renderer. To quote:

“This new idea can be easily plugged over any existing tile rendered like BIASPATHCPU and BIASPATHOCL (… and looking at Cycles).”

Should make Ace happy xD

What’s nice is that Dade is producing a treasure trove of rendering technology under the same license as Cycles, and Cycles itself is not intruding too much on what LuxCore is doing, because its design doesn’t restrict you to physical plausibility and instead aims for maximum artistic freedom (at the cost of a higher learning curve if you really are trying to simulate physical plausibility). This seems like the better direction for Cycles as of now, because it is being designed to work with animation, while the bidir-Metropolis-vertex-merging algorithm is considered a bit slower, but steady, and can render about anything.

That said, Luxcore would probably be the engine to use if what you want to do is the flawless emulation of photography with no shortcuts or tricks, as the materials have a flawless physical plausibility out of the box for one thing.

Boy do they. Lux’s small material collection is incredibly impressive. With a minimum of tweaking, you can hide CGI items in photographs with almost complete believability.

For me, the killer feature would be a rendered viewport. That’s all I really miss when I use Lux.

I started a thread a while ago with the same idea for Cycles, to allow early-out rendering for distributed rendering setups like on Amazon, called:

Blender Cycles rendering image noise percentage detection
Quote"As the image is constantly being updated with more and more samples could the devs implement an off screen render image buffer technique where the render progress is output progressively say every 30 seconds to the frame final PNG file (so it’s the same image that just gets overwritten time and time again until the render is stopped) and have a noise evaluation algorithm detect the total image percentage of noise.

Idea being once the image hits a low enough image percentage of noise the frame render can be stoped by hitting this pre defines noise level.

This way we can guarantee that when doing distributed rendering with cycles scene’s that each frame render job has equal quality final renders no matter the scene and shader complexity without having to set stupidly high levels of max samples before the frame stops rendering, (eg set max 10,000 samples for the render quality but if the noise algorithm realises that the image has already reached the image noise ratio percentage of say 5% image noise it can early out the render and push a new frame into the render que rather than waiting till hits 10,000 samples".
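The early-out idea quoted above can be sketched as a loop with a convergence check. This is a hedged sketch under assumed names: the noise metric here (mean absolute per-pixel change between checkpoints) is one simple choice, not what Cycles or LuxCore actually uses.

```python
# Sketch of the proposed early-out: estimate remaining noise at each
# checkpoint and stop once it falls below a target, instead of always
# burning through the full max-sample budget. All names are hypothetical.

def noise_percent(prev, curr):
    """Mean absolute per-pixel change between two checkpoints, as a %."""
    diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
    return 100.0 * diff

def render_with_early_out(sample_image, max_samples, target_noise,
                          checkpoint=256):
    samples = checkpoint
    prev = sample_image(samples)
    while samples < max_samples:
        samples += checkpoint
        curr = sample_image(samples)
        if noise_percent(prev, curr) <= target_noise:
            break  # converged: push the next frame into the queue
        prev = curr
    return samples

# Fake renderer: pixel estimates settle toward the true value as 1/samples.
def fake_render(samples):
    return [0.5 + 10.0 / samples, 0.25 - 10.0 / samples]

used = render_with_early_out(fake_render, max_samples=10000, target_noise=0.5)
print(used)  # → 1024, far short of the 10,000-sample ceiling
```

The payoff for a render farm is exactly what the quote describes: simple frames release their node early, while the max-sample cap still bounds the worst case.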

I was told it was a bad idea. Good to know my thoughts were in the right direction. The only difference is I didn’t think about it per tile. :slight_smile:

I’ve been more or less occupied with other things the last year, but I should have more time going forward for all things Lux soon, and rendered viewport is very high up on my list.

A video demo of adaptive rendering:

It really is too bad that it’s not possible to produce an adaptive rendering implementation that can use bidirectional sampling. You can imagine how much time that would save while still having full-quality caustics.

I guess for Cycles it wouldn’t be too bad, because we can just use the ‘filter glossy’ setting with adaptive sampling to get better caustics than would normally be achievable with path tracing. But I can see why it would be difficult with bidirectional sampling, since it’s considered hard enough just to understand the algorithm well enough to get it working at all.

Really interesting stuff, Dade has great talent.

I wonder, though: wouldn’t tile “seams” be visible? In the chrome ball example there are tiles with 92 passes next to tiles with 14. But without getting rid of the grid it’s hard to judge.

If these seams between different noise levels are visible, wouldn’t it be possible to offset the coordinates of half of the passes by half of the resolution of a single tile?

Something like this:

64x64px tiles: half the passes are computed with the green tile coordinates, the other half with the blue scheme. This way the “seam” gets blended, avoiding a sharp difference in noise level. Don’t know if that makes sense or is even possible O__O

https://db.tt/e6h0RvX8
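The suggested two-grid trick can be shown in one dimension. This is purely illustrative, under the assumption of 64px tiles: a seam of the normal grid falls in the middle of a tile of the half-shifted grid, so any noise discontinuity is only present in half the passes at any given pixel.

```python
# Sketch of the suggested scheme: half the passes use the normal tile grid,
# the other half use a grid shifted by half a tile, so tile borders of one
# grid land mid-tile in the other. 1-D and hypothetical, for brevity.

TILE = 64

def tile_index(x, offset=0):
    """Which tile pixel x belongs to, for a grid shifted by `offset`."""
    return (x + offset) // TILE

def on_seam(x, offset=0):
    """True if x sits exactly on a tile boundary of that grid."""
    return (x + offset) % TILE == 0

# A seam of the unshifted grid (x = 128) is interior in the shifted grid:
x = 128
print(on_seam(x, offset=0))          # → True  (boundary in grid A)
print(on_seam(x, offset=TILE // 2))  # → False (mid-tile in grid B)
```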

Since all tiles are converging to a relative variance level there shouldn’t be any issue with seams between tiles.

Indeed, I’m not sure if there are any obvious differences between tiles, just thinking… I’d really love to see a full render without the grid and obvious overall noise, a hard-light situation. Seems so promising.

EDIT: an animation example would be great.
EDIT 2: re-reading the Lux forum, it seems it would be well suited for animation. Let’s see what the Cycles devs think.

A silly thing to say. I keep hearing things along these lines: “once Cycles is feature complete, then it’s just optimizations forever!” I can’t imagine what features may come in the future, so I can’t imagine writing off any future development.

I know your statement was hyperbolic, Ace (at least I hope it was). But it’s just something that I’ve noticed repeatedly in the past few months. I know a lot of the big to-dos are getting crossed off the list (volumetrics, deformation motion blur, smoke in Cycles), but as long as there is research going on in the field of CG, there will always be new features.