Brigade 3 Demo. Clean real-time path tracing.

Very impressive demo. There's almost certainly some fancy post filtering going on to achieve these results in real time (currently being speculated about over on ompf2), but nonetheless it's amazing how far path tracing has come in just the past few years.

I just saw this on CGTalk. I wonder how many GPUs they're using, though?

A lot of these demos tend to run on the best hardware money can buy, and that means thousands of dollars' worth of GPUs alone. I don't see this being available at the consumer level for a while yet.

dooooooooooooooooooooooood

Indeed, that must be some heavy 2D filtering running on top of limited path-tracing settings, probably on multiple top-of-the-line GPUs as well, but I'll be damned if it isn't impressive all the same :smiley:

They used about 80 GPUs to render that. So not something you can do at home, yet at least. :slight_smile:

In this case, Cycles (and perhaps Thea, Indigo, and Luxrender) could probably do something like this if they had support for 80 GPU setups.

Nothing special then, I guess, unless they plan to make it one of the first products that powers PC games from the cloud. Still, finding a way to get 80 GPUs to work in tandem is kind of an accomplishment.

Soon those 80 GPUs will be one tiny GPU.

With the way quantum dots and optical computing are progressing, we will probably see full optical computing soon.

That made me laugh :slight_smile:

https://youtu.be/u00ONxi4-rk

Not quite, there is a difference. Brigade has been optimized for gaming and all that entails. Octane and Cycles would not be able to do the same even with 80 GPUs.

In about 2 years, it looks like photoreal will be real-time.

I don’t know if they’re really serious about that, but the idea that we’ll run path-traced games on GPUs in the cloud sounds like complete bullshit to me. Nobody in their right mind would target a game platform that requires hardware upwards of $10,000 and double-digit kilowatts of power for a single seat, for graphics that are at best marginally better than what you can do with high-end rasterization.

Not to mention that imgtec has ray-tracing hardware (targeting mobile chipsets, of all things) which is more than an order of magnitude more efficient than ordinary GPUs.

Update:

Marketing bullshit aside, all they’re saying is they’ll have very large datasets available at DRAM speed. That’s not going to make your rendering faster, at all. All it will do is let you store really large scenes and access them as if they were in-memory.
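To put that in more concrete terms, here's a minimal sketch (the `huge_scene.bin` file is hypothetical and this has nothing to do with Otoy's actual stack): memory-mapping a large file already gives you "as if in-memory" access to data that doesn't fit in RAM, but it does nothing to speed up the computation that consumes those bytes.

```python
import mmap

# Hypothetical illustration: memory-mapping a huge scene file lets you index
# into it as if it were an in-memory array, but faster access to the bytes
# does not make the intersection/shading math that consumes them any cheaper.
with open("huge_scene.bin", "rb") as f:          # assumed multi-GB file
    scene = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Random access without loading the whole file into RAM:
    chunk = scene[1_000_000:1_000_064]           # 64 bytes somewhere in the file
    print(len(chunk))
    scene.close()
```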

To get down to where you can fit 80x as many transistors for actual computation, you’d need several breakthroughs in manufacturing processes, and I don’t see that happening any time “soon”. We’re already getting close to some fundamental physical limits.

Yeah, I think we will see the rise of the GPGPU photonic grid computer.

Basically, it's all one big cluster with a 100% parallel architecture,

so a GPU can get 1 terabyte in a millisecond.

The current architecture is not scalable.

Imagine your computer can be upgraded just by snapping another computer into it; now it has 2x the threads and data capacity.

So basically, it can empty a hard drive in a second and spit it out processed.

With this kind of data-moving and processing power at home, anything will be possible.

Imagine your computer can be upgraded just by snapping another computer into it; now it has 2x the threads and data capacity.

Well, that’s nice if you have your own datacenter and can afford plugging in another 80 computers or so (or renting them). It also assumes that your problem can be parallelized well (which raytracing can!). But that’s really not the same as having 80x the computational power in the same space.
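For what it's worth, here's a toy sketch of why ray tracing scales out so well (plain Python with a made-up `render_tile` stand-in, not real path tracing): every image tile is independent, so adding workers increases throughput almost linearly, as long as you actually have the workers and the power budget for them.

```python
from multiprocessing import Pool

# Toy sketch: every tile of the image can be rendered independently, so
# throughput scales nearly linearly with the number of workers -- until
# communication, memory, and power budgets get in the way.
def render_tile(tile):
    x0, y0, x1, y1 = tile
    # Stand-in for per-pixel path-tracing work.
    return sum((x * y) % 255 for y in range(y0, y1) for x in range(x0, x1))

def make_tiles(width, height, size):
    return [(x, y, min(x + size, width), min(y + size, height))
            for y in range(0, height, size)
            for x in range(0, width, size)]

if __name__ == "__main__":
    tiles = make_tiles(1920, 1080, 64)
    with Pool() as pool:                        # one process per CPU core here;
        results = pool.map(render_tile, tiles)  # conceptually, one per GPU/node
    print(len(results), "tiles rendered")
```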

So basically, it can empty a hard drive in a second and spit it out processed.

Maybe if you ignore the time it takes to actually process the data, or if that time is negligible. Unfortunately, most interesting computations do not have negligible computational cost.
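Rough arithmetic (all figures assumed for illustration): even granting the hypothetical 1 TB-per-second transfer, a modest 100 operations per byte on a ~10 TFLOPS-class machine already makes compute, not I/O, the bottleneck.

```python
# Back-of-envelope numbers (assumed, not measured): even if you could stream
# a 1 TB drive in one second, the compute applied to each byte dominates.
data_bytes = 1e12          # 1 TB
io_time_s = 1.0            # the hypothetical "empty a hard drive in a second"
ops_per_byte = 100         # modest amount of work per byte (assumption)
machine_ops_per_s = 1e13   # ~10 TFLOPS-class machine (rough figure)

compute_time_s = data_bytes * ops_per_byte / machine_ops_per_s
print(f"I/O: {io_time_s:.0f} s, compute: {compute_time_s:.0f} s")
# -> compute takes ~10 s, so the 1-second transfer is not the bottleneck.
```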

With this kind of data-moving and processing power at home, anything will be possible.

Except for all the problems that are literally uncomputable. Also, there are functions which, even for small inputs, would take trillions of years to compute on a machine 1 billion times faster than anything we have today.
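A quick back-of-envelope example of that (numbers assumed: a present-day core at roughly 1e10 ops/s, and a future machine a billion times faster): brute-forcing a 128-bit search space still takes on the order of a trillion years.

```python
# Illustration of the "trillions of years" point: exhaustively searching a
# 128-bit space on a machine 1e9 times faster than a ~1e10 ops/s core today.
ops_per_second = 1e10 * 1e9        # hypothetical future machine: 1e19 ops/s
search_space = 2 ** 128            # operations needed (one per candidate)
seconds_per_year = 3.156e7

years = search_space / ops_per_second / seconds_per_year
print(f"{years:.2e} years")        # ~1e12 years, i.e. about a trillion years
```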

Yep, I’m not sure how it would work out. The Otoy devs do talk about having it priced at about $1 per hour. This would be great for rendering, but I don’t know how well it would compare to gaming. I don’t game, but I have seen my sons complete a $60 game in 4 to 6 hours. That is some expensive entertainment.