YafaRay News: 1.99 Beta4 released

Hello,

I’m preparing the upcoming v0.1.99-beta3, which will include the fix for the black render (thanks, Jens!) in Blender 2.74.4 and higher. I want to add a few new features before releasing it, so I hope to have it ready in the next few days.

It’s a pleasure to use YafaRay after so many years. Out of curiosity I selected SPPM and, after some fruitless tweaking, checked “PM Initial radius estimate” and tadaaa! On an interior scene lit only by sunlight, I had an almost grainless pic in 5 minutes. Wow. Magic! I need to learn more about SPPM.

Dani

Nice to see there’s still work being done on this renderer. Thank you, Jens, for the OSX build.

Are there any plans regarding Render Passes and Motion Blur? And possibly Smoke/Volume emission?

I have the impression there is an issue with Bi-dir: it gives me a horizontal stripe pattern. But maybe that’s my fault somewhere; I need to do some tests…

It would be nice to have an option to disable Caustic Photons in the Photon Mapper. Setting the count to 1 sort of works, it’s just not very convenient…

PS: On OSX, my Render Presets did not get recognised by Yafa 1.99 beta 2. The presets worked with the previous version but were located in Addons_Contrib… Copying them into the YafaRay folder in Addons fixed that, though.

YafaRay crashes with latest buildbot.

Win 7 64Bit
YafaRay v0.1.99-beta2 build Win7 / 8.1 64bit
Blender from Today - Hash: f553aba

Default scene.
Select the cube, go to the Material panel, click New Material… Crash!

Crash Report:

Blender 2.74 (sub 5), Commit date: 2015-04-29 23:55, Hash f553aba

bpy.data.window_managers["WinMan"].addon_support = {'OFFICIAL', 'COMMUNITY', 'TESTING'} # Property
bpy.data.window_managers["WinMan"].addon_search = "yaf" # Property
bpy.context.scene.render.engine = 'YAFA_RENDER' # Property

backtrace

Crash backtrace not supported on release builds

Are there plans for a GPU rendering function in new versions? If it could run on both Nvidia and AMD graphics cards, that would be great.

Hello,

I’ve released the new YafaRay Experimental v0.1.99-beta3 with several fixes and improvements. I made builds for Windows 32bit/64bit and Linux 32bit/64bit. See here:

http://www.yafaray.org/community/forum/viewtopic.php?f=12&t=5094

I’ve added Jens’s fix, so now it works with the latest builds from the Blender buildbot, v2.74.5 :slight_smile:

Also, I’ve added a new Extended Texture Mapping system that allows creating materials in new ways. I hope you like it. For some images made with the new system, see:

http://www.yafaray.org/community/forum/viewtopic.php?f=22&t=5091

I hope you like the changes!! Best regards!

Hello,

Unfortunately that would require huge changes to the YafaRay code and our resources are very low at the moment :frowning:

Hm… But anyway, is it in your (the developers’) plans? And is it possible to join the YafaRay devs?

GPU was discussed long ago for the Google Summer of Code 2010, see http://www.yafaray.org/node/373

However, I don’t think there was progress on it, and resources are now very low, so the answer is “probably not on the roadmap”.

I’m not sure how somebody can join the development team. If you’re interested, you can perhaps do what I do: create your own fork on GitHub, make changes and send pull requests to the official YafaRay repository, so the devs can analyse them and decide whether or not to include your changes in the master YafaRay code.

YafaRay Experimental v0.1.99-beta3

Many thanks.
Works great!
Win7 64
Blender Hash b50c6e3

Do you have a roadmap for YafaRay?
Irradiance Caching, interactive rendering etc.
(Sorry, I couldn’t resist. I know of the limited manpower)

Hello,

Rodrigo (DarkTide) is the Project Leader and I’m not one of the official YafaRay developers at the moment, only a collaborator, so I don’t know what the official YafaRay roadmap is.

The only thing I’m trying to do is go through the Bug Tracker issues and fix as many as I can, considering my still limited knowledge of YafaRay’s internals and the little free time I have.

Speaking only for myself (this is not YafaRay’s official position), I would like to do several things (if it’s even possible!):

  • Fix the crashes that happen sometimes in YafaRay (yes, still in the latest 0.1.5-official and 0.1.99-Experimental Beta3) :frowning:
  • Integrate YafaRay better into the Blender workflow (for example, making sure gamma input and output are correct for Blender’s linear workflow)
  • Make YafaRay more compatible with Blender (for example, using alpha textures the same way Blender does, to reduce extra work for users when switching between the Blender and YafaRay renderers)
  • Try to fix the blend material to avoid crashes and other problems. For now I tried to make the normal Texture system better to reduce the dependency on blend material as much as possible.
  • Try to implement SSS somehow. Povmaniaco has a fork of YafaRay called “TheBounty” where he implemented SSS materials. I would like to take a look at his code and maybe try to adapt it for YafaRay :wink:
  • Material nodes (and perhaps OSL? This could be very ambitious but would also be very nice), but that’s probably out of my league for now…
  • Texture baking, but way out of my league for now.
  • Render passes: this also looks difficult and would require a major rewrite, I think.
  • GPU render: I don’t think we can consider this as a viable possibility. That would almost be a totally new renderer. I think LuxRender got it but had to rewrite a very big chunk of their codebase.

Most of those above are just wishes/dreams, but it’s more or less the way I would like to take…

Hi guys,

first of all, I would like to thank David for his great job; it’s really a pleasure to see YafaRay is still alive and interesting for devs :slight_smile: This last month his work has been very useful for Yafa users and has reawakened the community’s interest in this amazing renderer!
About the feature list above, here is my own opinion:
_ Render passes are ONE OF THE MOST IMPORTANT features that I/we miss! We can render a color pass, an AO pass, a glossy pass, … but each one must be set up and rendered separately, so working and render times are really extended! Most other renderers have this feature: Vray, Cycles, Corona, Octane, … (the old Kerkythea too, I think).
_ I think SSS is not a vital feature; it requires longer renders for results that are often not really convincing. SSS is an old problem in rendering systems, and only a few engines provide good enough results. For this, I use a separate pass made with Blender Internal and composited over the raw Yafa render (very fast and pretty good :wink: )
_ Texture baking is very useful for game designers, for example, and would promote Yafa too! I use baking for my projects but I have to do it with Cycles… doing it with Yafa would be a pleasure :wink:
_ Material nodes: OK, but keep the easy existing material system.
_ GPU render: I think this kind of rendering technology was fashionable but is not really consistent. It requires heavy investment in expensive GPUs, and path tracing is about the only method used for it. There are several CPU-based render engines faster than Cycles on GPU in EVERY situation (interior or exterior). Look at Clarisse iFX or the Corona renderer: impressive, CPU-only engines!
But the most interesting thing would be interactive rendering like Cycles has, to configure the lights and shaders; that would accelerate the workflow!
_ Of course, fixing bugs and making YafaRay more compatible with the Blender workflow (gamma, shading behavior, …) are important too. I also think about speeding up the adaptive antialiasing as Vray does (the DMC sampler is incredibly fast O_o)

Of course I’m not a dev and I can imagine the huge work it requires, but let’s have a talk about YafaRay and the way to make it the “open-source Vray” :slight_smile: and sorry for my English :stuck_out_tongue:
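As a hedged illustration of why the separate-passes workflow described above matters for post-production (this is not YafaRay code; the `composite` function, the pass names and the tiny two-pixel “images” are all invented for this sketch): additive light-path passes recombine into the beauty image, so each component can be regraded in the compositor before the sum.

```python
# Hedged sketch, not YafaRay code: additive light-path passes recombine
# into the "beauty" image, so each component can be regraded in post
# before the sum. Pass names and this API are invented for illustration.
def composite(passes, weights=None):
    """Sum per-pixel pass images; 'weights' regrades a pass in post
    (missing entries default to 1.0, i.e. unchanged)."""
    if weights is None:
        weights = {}
    size = len(next(iter(passes.values())))
    beauty = [0.0] * size
    for name, img in passes.items():
        w = weights.get(name, 1.0)
        for i, v in enumerate(img):
            beauty[i] += w * v
    return beauty

# Two-pixel toy images standing in for full pass renders:
passes = {"diffuse": [0.4, 0.2], "glossy": [0.1, 0.3], "emission": [0.0, 0.1]}
beauty = composite(passes)                  # plain recombination
toned = composite(passes, {"glossy": 0.5})  # halve the glossy pass in post
```

With separate passes, this regrading happens in the compositor in seconds; without them, each variation means a full re-render, which is exactly the cost Olivier is describing.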

Thank you, Olivier, that’s a very good insight!

Perhaps we should set such a roadmap. In my opinion, it would make sense to give priorities depending on two factors:

  • How important they would be for the users in general.
  • How much work/skills they would need.

For example, if a feature is needed but requires way too much effort and skills, it’s possible that it will never happen.

On the other hand, perhaps some other feature, also needed, would require a smaller amount of work, effort and skills. In that case, I would vote to go for that kind of feature (if any!)

For example, interactivity could be a new feature that is both needed and somewhat affordable. I remember that povmaniac was also interested in this for TheBounty, and he told me that the LuxRender team got it working in their add-on renderer. Perhaps we can get some ideas from LuxRender, as povmaniac suggested, and see if we can use them for YafaRay.

On the other hand, samo (Álvaro) was worried about the limitations of the Blender API for Python add-ons, because in his opinion the Blender API does not handle big and frequent scene transfers to Python add-ons well, in terms of speed and parallelization. There are several possible solutions here:

  • (Relatively!) quick and dirty: get some interactivity working by transferring the whole scene every time something changes in it (materials, properties, positions, etc.). This would be OK for small scenes but very slow for big/complex scenes. Another problem: if photon mapping is involved, the photons would need to be regenerated each time (as we don’t have any irradiance cache yet).
  • Long shot: integrate YafaRay directly into Blender. But in addition to the enormous work involved, this would require us to patch and recompile all of Blender for each YafaRay release.
  • LuxRender-style approach: I think they don’t transfer the entire scene every time there is a change, but only the parts that were actually modified. I would have to investigate this approach by looking at their exporter.
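The LuxRender-style idea of transferring only what changed can be modelled as dirty-tracking with per-object fingerprints. A minimal, hedged sketch in plain Python; the class, its method names and the dict-based “scene” are all invented here and come from neither exporter:

```python
# Hedged sketch, not LuxRender or YafaRay code: the "transfer only what
# changed" idea modelled as dirty-tracking with per-object fingerprints.
import hashlib

class IncrementalExporter:
    def __init__(self):
        self._fingerprints = {}  # object name -> hash of last exported state

    def _fingerprint(self, state):
        # Any stable digest of the object's exportable state works here.
        return hashlib.sha1(repr(sorted(state.items())).encode()).hexdigest()

    def sync(self, scene):
        """Return names of objects whose state changed since the last sync."""
        dirty = []
        for name, state in scene.items():
            fp = self._fingerprint(state)
            if self._fingerprints.get(name) != fp:
                self._fingerprints[name] = fp
                dirty.append(name)
        return dirty

exporter = IncrementalExporter()
scene = {"Cube": {"loc": (0, 0, 0)}, "Lamp": {"energy": 1.0}}
first = exporter.sync(scene)    # everything is dirty on the first sync
scene["Cube"]["loc"] = (1, 0, 0)
second = exporter.sync(scene)   # only the moved object needs re-export
```

In a real Blender exporter the dirty information would come from the API’s own update flags rather than hashing, but the control flow is the same: re-export only what is reported as changed, which is why it scales to big scenes where full retransfer does not.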

Texture baking is also something I’m seriously considering, because I also believe it would be very helpful even for animations and mixing fast direct light renders with pre-baked textures from a still made with photon/path/bidir for GI realism. However, I have no idea how much work/difficulties implementing it would take…

Hi david,

thanks for your answer.

Interactive Rendering is the feature that I miss most.

Second place, the Render passes.

An adaptation of these two Bounty-Scripts would be a little helper for postproduction.


P.S. (Bounty+YafaRay) Too bad that the forces are not united to work together on the renderer.

Thank you very much, David. I’m very happy to see Yafa is still alive. I think SSS isn’t that important.

The long shot is not a good idea IMO. Too much work just to keep it running.

LuxRender, AFAIK, saves out each model and the scene description separately, as either native format or PLY (I think). So it only needs to update what’s needed. Very nice.

There are two APIs. The new one, LuxCore, has viewport rendering; the old one does not.

On my wish list as well is render passes! And second, in Photon Mapping, the ability to disable caustic photons completely (right now you need to set the count to 1…).

I could probably come up with a bunch more small things to do… (like light emission from volume/smoke/fire data; explosions in YafaRay could be great :wink: )

give yafa some love

I agree, render passes would be great for YafaRay, completely separating luminance and chrominance pipelines for instance.

In general, if you take a look at the last two or three Vray releases, there is a lot of stuff YafaRay could have implemented. But in general I am happy with the most important feature YafaRay delivers, which is render quality; only Indigo delivers consistently better results IMO.

SSS is not critical, but on the other hand it is the only kind of material YafaRay is missing at the moment, so it would be great to have the set completed. Povmaniac’s fork uses the SSS code from this GSoC 2010 project, I believe, which is very slow and not that efficient as per some YafaRay developers’ opinion.

More importantly, we need improvements in SPPM to make diffuse patches converge faster and in a more predictable way, so that in the last passes it behaves more like a path tracer, producing high-frequency noise at the subpixel level. Also, an irradiance cache algorithm for faster path tracing and background lighting. And a more stable and usable Blend material would be great; there are some realistic materials you can only achieve by mixing properties from several ones, for instance layering several grades of glossiness together for plastic and metallic materials.

In general I am more inclined to add value and robustness to the feature set we already have instead of adding new stuff, which means more maintenance work, and that is our weakest point. Also, keeping YafaRay simple is very important.

GPU is not possible or realistic. We hardly have resources to maintain the code base we already have.

  • (Relatively!) quick and dirty: get some interactivity working by transferring the whole scene every time something changes in it (materials, properties, positions, etc.). This would be OK for small scenes but very slow for big/complex scenes. Another problem: if photon mapping is involved, the photons would need to be regenerated each time (as we don’t have any irradiance cache yet).

A real interactive engine means you need a way to store the whole ray-tree structure for real-time interaction, something YafaRay cannot do at the moment. A basic continuous rendering process in the background, like Arnold’s, would be nice: tile size of one pixel, fully adaptive with one ray each pass, very low or no AA threshold, and a basic unbiased GI estimation with an irradiance cache refined in each pass, or SPPM. Fully multithreaded export to a Qt render window, like the one we had in the pre-Blender-2.50 series. Also, correlating the material previews to Blender units would save a lot of work.
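The continuous background refinement described above can be sketched roughly like this (plain Python; `sample_pixel` and all other names are invented stand-ins, not YafaRay’s shading code): each pass adds one sample per pixel, and the displayed buffer is the running average, so the image converges progressively instead of tile by tile.

```python
# Hedged sketch of a one-sample-per-pass continuous refinement loop.
# sample_pixel() is an invented stand-in for one ray's radiance estimate.
import random

def sample_pixel(x, y):
    # Noisy estimate around a "true" value of 0.5.
    return 0.5 + random.uniform(-0.2, 0.2)

def progressive_render(width, height, passes):
    """Yield (pass number, displayable running-average buffer) each pass."""
    accum = [[0.0] * width for _ in range(height)]
    for n in range(1, passes + 1):
        for y in range(height):
            for x in range(width):
                accum[y][x] += sample_pixel(x, y)
        # The render window would redraw from this averaged buffer.
        yield n, [[accum[y][x] / n for x in range(width)] for y in range(height)]

for n, img in progressive_render(4, 4, 8):
    pass  # each iteration is one refinement step shown to the user
```

The point of the structure: the user sees a noisy but complete image after the first pass and refinement thereafter, which is what makes it usable for tuning lights and shaders interactively.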

Hi,
I’m the LuxBlend developer who wrote that part of the code, feel free to ask me any question you like about it.
If you want to have a look, this is where the relevant stuff starts: https://bitbucket.org/luxrender/luxblend25/src/910b0f34ae310d351ab8114d7723494ebe2ec2bf/src/luxrender/core/__init__.py?at=default#cl-1866

regards
Simon

Hello, Simon.

Thank you very much for your offer!!! :slight_smile: I really appreciate it.

This is now my current project in YafaRay and I’m starting to investigate how to set up the Exporter and the Core code for this.

I’m still in the preliminary part, trying to set up the scene and launch the renderer when there is a change in the view. I got it working in a very rudimentary way, but I still could not fill the GL buffer, because YafaRay renders to an internal special object called colorOutput_t / imageOutput_t and I have to implement some kind of conversion between those objects and the float array needed for the GL buffer.

So, my next objectives for now are:

  • Get the conversion between imageOutput_t and the GL float array buffer, to see if I can show the render result in the GL view.
  • Once the above is done and the render result shows (fingers crossed), I will try to optimize view_update, borrowing the scene-change detection code from LuxRender as much as possible.
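The first objective, converting imageOutput_t into a flat GL float buffer, is essentially a layout flattening plus a vertical flip (OpenGL’s default origin is the bottom-left corner). A hedged sketch of just that layout math, with the imageOutput_t access simulated by nested lists since its real interface isn’t shown here:

```python
# Hedged sketch: flatten a row-major RGBA image (top row first, as an
# internal render buffer might store it) into the flat float list a GL
# pixel buffer expects. The nested-list "image" simulates imageOutput_t.
def flatten_rgba(image, width, height):
    """image[y][x] -> (r, g, b, a); returns a flat float list, bottom row
    first, matching OpenGL's lower-left origin."""
    buf = []
    for y in range(height - 1, -1, -1):   # flip vertically for GL
        for x in range(width):
            buf.extend(image[y][x])
    return buf

img = [[(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 1.0)],   # top row
       [(0.0, 1.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)]]   # bottom row
flat = flatten_rgba(img, 2, 2)  # 2*2 pixels * 4 channels = 16 floats
```

In the actual exporter the same traversal would read pixels from the render output object and write into the array handed to the GL draw call; only the source and destination types change.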

In any case, if I can make this work, I will publish a beta build even if it has only a rudimentary/slow/non-optimized version of the realtime preview.

So, it’s too soon for me to ask questions, but I will for sure!!! :slight_smile: Thank you very much!

Best regards. David.