Oh, how hard it is to get “experts” to admit to themselves they said something wrong…
I didn’t say the same thing. Here’s what you said:
“As far as i know, blender’s ID passes have jaggy edges mainly because of the way it stores them in rendered file (exr).”
This is wrong.
“Every ID pass is stored in one single channel with different value ranges for each of them.”
This is also wrong.
“ID pass is a matte pass by definition whatever tech is used to get one, because it’s a matte of some objects or objects with some materials.”
Wrong.
“And any matte pass with jagged edges is not usable - by definition.”
Arguably correct, but since ID passes are not matte passes, the corollary is still wrong.
“Flickering and artifacts… and no antialiasing filter can help.”
Anti-aliasing through supersampling is something you can always do in post, and it's commonly used for issues like these.
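To illustrate the supersampling idea, here is a minimal NumPy sketch using hypothetical data (a hard-edged circle standing in for a mask rendered at 2x resolution): averaging each 2x2 block turns the jagged boundary into fractional coverage values, i.e. anti-aliased edges.

```python
import numpy as np

# Hypothetical hard-edged mask "rendered" at 2x resolution (a filled circle).
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
mask_2x = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(np.float32)

# Box-downsample by averaging each 2x2 block: boundary pixels become
# fractional coverage values, which is exactly anti-aliasing in post.
mask_1x = mask_2x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Interior stays 1.0, exterior stays 0.0, edge pixels land in between.
print(mask_1x.min(), mask_1x.max())
```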
I didn't say an ID pass should or must be antialiased. I said that the reason one needs an ID pass is the same reason one needs a matte pass: to affect a part of an image where one object or another can be seen.
You didn't actually say that. You said that ID passes are matte passes and that matte passes with jagged edges are not usable. You can create a matte from an ID pass, but obviously that doesn't work too well, because ID passes cannot be anti-aliased.
That's the exact same reason matte passes are needed and, as far as I know, why ID passes are needed. You don't need an ID pass to grow flowers in the garden, do you?
The primary use for ID passes is identifying objects (useful for picking, or for verification, e.g. in computer vision). They're not actually that good for matting, because you can't easily group IDs, you can't control them, etc. They're also dead simple to implement, so they're good to just have around, even if there aren't that many really good uses for them.
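A small NumPy sketch with a hypothetical 4x4 ID pass makes both points concrete: a matte for one object is a trivial equality test, but grouping several objects requires an explicit ID list in the compositing script rather than anything stored in the pass itself.

```python
import numpy as np

# Hypothetical 4x4 ID pass: each pixel holds one integer object ID.
id_pass = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 2, 2, 1],
    [3, 3, 1, 1],
])

# A matte for a single object is just an equality test...
matte_obj2 = (id_pass == 2).astype(np.float32)

# ...but grouping objects needs an explicit ID list: the grouping lives
# in the compositing setup, not in the pass, which is the "you can't
# easily group IDs" complaint.
matte_group = np.isin(id_pass, [1, 3]).astype(np.float32)

print(matte_obj2)
print(matte_group)
```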
edit:
For motion blur, semi-transparent objects, etc., ID passes don't really work in general. For those cases, use the matte scene workaround I described.
IDs are stored as pure integer values. Maybe it would be possible to store the whole range (antialiasing, motion blur, transparency) between each step: from 0 to 1, 1 to 2, 2 to 3… using the fractional float values in between. Is this hard to implement? I have no idea how difficult it is.
I'm not sure what this is supposed to mean. What you can do is have multiple samples per pixel. The only way to do this with the current compositor is to render at a higher resolution (and then downsample). This doesn't solve the problems requiring multiple depth samples; for those, you really do need deep compositing.
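A quick NumPy sketch (hypothetical 2x supersampled data, two objects meeting at a boundary) shows why fractional values between integer IDs don't work, while multiple samples per pixel do: averaging raw IDs across a boundary invents an unrelated ID, whereas averaging a per-object binary mask gives a meaningful coverage fraction.

```python
import numpy as np

# Hypothetical 2x supersampled ID pass: object 1 meets object 3.
id_hi = np.array([
    [1, 3, 3, 3],
    [1, 3, 3, 3],
])

# Wrong: averaging raw IDs across the boundary invents "object 2",
# which is a different object entirely -- ID values don't interpolate.
avg_ids = id_hi.reshape(1, 2, 2, 2).mean(axis=(1, 3))

# Right: average a per-object binary mask instead, giving fractional
# coverage for that object at the boundary pixel.
cov_1 = (id_hi == 1).astype(np.float32).reshape(1, 2, 2, 2).mean(axis=(1, 3))

print(avg_ids)  # boundary pixel averages to ID 2
print(cov_1)    # object 1 covers half of the boundary pixel
```

This also shows why per-ID coverage handles edges but not depth ordering: overlapping samples at different depths still collapse into one value, which is the case that needs deep compositing.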