Micropoly Script & Tests

hi,

i had another strange idea. this time it’s an implementation of micropolygons in python. it’s probably the worst micropolygon implementation ever made, because it’s so slow and needs unbelievable amounts of memory, but at least it works.

here are some examples with different settings originally rendered in DVD resolution:

http://img359.imageshack.us/img359/1856/basiclrhk7.jpg

this is the base mesh, a plane and a low poly icosphere. i just added a material with a procedural displacement texture to it and ran the script.

http://i25.tinypic.com/11t7qzb.gif

division rate of 16 pixels/poly, average calculation time of 10 seconds per frame, up to 250 MB memory use

http://i29.tinypic.com/111sa5h.gif

division rate of 8 pixels/poly, average calculation time of 45 seconds per frame, up to 500 MB memory use

http://i32.tinypic.com/2psentx.gif

division rate of 4 pixels/poly, average calculation time of 2:30 minutes per frame, up to 1 GB memory use

highres videos

unfortunately you can’t reach very high detail, for instance 1 pixel/poly, at higher resolutions; it simply costs too much memory and will crash blender.

i still need to make some improvements, then i’ll release it.

Hey great work…

You’re saying you were able to code a python script for micropoly rendering? You’re a real genius, man!!!

that looks very interesting! :open_mouth:
please, release the python script as soon as you can, so others could take a look at it and add a few optimisations.
simply incredible! wow!
just to clarify: this is true micropoly and not just simple cc-subdivision, right?

Kai, you’re really a mad man… micropolygons, demolition… wooooow, man… i’ve no words… :wink:

Gosh, even Brecht didn’t have the balls to finish micropoly rendering! :wink:
I hope you’re able to refine this into something faster.
Ideally you would only subdivide polys for one 32x32 bucket at a time, then render them and throw them away.
You could use the render api’s border render access for that.
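That bucket loop might look something like this in python (a rough sketch; `subdivide_polys_in` and `border_render` are hypothetical placeholders, not the actual render API):

```python
def iter_buckets(width, height, size=32):
    """Yield (x0, y0, x1, y1) screen-space rectangles covering the image,
    so geometry can be subdivided, rendered and freed one bucket at a time."""
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (x, y, min(x + size, width), min(y + size, height))

# per bucket: subdivide only the polys that project into the rectangle,
# border-render that region, then free the temporary micro-geometry
# for rect in iter_buckets(720, 576):
#     micro = subdivide_polys_in(rect)   # hypothetical helper
#     border_render(rect)                # hypothetical helper
#     del micro
```

That way peak memory is bounded by one bucket’s worth of micropolygons instead of the whole frame.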

@gaalgergely:
no cc, we have this already. :wink:
this kind of subdivision depends on the camera position, lens, render resolution etc.

http://img363.imageshack.us/img363/33/scr1jp3.png

@ideasman42:
at the moment it works very simply and is wasteful regarding memory consumption.
as you can see it simply subdivides everything around the camera.
do you know of a fast way to check if a 3d point or edge is within the viewfield of the camera? and i mean really fast, otherwise it would probably become even slower.
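one common approach (a generic math sketch, nothing blender-specific) is to transform the point into camera space and compare it against the frustum bounds, which only costs a few dot products per point:

```python
def point_in_frustum(p, cam_pos, forward, right, up,
                     lens_mm=35.0, aspect=1.0, near=0.1, far=1000.0):
    """Rough test whether point p lies inside a pinhole camera's view
    frustum. All vectors are (x, y, z) tuples; forward/right/up must be
    orthonormal. lens_mm follows blender's convention of a 32 mm sensor,
    so the half-width of the view at depth z is z * (16 / lens_mm).
    (sketch only, not blender's actual api)"""
    d = (p[0] - cam_pos[0], p[1] - cam_pos[1], p[2] - cam_pos[2])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    z = dot(d, forward)                  # depth along the view axis
    if z < near or z > far:
        return False
    half_w = z * (16.0 / lens_mm)        # half sensor width is 16 mm
    half_h = half_w / aspect
    return abs(dot(d, right)) <= half_w and abs(dot(d, up)) <= half_h
```

for edges you’d test both endpoints (plus maybe the midpoint), and keep the edge if any of them pass, since a partially visible edge still needs subdividing.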

Kai, i am happy to see you again with this innovative idea.
Really amazing that you get it working in python.
respect
migius

You could probably start doing major optimizations, because that way it would be a lot more usable. Ideasman is a genius at python, he can help you.

A good thing would be to greatly optimize the script then port it to C to have it hardcoded into Blender.

I know that (at least in the GE) the command for checking if a camera can see [something] is something about the frustum. Real helpful, I know. I looked in the API for general python (not limited to GE) and only came up with this:
http://www.blender.org/documentation/244PythonDoc/BGL-module.html#glFrustum
I have little idea how to use it, but I’m hoping it might help…

Yay for finally getting micropoly into Blender! Even if it’s just a start!

Isn’t this basically just LOD?

LOD takes a high-poly mesh and decimates (removes polygons) it as it gets further away.

This seems to take a mesh and tessellates (adds polygons) as it gets closer. In this case those extra polygons are then displaced by a procedural to add geometry detail.

Looks like a cool thing!

On optimising…
I guess it’d be really tough to figure out localised subdivision based on the displacement “rate of change”… it’d probably take the calculation time through the roof! might help memory though.

i could imagine this being a modifier in blender. it’d need just two inputs, the preview subdivision rate and the render division rate in pixel/poly. the preview rate would be useful to tweak possible following displacement modifiers etc. with a low poly rate.

the idea behind this script is so simple that i can quickly explain it in pseudo-code here:

repeat:
    for each edge in the mesh:
        if lengthOfEdge > (((16 / lensInMM) * distanceOfEdgeCenterToCamera * 2) / renderResolutionMaxX) * divisionRate:
            subdivideEdge()
until no more edges need to be subdivided

that’s the central (although simplified) algorithm, nothing very complicated. i think a good coder could have such a modifier ready in a really short amount of time. if i could be sure it would make it into the main trunk, maybe i’d do it myself.
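to make the pseudo-code above concrete, here’s a runnable python sketch of the same loop (no blender api; the mesh is just a list of edges, each a pair of (x, y, z) vertices, and the lens/resolution values are example assumptions):

```python
def edge_target(dist, lens_mm=35.0, res_x=720, division_rate=16):
    """max allowed edge length at a given camera distance (the formula
    from the pseudo-code; 16 mm is half of blender's 32 mm sensor)."""
    return (((16.0 / lens_mm) * dist * 2.0) / res_x) * division_rate

def subdivide(edges, cam, **kw):
    """split every too-long edge at its midpoint, and repeat until no
    edge exceeds its screen-space size target."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    mid = lambda a, b: tuple((x + y) / 2.0 for x, y in zip(a, b))
    changed = True
    while changed:
        changed, out = False, []
        for a, b in edges:
            m = mid(a, b)
            if dist(a, b) > edge_target(dist(m, cam), **kw):
                out += [(a, m), (m, b)]   # replace the edge by two halves
                changed = True
            else:
                out.append((a, b))
        edges = out
    return edges
```

a single 10-unit edge with the camera 5 units away already ends up as well over 64 micro-edges with these defaults, which shows where the memory goes.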

btw, i thought about the viewfield/bucket problem. a vertex created by subdivision outside of this window could still be pushed into it by the displacement, so you basically can’t say that everything outside of the viewfield isn’t needed. another issue is mirrors: you’d also have to check what can be seen in reflective surfaces. this could become a coding nightmare. so viewfield-dependent subdivision should only be an option; it depends strongly on your scene whether it can be used without glitches.

congrats.
great work, very useful.

i had another strange idea.

have as many as you like!
m.a.

i did some optimizations for motion blur, and here’s another example:

http://img528.imageshack.us/img528/5162/m01lrsj5.gif

Really impressive stuff.

Any plans to add interpolated transitioning between subdivision levels to minimise popping/banding?

kai, that looks great.
Drools…

Kai. I’m loving the Star Trek 2 feel of your test.

I’m not a coder, but would it be possible to adapt the code to ‘C’ so it could be rolled into the Blender core and sped up?