Using Blender to turn images into 3D objects

Hello!

This is my first experience with a non-CAD modeling program and I have already encountered an obstacle.

To preface, I am doing research into the application of 3D models as a way to improve plant identification. I have been conducting this project using a 3D scanner, which is a really cool tool, but it does not work well for smaller plant components such as leaves and seeds, which are pivotal to identifying a plant. Regardless, I was hoping to use Blender as an alternative method for creating leaf models, primarily because a picture of a leaf only needs to be extruded a minute amount to become an object.

I followed some tutorials on the process, but none of them deal with complex models like leaves; they tend to focus on simpler objects, and on objects that do not have to be matched precisely with a texture. So far I have used Inkscape to trace a bitmap of the leaf, which I was then able to import into Blender as an .svg file. This proved an effective way to outline the leaf, though it did not bring the texture with it.

TL;DR: I would like help with attaching a leaf-shaped texture to a leaf-shaped model.

I started with a white oak leaf, included in this post.


It also depends on whether the leaf is folded a little in 3D, so you might need to fold it with some tool.

But you can use your color image, added as a PNG with alpha maybe, to get a nice render!

Going with Cycles here?

happy bl

How realistic do you want it: high res, or low res with minimal verts?

And do you want the branches to be circular in 3D?

happy bl
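To sketch out the "PNG with alpha in Cycles" idea, here is a minimal node setup written in Python (bpy). The image path is a placeholder and this wiring (image alpha driving a Mix Shader between a Transparent and a Diffuse BSDF) is just one common way to do it, not the only one:

```python
import bpy

# Cycles material that uses the image's alpha channel to hide the background.
mat = bpy.data.materials.new(name="LeafMaterial")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//leaf_alpha.png")   # placeholder path

diffuse = nodes.new('ShaderNodeBsdfDiffuse')
transparent = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
output = nodes.new('ShaderNodeOutputMaterial')

links.new(tex.outputs['Color'], diffuse.inputs['Color'])
links.new(tex.outputs['Alpha'], mix.inputs['Fac'])      # alpha 0 -> transparent, 1 -> leaf
links.new(transparent.outputs['BSDF'], mix.inputs[1])
links.new(diffuse.outputs['BSDF'], mix.inputs[2])
links.new(mix.outputs['Shader'], output.inputs['Surface'])

# Assign the material to the currently active leaf object
bpy.context.active_object.data.materials.append(mat)
```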

Have you tried UV mapping the texture onto the model? (I think that's what you are asking, or is it about modelling the leaf?)

This may be getting a bit ahead, but could you just use a flat plane with the leaf texture on it? Or does the leaf model need to be detailed?

If I could extrude the veins of the leaf that would be extraordinary, and the same goes for folding. If I could give the leaf only a millimeter of depth, then have the veins stick out an additional millimeter, all of it with a slight concave/convex fold, that would be a dream. I know it's a large favor to ask, but could you assist me with the steps needed to get there? Once I do it a single time, I'll be able to apply the same process to the rest of the leaves.

I think I may have? I followed a tutorial that did not net me the results I wanted, but that is primarily because I don’t know what UV Mapping is, nor the process involved. I remember now, my pitfall was that I went to “unwrap” the model, and the option was not there.

Also, even if I could just have it flat and as thick as paper, that would be perfect. At least from there I could make more aesthetic changes, but I would have a model nonetheless.

Can you show where you are at right now, with some pics?

And how realistic do you want it? Do you need very close shots or only far-away ones? This determines how much precision you need for the model.

and I use mostly Cycles now

happy bl

A while ago I did a failed little project with leaves:
I used the ivy generator for the veins, which I joined with the leaf.

Hi

You can do it with the Displace modifier… see the picture.


First is the leaves… a low-res .png, 640 x 640 pixels.

A render… with a side light to see it better… :)

Last is the Material view.

The problem, if any, is size: this is 1M faces… I tried one more subdivision = 4M faces… no problem other than time… :)

A better-res .png or whatever file will give a better result… no tweaking done… you have to play with it a little… Puff Puff
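If you ever want to script that same Displace setup, here's a minimal sketch in Python (bpy): a densely subdivided plane plus a Displace modifier driven by a grayscale vein image. The image path, subdivision level and strength are placeholders to tune, not values from the render above:

```python
import bpy

obj = bpy.context.active_object    # the UV-unwrapped leaf plane

# Dense, simple subdivision so the displacement has vertices to push around
sub = obj.modifiers.new(name="LeafSubdiv", type='SUBSURF')
sub.subdivision_type = 'SIMPLE'
sub.levels = 6                     # placeholder; more levels = more faces

# Grayscale height image: bright veins, dark blade (placeholder path)
tex = bpy.data.textures.new("LeafHeight", type='IMAGE')
tex.image = bpy.data.images.load("//leaf_height.png")

disp = obj.modifiers.new(name="LeafDisplace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'         # line the height up with the photo via the UV map
disp.mid_level = 0.0               # black = no displacement
disp.strength = 0.002              # roughly a millimeter or two in metric units
```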

Tai

“I have been conducting this project using a 3D scanner, which is a really cool tool, but does not function effectively when it comes to modelling smaller plant components such as leaves and seeds”

If the tool turns out to be inappropriate for the project, plans do change, isn't that so?
What tools are you using now to get representations of the objects you research?
Is a stock image, as in your example, really the best basis for trying out new research methods?

Inkscape produces vector information; any photo editor is supposed to provide Blender with bitmap info. Usually one combines both to blend.

“primarily because leaves only need to be extruding a minute amount to transform a picture of a leaf into an object.”
A minute extrusion amount does not reduce the complexity of the mesh you would need to create beforehand.

“I followed some tutorials on the process…” Could you please provide links, and example images of what your results were?
I won't put a full link here, but a Google video search for “blender 3d create leaf” brings up a fair number of decent, easy-to-follow tutorials.

"I remember now, my pitfall was that I went to “unwrap” the model, and the option was not there. "
Having default Plane as an object in Edit mode (Tab key to toggle modes) and hit U key brings up Unwrap menu.
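The same steps in script form, as a minimal sketch (it assumes the leaf mesh is the active object; the margin value is arbitrary):

```python
import bpy

obj = bpy.context.active_object          # the leaf mesh (a Mesh, not a Curve)

bpy.ops.object.mode_set(mode='EDIT')     # Tab into Edit mode
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)   # U -> Unwrap
bpy.ops.object.mode_set(mode='OBJECT')
```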

And the last question: what results do you expect to get by using Blender for your research? Can you provide some insights on this, or examples?

Hello! Thank you all for your continued support.

I was able to import an .svg file from an Inkscape vector map I created. It gave me the outline of the leaf; however, when I go into Edit mode I find that the leaf's vertices appear fractured. I cannot say conclusively, but I think this is why I cannot unwrap for UV mapping. I wanted to create a seam along the perimeter of the leaf, but when I use Ctrl + E or U I cannot access either menu.

These are all awesome questions.

I should probably have prefaced with the objective of the project, haha. The project is an investigation of 3D imaging and the use of models in plant taxonomy. Initially, the project revolved around a 3D scanner called an iSense, though its shortcomings became apparent when its alleged 1 mm depth resolution was not borne out. As most projects go, I adapted and sought to use Blender to create the models that could not be captured by the infrared scanner.

The stock image of a leaf is due primarily to seasonal restrictions. In my area (Eastern US) most of the foliage has been shed for the season, so I am using this time to advance the project and grasp an understanding of how to create models of leaves, so that when the forests bloom again in late May I will be prepared and can create the models using images I've captured myself. The stock image leaf is only to beta-test the idea.

The tutorials I have used so far are:
WikiBooks UV Mapping Basics
Youtube Video on Image to Object Conversion
Youtube Video on Creating Leaves Using GLSL <- which seems to be a different approach than the one I'm using; I have only begun investigating it recently.

And lastly, the project will use the objects created in Blender, rather than renders. The concept is that the models will present an alternative to images for those seeking plant identification (not these actual models, mind you; this is more of a flagship to demonstrate what a site that hosts models would look like).

BeefBrown

Thank you for introducing the goal and details of your project; I hope this will help forum members further analyze the tasks and allow for more precise suggestions.

I have seen printed catalogs containing flora and fauna images (drawings, not photos) which supposedly help viewers find the Latin name of the plant, bug or animal they've found. While the images are usually exceptionally well done, I find it quite hard to match even known things; maybe that is just silly me… Still, don't you think the user depends on the skill of the artist who did the drawings for the book? On his individual ability to find, and transfer to the image, what is common to this or that plant: its characteristics, details or colors?
Not that I'm saying “you're not the one”; I just want to point out that it's hard to mimic nature and find a common denominator that matches all the possible variations.

On to the project: the example was an oak leaf, which means the subject is an oak tree. Am I right in assuming that there will be an oak tree model with tens of thousands of leaves on display? Will or should the user be able to zoom in on every part of that tree, every branch, the acorns and their details? And what about seasonal changes?

The last video from the links you provided describes parts of the process for getting a mesh textured. It shouldn't be that different; your example image shows that you still have a Curve object, not the Mesh shown in the video. What you refer to as fractured vertices are curve points and their control-handle end points. Unfortunately, Curve objects and image textures do not get along well; you need a mesh object in order to use UV texture coordinates to place the image on.
To get a Mesh object from a Curve, press Alt-C while the curve object is selected in Object mode, then select Convert to Mesh.
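As a minimal Python sketch of that same conversion (assuming the imported SVG curve is the active object in Object mode):

```python
import bpy

curve_obj = bpy.context.active_object    # the leaf curve imported from the .svg
bpy.ops.object.convert(target='MESH')    # same as Alt-C -> Mesh from Curve
```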

But I mean, you could scan the leaf; I would think it would be easier to put the leaf on a white background, since it is easier to get an alpha value from the white color!

And yup, you need to convert from curve to mesh to work with UV images.

Now, do you need just a UV image on a plane, or more of a 3D effect, like the branches inside the leaf in 3D, as a rod or cylinder?

happy bl

I forgot to ask, but what do you get from the scanner? Is it a color image in 2D or 3D, or a black-and-white vector image in 3D with curves?

Also, some leaves are somewhat bent, so it may be better to use 3D; but it depends what you want to do!

You could also make a tree using the leaves.

happy bl

Haha, I think you misunderstood: I am using Blender now instead of the scanner. I'm just taking pictures of the leaf on a white background, as you've said.

Also, I took your advice and converted the leaf to a mesh, and it seems to be absolutely coated in black lines (vertices?), which is strange because it is a plane. I imagine I want to put the seam around the perimeter, but I can’t figure out how with so many vertices.


This is part of automating the process; the result isn't that great… You'd need to settle on a lower curve resolution before converting to a mesh. Since a mesh is only an approximation of the curvature a curve can give, the mesh object's contours won't be as smooth (or you would have to deal with all these thousands of vertices for one leaf). It's adjustable in the Curve data pane, Shape -> Preview U; a value of 5 would be more sane.
The mesh might look bad (it is all narrow triangles), but it is fine for the rendering hardware, the GPU. This type of mesh won't let you form the natural shape of the leaf, though; select all, then Delete -> Limited Dissolve will make one n-gon out of it, from which you can then form face rings and divide it into quads or tris as you see fit.
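A minimal Python sketch of that cleanup, under the same assumptions (the imported leaf curve is the active object; 5 and the default dissolve angle are just starting values):

```python
import bpy

obj = bpy.context.active_object
obj.data.resolution_u = 5                 # the "Preview U" value mentioned above

bpy.ops.object.convert(target='MESH')     # curve -> mesh at the lower resolution

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.dissolve_limited()           # collapse the narrow triangles into one n-gon
bpy.ops.object.mode_set(mode='OBJECT')
```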

All this aside, if you set up a Background Image of the leaf in the Top Ortho view, add a Plane, merge all of its vertices into one (Alt-M -> At Center) and start Ctrl-clicking to ‘draw’ the leaf outline, you will get there faster than by going the Inkscape vector route.

You could simply draw it directly in Blender, trying to get as many quads as you can.

If you need a specific bend line, it can be added afterward with the Bisect tool.

I did a quick model here, mesh only.

I included another material for the small branches inside; what is the English name for these?

happy bl

I used the Inkscape method because not all leaves will be as simple as the oak leaf; some leaves, such as the birch leaf, have distinct ribbing around the edges that I cannot overlook because it is essential to identifying the leaf. If your proposed method can also capture those minute features, I would gladly convert; I am open to, and will try, any ideas proposed.

That looks outstanding. Can you explain what you mean by “trying to get as many quads as you can”? If I can replicate your process, that would be superb.

Also, the “branches” are called veins, haha; like veins in the body, they transport nutrients. Including them would have been my next step. Ideally, I would be able to either extrude the veins as more of a bump on one side of the leaf, or as a cylindrical shape that protrudes on both sides. That would vary from leaf to leaf, though. I would also like to be able to bend the leaf, since some leaves bend in nature.

Beef

A “good” 3D scanning system might be 50 to 100 times the cost of the iSense, so I think it makes sense that the iSense is not going to produce the best results. Agisoft makes an application called PhotoScan that costs about half as much as the iSense and uses still images (no special hardware required) to generate the 3D model. You do need lots of pictures to build the model. You could try the demo, and I think posting on the forum over there might help point you in the right direction for setting up a workflow that produces good results. It does take an understanding of the whole process, from taking the best images, to generating the model, to the additional applications that might be needed.

In any case, for flat leaves I don't think you really need a 3D scanner!

Just take a high-res color pic on a white background and it should give you a nice image to work with.

happy bl
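If you then need to turn that white-background photo into a PNG with an actual alpha channel before bringing it into Blender, a small script outside Blender can key the white out. A minimal sketch using Pillow; the filenames and the 240 threshold are assumptions you would tune per photo:

```python
from PIL import Image

img = Image.open("leaf_photo.jpg").convert("RGBA")   # placeholder filename

THRESHOLD = 240   # pixels brighter than this in all channels count as background
pixels = [
    (r, g, b, 0) if r > THRESHOLD and g > THRESHOLD and b > THRESHOLD else (r, g, b, a)
    for r, g, b, a in img.getdata()
]

img.putdata(pixels)
img.save("leaf_alpha.png")   # use this as the image texture in Blender
```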

RickyBlender, how did you create that mesh of the leaf? Did you manually drag the vertices to the perimeter of the leaf? Or did you use a process that maps the image for you?

Beef