Get 3D location of mesh surface point from UV Parameter?

Hello, I’m a relatively new user of Blender, familiar with Python (and well acquainted with Autodesk Maya); I’ve looked for a Blender solution for a number of hours and have not even found a proper reply, although others have asked the same question elsewhere:

I want to be able to supply a python function with a mesh object, UV map name and FLOAT UV values, (eg. “face” mesh, “UVMap1”, U=0.215, V=0.862,) and have it return the 3D position, in World space or Object space, of that point on the object. The end intent is to enable driving eg. an empty’s transforms to follow a mesh surface accurately along a dynamically changing U (and/or V) value from eg. 0.00 to 1.00.

For those who know and have rigged in Maya, I am trying to replicate the function of a Maya ‘pointOnSurfaceInfo’ node, ‘follicle’ node or that muscle node I can’t remember the name of, all of which have adjustable U and V parameters to slide the node’s transform along a surface (‘pointOnSurfaceInfo’ only works on NURBS though,) in Blender, for various rigging purposes.:cool:

To avoid confusion that others with this question have encountered: I am not trying to create UVs; I don’t just want the UVs at VERTEX positions (that’s easy enough to get); and I don’t just want the closest surface point to a 3D point (that’s trickier, but not what I need). The UV value must also be able to change, so I don’t just want dupli-faces. Presume the mesh is smooth-ish and irregular, eg. a character’s head, not flat or spherical (for which a formula might suffice.)

So far the only commands I’ve found that might vaguely help are bgl (GUI) commands (which probably won’t like renderfarms) and possibly the code behind some compositor nodes…:eek:

Any helpful code would be appreciated!

I remember a thread with this exact question, but didn’t bother to search it. Here’s a demo instead:

import bpy
import bmesh
from mathutils.geometry import barycentric_transform, intersect_point_tri_2d


ob = bpy.context.object
me = ob.data
bm = bmesh.new()
bm.from_mesh(me)


# tag selected faces, because triangulate may clear selection state
for f in bm.faces:
    if f.select:
        f.tag = True


# viewport seems to use fixed / clipping instead of beauty
bmesh.ops.triangulate(bm, faces=bm.faces, quad_method=1, ngon_method=1)


# re-select faces
for f in bm.faces:
    if f.tag:
        f.select_set(True)


uv_layer = bm.loops.layers.uv.active


for area in bpy.context.screen.areas:
    if area.type == 'IMAGE_EDITOR':
        loc = area.spaces.active.cursor_location.to_3d()
        break


def find_coord(loc, face):
    u, v, w = [l[uv_layer].uv.to_3d() for l in face.loops]
    x, y, z = [v.co for v in face.verts]
    co = barycentric_transform(loc, u, v, w, x, y, z)
    bpy.context.scene.cursor_location = ob.matrix_world * co
    
sel_faces = [f for f in bm.faces if f.select]
for face in sel_faces:
    u, v, w = [l[uv_layer].uv.to_3d() for l in face.loops]
    if intersect_point_tri_2d(loc, u, v, w):
        print("found intersecting triangle")
        find_coord(loc, face)
        break
else:
    print("trying random selected face for extrapolation")
    find_coord(loc, sel_faces[0])

The key is barycentric_transform().
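For anyone curious what barycentric_transform() actually computes, here is a rough pure-Python sketch of the math (points as plain 3-tuples; this is an illustration, not Blender’s actual implementation):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def barycentric_weights(p, a, b, c):
    # Solve p = a + s*(b - a) + t*(c - a) for s and t (Cramer's rule),
    # then the three weights are (1 - s - t, s, t).
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    s = (d11 * d20 - d01 * d21) / denom
    t = (d00 * d21 - d01 * d20) / denom
    return (1.0 - s - t, s, t)

def barycentric_transform(pt, t1a, t1b, t1c, t2a, t2b, t2c):
    # Weights of pt relative to the first triangle, applied to the second.
    wa, wb, wc = barycentric_weights(pt, t1a, t1b, t1c)
    return tuple(wa*t2a[i] + wb*t2b[i] + wc*t2c[i] for i in range(3))
```

The same weights that locate the point inside the UV triangle locate the corresponding point inside the mesh triangle, which is why the mapping works for any (planar) triangle pair.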

Note a couple of requirements:

  • the script expects one Image / UV editor to be visible
  • a mesh must be selected and in editmode and have a uv map
  • UV mesh selection sync needs to be enabled in UV editor
  • 2d cursor needs to be placed at the location you want to get the 3d coordinate for (in the UV editor)
  • the UV face around the 2d cursor location needs to be selected (this will also select the corresponding mesh face in 3D View)

Then run the script. It should place the 3D cursor at the 3d coordinate. The coordinate stored in “co” is in object space, but turned into world space to place the 3d cursor correctly.

The mesh does not need to be triangulated, but the script may fail especially on non-planar quads (couldn’t figure out why exactly).

If the selected 2d coordinate does not intersect the selected face, extrapolation will be tried, but the result will often be useless.

The face selection + sync requirement could be lifted by some code to automatically find all intersecting uv faces for the chosen 2d coordinate. You should probably handle overlapping faces, e.g. return one 3d coordinate for every intersecting face.

If all you want is to stick something onto the mesh’s surface, then no Python is needed by the way.
Use the 3D cursor to get an arbitrary location on the surface, add an empty (will be created at 3d cursor location) and parent it to the mesh with type Vertex (Triangle). Even works with shape keys quite well.

I don’t think I can be of any more help than you’ve already received, but I’ll post my thoughts anyway since the question kind of interests me and it sounds like you’re capable of programming it.

The face’s normal would be perpendicular to the UV map (if it were a 3d object with non-custom normals), right?
If that’s the case, then the plane perpendicular to the face normal defines the plane/coordinates the UV map describes.
If I’m not mistaken, the face on the UV map is simply a projection of the face along this normal onto a 2d plane (and scaled).
That should mean that any point on the face in the UV map should correspond to a point in 3d space on a plane perpendicular to the face normal. To find the space coordinates, the UV coords would have to be scaled to account for the difference in coordinate size between the 3d object and its UV map.

So, if you want to find the equivalent point in 3d space of a point on a face in the UV map, then you could:
-find the face object, get its normal and some kind of relevant coordinates you can use to translate the UV X/Y coords to space
-get the scale of the UV map’s coordinates relative to world space coordinates
-find a point (we’ll call it Q) in space described by the UV map’s X/Y coordinates, using one of the face parts (verts, the face’s center, etc.), by translating across the plane perpendicular to the face normal
-cast a ray (or two) starting from point Q along the face normal vector, and you should find the equivalent surface point

If I’m not mistaken, that could be a solution, although I imagine CoDEmanX’s is better (and I’m just conjecturing, so I probably am mistaken… or maybe I don’t understand the depth of the issue). I’m not sure how it would work with a deformed mesh.

In any case, good luck.

Every inter- or extrapolated point in relation to a given UV triangle is in the mesh face’s plane, that’s right. It’s also true for polygon UVs and mesh faces, as long as they are flat. Otherwise only for triangles.

Every point within a UV face describes a point on the mesh face’s surface. Every point outside describes a point in the same plane, but outside the face’s surface. Note that the UV face must be defined. If the entire UV map contains no overlaps, then any UV coordinate that lies in any of the UV faces can be mapped to a point on the mesh surface with no ambiguity. With overlaps, there will be multiple solutions, or the UV face you want to calculate the corresponding mesh coordinate for must be specified. Every point outside any UV face will not correspond to a point on the mesh surface (but around or inside the mesh), unless you wrap the UV coordinates into [0, 1] - which results in texture repetition (as if you were sampling from an endless tile grid of the texture).
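That tiling behaviour is just a modulo wrap of the coordinates into [0, 1); as a trivial standalone sketch:

```python
def wrap_uv(u, v):
    # Wrap an arbitrary UV coordinate into the [0, 1) tile, mimicking
    # "repeat" texture extension, before looking up a UV face.
    # (Python's % already returns a non-negative result for negative input.)
    return (u % 1.0, v % 1.0)
```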

-find the face object, get its normal and some kind of relevant coordinates you can use to translate the UV X/Y coords to space

The “relevant coordinates” that can be used to translate UV coordinates to 3d space coordinates are the vertex coordinates of the face/triangle (both, UV and mesh).

-get the scale of the UV map’s coordinates relative to world space coordinates

No, you wouldn’t do that. You interpolate the 3d coordinate in object space, then multiply the world matrix by that coordinate to get the world-space location.

-find a point (we’ll call it Q) in space described by the UV map’s X/Y coordinates

What? You mean you would scan the mesh until you find the corresponding UV coordinate, only to confirm that the input coordinate is correct? In the worst-case scenario, that would mean actually checking EVERY 3d coordinate, O(n)-ish. That’s just wrong. You would calculate a single 3d coordinate from the UV coordinate directly, O(1).

-cast a ray (or two) starting from point Q along the face normal vector, and you should find the equivalent surface point

That assumes that a UV coordinate might not be in the mesh face’s plane. But it can’t.

There’s really no need to construct a plane and do fancy calculations and raycasting. A barycentric transform is all that is needed. It takes two triangles and a point in the first triangle’s plane, and calculates the corresponding point relative to the second triangle. It can do that with two triangles in 3D space. If you want to transform a UV triangle to a mesh triangle, the former is 2D, whereas the latter is 3D. We can put the 2D triangle in 3D, however, by assuming z=0:

(0.15, 0.89) -> (0.15, 0.89, 0.0)

Now that triangle is also 3D and we can transform the point from UV to mesh triangle. That’s what my above code is doing.

In particular, it triangulates a copy of the mesh to allow barycentric transformation, then finds the input UV face that the user must select beforehand (remember, any coordinate in UV space can be transformed, but it may not be on the mesh surface).

Due to the triangulation, we do not know which triangle is the best option to use as input for the transformation - we potentially want a point on the mesh surface, and thus we need to choose the triangle the user-defined input point lies within. Otherwise we would extrapolate a coordinate, which is an issue if the target mesh face is a non-planar polygon - we would end up with a coordinate not on the surface in that case (it should still work for planar target polygons, however). To determine that triangle, the script does triangle-point intersection testing for all triangles the triangulation generated from the input face. If the UV-space point the user wants to calculate the 3d coordinate for isn’t within the selected UV face, no intersection test will return true. The only options are to abort, or to use an arbitrary triangle to extrapolate a (possibly unwanted) coordinate - my script does the latter.
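The point-in-triangle test itself is cheap. A pure-Python equivalent of intersect_point_tri_2d() (an illustration of the idea, not Blender’s code) checks that the point lies on the same side of all three edges:

```python
def intersect_point_tri_2d(pt, a, b, c):
    # Signed area of triangle (p, q, r): positive on one side of edge pq,
    # negative on the other. The point is inside (or on the boundary)
    # iff the three signs don't disagree.
    def side(p, q, r):
        return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    d1, d2, d3 = side(a, b, pt), side(b, c, pt), side(c, a, pt)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```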

Then it carries out the barycentric transform using the user-defined point (2d cursor location), the UV triangle vertex coordinates (remember, to_3d()) and the mesh triangle vertex coordinates (the one that corresponds to the UV triangle, ideally with the point lying within). The resulting mesh coordinate is transformed to world space and the 3D cursor placed at that location.

This is the code of the barycentric transform BTW:

Since our UV triangle is 2D, it should be possible to improve efficiency slightly, because we do not need to “flatten” it. See the comments in the above-linked code: dimensionality is reduced from 3D to 2D for the input triangle, which isn’t needed in our case because it’s already 2D. This reduction is only possible if the input is planar, which explains why only triangles are accepted - every polygon with more than 3 vertices can be non-planar.
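For illustration, the 2D-only weights can be computed directly from signed areas, skipping the flattening step entirely (my own sketch, not the mathutils code):

```python
def barycentric_weights_2d(p, a, b, c):
    # Barycentric weights of 2D point p in 2D triangle (a, b, c),
    # from ratios of signed areas. No 3D plane handling needed.
    denom = (b[1]-c[1])*(a[0]-c[0]) + (c[0]-b[0])*(a[1]-c[1])
    wa = ((b[1]-c[1])*(p[0]-c[0]) + (c[0]-b[0])*(p[1]-c[1])) / denom
    wb = ((c[1]-a[1])*(p[0]-c[0]) + (a[0]-c[0])*(p[1]-c[1])) / denom
    return (wa, wb, 1.0 - wa - wb)
```

The three weights can then be applied directly to the 3D vertex coordinates of the mesh triangle to get the surface point.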

Thanks for your replies, I might be able to get there with those - although it does seem strange to me that there isn’t a command already available to just take a UV position and give its 3D location (without having to give it individual faces…). Of course it would have to raise an error or something on overlapping UVs or UV areas without faces, but that’s how it would be expected to work. It is meant to be able to slide over an edge etc.

I tried CoDEmanX’s script - however, it didn’t quite work as intended; it looks like the cursor_location.to_3d() wasn’t in the same coordinate range as the .uv.to_3d() values, so they never intersected, or something like that. I just tested it on a default cube with UVs (plus a print statement or two); all it got was some points on the ground, way off in the distance.

Alright, I didn’t look into why CoDEmanX’s script wasn’t quite working for me, but the concept worked for what I wanted, and I got it working with driven U and V attributes sliding an empty around a cube :eyebrowlift2: (albeit probably somewhat inefficiently :wink:). That means I can possibly use it for rigging (if I can work around update-order issues) and try to port my Maya facial scripts (sliding skin for lips etc.) over to Blender… :smiley: (Even though the ‘shrinkwrap’ stuff might work instead to some degree, I prefer more control.)



import bpy
import bmesh
from mathutils.geometry import barycentric_transform, intersect_point_tri_2d
from mathutils import Vector


def map_flat_pt_to_face(flat_pt, uv_layer, face):
    print(flat_pt)
    pa, pb, pc = [l[uv_layer].uv.to_3d() for l in face.loops]
    pd, pe, pf = [v.co for v in face.verts]
    return barycentric_transform(flat_pt, pa, pb, pc, pd, pe, pf)


def UVto3dSpace(uv, mesh, uv_layer):
    # Put the UV point on a flat plane
    puv = Vector([uv[0], uv[1], 0.0])

    # Get new bmesh version of mesh
    bm = bmesh.new()
    bm.from_mesh(mesh)

    # Triangulate the mesh (so we're just dealing with triangles)
    bmesh.ops.triangulate(bm, faces=bm.faces, quad_method=1, ngon_method=1)

    # Use the current UV layer (for now)
    #uv_layer = bm.loops.layers.uv.active
    uv_layer = bm.loops.layers.uv['UVMap']

    # Iterate faces, return the mapped position in the first (UV) triangle that 'intersects'
    for face in bm.faces:
        pa, pb, pc = [l[uv_layer].uv.to_3d() for l in face.loops]
        print("Trying", pa, pb, pc, "vs", puv)
        print(intersect_point_tri_2d(puv, pa, pb, pc))
        if intersect_point_tri_2d(puv, pa, pb, pc):
            print("found an intersecting triangle")
            return map_flat_pt_to_face(puv, uv_layer, face)
    
    # If the point hit empty space, just use the origin (may use last valid point instead later)
    return Vector([0.0, 0.0, 0.0])


# Convenience testing method (use the mesh selected when running the script)
theLastMesh = None
def getTheLastMesh():
    global theLastMesh
    
    try:
        tmpData = bpy.context.object.data
        if tmpData and tmpData.rna_type.name == "Mesh":
            theLastMesh = tmpData
    except:
        pass
    return theLastMesh
        
# Other convenience method (don't bother specifying a mesh)
def currentObjUVPt(u, v):
    tmpMesh = getTheLastMesh()
    return UVto3dSpace([u, v], tmpMesh, None)
    


# Make this function available to drivers
bpy.app.driver_namespace['currentObjUVPt'] = currentObjUVPt

# Assign the mesh when script is run
getTheLastMesh()

I made a stupid mistake: I assumed the 2D cursor coordinate to be normalized, but it is only if the Normalized option is ticked in the sidebar. And because the coordinate isn’t updated immediately if set by script, I now do the normalization myself, based on the face texture dimensions (or a default of 256 if there is no image). It should work regardless of the 2D cursor settings:
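The normalization step on its own is simple; as a standalone sketch (256 is the fallback size used when no image is assigned):

```python
def normalize_2d_cursor(cursor_xy, image_size=(256, 256)):
    # The 2D cursor may be in pixel coordinates, while mesh UVs are
    # always 0..1 - divide by the image dimensions to compare them.
    w, h = image_size
    return (cursor_xy[0] / w, cursor_xy[1] / h, 0.0)  # z=0, like to_3d()
```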

import bpyimport bmesh
from mathutils.geometry import barycentric_transform, intersect_point_tri_2d


# this will use the 2D cursor of the first UV editor found
for area in bpy.context.screen.areas:
    if area.type == 'IMAGE_EDITOR':
        space_data = area.spaces.active
        loc = space_data.cursor_location
        norm_coords = space_data.uv_editor.show_normalized_coords
        break
else:
    raise Exception("No UV editor found")


# mesh UVs are always normalized, but not the 2D cursor!
def uv_normalize(tex, uv):
    if tex.image is None:
        x, y = 256, 256
    else:
        x, y = tex.image.size
    return (uv[0] / x, uv[1] / y, 0) # to_3d()


ob = bpy.context.object
assert ob.type == "MESH", "Selected object not a mesh"
me = ob.data
bm = bmesh.new()
bm.from_mesh(me)


# tag selected faces, because triangulate may clear selection state
for f in bm.faces:
    if f.select:
        f.tag = True


# viewport seems to use fixed / clipping instead of beauty
bmesh.ops.triangulate(bm, faces=bm.faces, quad_method=1, ngon_method=1)


# re-select faces
for f in bm.faces:
    if f.tag:
        f.select_set(True)


uv_layer = bm.loops.layers.uv.active
tex = bm.faces.layers.tex.active


def find_coord(loc, face, uvs):
    uv1, uv2, uv3 = uvs
    x, y, z = [v.co for v in face.verts]
    co = barycentric_transform(loc, uv1, uv2, uv3, x, y, z)
    bpy.context.scene.cursor_location = ob.matrix_world * co


random_face = None
sel_faces = [f for f in bm.faces if f.select]
for face in sel_faces:
    uv1, uv2, uv3 = [l[uv_layer].uv.to_3d() for l in face.loops]
    if norm_coords:
        loc_normalized = loc.to_3d()
    else:
        loc_normalized = uv_normalize(face[tex], loc)
    
    # remember the first face for possible fallback
    if random_face is None:
        random_face = loc_normalized, face, (uv1, uv2, uv3)
    
    #print("trying", loc_normalized, "vs", uv1, uv2, uv3)
    if intersect_point_tri_2d(loc_normalized, uv1, uv2, uv3):
        print("found intersecting triangle")
        find_coord(loc_normalized, face, (uv1, uv2, uv3))
        break
else:
    print("trying random selected face for extrapolation")
    find_coord(*random_face)

Ok, well it was enough for me to figure it out anyway. :slight_smile:

(Your updated script did need a couple of tweaks to run, but I got that working too. The first line said “import bpyimport bmesh”, and the mathutils functions wanted a Vector returned from the uv_normalize def.)

import bpy
import bmesh
from mathutils.geometry import barycentric_transform, intersect_point_tri_2d


# this will use the 2D cursor of the first UV editor found
for area in bpy.context.screen.areas:
    if area.type == 'IMAGE_EDITOR':
        space_data = area.spaces.active
        loc = space_data.cursor_location
        norm_coords = space_data.uv_editor.show_normalized_coords
        break
else:
    raise Exception("No UV editor found")


# mesh UVs are always normalized, but not the 2D cursor!
def uv_normalize(tex, uv):
    if tex.image is None:
        x, y = 256, 256
    else:
        x, y = tex.image.size
    
    # To Vector class  # to_3d()
    uvVec = uv.to_3d()
    uvVec.x /= x
    uvVec.y /= y

    return uvVec


ob = bpy.context.object
assert ob.type == "MESH", "Selected object not a mesh"
me = ob.data
bm = bmesh.new()
bm.from_mesh(me)


# tag selected faces, because triangulate may clear selection state
for f in bm.faces:
    if f.select:
        f.tag = True


# viewport seems to use fixed / clipping instead of beauty
bmesh.ops.triangulate(bm, faces=bm.faces, quad_method=1, ngon_method=1)


# re-select faces
for f in bm.faces:
    if f.tag:
        f.select_set(True)


uv_layer = bm.loops.layers.uv.active
tex = bm.faces.layers.tex.active


def find_coord(loc, face, uvs):
    uv1, uv2, uv3 = uvs
    x, y, z = [v.co for v in face.verts]
    co = barycentric_transform(loc, uv1, uv2, uv3, x, y, z)
    bpy.context.scene.cursor_location = ob.matrix_world * co


random_face = None
sel_faces = [f for f in bm.faces if f.select]
for face in sel_faces:
    uv1, uv2, uv3 = [l[uv_layer].uv.to_3d() for l in face.loops]
    if norm_coords:
        loc_normalized = loc.to_3d()
    else:
        loc_normalized = uv_normalize(face[tex], loc)
    
    # remember the first face for possible fallback
    if random_face is None:
        random_face = loc_normalized, face, (uv1, uv2, uv3)
    
    print("trying", loc_normalized, "vs", uv1, uv2, uv3)
    if intersect_point_tri_2d(loc_normalized, uv1, uv2, uv3):
        print("found intersecting triangle")
        find_coord(loc_normalized, face, (uv1, uv2, uv3))
        break
else:
    print("trying random selected face for extrapolation")
    find_coord(*random_face)


For anyone else who finds this thread - This is what I ended up with for my UV space driver, after removing the temporary test parameters.

I also tried to have a sort of cache for the triangulated mesh to avoid triangulating it all the time when the mesh isn’t deforming; but I suspect my means of doing so (using a list of all the vert coordinates to generate a hash value) wasn’t much more efficient.
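The caching idea can be sketched in isolation like this (illustrative names, no Blender API involved; as noted, hashing all the vertex coordinates is itself O(n) in the vertex count, so the only win is skipping the triangulation step):

```python
# Cache keyed by object name; the value stores a hash of the vertex
# coordinates so the triangulated result is reused while the mesh
# is not deforming.
_tri_cache = {}

def coords_hash(verts):
    # verts: an iterable of coordinate tuples
    return hash(tuple(tuple(v) for v in verts))

def get_triangulated(name, verts, triangulate):
    key = coords_hash(verts)
    cached = _tri_cache.get(name)
    if cached is not None and cached[0] == key:
        return cached[1]               # geometry unchanged: reuse
    tris = triangulate(verts)          # the expensive step
    _tri_cache[name] = (key, tris)
    return tris
```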

This code was set to ‘Register’ for use in driver expressions. I’m not sure if there’s a way to make it drive all three x, y and z values at once; I just used the function three times with [0], [1] etc, eg. on an Empty’s translationX driver:

UVto3dSpace([pu/10, pv/10], 'Head', 'UVMap')[0]

Where pu and pv were variable names from custom object properties (sliders 0.0 to 10.0) on the Empty that I was sticking to the mesh.
(I also added null driver variables to force the expression to update:

  • one linked to the mesh object’s transform (this probably wasn’t required after adding the second one);
  • one linked to another Empty that was parented to a vertex of the mesh (I couldn’t find another way to force updates when sliding eg. the bend value of a SimpleDeform modifier on the mesh).)

The object I was testing on was a UV’d head mesh.

import bpy
import bmesh
from mathutils.geometry import barycentric_transform, intersect_point_tri_2d
from mathutils import Vector


def map_flat_pt_to_face(flat_pt, uv_layer, face):
    pa, pb, pc = [l[uv_layer].uv.to_3d() for l in face.loops]
    pd, pe, pf = [v.co for v in face.verts]
    return barycentric_transform(flat_pt, pa, pb, pc, pd, pe, pf)


def objHash(bMeshObj):
    return hash(repr([v.co for v in bMeshObj.verts]))


rivetMeshCache = {}

def UVto3dSpace(uv, objectName, uv_layer, worldSpace=True):
    global rivetMeshCache
    
    # Get the mesh
    ob = bpy.data.objects[objectName]
    mesh = ob.data
    
    # Put the UV point on a flat plane
    puv = Vector([uv[0], uv[1], 0.0])
    
    # Get new bmesh version of mesh
    bm = bmesh.new()
    #bm.from_mesh(mesh)
    bm.from_object(ob, bpy.context.scene)  # Account for deformations
    
    # Check for a cached bmesh before triangulating a new one
    checkForMesh = mesh.name in rivetMeshCache
    #if (checkForMesh and mesh.is_updated_data):
    #print(objHash(bm))
    if (checkForMesh and rivetMeshCache[mesh.name][1] == objHash(bm)):
        bm = rivetMeshCache[mesh.name][0]
        #print("Using old mesh", mesh.is_updated_data)
    else:
        if checkForMesh:
            # Clear memory of existing cache
            rivetMeshCache[mesh.name][0].free()
        

        # Triangulate the mesh (so we're just dealing with triangles)
        bmesh.ops.triangulate(bm, faces=bm.faces, quad_method=1, ngon_method=1)
        
        # Store the mesh for next time
        rivetMeshCache[mesh.name] = [bm, objHash(bm)]
        #print("Using new mesh")

    # Get the UV layer
    uv_layer = bm.loops.layers.uv['UVMap']

    # Iterate faces, return the mapped position in the first (UV) triangle that 'intersects'
    for face in bm.faces:
        pa, pb, pc = [l[uv_layer].uv.to_3d() for l in face.loops]
        if intersect_point_tri_2d(puv, pa, pb, pc):
            localPos = map_flat_pt_to_face(puv, uv_layer, face)
            break
    else:
        # If the point hit empty space (no UV triangles), just use the origin (may use last valid point instead later)
        localPos = Vector([0.0, 0.0, 0.0])
    
    # Return the position in world space if specified
    if worldSpace:
        return ob.matrix_world*localPos
    else:
        return localPos



# Make this function available to drivers
bpy.app.driver_namespace['UVto3dSpace'] = UVto3dSpace


The UV space Empty could also be duplicated, and the new object would still follow the mesh with its own u and v parameters. I duplicated it four times, slid them to different parts of the head, then dragged the bend deformer angle back and forth; it worked, but then it crashed…


This post is old but I found it incredibly helpful for a project I am working on where I have specific coordinates on a texture and I need to find those in 3D space on the mesh. The gotcha I ran into is that in my case for some reason I had to look for (u, 1.0 - v) instead of (u, v). Thank you @CoDEmanX and @Overcomer