Raycasting without physics objects

I have some very complex objects I want to raycast against. They do not need to have collisions, and hence they were set to ‘No Collision’. Now I need to raycast against them and, well, raycasts seem to only work against physics objects!

Turning them into physics objects easily doubles the startup time (there are in excess of 900,000 faces).

Two questions:

  • Can you do raycasts against non-physics objects?
  • Can you think of any other method to identify what is on screen at a specific location?

Lots of useful maths here. Maybe you can take it and see if you can use it in Blender? :slight_smile:

First 30 seconds of the video includes: “3d mouse picking using raycasting”
Uh, raycasting in the BGE requires physics objects, which I don’t want to use, as per the first post.

It depends on how accurate you need to be.

There are plenty of functions in mathutils.geometry to help you to make your own collision tools.
Probably the best would be mathutils.geometry.intersect_line_sphere()
Then you just need to get a screen ray and check each target object on screen. Sphere collision bounds would be suitable, wouldn’t they? Otherwise you’re going to have to call intersect_ray_tri() 900,000 times…
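For illustration, here is a pure-Python version of the sphere test that mathutils.geometry.intersect_line_sphere() performs for you in the BGE; the function name and the example vectors are my own:

```python
import math

def intersect_ray_sphere(origin, direction, center, radius):
    """Return the nearest hit distance along the ray, or None on a miss.

    Pure-Python equivalent of the test mathutils.geometry.
    intersect_line_sphere() does natively. `direction` must be normalized.
    """
    # Vector from ray origin to sphere center
    oc = [c - o for c, o in zip(center, origin)]
    # Distance along the ray to the closest approach point
    t_ca = sum(a * b for a, b in zip(oc, direction))
    # Squared distance from the sphere center to the ray
    d2 = sum(a * a for a in oc) - t_ca * t_ca
    if d2 > radius * radius:
        return None  # ray passes outside the sphere
    t_hc = math.sqrt(radius * radius - d2)
    t0, t1 = t_ca - t_hc, t_ca + t_hc
    if t1 < 0:
        return None  # sphere is entirely behind the ray
    return t0 if t0 >= 0 else t1

# A screen ray down +Z hitting a unit sphere centered at z = 5
hit = intersect_ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # 4.0
```

One such test per object bounding sphere is cheap enough to do for every on-screen object per click.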

Thanks. I hadn’t looked there before. Looks useful for other things.

Uh. hmm. Spheres?

I think I’ll just deal with the loading time (as soon as the physics meshes go to sleep, there’s very little CPU usage from the 30 or so objects)

So this mesh is being clipped and projected onto a 2D image and I want to get what object the mouse cursor is over. I do some coordinate transforms and get a point within the model. From there I do a raycast to establish what object I am inside.
It works, but (in my mind) shouldn’t need physics.
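The “coordinate transforms” step can be sketched without any engine support. This is a hedged example, assuming a simple perspective camera looking down −Z with a symmetric frustum (the function name and FOV convention are mine, not the thread’s):

```python
import math

def screen_to_ray(mx, my, width, height, fov_deg=60.0):
    """Turn a pixel coordinate into a normalized camera-space ray direction.

    Pixel -> normalized device coords -> view-space direction, for a
    perspective camera looking down -Z (vertical FOV given in degrees).
    """
    aspect = width / height
    half_h = math.tan(math.radians(fov_deg) / 2.0)
    # NDC in [-1, 1], with y pointing up
    ndc_x = (mx / width) * 2.0 - 1.0
    ndc_y = 1.0 - (my / height) * 2.0
    d = [ndc_x * half_h * aspect, ndc_y * half_h, -1.0]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

# Center of an 800x600 screen looks straight down -Z
ray = screen_to_ray(400, 300, 800, 600)  # [0.0, 0.0, -1.0]
```

The resulting direction, rotated by the camera’s world orientation and fired from the camera position, is the pick ray.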

Well, if you can make a low-poly mesh, and then use intersect_ray_tri on data harvested from those meshes…
But at that point you’re probably better off just using a low poly mesh with normal physics and raycasting.
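A sketch of that triangle test in pure Python: it is the standard Möller-Trumbore algorithm, which is what mathutils.geometry.intersect_ray_tri() implements, run over the triangles harvested from the low-poly proxy (the helper names here are my own):

```python
def intersect_ray_tri(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle test: hit distance t, or None on a miss."""
    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]

    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    def dot(a, b):
        return sum(a[i] * b[i] for i in range(3))

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:
        return None  # ray is parallel to the triangle plane
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None  # outside the first barycentric coordinate
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None  # outside the triangle
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None  # reject hits behind the ray origin

# Ray down +Z hitting a triangle lying in the z = 5 plane
t = intersect_ray_tri((0, 0, 0), (0, 0, 1), (-1, -1, 5), (1, -1, 5), (0, 1, 5))
```

Looping this over a few hundred proxy triangles per click is fine; over 900,000 real faces it is not, which is the point of the proxy.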

Off topic, don’t you get problems with rendering transparent objects like that? I’ve never been able to get it to work correctly.

Distance fields!
They’re designed to run on the GPU, but should be fine if you’re only tracing a few rays on the CPU. They’re also very acceleration-structure friendly.
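Sphere tracing is the usual way to ray-trace a distance field: step along the ray by the field value until it drops below a threshold. A minimal CPU sketch, with a hypothetical unit-sphere SDF standing in for a field baked from the real mesh:

```python
import math

def sphere_trace(origin, direction, sdf, eps=1e-4, max_dist=100.0, max_steps=128):
    """March along the ray by the distance-field value until a surface hit."""
    t = 0.0
    for _ in range(max_steps):
        p = [o + d * t for o, d in zip(origin, direction)]
        dist = sdf(p)
        if dist < eps:
            return t  # close enough to the surface: report hit distance
        t += dist  # safe step: nothing can be closer than `dist`
        if t > max_dist:
            return None  # ray escaped the field
    return None  # no convergence within the step budget

# Stand-in SDF: a unit sphere at (0, 0, 5); a real mesh would be baked offline
def sphere_sdf(p):
    return math.dist(p, (0, 0, 5)) - 1.0

hit = sphere_trace((0, 0, 0), (0, 0, 1), sphere_sdf)  # 4.0
```

The appeal is that the same baked field answers every pick ray with no per-triangle work at query time.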

If the object count is low, or only some of the objects are non-collision…
maybe have the same objects in a different scene and render them to a small texture at the mouse location, lowering the FOV to make it act like a microscope.
Then it’s a simple task of reading the pixel(s): any color means a hit, transparent means background.
You could draw a subset (half) of all objects each frame in a binary-search fashion, so the frame-to-hit delay would be log(obj_count) frames.
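The binary-search idea above can be sketched as a bit-plane probe: on frame k, draw only the objects whose ID has bit k set, and whether the cursor pixel is covered that frame yields one bit of the ID. In this toy simulation, `pixel_covered` is a hypothetical stand-in for one render-and-read-pixel frame, and it assumes only one object ever covers the pixel:

```python
import math

def pick_by_bitplanes(pixel_covered, obj_count):
    """Recover the object ID under the cursor in ceil(log2(n)) frames.

    `pixel_covered(subset)` stands in for one frame: render only the
    objects in `subset` to the small texture, then report whether the
    cursor pixel has any color.
    """
    picked = 0
    for bit in range(max(1, math.ceil(math.log2(obj_count)))):
        subset = [i for i in range(obj_count) if i & (1 << bit)]
        if pixel_covered(subset):
            picked |= 1 << bit
    return picked

# Pretend object 5 (out of 8) is under the cursor
found = pick_by_bitplanes(lambda subset: 5 in subset, 8)  # 5
```

Note the caveat: if the frontmost object is excluded from a frame’s subset, an object behind it could still cover the pixel, so this only works cleanly when one candidate occupies the cursor position.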

@Smoking_mirror
I did. I ran into all sorts of issues with them, including one I posted here a couple of weeks back. But I have solved it now.
That screenshot is from the blender viewport, and it uses a node-based shader. In the game engine, it has some issues.

When the simulation is running they all switch to a custom GLSL shader, which doesn’t have those issues. As far as I can tell, the GLSL shader is almost exactly the same.
The dataset is pretty special though. It comes from voxel data originally, and every face has been displaced slightly so there are no intersections.

@Jackii
Hmm. That looks interesting. This data is static, so the distance fields could be pre-built. Uhh, that would take time though, probably about as long as the physics engine takes, unless I can delegate it to the (way overpowered) GPU.
In other news, I am really wanting GPU-based physics. I can see that for this application, a dedicated physics processing unit would be extremely useful.

@Vegetablejuice
That’s an interesting idea. Or just make sure that no two objects have the same color, and use a color -> object dictionary. At some point in the future I plan to have an object -> color dictionary anyway…

When my computer finds lines which contain this information, it highlights them in red and adds a warning triangle. :smiley:

Make low-poly proxy meshes of the objects,
then have the raycast detect the proxy.

Side note: you can use a KD tree to look up which vertex you hit, even with the hit point from the proxy mesh (if the mesh approximation is close enough).
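A pure-Python sketch of that lookup. The BGE ships mathutils.kdtree.KDTree, which does the same thing natively and far faster; this version just shows the idea of snapping the proxy hit point to the nearest real vertex:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a KD tree over (x, y, z) vertex positions."""
    if not points:
        return None
    axis = depth % 3  # cycle the split axis: x, y, z, x, ...
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    """Return the tree vertex closest to `target` (e.g. the proxy hit point)."""
    if node is None:
        return best
    point, axis, left, right = node
    if best is None or math.dist(point, target) < math.dist(best, target):
        best = point
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, best)
    # Only search the far side if the splitting plane is within reach
    if abs(diff) < math.dist(best, target):
        best = nearest(far, target, best)
    return best

verts = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
tree = build_kdtree(verts)
closest = nearest(tree, (0.9, 0.1, 0.0))  # (1, 0, 0)
```

With the real mathutils.kdtree the pattern is the same: build once from the high-poly vertices, then query with each proxy hit point.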

GPU-based physics isn’t impossible with the data-passing “hacks” HG1 implemented. I haven’t tried it, but I’d assume that building a texture with the geometry data and passing it in and out of the GPU every frame would be inferior, performance-wise, to native framebuffer objects.

On the bright side, things like massive particle counts with collision, meta meshes and cone tracing are all distance-field friendly!

OK, with a BVHTree you can raycast against a mesh you feed it, but the method to use it that way was broken, and I am building the updated UPBGE with the fix locally.

People normally feed it a bmesh, not a GE mesh, so no one ever used the vertex-based tree builder.

http://www.pasteall.org/blend/40627/

Here you go. This uses a single plane to build the raycast,
but you can use any ray (origin + (vector * distance)).

you will need a fresh build for this to work

quick spaghetto blend

Attachments

untitled.blend (5.25 MB)

Did you find a solution? I have one, but it’s a bit hard to explain, so remind me on Skype later and I will show you. I use the same method for raycasting lights.


Due to some other things, I’ll eventually migrate to using a method similar to what VegetableJuiceF has mentioned.

The procedure will be:

  1. Each object is assigned a unique red value as its object color. Used with a shadeless material, this makes every object unique (up to probably several hundred, possibly a few thousand, before the next color channel has to be used)
  2. This is used with render-to-texture to generate an image
  3. Then, by using the mouse position in screen-space to look up the pixel color on the texture, the object it is over can be determined
  4. Another shader operates on render-to-texture to make it look how it should (which isn’t just restoring the original colors but doing some other fancy stuff).
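A minimal sketch of steps 1 and 3: packing an object ID into a color and unpacking it from the pixel under the mouse. The channel layout (red first, spilling into green and blue) follows step 1; the function names are mine, and the round trip assumes a shadeless material with no antialiasing or color management altering the pixel values:

```python
def id_to_color(obj_id):
    """Pack an object ID into an (r, g, b) triple, 0-255 per channel.

    Red carries the low byte; green and blue take over past 255 objects.
    ID 0 can be reserved for the background.
    """
    return (obj_id & 0xFF, (obj_id >> 8) & 0xFF, (obj_id >> 16) & 0xFF)

def color_to_id(rgb):
    """Invert the packing when reading the pixel under the mouse."""
    r, g, b = rgb
    return r | (g << 8) | (b << 16)

# Object 300 needs the green channel too
color = id_to_color(300)  # (44, 1, 0)
```

With 8 bits per channel this addresses about 16.7 million objects, far beyond the few thousand mentioned above.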

The actual reason this is happening is that the existing method of applying the shader used in step 4 suffered from performance degradation over a 5-minute period.

To play devil’s advocate, you could do this more easily in Panda :wink:

Any reason you can’t use Bullet here? You will incur a load time, but make sure you’re using sensible collision shapes.
It makes sense to use physics, because raycasting is one of the two jobs physics is responsible for. Doing it in screen space is fine, but limits you to non-transparent materials (otherwise it’s a pain to resolve).

Hide the loading time with an overlay, and remove it when ready.

Casting a ray with a KD tree worked pretty damn well for me.

You can always initiate the KD tree on frame 0 and build it during the loading screen by adding X points per frame. This raycast will ignore normal physics objects.
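A minimal sketch of amortizing that build across frames, assuming a loading screen that calls next() on this generator once per tick (the generator name and frame budget are mine; the final yield hands back the full point list to feed to the real tree builder, e.g. mathutils.kdtree):

```python
def amortized_kdtree_feed(verts, per_frame=10000):
    """Ingest `per_frame` vertices per tick during a loading screen.

    Yields None after each frame's worth of work; the final yield is the
    complete point list, ready for the actual KD-tree build.
    """
    points = []
    for start in range(0, len(verts), per_frame):
        points.extend(verts[start:start + per_frame])
        yield None  # one frame of loading done
    yield points

# 25 vertices at 10 per frame -> two full frames, one partial, then the result
frames = list(amortized_kdtree_feed(list(range(25)), per_frame=10))
```

Spread this way, even 900,000 vertices never stall a single frame.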

does the target move or something?

Yes. It does