Armature level of detail?

I’ve been trying to get my NPC to drop in detail as it gets farther from the player using Kupoman’s LOD system, but it seems I can’t use it for the armature. Is that right? I’ve now got it so that as soon as the NPC reaches a certain distance from the player, a Near sensor on him triggers, a high-detail NPC gets added and the low-detail NPC deletes itself. The high-detail NPC does the same thing in reverse: it deletes itself and adds the lower-detail one.

But the problem is that the new armature does not carry on the animation of the previous NPC, so when it gets added to the scene there is a very visible, unwanted pop in the character. Is there a way for the armature being deleted to pass on which frame of the animation it was at, so that the armature being added knows which frame to start on?

Sorry if that doesn’t make any sense. Here’s what I’m talking about:

Attachments

ArmatureLOD.blend (1.12 MB)

When I start the game, Blender crashes. Windows 7 64-bit, Blender 2.70a, hash 6f1a648.

Sorry, I have no idea why it would do that. Does it crash for anyone else?

Try this blend file; I gave all the Near sensors a reset distance of zero. Thanks for helping, btw.

Attachments

ArmatureLOD.blend (1.11 MB)

Still crashing, but it works in 2.70a (official release). I can’t really see what’s going on: the guys keep walking as expected, only their faces change into something spooky. Is that what you mean by the ‘unwanted’ change?

No, the face change is just there to show when they swap. Look down at the legs when you walk close to the NPCs; you will see a sudden jump back to the beginning of the animation cycle. I just want to tell the NPC with the spooky face to continue from the frame where the other one left off.

You’re talking about this bug: T40344 Levels of Detail not smooth with Armature Modifier. Open the blend I uploaded there, which illustrates the issue. It’s fixed now; download the latest build and see the difference.

No, I don’t think that’s it. In the example .blend, a Near sensor is doing the level of detail, not Kupoman’s LOD. I’m ‘ending’ the entire cube the NPC is parented to, and that same cube adds another cube with a totally different armature and mesh parented to it. I need to pass the frame information to the other armature.
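
Something like this hand-off is what I’m after (a rough sketch using the 2.7x BGE Python API; the object, action and property names are placeholders):

```python
# Rough sketch (2.7x BGE API). "LowArmature", "HighDetailNPC", "Walk"
# and the "start_frame" property are placeholder names.
from bge import logic

def swap_to_high_detail(cont):
    own = cont.owner                       # low-detail control cube
    if not cont.sensors["Near"].positive:
        return

    scene = logic.getCurrentScene()
    old_arm = own.children["LowArmature"]  # the armature being removed

    # Remember where the walk cycle currently is (action layer 0).
    frame = old_arm.getActionFrame(0)

    # Add the high-detail NPC hierarchy and hand the frame over.
    new_npc = scene.addObject("HighDetailNPC", own)
    new_npc["start_frame"] = frame

    own.endObject()                        # ends the cube and its children

def resume_walk(cont):
    # Run once on the newly added armature.
    arm = cont.owner
    start = arm.parent.get("start_frame", 0.0) if arm.parent else 0.0
    arm.playAction("Walk", 1, 21, layer=0,
                   play_mode=logic.KX_ACTION_MODE_LOOP)
    arm.setActionFrame(start, 0)           # continue where the other left off
```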

Do not remove the central control of your character. Just replace the presentation objects (armature/skin-mesh/physics mesh).

Have a look at the Demo file of the BGE Guide to Characters. It shows one way to do a character LOD.
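
The rough shape of that idea in Python could be something like this (only a sketch, not the demo file’s actual code; the object names are placeholders):

```python
# Sketch: swap only the presentation objects, keep the control object.
# "new_armature_name" refers to an armature (with its skin mesh parented
# to it) stored on an inactive layer; the name is a placeholder.
from bge import logic

def swap_presentation(control, new_armature_name):
    scene = logic.getCurrentScene()

    # End the old armature/skin mesh, never the control object itself,
    # so position, physics, logic and properties survive the swap.
    for child in list(control.children):
        child.endObject()

    # Add the replacement and attach it to the same control object.
    new_arm = scene.addObject(new_armature_name, control)
    new_arm.setParent(control, False, True)  # no compound shape, ghost child
    return new_arm
```

Because the control object never goes away, all movement logic, properties and the physics state stay untouched; only what you see changes.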

There are many more options.
*** Attention: many details ***

You can go to the extreme where you even remove the object from the scene. Then you can perform “out-of-sight” processing (behavior LOD). This means you simulate simplified behavior for one or more objects that are not visible. You can even simulate a simplified model of a whole city or world.

Imagine the following:

Your character meets an NPC in your game. You can see how the NPC moves, with all its little details (high detail level). Let’s say you give this NPC a medallion (e.g. via a trade). The NPC continues its journey, goes to a town and trades the medallion with another NPC.

As he walks away, you can see the presentation detail decrease from high detail to low detail to nothing (the lowest presentation detail).

What happens with the logic?
As long as the NPC is near (the camera), the NPC’s logic can interact with the character in detail (specifically: he can take the medallion from you, pay you, talk to you). He can give the medallion to someone else while you are watching.

Static behavior
What happens if you run away until he is no longer visible? Can he still give the medallion away? With pure presentation LOD, yes, because the behavior does not change with distance.
The advantage: the game world continues to live without you watching all the details.
The disadvantage: the behavior of every NPC (and every other interactive entity) in the game eats processing time even when it is not visible.

Occluded behavior
This is (sort of) what you did.
You remove the complete NPC once he is out of sight. In that case there is no behavior at all = no out-of-sight processing.

Advantage: much less behavior to process.
Disadvantage: the out-of-sight world freezes (because it does not exist as long as it is not visible).

Behavior LOD
With this you get the benefits of both approaches: a living world, many interactive entities, manageable processing time. But, like presentation LOD, it is complex.

How to do that:
When an entity (e.g. an NPC) is near, it processes all the detailed behavior you expect of it (e.g. speak, run, hunt, deal).
When an entity is not that near, some of the behavior can be simplified or skipped completely (e.g. speaking). Simplification means the entity does not process all of the sub-steps.

[There can even be a difference between the NPC dealing with your character and dealing with other NPCs.]
Example
detailed dealing:

  • greetings
  • offer
  • re-offer
  • agreement
  • money exchange
  • goods exchange
  • leaving

simplified dealing:

  • money exchange
  • goods exchange

You just set the result of an action, rather than performing the action.
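
In script form the difference could look roughly like this (an illustrative sketch; the NPC class and the printed sub-steps are invented for the example):

```python
# Illustrative sketch of the trade example above; plain Python, not BGE.
class NPC:
    def __init__(self, name, money=0, inventory=None):
        self.name = name
        self.money = money
        self.inventory = inventory if inventory is not None else []

def detailed_deal(seller, buyer, item, price):
    # Near the camera: every sub-step runs, so it can be animated,
    # voiced, interrupted and watched.
    print(seller.name, "greets", buyer.name)           # greetings
    print(seller.name, "offers", item, "for", price)   # offer / re-offer
    print(buyer.name, "agrees")                        # agreement
    buyer.money -= price                               # money exchange
    seller.money += price
    seller.inventory.remove(item)                      # goods exchange
    buyer.inventory.append(item)
    print(seller.name, "leaves")                       # leaving

def simplified_deal(seller, buyer, item, price):
    # Out of sight: skip all sub-steps and just set the result.
    buyer.money -= price
    seller.money += price
    seller.inventory.remove(item)
    buyer.inventory.append(item)
```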

Remember the philosophical question: if a tree falls in a forest and nobody is there, does it make a sound? In a game, no; there is not even a tree. But when you visit the forest the next time … the tree lies on the ground ;).

Back to the NPC example: the NPC walks a long way to a town and gives the medallion to another NPC.
You did not watch him do it, but if you meet him on his way, you expect him to still have the medallion.
If you meet him after he has visited the town, you expect him to no longer have the medallion.
If you meet the other NPC in town, you expect it to have the medallion (after receiving it).
The NPCs did all these things without you watching, and you could still have watched the detailed trade.

[Yes, the result might differ between watching and not watching.]

Maybe all of this is not what you want.
I still hope you got the idea.

You can set up animations to run via a property.

If you have an Always sensor set to add 1 to a property every tic, and a second Always sensor set to reset that property to 0 after 21 tics (or however long your walk cycle is), the walk cycle will animate quite smoothly.
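
The same counter can also live in one small Python controller instead of the second Always sensor (a sketch; the property name "frame" and the 21-tic cycle are just examples):

```python
# Sketch: loop a "frame" property over the length of the walk cycle.
# Attach to an Always sensor with true-level triggering; the Action
# actuator on the armature reads the same property in Property mode.
def update_walk_frame(cont):
    own = cont.owner
    if "frame" not in own:
        own["frame"] = 0.0
    # +1 per logic tic, wrapping back to 0 at the end of the cycle.
    own["frame"] = (own["frame"] + 1.0) % 21.0
```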


You can have two armatures and two meshes both attached to the same “character” (really just a cube or cylinder, used for collision detection and movement). When you deactivate one armature and make its mesh invisible, it doesn’t use any CPU time; armatures only eat up a lot of CPU when they are active (playing animations).

So I’m sure you can guess how to handle the situation: set up both the low-detail armature and the high-detail armature to run using properties. Set the property on each armature from a parent property on the “character” object, either using a script or logic bricks.

Activate the correct armature and make the correct mesh visible using your Near sensor; you can use states to activate and deactivate the armatures. One downside is that property-driven actions don’t blend well with other actions (compared to Play or Loop mode), so there may be a bit of a jump when switching between walk and run mode. You can get around this by timing the switch to coincide with the resetting logic pulse and by making the first frames of the animations quite similar (left leg forward first, and left leg slightly forward in the default pose; that gives you much smoother transitions anyway, and it’s good practice).
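
A rough sketch of that property sync (assuming both armatures are parented to the “character” object; the property and object names are placeholders):

```python
# Sketch: run on each armature. The "frame" and "high_detail" properties
# on the parent character, and the armature name, are assumptions.
def sync_armature(cont):
    arm = cont.owner
    character = arm.parent
    if character is None:
        return

    # Copy the parent's frame so the Action actuator (Property playback
    # mode) stays in step on whichever armature is currently shown.
    arm["frame"] = character["frame"]

    # Show only the armature that matches the current LOD and hide the
    # other one (and its skin mesh) recursively. States would be used on
    # top of this to stop the hidden armature's actuators entirely.
    is_high = (arm.name == "ArmatureHigh")
    arm.setVisible(character["high_detail"] == is_high, True)
```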

You can even make both armatures and meshes deactivated and invisible when the NPC is far enough away, but as Monster says, he can still go on taking actions even though he’s not visible.

Monster, thanks for your great post; you have some very nice resources. The behavior LOD sounds good for a few NPCs in the game, like the main character’s close friends or important people he meets. But I think having every NPC with stats and logic running would be too much of a performance hit, because I want as many NPCs on screen as I can get. The NPCs will be interacting with the player and each other, so I’d like to simplify their interactions by distance: e.g. two NPCs talking to each other using shape keys and bone transformations at 5 BU (Blender units), “talking” with only bone transforms at 10 BU, and at 30 BU being basically just a mesh, or having only a couple of bones. Beyond that distance I don’t care about preserving their individual characteristics or history.

What I really want is a more GTA-style LOD, where if the player walks too far away from the NPCs and then goes back to find them, he can’t (for all he knows they went into a building, got in a car, or are playing hide and seek), but he does come across another group of random NPCs.

Right now my team has a system for randomizing the visual features of the NPCs: multiple heads and accessories (beards, eyeglasses, etc.) are all parented and weight painted to one armature. The NPC gets added to the scene with the armature and all the heads etc. attached, then a random head and accessory are chosen to stay in the scene and the rest are ended, leaving only a couple of objects out of hundreds.
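
The picking step could look roughly like this (a simplified sketch; the “Head”/“Acc” name prefixes are invented for illustration):

```python
# Sketch: keep one random head and one random accessory, end the rest.
import random

def randomize_npc(cont):
    armature = cont.owner

    heads = [c for c in armature.children if c.name.startswith("Head")]
    accessories = [c for c in armature.children if c.name.startswith("Acc")]

    keep = []
    if heads:
        keep.append(random.choice(heads))
    if accessories:
        keep.append(random.choice(accessories))

    # End everything that was not picked, leaving one head + one accessory.
    for child in heads + accessories:
        if child not in keep:
            child.endObject()
```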

About your demo file: I don’t understand where the real (non-instanced) control object is, or the logic, so I’m still not sure how to get rid of just the “presentation objects”. I tried ending just the armature and then adding a new low-detail armature, but how am I supposed to parent it to the control cube? Also… you left auto keying on.

Smoking mirror, I will try that, but I don’t know how I feel about having two Always sensors on pulse mode running all the time for every NPC. Thank you for trying to help; I appreciate it.

Currently I can’t look at my own files :eek:. I suggest looking at the object that has LOD in its name. As far as I remember, it performs the exchange of the armature.

Hint: the implementation is on another layer (maybe 16), as the level layers contain instances only.

Ah, yes, I see it now; they were in the other scene called “Groups”. I still don’t really understand what’s going on with it since it’s written in Python, so I’ll have to show it to our programmer so he can help me with that. Btw, how did you get the “Target” empty to be in every layer all the time? That’s the first time I’ve seen that.

<M> (move to layer)
Shift-click to select multiple layers
<CR> (confirm)

I think you can do that via the object tab too, but I’m used to the move method :slight_smile:

Ah, yes that’s simple enough :slight_smile:

Thanks.