Getting a list of objects in the frustum

As I said, optimizing everything is not best practice, because (as I explained in a previous post) it leads to greater system complexity, which is a more relevant concern than some potential performance problem.

For that reason, one should focus on simplifying system architecture, and only optimize specific bottlenecks that actually matter in the overall performance profile.

My own models for design

ObjectX - spawns objects that run code then die - (recent development)
add "pusher"
pusher['target'] = target
pusher['vector'] = vector
pusher['time'] = time

etc.
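
If it helps, here is a minimal sketch of how that pattern might look with the BGE Python API; the "pusher" object and its properties are taken from the pseudocode above, and the lifetime and property values are just illustrative:

import bge

cont = bge.logic.getCurrentController()
own = cont.owner
scene = bge.logic.getCurrentScene()

# spawn "pusher" at the spawner's position; it lives for 50 logic ticks
# (the "pusher" object has to sit on an inactive layer to be spawnable)
pusher = scene.addObject("pusher", own, 50)
pusher["target"] = scene.objects["player"]   # illustrative target
pusher["vector"] = [0.0, 1.0, 0.0]           # illustrative push direction
pusher["time"] = 60                          # illustrative duration

The spawned object then runs its own logic off those properties and ends itself (or is ended by the lifetime argument) when it is done.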

ObjectParent - object spawns items using a list and index; the items parent to it, and the object manipulates a child using own.child[0]

  • this is so that I can package all the logic in the item to be spawned, except the triggers, which are in the parent. This allows the parent to maintain persistent data and return it to the child. It's working in my weapon system: I have coded 8 weapons without touching the spawning/firing/reloading code.
  • I can actually add a new weapon in about 5 minutes.
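
A rough sketch of how that spawning side could look; the weapon names, the "index" property, and the "ammo"/"stored_ammo" properties are placeholders I made up, and I'm using the children attribute to reach the spawned item:

import bge

cont = bge.logic.getCurrentController()
own = cont.owner
scene = bge.logic.getCurrentScene()

weapon_names = ["pistol", "rifle", "shotgun"]  # illustrative list of spawnable objects

# spawn the weapon picked by the parent's index and parent it to the spawner
weapon = scene.addObject(weapon_names[own["index"]], own, 0)
weapon.setParent(own)

# later the parent can reach its child again and hand back persistent data
child = own.children[0]
child["ammo"] = own["stored_ammo"]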

Manager - object has a list it manages; individual objects add themselves to, and remove themselves from, the list

  • still an experiment
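
As a sketch of that experiment, the manager could even be a plain Python module that every object imports; the module and function names here are made up:

# manager.py - a shared, module-level list (illustrative)
managed = []

def register(ob):
    if ob not in managed:
        managed.append(ob)

def unregister(ob):
    if ob in managed:
        managed.remove(ob)

Each spawned object would call register(own) when it starts and unregister(own) just before it calls endObject(), so the manager's list always reflects what is alive.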

There are some situations where initial complexity saves hundreds of hours of work later.

As I said before: no one argued against planning.

If a certain system architecture (or a specific optimization) solves a clear problem (that you’re likely to have), or provides clear benefits (that you’ll actually need), then it’s justified. However, if it doesn’t, and you go on to optimize everything, you’re just needlessly increasing the overall complexity of the system.

Jonathan Blow (developer of Braid) talks about these matters in this fantastic talk: http://www.youtube.com/watch?v=JjDsP5n2kSM

I recently read that in Python, using division rather than multiplying by a decimal is not that much slower. There are certain languages and environments where it is, and others where it isn't. A language like Python already spends a lot of time on interpretation, so micro-optimizations like this don't have much impact.
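
For what it's worth, a quick way to check that kind of claim yourself is timeit (the operands and iteration count here are arbitrary):

import timeit

div = timeit.timeit("x / 2.0", setup="x = 12345.678", number=1000000)
mul = timeit.timeit("x * 0.5", setup="x = 12345.678", number=1000000)
print("divide:", div, "multiply:", mul)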

I'm sure planning is important, but it should take the form of testing. Write two functions and then stress test them. If there's not a big difference in performance, choose the less complex one or the one with the fewest dependencies. If there's a huge difference, you could choose the faster one.

If you want to stress test a function, you can try running it several hundred times in a single logic tick:

for _ in range(500):
    do_function()

The resulting impact on performance won't be realistic, but it is good for drawing comparisons between two different functions, as the margin of difference will be hugely magnified, enough that you can actually see it as a difference in frame rate.
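
If you'd rather read numbers than watch the frame rate, you can time the two candidates directly; func_a and func_b here are placeholders for whatever two implementations you're comparing:

import time

def func_a():
    pass  # first candidate implementation (placeholder)

def func_b():
    pass  # second candidate implementation (placeholder)

start = time.perf_counter()
for _ in range(500):
    func_a()
mid = time.perf_counter()
for _ in range(500):
    func_b()
end = time.perf_counter()

print("func_a:", mid - start, "func_b:", end - mid)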

Of course there's a difference between optimization and just writing better code. If your code is not well written, it will run slowly. For example, if you have to iterate through every object, make sure you only do it once.

When you make a scene manager but only have one function which uses the data, what you’re really doing is this:

enemy_list = []
for ob in scene.objects:
    if ob.get("enemy"):
        enemy_list.append(ob)
for enemy in enemy_list:
    do_something(enemy)

When instead you could do this:

for ob in scene.objects:
    if ob.get("enemy"):
        do_something(ob)

Which is faster, because it avoids building and looping over an intermediate list.

If you want to sort the list (by distance from the player, for example), creating a master list can be useful, but only if you can be sure it won't change during the frame.
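
For example, something like this, assuming a "player" object and an "enemy" property (getDistanceTo() is the BGE distance helper):

import bge

scene = bge.logic.getCurrentScene()
player = scene.objects["player"]  # illustrative object name

# build the master list once, then sort it by distance to the player
enemies = [ob for ob in scene.objects if ob.get("enemy")]
enemies.sort(key=lambda ob: ob.getDistanceTo(player))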

Another thing to remember is to remove, or comment out, print() statements when checking the speed of a function. You might be depressed that a function is running slowly and ruining your game, when it's possible that removing the print() calls will solve the problem.

What I'm missing is why you need this. What do you want to achieve?

If you have performance problems with the number of objects, you either have too many objects or you use an organization that is inefficient for your needs (e.g. consider caching search results, using trees, using efficient search algorithms, …).
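
Caching a search result can be as simple as only rebuilding the list when something actually changes; this is just a sketch, with get_enemies() and the "enemy" property as made-up names:

_enemy_cache = None

def get_enemies(scene, refresh=False):
    """Return the cached enemy list, rebuilding it only when asked to."""
    global _enemy_cache
    if _enemy_cache is None or refresh:
        _enemy_cache = [ob for ob in scene.objects if ob.get("enemy")]
    return _enemy_cache

You would pass refresh=True whenever an enemy spawns or dies, and reuse the cached list the rest of the time.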

I think that kind of testing has its place (in the actual optimization effort), but I don’t think it should be used to drive system design, or to decide on optimization priorities: The deltas in performance, between multiple implementations of the same algorithm (even if significant), are only relevant if the most straightforward implementation is confirmed to be a bottleneck. Until then, one should simply use the most straightforward implementation.