Python code slows down with increasing iterations

Hey Community, great to know that you are out there - this is my first contact with Blender and its Python API, and after some time reading and playing around, I want to do something useful with it.

Since I am a physicist working in the field of surface science…wouldn’t it be nice to get some mind blowing graphics out of some models?

So most of the time I want to have a surface of, let's say, Cu displayed with some molecules on it. Thanks to Patrick Fuller, who wrote a nice piece of code to import molecular structures into Blender, importing them is easy.
The harder issue is modelling the surface. The brute-force attempt with some simple Python loops results in horribly slow creation times, which get worse the more atoms (UV spheres) are added. I read through some issues regarding wrong operator usage (the scene gets updated every time, which costs time), but that doesn't fit my code.

Improvements on the code which I’m not sure how to tackle:

– Reduce the runtime of the Python code in Blender and reduce memory usage!

– Since I create several layers of Cu substrate atoms, I could create one slab and duplicate it, rather than creating every single atom again.
– Maybe don't copy the spheres but make a linked duplicate to save memory? Not sure how to do this properly (see the sketch after this list).
– Nice feature … create a slab that follows the close-packed directions of the substrate (there are three of them, rotated 60° to each other).
– Nice feature … create not only (111) surfaces, but also (100) and mixed surfaces with steps.
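From what I read, a linked duplicate in the Python API just means copying the object without copying its mesh data, so all copies share one mesh - a minimal, untested sketch of what I mean (sphere, x, y, z are placeholders for the template sphere and the atom position):

atom = sphere.copy()                      # new object with its own transform
# note: sphere.data is NOT copied, so atom.data still points to the same mesh
atom.location = (x, y, z)
bpy.context.scene.objects.link(atom)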

Code follows in the next post and is attached… substrate_to_blender_temp.pdf (10.5 KB)
Please have a look and post your thoughts.

Thanks for your interest

Python script attached as .pdf (just open it in a text editor… it's not a PDF but a text file… I could not upload text/script files by definition)

Please note that I read about an issue with calling bpy.ops.* functions: they may update the whole scene after they run, and that update takes a long time… especially when there are 10,000 spheres to update.

This is why there is only a single bpy.ops call to create the sphere primitive in my script; the atoms themselves are made with .copy().


#!/usr/bin/env python

import bpy
from math import sin,cos,sqrt
import json
import os, time

PATH = os.path.dirname(os.path.realpath(__file__))

with open(os.path.join(PATH, "atoms.json")) as in_file:
    atom_data = json.load(in_file)

###############################################################
def draw_substrate():

    scale = 1            # scales the atomic radius - does not change the lattice constant

    a = 0.255            # nearest-neighbour distance (lattice constant) in nm
    a1 = a * 3.20432     # helper length used for the spacings below

    # Spacings for a hexagonal (111) layer of an fcc metal.
    # Note: cos(30) and sin(30) are evaluated in radians here; the factors
    # 1.0115836 and 0.947583 compensate so that dx ~ a/2 and |dy| ~ a*sqrt(3),
    # i.e. the nearest-neighbour distance comes out as a = 0.255.
    dx = a1 * cos(30) * 1.0115836
    dy = a1 * sin(30) / sqrt(3) * 0.947583

    d = (a1 / sqrt(3)) * scale / 7    # layer spacing along z

    # width of the substrate: n x n cells per layer (two atoms per cell)
    n = 120
    layers = 3            # number of layers to draw ... >8 GB RAM for the 3rd layer with n=50...
    smooth = True
    join = True
    link_to_scene = True
    shapes = []
    verbose = True
    verbose2 = False


# Add atom primitive
    bpy.ops.object.select_all(action='DESELECT')
    bpy.ops.mesh.primitive_uv_sphere_add()
    sphere = bpy.context.object
    sphere.dimensions = [atom_data["Cu"]["radius"]* scale] * 3
    
    key = "Cu1"
    bpy.data.materials.new(name=key)
    bpy.data.materials[key].diffuse_color = (1, 0.638152, 0.252242)    #light brown
    bpy.data.materials[key].diffuse_intensity = 0.7    
    bpy.data.materials[key].specular_intensity = 0.2

    
    key = "Cu2"
    bpy.data.materials.new(name=key)
    bpy.data.materials[key].diffuse_color = (0.281, 0.183, 0.076)    #darker brown
    bpy.data.materials[key].diffuse_intensity = 0.5    
    bpy.data.materials[key].specular_intensity = 0    
    bpy.data.materials[key].use_shadeless = True    


    key = "Cu3"
    bpy.data.materials.new(name=key)
    bpy.data.materials[key].diffuse_color = (0.073, 0.049, 0.022)    #dark brown
    bpy.data.materials[key].diffuse_intensity = 0.2    
    bpy.data.materials[key].specular_intensity = 0    
    bpy.data.materials[key].use_shadeless = True    
###############################################################################################

    print("Drawing layer 1:")
    layer_start_time = time.time()
    for i in range(n):
        if verbose: print("row: ", i, " of ", n)
        row_start_time = time.time()
        for j in range(n):
            atom_sphere = sphere.copy()
            atom_sphere.data = sphere.data.copy()
            atom_sphere.location = (2*i*dx,j*dy,0)
            atom_sphere.active_material = bpy.data.materials["Cu1"]
            shapes.append(atom_sphere)

            atom_sphere = sphere.copy()
            atom_sphere.data = sphere.data.copy()
            atom_sphere.location = (2*i*dx+dx,j*dy+dy/2,0)
            atom_sphere.active_material = bpy.data.materials["Cu1"]
            # linking happens once for all shapes further down (link_to_scene);
            # linking here as well would try to link this object twice
            shapes.append(atom_sphere)
            if verbose2: print("Atom: ",j)
        row_run_time = round((time.time() - row_start_time),2)
        if verbose: print("Time for row: ", i, " in layer I is ", row_run_time)

    layer_runtime = round((time.time() - layer_start_time),2)
    print("--- Runtime: ", layer_runtime, " seconds --- for first layer")

    if layers > 1: # build second layer
        print("Drawing layer 2:")
        layer_start_time = time.time()
        for i in range(n):
            if verbose: print("row: ", i, " of ", n)
            row_start_time = time.time()
            for j in range(n):
            
                atom_sphere = sphere.copy()
                atom_sphere.data = sphere.data.copy()
                atom_sphere.location = (2*i*dx+dx,j*dy+dy/6,-d)
                atom_sphere.active_material = bpy.data.materials["Cu2"]
                shapes.append(atom_sphere)

                atom_sphere = sphere.copy()
                atom_sphere.data = sphere.data.copy()
                atom_sphere.location = (2*i*dx+dx+dx,j*dy+dy/2+dy/6,-d)
                atom_sphere.active_material = bpy.data.materials["Cu2"]
                shapes.append(atom_sphere)
                if verbose2: print("Atom: ",j)
            row_run_time = round((time.time() - row_start_time),2)
            if verbose: print("Time for row: ", i, " in layer II is ", row_run_time)

        layer_runtime = round((time.time() - layer_start_time),2)
        print("--- Runtime: ", layer_runtime, " seconds --- for second layer")

    if layers > 2: # build third layer
        print("Drawing layer 2:")
        layer_start_time = time.time()
        for i in range(n):
            if verbose: print("row: ", i, " of ", n)
            row_start_time = time.time()
            for j in range(n):
                atom_sphere = sphere.copy()
                atom_sphere.data = sphere.data.copy()
                atom_sphere.location = (2*i*dx+dx+dx,j*dy+2*dy/6,-2*d)
                atom_sphere.active_material = bpy.data.materials["Cu3"]
                shapes.append(atom_sphere)

                atom_sphere = sphere.copy()
                atom_sphere.data = sphere.data.copy()
                atom_sphere.location = (2*i*dx+dx+dx+dx,j*dy+dy/2+2*dy/6,-2*d)
                atom_sphere.active_material = bpy.data.materials["Cu3"]
                shapes.append(atom_sphere)
                if verbose2: print("Atom: ",j)
            if verbose: 
                row_run_time = round((time.time() - row_start_time),2)
                print("Time for row: ", i, " in layer II is ", row_run_time)
        layer_runtime = round((time.time() - layer_start_time),2)
        print("--- Runtime: ", layer_runtime, " seconds --- for third layer")

    if layers > 3: print("Choose 1-3 layers or modify code!")

    # Link all shapes to the scene, then smooth and join them
    if link_to_scene:
        print("linking all shapes to scene")
        start_time = time.time()
        for shape in shapes:
            bpy.context.scene.objects.link(shape)
        bpy.ops.object.select_all(action='DESELECT')
        runtime = round((time.time() - start_time),2)
        print("--- Runtime: ", runtime, " seconds --- for linking to scene")
    if smooth:
        print("smoothing...")
        start_time = time.time()
        for shape in shapes:
            shape.select = True
        bpy.context.scene.objects.active = shapes[0]
        bpy.ops.object.shade_smooth()    # one operator call for the whole selection
        bpy.ops.object.select_all(action='DESELECT')
        runtime = round((time.time() - start_time),2)
        print("--- Runtime: ", runtime, " seconds --- for smoothing")
    if join: 
        print("joining all shapes into one object")
        start_time = time.time()
        for shape in shapes:
            shape.select = True    
        bpy.context.scene.objects.active = shapes[0]
        bpy.ops.object.join()
        bpy.ops.object.select_all(action='DESELECT')
        runtime = round((time.time() - start_time),2)
        print("--- Runtime: ", runtime, " seconds --- for joing into single object")



###############################################################
def add_light(tx, ty, tz, style):

    scene = bpy.context.scene
    lamp_data = bpy.data.lamps.new(name="New Lamp", type=style)
    lamp_object = bpy.data.objects.new(name="New Lamp", object_data=lamp_data)
    scene.objects.link(lamp_object)
    lamp_object.location = (tx, ty, tz)
    lamp_object.select = True
    scene.objects.active = lamp_object

###############################################################
def clear_scene():
# If the starting cube is there, remove it
    if "Cube" in bpy.data.objects.keys():
        bpy.data.objects.get("Cube").select = True
    if "Lamp" in bpy.data.objects.keys():
        bpy.data.objects.get("Lamp").select = True
    if "Camera" in bpy.data.objects.keys():
        bpy.data.objects.get("Lamp").select = True
    bpy.ops.object.delete()

###############################################################
def add_camera(tx,ty,tz,rx,ry,rz,label):

    fov = 25.0
    pi = 3.14159265
    scene = bpy.context.scene
    camera_data = bpy.data.cameras.new(name=label)
    camera_data.angle = fov * pi / 180.0                # field of view in radians
    camera_object = bpy.data.objects.new(name=label, object_data=camera_data)
    scene.objects.link(camera_object)
    camera_object.location = (tx, ty, tz)
    camera_object.rotation_euler = (rx * pi / 180.0, ry * pi / 180.0, rz * pi / 180.0)
    camera_object.select = True
    scene.objects.active = camera_object

###############################################################
# Runs the method
if __name__ == "__main__":
    clear_scene()    
    print('draw substrate . . .')
    draw_substrate()
    print('add light 1')
    add_light(2,-3,10,"HEMI")
    print('add light 2')
    add_light(0,0,12,"POINT")
    print('add light 3')    
    add_light(0,-10,13,"POINT")
    print('add light 4')
    add_light(10,0,12,"POINT")
    print('add light 5')
    add_light(10,-10,12,"POINT")

    print('add camera 1')
    add_camera(3,-4,20,0,0,0,"cam1")
    print('add camera 2')
    add_camera(2,5,5,-90,180,0,"cam2")
    print('update scene')
    bpy.context.scene.update()

Running the above code with Blender 2.76b ("./blender -P /path/to/script/file") shows that every row takes longer than the one before, although the number of operations stays the same (the same number of atoms in each row).


Drawing layer 1:
row:  0  of  120
Time for row:  0  in layer I is  0.01
row:  1  of  120
Time for row:  1  in layer I is  0.03
row:  2  of  120
Time for row:  2  in layer I is  0.04
row:  3  of  120
Time for row:  3  in layer I is  0.06
row:  4  of  120
Time for row:  4  in layer I is  0.06
row:  5  of  120
Time for row:  5  in layer I is  0.08
row:  6  of  120
Time for row:  6  in layer I is  0.1
row:  7  of  120
Time for row:  7  in layer I is  0.13
row:  8  of  120
Time for row:  8  in layer I is  0.16
row:  9  of  120
Time for row:  9  in layer I is  0.2
row:  10  of  120
Time for row:  10  in layer I is  0.25
row:  11  of  120
Time for row:  11  in layer I is  0.31
row:  12  of  120
Time for row:  12  in layer I is  0.37
row:  13  of  120
Time for row:  13  in layer I is  0.43
row:  14  of  120
Time for row:  14  in layer I is  0.49
row:  15  of  120
Time for row:  15  in layer I is  0.55
row:  16  of  120
Time for row:  16  in layer I is  0.62
row:  17  of  120
Time for row:  17  in layer I is  0.68
row:  18  of  120
Time for row:  18  in layer I is  0.73
row:  19  of  120
Time for row:  19  in layer I is  0.79
row:  20  of  120
Time for row:  20  in layer I is  0.85
[...]
Time for row:  116  in layer II is  17.42
row:  117  of  120
Time for row:  117  in layer II is  17.49
row:  118  of  120
Time for row:  118  in layer II is  17.55
row:  119  of  120
Time for row:  119  in layer II is  17.6

read this

anything higher than a few 1000 and it will get slower in any case!

might take a few minutes and not seconds to do if higher than 10,000!

it's the nature of Python and Blender
and yes, bpy ops are the slowest of all but very powerful and flexible!

happy bl

Hey RickyBlender, thx for the comment - had no time to answer.

Read this one, and of course scene updating through operators is a stupid idea in for loops, so I am using .copy() on the IDs, but I guess each time a new atom is copied I still call an operator through sphere = bpy.context. … I am not sure where to read up on this, if it is documented anywhere.

Yep, I live with the 8 min creation time once I run the script - thought of some kind of hyperthreading/multithreading idea for the for loops :slight_smile:

…but working with this bunch of objects is slow in the viewport… would it improve if I could change their visibility?

I have worked on the script since then and will post updated code soon; got several seconds saved through array modifiers :smiley:
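Roughly what I mean with the array modifiers - a minimal sketch that tiles one low-poly sphere into an n x n grid with two modifiers (the shifted second sublattice of the hexagonal layer would need a second sphere with the same modifiers, and the numbers are just my Cu values):

import bpy

n = 120          # atoms per direction
a = 0.255        # nearest-neighbour spacing in nm

bpy.ops.mesh.primitive_uv_sphere_add(segments=8, ring_count=5)
sphere = bpy.context.object

mod_x = sphere.modifiers.new("ArrayX", type='ARRAY')
mod_x.count = n
mod_x.use_relative_offset = False
mod_x.use_constant_offset = True
mod_x.constant_offset_displace = (a, 0.0, 0.0)                    # spacing along x

mod_y = sphere.modifiers.new("ArrayY", type='ARRAY')
mod_y.count = n
mod_y.use_relative_offset = False
mod_y.use_constant_offset = True
mod_y.constant_offset_displace = (0.0, a * 3 ** 0.5 / 2, 0.0)     # row spacing along y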

using duplivert or instancing, if possible, should improve things

also turn off your history - you lose undo
but it will be faster

the only other way that might help would be to put more objects into the same mesh data
and then do the scene update only once when adding it
but it is more tedious to program!

that way you reduce the quantity of objects, so instead of having 10 K you might have 1/2 or 1/4 as many
then it would be a lot faster

happy bl

Isn’t that what a “join” does?
Reading into dupliverts and their Python usage…

not really!
say when you manually add a circle, you could then add another circle into the same bmesh data
and do only one bmesh update
if you can do that!

that way you lower the quantity of scene updates

happy bl
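For illustration, a rough, untested sketch of that idea with the bmesh module - every sphere is added to one bmesh and the scene only gets a single new object (segment counts, diameter and the two example positions are placeholders):

import bpy
import bmesh
from mathutils import Vector

positions = [(0.0, 0.0, 0.0), (0.255, 0.0, 0.0)]   # example atom centres

bm = bmesh.new()
for pos in positions:
    ret = bmesh.ops.create_uvsphere(bm, u_segments=8, v_segments=5, diameter=0.14)
    bmesh.ops.translate(bm, verts=ret["verts"], vec=Vector(pos))

me = bpy.data.meshes.new("Substrate")
bm.to_mesh(me)
bm.free()

obj = bpy.data.objects.new("Substrate", me)
bpy.context.scene.objects.link(obj)      # one object, one scene update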

I did a little test and the performance was very bad.


import bpy
import time


def draw_substrate():    
    # Trying to figure out how much time it takes to iterate.


    # Substrate counting was put into a linear form: 120 * 120
    # Normally your code wants total_substrates = 14400
    # but now I am testing with lower values for safety.
    # Some Tests Made: OS: Win7-64, CPU: i5-2320, MEM: 4GB
    # 5000 cubes: Take 6 seconds
    # 10000 cubes: Take 29 seconds + 5 seconds for Blender to recover
    # for i in range(total_substrates):
    #     calc_time()
    #     copy_obj = obj.copy()
    #     bpy.context.scene.objects.link(copy_obj)
    
    substrates_num = 100


    # Create the mesh template.
    bpy.ops.mesh.primitive_cube_add()
    obj = bpy.context.scene.objects["Cube"]
    
    # Start the timing
    start_time = time.time()
    total_time = 0
    def calc_time():
        nonlocal start_time, total_time
        elap_time = time.time() - start_time
        if elap_time > 1:
            total_time += elap_time
            start_time = time.time()
            print("Total Time: " + str(total_time))


    # Start the loop
    for i in range(substrates_num):
        for j in range(substrates_num):
            calc_time()


            copy_obj = obj.copy()
            bpy.context.scene.objects.link(copy_obj)


def clear_scene():
    bpy.ops.object.select_by_type(type='MESH')
    bpy.ops.object.delete()
    bpy.ops.object.select_by_type(type='EMPTY')
    bpy.ops.object.delete()


if __name__ == "__main__":  
    clear_scene()
    draw_substrate()

So my idea is to optimize the render by splitting it into different render layers. Instead of producing one image at once, you could render several (e.g. 4) images and then combine them in the compositor.

In real numbers: I create 43,200 atoms in memory in a few milliseconds, but I want to render 8 times with only 5,400 of them each time, where 5,400 items take about 7 seconds to generate.

This approach is very manageable and it won't crash Blender, since the resource usage stays low.

Tip: If you save the generated atoms in a text file, you won't have to recalculate them each time. Also you could send half of the batches to a friend and cut your production time in half. :slight_smile:
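A minimal sketch of that caching tip, assuming the atoms are stored as plain (x, y, z) tuples and using a made-up file name atom_positions.json:

import json

# positions as computed by the generator (example values)
positions = [(0.0, 0.0, 0.0), (0.255, 0.0, 0.0)]

# write once ...
with open("atom_positions.json", "w") as out_file:
    json.dump(positions, out_file)

# ... and on later runs just reload instead of recomputing
with open("atom_positions.json") as in_file:
    positions = [tuple(p) for p in json.load(in_file)]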

Some tips about rendering. The only problem is that the shadows won't interact well between the render layers, since each layer is a brand-new render. So the deal is to use pseudo-shadows (e.g. ambient occlusion) instead and avoid shadow casting for good. I bet that at the atomic scale the usual light rules do not apply, so you will be accurate either way.
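If you go the ambient-occlusion route, a small sketch for the Blender Internal world settings used by the scripts in this thread (the factor is an arbitrary starting value):

import bpy

world = bpy.context.scene.world
world.light_settings.use_ambient_occlusion = True
world.light_settings.ao_factor = 0.8              # strength, tune to taste
world.light_settings.ao_blend_type = 'MULTIPLY'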


import bpy
import time
from mathutils import Vector
from random import random


total_batches = 8
atoms_per_batch = 0
atoms = []


class Atom:
    def __init__(self):
        self.location = Vector()
        self.layer = 0
        self.material = ""
        self.batch_index = 0
        self.dist = 0.0


def generate_atom_data():
    print("Generating Data")
    
    substrates_num = 120
    layers_num = 3


    # Start the timing
    start_time = time.time()
    
    # Start the loop
    global atoms
    for l in range(layers_num):
        for i in range(substrates_num):
            for j in range(substrates_num):
                at = Atom()
                at.layer = l + 1
                # Adding some random values for testing purposes
                at.location = Vector([random() * 100, random() * 100, random() * 100])
                atoms.append(at)


    print("Added", str(len(atoms)), "atoms to the list.")
    print("Time Took:", str(time.time() - start_time))
    
    def debug_list():
        for i in atoms:
            print(i.location, i.layer)
    #debug_list()
    print()


def calculate_atom_batches():
    global atoms
    global total_batches
    global atoms_per_batch
    
    c = bpy.context.scene.objects["Camera"]    
    
    atoms_per_batch = int(len(atoms) / total_batches)
    print("Calculating Batches")
    print("Atoms:", len(atoms))
    print("Batches:", total_batches)
    print("Atoms / Batch:", atoms_per_batch)
    
    # Calculate distance from camera
    for i in atoms:
        i.dist = (c.location - i.location).length
    
    # Sort the atoms list
    from operator import attrgetter
    atoms = sorted(atoms, key=attrgetter('dist'))
    
    # Split the items virtually
    batch_index = 0
    batch_count = 0
    for i in atoms:
        i.batch_index = batch_index
        batch_count += 1
        if batch_count == atoms_per_batch:
            batch_index += 1
            batch_count = 0


    print()
    
def start_render_batches():
    print("Started Mesh Generation")   
    
    # Create the mesh template.
    def template_init():
        bpy.ops.mesh.primitive_cube_add()
        return bpy.context.scene.objects["Cube"]
    
    # Timing.
    start_time = 0
    total_time = 0
    def calc_time():
        nonlocal start_time, total_time
        elap_time = time.time() - start_time
        if elap_time > 1:
            total_time += elap_time
            start_time = time.time()
            print("Total Time: " + str(total_time))
    
    # What to do when batch ends
    def call_when_batch_ends():
        print("Batch Ended")
        # doing something else...




    # Start the batch loops.
    global atoms
    global total_batches


    for b in range(total_batches):
        print("Batch", b)


        # Refresh the scene
        clear_scene()
        # Create the object from template.
        obj = template_init()


        # Pick atoms based on their batch ID.
        items = [x for x in atoms if x.batch_index == b]
        
        # Begin timing
        start_time = time.time()
        total_time = 0


        # Start iterating through the picked items.
        for i in items:
            calc_time()


            # Copy the new object and its properties
            copy_obj = obj.copy()
            copy_obj.location = i.location
            bpy.context.scene.objects.link(copy_obj)


        # Once all items are placed in the scene
        call_when_batch_ends()


        # Force quit for debugging purposes
        if b == 0:
            break


    print()




def clear_atom_data():
    global atoms
    atoms.clear()


def clear_scene():
    bpy.ops.object.select_by_type(type='MESH')
    bpy.ops.object.delete()


if __name__ == "__main__":
    print("STARTED
")
    clear_atom_data()
    clear_scene()
    generate_atom_data()
    calculate_atom_batches()
    start_render_batches()

Thx for your reply, I'm going to test it this weekend. So your idea is to improve render times by rendering separately - shadows aren't that important, maybe just for the underlying substrate and only for the first layer; I will check the differences.

I haven't looked into this topic in detail yet; my first and nastiest problem is the slow response of the viewport, for example when I want to rotate the view, align cameras, put other stuff in the scene, delete spare atoms, etc.

Btw, updated code: => https://github.com/getarun/blender-chemicals
In the folder structure you find my generating scripts for hexagonal boron nitride/graphene and for the substrate (fcc metals, (111)).

on the given site with pics
I can see different molecules
so I'm not certain what the final thing is

but if there are repeating shapes/patterns you could use instancing with duplivert or dupliface
which should be faster in the viewport

happy bl
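A minimal, untested sketch of the duplivert idea for the 2.7x API - one real sphere is parented to a mesh whose vertices mark the atom sites, and Blender instances it on every vertex (the three positions are placeholders):

import bpy

scene = bpy.context.scene

# mesh whose vertices mark the atom positions
positions = [(0.0, 0.0, 0.0), (0.255, 0.0, 0.0), (0.51, 0.0, 0.0)]
me = bpy.data.meshes.new("AtomPositions")
me.from_pydata(positions, [], [])
parent = bpy.data.objects.new("AtomGrid", me)
scene.objects.link(parent)

# one real sphere, instanced on every vertex of the parent
bpy.ops.mesh.primitive_uv_sphere_add(segments=8, ring_count=5)
sphere = bpy.context.object
sphere.parent = parent
parent.dupli_type = 'VERTS'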

Hi,
I noticed that you’re using a default UV sphere for your atoms, but because my computer is a bit slow I almost always reduce the polygon count on my spheres i.e.
bpy.ops.mesh.primitive_uv_sphere_add(segments=10, ring_count=6)

should lower your polygon count below the defaults (32 and 16), while still giving passable spheres (with smooth shading).

One other option I've been looking at recently is using OpenGL (GLSL shading) to get render times down, so that I can render animations in a reasonable time. If you are not already doing so, try changing from Multitexture to GLSL in the Shading section of the right-hand panel (N key) of the 3D viewport.
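If you prefer to flip the same settings from a script, this should be roughly equivalent (a sketch for the 2.7x API, untested):

import bpy

# use GLSL materials in the viewport
bpy.context.scene.game_settings.material_mode = 'GLSL'

# switch every 3D View to Material shading (same as the drop-down in the N panel)
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        area.spaces.active.viewport_shade = 'MATERIAL'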

@Kauranga: Great ideas; I thought about reducing the level of detail in the viewport with this option, but I didn't check the render result.

@RickyBlender: Will look into your scripts in more detail during the weekend :slight_smile: The problem is not the molecules with ~150 atoms each, but the substrate with ~4k atoms.

Is there a way to reduce the level of detail only in the viewport and not in the rendered objects?

look into the render panel
for the viewport it is the preview setting

in my script I create the spheres at the beginning and then simply copy them in the for loop

happy bl

GLSL shading only applies to the (current?) viewport, you can switch to Cycles when you want to render.

Reducing the level of detail only in the viewport sounds like a job for a subdivision surface modifier? You can get a passable sphere with 8 segments and 5 rings (40 faces / 64 tris), then add a subsurf modifier, but set the view level to 0 and the render level to 1 or 2. Make sure to set smooth shading. You can reduce the segments and rings further, but the resulting "sphere" gets a bit uneven (lumpy). I don't know how the viewport will cope if you have thousands of objects with subsurf on them. You might want to use linked duplicates so there is only one actual sphere mesh duplicated over your model.
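A small sketch of that setup for the 2.7x API (segment/ring counts as above, view level 0 and render level 2 as suggested):

import bpy

bpy.ops.mesh.primitive_uv_sphere_add(segments=8, ring_count=5)
sphere = bpy.context.object

# low detail in the viewport, higher detail only at render time
mod = sphere.modifiers.new("Subsurf", type='SUBSURF')
mod.levels = 0
mod.render_levels = 2

# smooth shading on the base mesh
for poly in sphere.data.polygons:
    poly.use_smooth = True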

I switched to a ramdisk and viewport improves :slight_smile:

@kauranga: yep, will implement that, be patient, updates will follow, plus a comparison
@RickyBlender: haven't looked into your code yet, but atm I copy my sample spheres (created at the beginning) to every atom's location. I will not use .copy() anymore, but will switch to linked duplicates as kauranga suggests.

you won't be able to change the colour/material per object with linked objects!
but it might work if they all use the same material

and .copy(), if I remember well, is a link I think
look at my script for adding a new sphere copy

happy bl

@Kauranga:
okay, my first idea was to apply bpy.ops.object.duplicate(linked=True, mode='TRANSLATION'), but it takes several seconds for the first 50 atoms (an implied scene update()?). And I have no idea how to replace those .copy() calls with something like .move_linked() (but I can't find such functions :confused: )


bpy.ops.object.select_all(action='DESELECT')
bpy.ops.mesh.primitive_uv_sphere_add(segments=8, ring_count=5)
sphere = bpy.context.object
bpy.ops.object.modifier_add(type='SUBSURF')
bpy.ops.objects["Sphere"].modifiers["Subsurf"].levels = 1

results in an error


bpy.ops.objects["Sphere"].modifiers["Subsurf"].levels = 1
TypeError: 'BPyOpsSubMod' object is not subscriptable

same with bpy.ops.objects["Sphere"].modifiers["Subsurf"].show_viewport = False … wrong way of accessing it, I guess
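Presumably the modifier has to be reached through the object datablock rather than through bpy.ops - something like this untested sketch:

sphere = bpy.context.object              # or bpy.data.objects["Sphere"]
mod = sphere.modifiers.new("Subsurf", type='SUBSURF')   # no modifier_add operator needed
mod.levels = 1
mod.show_viewport = False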

@RickyBlender:
Finally checked your code… took me some time to find the code within the blend file… nice program, this Blender :slight_smile:


if -5 < mag <= 1:
    col_name = "red1"
#   setMaterial(bpy.context.object, red1)

    ob8 = spherered1.copy()
    ob8.location = (XC, YC, ZC)
    bpy.context.scene.objects.link(ob8)

So we do it the same way…

Don't need to worry about the copied spheres' properties; they are all the same except for the location, of course.

Generally speaking (not aimed at Kauranga personally): this was a reason I used cubes in my example - if a subdivision modifier is added to a cube, it becomes like a sphere, so tons of geometry can be avoided.

There might also be a case for using only Empty objects and treating them as proxy objects. But unfortunately I don't know any more details on this.

Is it this ramdisk? https://www.youtube.com/watch?v=vcrjpog0g3k Wow! I really need to try this out. :slight_smile:

Another important topic: if someone can figure out a clever way of visibility testing for each cube, tons of cubes can be discarded. The only approach that comes to mind is raycasting: if the raycast is interrupted, the object should be hidden. You could also do several raycasts per object (against its bounding box) to make sure an object isn't only partially hidden.
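A very rough sketch of that raycast idea, written against the 2.76 scene.ray_cast(start, end) call, which returns (result, object, matrix, location, normal) - the signature changed in 2.77+, so treat this as untested pseudocode:

import bpy

scene = bpy.context.scene
cam = scene.camera

def roughly_visible(obj):
    # cast a ray from the camera to the object's centre and check what it hits first
    result, hit_obj, matrix, location, normal = scene.ray_cast(cam.location, obj.location)
    return (not result) or hit_obj == obj

for obj in scene.objects:
    if obj.type == 'MESH' and not roughly_visible(obj):
        obj.hide = True          # hide in the viewport
        obj.hide_render = True   # and at render time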

Also, taking the cube-camera distance into account, you could define the subdivision detail of each cube. For a static image this works fine. For animation it might also work: I did a silly test and it worked fine (added two keyframes to modifiers["Subsurf"].render_levels and rendered). In order to recalculate the subdivision level on each frame you will need to use this technique: https://www.blender.org/api/blender_python_api_2_61_release/bpy.app.handlers.html
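A tiny sketch of that handler approach (the "Subsurf" modifier name and the distance threshold are just assumptions):

import bpy

def update_subsurf_levels(scene):
    cam = scene.camera
    for obj in scene.objects:
        mod = obj.modifiers.get("Subsurf")
        if mod is None:
            continue
        dist = (obj.location - cam.location).length
        mod.render_levels = 2 if dist < 20.0 else 1   # arbitrary threshold

bpy.app.handlers.frame_change_pre.append(update_subsurf_levels)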