Efficiently creating very large meshes

Hello,

I’m currently trying to write an import script that takes a map of a voxel landscape and creates a 3D model from it.
As a first, rudimentary setup to see whether it works, I created a cube for every wall tile and a plane for every floor tile. The relevant code looks like this:

# 'tiles', 'bigX', 'bigY', 'bigZ' and the Tile constants come from the map import code
for t in tiles:
    if t.type in (Tile.WALL, Tile.FLOOR):
        locx = bigX + t.x
        locy = bigY + t.y
        if t.type == Tile.WALL:
            bpy.ops.mesh.primitive_cube_add(location=(locx * 2, locy * 2, bigZ * 2), enter_editmode=True)
        else:
            bpy.ops.mesh.primitive_plane_add(location=(locx * 2, locy * 2, bigZ * 2 - 1), enter_editmode=True)

The map I tested this on had a total of ~20,000 wall and floor tiles, and importing it took at least 15 minutes. Of course there is much room for improvement (e.g. redundant faces and vertices where two walls are adjacent), but even then it seems like it would take quite long, especially since these maps can easily be much larger. Also note that these maps do not only contain walls and floors; I just used those for testing.

Now, my question is: what would be the fastest way to automatically create these kinds of meshes?
Does the enter_editmode argument maybe cause significant overhead? Are the primitive_xxx_add operators inefficient for some reason? Would it help to split the mesh into multiple objects? Should I first compute all the vertex coordinates and then create the mesh with a single function call? Or is it simply a technical limitation of Blender that such large meshes can’t be created quickly?

Any help is greatly appreciated.
Thanks in advance,
Jolty

EDIT: I just found out that it can’t just be the sheer number of vertices, because creating a 1000×1000 grid mesh takes less than a second.
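For comparison, such a grid boils down to building two plain Python lists and handing them to Blender in one call. A sketch of a hypothetical build_grid helper (the name, the spacing parameter and the exact layout are my own choices; only the output format, a vertex list and a face-index list as consumed by mesh.from_pydata(), is Blender’s):

```python
def build_grid(nx, ny, spacing=1.0):
    """Build vertex and quad-face lists for an nx-by-ny-cell grid.

    Returns (verts, faces): verts is a list of (x, y, z) tuples, faces a
    list of 4-tuples of vertex indices -- the layout that Blender's
    mesh.from_pydata(verts, [], faces) expects.
    """
    verts = []
    for j in range(ny + 1):
        for i in range(nx + 1):
            verts.append((i * spacing, j * spacing, 0.0))

    faces = []
    for j in range(ny):
        for i in range(nx):
            a = j * (nx + 1) + i  # index of the cell's lower-left corner
            # counter-clockwise quad: lower-left, lower-right,
            # upper-right, upper-left
            faces.append((a, a + 1, a + nx + 2, a + nx + 1))
    return verts, faces
```

Inside Blender the result would then be turned into a mesh with mesh.from_pydata(verts, [], faces) followed by mesh.update(); the list building itself needs no bpy at all.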

UPDATE:
I think I have already found a solution on my own. I used the following code:


    import bpy

    # mesh arrays
    verts = []
    faces = []

    for bla in range(20000):
        # fill verts array: the four corners of a unit quad
        for i in range(2):
            for j in range(2):
                verts.append((i, j, 0))

        # fill faces array: one quad per iteration, referencing the four
        # vertices just added (ordered so the quad winds correctly)
        A = bla * 4
        B = bla * 4 + 1
        C = bla * 4 + 3
        D = bla * 4 + 2
        faces.append((A, B, C, D))

    # create mesh and object
    mesh = bpy.data.meshes.new("wave")
    obj = bpy.data.objects.new("wave", mesh)

    # set mesh location and link the object into the scene
    obj.location = bpy.context.scene.cursor_location
    bpy.context.scene.objects.link(obj)

    # create the mesh from the Python data in a single call
    mesh.from_pydata(verts, [], faces)
    mesh.update(calc_edges=True)

It basically creates a bunch of planes on top of each other, but builds the mesh all at once instead of adding each plane with a separate operator call. Creating 2,000 was practically instant, which already takes noticeable time with the other method. 20,000 caused a short but noticeable delay, and creating 200,000 takes a few seconds, which is absolutely acceptable (Blender lags with that many faces anyway).

It would still be nice to know why the other code is so slow, and whether there is an even better solution.
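The same batched approach could also place each tile’s quad at its real map position and share corner vertices between neighbouring tiles (the redundancy mentioned at the top). A sketch of a hypothetical tiles_to_mesh_data helper, assuming tiles arrive as (x, y) grid coordinates and that tiles are 2 units wide as in the original operator code; again, only the list building is shown and no bpy is needed for it:

```python
def tiles_to_mesh_data(tile_coords, size=2.0):
    """Build deduplicated vertex and face lists for a set of floor tiles.

    tile_coords: iterable of (x, y) grid positions. Each tile becomes one
    quad of side `size`; corners shared between neighbouring tiles are
    emitted only once. Output matches mesh.from_pydata(verts, [], faces).
    """
    vert_index = {}  # (x, y) grid corner -> index into verts
    verts = []
    faces = []

    def corner(cx, cy):
        # return the index of this grid corner, creating it on first use
        key = (cx, cy)
        if key not in vert_index:
            vert_index[key] = len(verts)
            verts.append((cx * size, cy * size, 0.0))
        return vert_index[key]

    for x, y in tile_coords:
        faces.append((corner(x, y), corner(x + 1, y),
                      corner(x + 1, y + 1), corner(x, y + 1)))
    return verts, faces
```

Two adjacent tiles then produce six vertices instead of eight, since the two corners on their shared edge are reused; the resulting lists would be fed to mesh.from_pydata() exactly like in the snippet above.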

See here:
Q: Python performance with Blender operators

Ah yes, very helpful and clear explanations. I guess I was too focused on my particular problem to be able to find them with Google.

Thank you!