The program I implemented as a prototype in Python generates a synthesized reverb for an artificial room modeled in Blender3D.
It calculates a reverb characteristic for a given signal (e.g. an impulse) and a given set of data (e.g. room size, topology, position of the signal source, position of the receiver, …).
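The core idea — once the room's impulse response has been ray-traced, applying the reverb to a dry signal is a convolution — can be sketched like this. This is a minimal illustration with NumPy, assuming a toy impulse response (exponentially decaying noise); in the actual project, the impulse response would come from the Blender scene calculation.

```python
import numpy as np

def apply_reverb(signal, impulse_response):
    """Convolve a dry signal with a room impulse response."""
    wet = np.convolve(signal, impulse_response)
    # Normalize to avoid clipping in the output file
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Dry signal: a unit impulse ("click")
dry = np.zeros(1000)
dry[0] = 1.0

# Toy impulse response: exponentially decaying noise.
# (A real one would be produced by the ray-traced room model.)
rng = np.random.default_rng(0)
ir = rng.standard_normal(2000) * np.exp(-np.linspace(0.0, 6.0, 2000))

wet = apply_reverb(dry, ir)
```

For longer signals, an FFT-based convolution (e.g. `scipy.signal.fftconvolve`) would be much faster than direct convolution.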
Goals:
Multiple-source support
Native stereo support
Make it animatable
Implement different materials (e.g. parametrized EQ or diffuse scattering)
Implement it in C++ & OpenCL for faster ray tracing
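The materials goal above could be sketched as per-frequency-band absorption: each surface attenuates a ray's energy differently in each band at every bounce. This is only an illustration of the concept — the material names, band count, and coefficient values below are made up, not part of the project.

```python
# Toy material model: per-octave-band absorption coefficients
# (six bands; values are illustrative, not measured data)
MATERIALS = {
    "concrete": [0.02, 0.03, 0.03, 0.04, 0.05, 0.07],
    "curtain":  [0.10, 0.38, 0.63, 0.52, 0.55, 0.65],
}

def reflect_energy(band_energy, material, bounces=1):
    """Attenuate per-band ray energy by a material's absorption
    over a given number of reflections."""
    absorption = MATERIALS[material]
    return [e * (1.0 - a) ** bounces
            for e, a in zip(band_energy, absorption)]

# A ray starting with full energy in every band, after three
# bounces off a curtain: high bands are damped far more than lows.
energy = [1.0] * 6
after = reflect_energy(energy, "curtain", bounces=3)
```

The band energies would then drive a matching filter bank (the "parametrized EQ") when the impulse response is rendered.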
Very nice little project you have here.
I played around a little and tried creating an empty room (which leads to fewer occlusions) and lowered the scale factor from 8 to 2 (otherwise Python failed with “list index out of range” on line 97).
If I find time I will do some more testing.
Fascinating to see the power of Python in creating audio files from a physical calculation of the Blender scene. If only it were more performant…
OMG! I love the idea of visualising a recording space and a sound space for playback. Could you specify surface materials as well?
A strange example from work: clients often send me voice-over recordings made in a motor vehicle, because they think it is quiet and small. But a car interior is full of reflective surfaces and sounds overly bright. I wish I could show people how bad an idea it is, and what environment would be better (generally).