Holy crap.. Nvidia puts all of our simulations to shame.... IN REAL TIME!

This is sort of on topic: I’m running dual GTX 690s and have no problems rendering, but particle simulations are KILLING ME! I’m finding a lot of blade servers and node servers on eBay that could get me into the 32-CPU arena, but I don’t know jack about how they work, so I’m reluctant to make a purchase. Can anyone tell me: if I buy something like a PowerEdge C6100, will I be able to run my simulations on that server, bake, and then render on my desktop (the one with the 690s)? Or have I missed the concept entirely? If so, does it require multiple backflips and jumping through flaming hoops, or is it as simple as installing Ubuntu and Blender and then networking over my .blend file?

I guess I’m in a dilemma, because if I try to upgrade to a 24-core i7 setup, it will cost more than one of these older servers. I don’t even know if it’s possible. If there is a better place to ask this question, please refer me to that resource. Thanks.

Yes, I believe Blender either saves the simulation data inside the .blend file or saves it to a folder in the same directory as the .blend file. In any case, you should be able to simulate on one machine and render on another (we do it all the time).
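For what it’s worth, when a point cache is set to external (disk) cache, Blender writes it to a `blendcache_<blend file name>` folder next to the .blend, so copying the .blend plus that sibling folder back to the desktop should carry the bake with it. (You can also bake headlessly on the server with something like `blender -b scene.blend --python-expr "import bpy; bpy.ops.ptcache.bake_all(bake=True)"`.) A minimal sketch of that path convention, where `/shared/scene.blend` is just an example path:

```python
from pathlib import Path

def cache_dir_for(blend_path: str) -> Path:
    """Return the sibling directory where Blender keeps external
    point caches: 'blendcache_' + the blend file's name (no extension)."""
    p = Path(blend_path)
    return p.parent / f"blendcache_{p.stem}"

# Example (hypothetical shared path on the server):
print(cache_dir_for("/shared/scene.blend"))  # /shared/blendcache_scene
```

So after baking on the server, grab both `scene.blend` and `blendcache_scene/` and the desktop render should pick the cache up, assuming the relative layout is preserved.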


Just do your research and benchmark these servers before purchasing one. There should be benchmarks out there that will give you an accurate sense of the percentage speed increase over your current desktop. Not to mention, you will pay more for older generations of CPUs in terms of power usage, since older CPUs are much more power hungry.