Anyone tried James Yonan's "brenda" which uses Amazon instances for rendering?

Before I try this out myself, which version of Blender does the Brenda AMI have installed? Has anyone rolled their own AMI?

@rider_rebooted: so you would set “threads” to 1000 or something? Cool…

@rider_rebooted - Right now the brenda package only runs on Linux/OSX. I can’t remember the reason, but it’s probably the extra work needed to get it running on Windows. I don’t see why we wouldn’t be able to get it to work on Windows as long as the dependent libraries (boto, etc.) could be installed. Maybe there are path issues or something.

@Hoverkraft - right now there is just the one original AMI that James set up. It’s running a 2.6x version (can’t remember which one), so 2.7 files probably won’t work. The instructions to roll your own AMI are on the github page but I haven’t had time to try it yet.

@Kemmler - The instructions to modify the default files to render tiles are on the github page. You should know that the process would only render the individual tiles. It wouldn’t stitch them back into the final full-size image. You’d have to do that locally somehow.

Thanks for the answer Todd.

If the tiles are rendered at full scale but with only the respective tile visible and the rest transparent (e.g. with Border checked, Crop unchecked, and Transparent film), then you can just slap each layer/tile on top of the others, for example with ImageMagick or Photoshop.
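The overlay idea above can be sketched in plain Python. This is a toy stand-in, not real image code: nested lists play the role of pixel buffers and `None` stands in for a fully transparent pixel, so no ImageMagick or PIL is needed to see the principle.

```python
# Toy sketch of "slap each tile on top of the others": every tile is a
# full-size buffer where only its own region carries pixels; the rest is
# transparent (None). Compositing keeps the first opaque pixel found.

def composite(tiles):
    """Overlay full-size tiles; the first non-transparent pixel wins."""
    height, width = len(tiles[0]), len(tiles[0][0])
    out = [[None] * width for _ in range(height)]
    for tile in tiles:
        for y in range(height):
            for x in range(width):
                if out[y][x] is None and tile[y][x] is not None:
                    out[y][x] = tile[y][x]
    return out

# Two 2x4 "tiles": each covers half the frame, the rest is transparent.
left  = [[1, 1, None, None], [1, 1, None, None]]
right = [[None, None, 2, 2], [None, None, 2, 2]]
full = composite([left, right])
```

With real renders the same merge is a plain alpha-over per tile, which is why ImageMagick's composite operation (or Photoshop layers) handles it directly.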

@toddmcintosh I think Windows is a must as I would imagine most Blender users run it. I have to dual boot with Ubuntu now just for Brenda lol.

I’m the original author of Brenda, and I thought I’d share some of my ideas on developing a UI (I should add though that I have a fairly demanding day-job, so I probably won’t be able to contribute to the development).

Probably the ideal way to make a UI for Brenda would be to develop it as a Blender plugin. This lets you leverage all of the UI capabilities and cross-platform portability already built into Blender.

The main stumbling block right now is that Blender uses Python 3 internally, but Brenda uses Python 2 because it relies heavily on the Boto library for accessing AWS, and the current production release of Boto is Python 2 only.

There’s a new version of Boto in development, Boto 3, that will support Python 3, and it looks like the project is getting some support from Amazon as well:

https://aws.amazon.com/sdkforpython/

It’s not clear to me yet when Boto 3 will be ready for production. At a minimum, Brenda needs S3, EC2, and SQS APIs that are reasonably complete. There are some other Boto branches that claim Python 3 support, but they don’t look very well-maintained.

Getting Brenda to work on Windows

The main sticking point to getting Brenda to work on Windows is that the brenda-tool component requires an ssh client so it can log into the EC2 instances, execute commands, and capture the output. On Mac and Linux, it’s very easy for Python code to interact with the ssh client that’s bundled with the OS by default. Windows doesn’t ship with an ssh client, so it would be necessary to modify Brenda to use an external ssh client library.

It looks like libssh2 might be a good solution that can also interoperate with Python 3:

http://stackoverflow.com/questions/14367020/how-to-send-a-file-using-scp-using-python-3-2
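For readers curious what "interacting with the bundled ssh client" looks like in practice, here is a hedged sketch of shelling out to `ssh` from Python. The actual brenda-tool code may differ; the hostname, user, and key file below are illustrative.

```python
# Sketch of driving the OS ssh client from Python, the way a tool like
# brenda-tool can on Mac/Linux. On Windows this fails unless an ssh.exe
# is on PATH, which is why an ssh library would be needed there instead.
import subprocess

def build_ssh_cmd(host, remote_cmd, user="ubuntu", key_file=None):
    """Assemble the argv list for a non-interactive ssh invocation."""
    cmd = ["ssh", "-o", "StrictHostKeyChecking=no"]
    if key_file:
        cmd += ["-i", key_file]
    cmd += ["%s@%s" % (user, host), remote_cmd]
    return cmd

def run_on_instance(host, remote_cmd, **kw):
    """Execute a command on an EC2 instance and capture its output."""
    return subprocess.check_output(build_ssh_cmd(host, remote_cmd, **kw))

# Example command line (not executed here):
cmd = build_ssh_cmd("ec2-1-2-3-4.compute.amazonaws.com", "uptime",
                    key_file="brenda.pem")
```

An ssh library like libssh2 would replace the subprocess call while keeping the same "run command, capture output" shape.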

Brenda UI as a web app

There are other alternatives to developing a UI where Brenda would remain standalone and not be integrated into Blender via plugin.

For example, the Brenda client could be deployed on a low-cost on-demand EC2 instance like t1.micro and the UI could be implemented through a local web server that drives Brenda from this instance.

Will EC2 spot instances get more expensive if everyone starts rendering on Amazon?

I doubt it for the foreseeable future. The cloud is growing like crazy, and the vast majority of apps require on-demand instances, so spot capacity will continue to be heavily discounted. This is going to create a competitive niche for rendering and other kinds of compute-intensive batch processes that can play well with spot instances.

James

So I’ve got this nearly working (or so it seems) but when I try to bid I get the following error:

boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>MaxSpotInstanceCountExceeded</Code><Message>Max spot instance count exceeded</Message></Error></Errors>…

which seems strange, as I only wanted to bid on 4 instances following the tutorial.

There was a problem with my account or on amazon’s side so I contacted support and got that sorted.

I will try to set up btsync on the cloud so the rendered frames are synced back on-the-fly. When you have a couple hundred 30+ MB EXR files, the time to download all of them adds up, so I am looking at ways to receive the frames ASAP.
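An on-the-fly sync can also be approximated with a simple polling loop. The sketch below keeps the transport abstract: `list_keys()` and `fetch(key)` are hypothetical placeholders you would back with boto (e.g. listing the bucket and downloading each new key) or with btsync itself.

```python
# Sketch of an on-the-fly downloader: poll for output keys we have not
# seen yet and fetch each one as it appears, instead of one big download
# at the end. list_keys/fetch are injected so the loop stays transport-
# agnostic (boto, btsync, rsync, ...).
import time

def sync_new(list_keys, fetch, seen, rounds=1, delay=0):
    """Run `rounds` polling passes, fetching every not-yet-seen key."""
    for _ in range(rounds):
        for key in list_keys():
            if key not in seen:
                fetch(key)
                seen.add(key)
        if delay:
            time.sleep(delay)
    return seen

# Toy example: a fake "bucket" listing and a download log.
bucket = ["frame_000001.exr"]
downloaded = []
seen = sync_new(lambda: bucket, downloaded.append, set(), rounds=1)
```

In a real run you would loop with a delay until the work queue drains, so frames arrive while later frames are still rendering.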

Todd, what toolkit are you using for that interface?

James, thanks for your effort, brenda turned out really nicely.

@hoverkraft - right now I’m just using the wxPython library. I’m not sure it’s a long term solution, but it’s been good just to get something working locally.

I’ve managed to create an up-to-date AMI but I am having trouble writing file-out nodes from compositing in a way that gets picked up in the frame bucket. Any hints? Is the $OUTDIR variable in the frame template mandatory? I will try with EBS and subfolders next, maybe that will solve it.

Another question: I’ve commented out the shutdown command from brenda-conf so my instances keep running. But when I push another set of tasks they are just sitting in the queue while the nodes idle. Is there a way to give them a nudge?

@hoverkraft - I have a little experience tinkering with the output paths. For my gui app, I wanted my frame scripts to output a large image and a thumbnail so that I could load the thumbnail back into the gui app. You can’t use preset FileOutput nodes to do this because the node doesn’t yet know about the brenda output path.

The way I got it to work was to write a script that adds the FileOutput node programmatically and, while doing so, writes in the Brenda path variable like this:

fileOutNode.base_path = '$OUTDIR'

Here’s a snippet of what my frame script would look like (it renders a MultiLayer image, with a JPG written by the FileOutput node):


cat >thumbnail.py <<EOF
import bpy
import sys

class Thumbnail:

    # bulk of the script here - removed for brevity
    fileOutNode.base_path = '$OUTDIR'

EOF
blender -b *.blend -P thumbnail.py -F MULTILAYER -o $OUTDIR/frame_###### -s $START -e $END -j $STEP -t 0 -a


Now, in my scenario I wanted the entire thumbnail generation process to be decoupled from the blend file so that it could be used on any blend file.

If you wanted to set up the FileOutput node in the blend file and then just override the path string in the frame script, that might work too. You’d just have to get the instance of the FileOutput node by name or datablock and then update the base_path property.

Thanks for the help Todd, that clears things up. I tried something like this, but it didn’t occur to me at all that I could get $OUTDIR from within Blender!

@hoverkraft, well actually the point is that you can’t get $OUTDIR from within Blender initially, but you can write $OUTDIR into the Python script that the frame script writes out. That’s how I got it to work.
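What makes this trick work is that the heredoc in the frame script is unquoted, so the shell substitutes $OUTDIR before thumbnail.py is even written; Blender then just sees a literal path. A minimal Python stand-in for that substitution step (the path value here is illustrative):

```python
# Mimicking the shell's heredoc expansion: $OUTDIR in the script template
# is replaced before the Python file is written out. In the real frame
# script the shell does this; here os.path.expandvars plays that role.
import os

template = "fileOutNode.base_path = '$OUTDIR'\n"
os.environ["OUTDIR"] = "/mnt/brenda/out"   # brenda sets this on the render node
rendered = os.path.expandvars(template)
```

The file handed to `blender -P` thus contains the resolved path, which is why the FileOutput node ends up pointing at the frame bucket directory.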

Sorry, I got that mixed up with this. I find the uploading step a bit cumbersome; I usually create projects that have a couple of linked blend files. The S3 method expects a tarball, so I have to tar and upload all of the blend files even if just a single one changed? The EBS method, on the other hand, would allow replacement of single files, but you have to snapshot it every time something changes. Maybe this process could be scripted too.

On the output side the lack of subfolders in the S3 bucket is also a nuisance.

Maybe those are just user errors or can be fixed by some additional scripting. But don’t get me wrong, brenda rocks; I am just pointing out some thoughts.

@rider_rebooted or any other Windows users,
If you don’t want to do a Linux/Windows dual boot, you can try Oracle VM VirtualBox and set up a Linux virtual machine specifically to use Brenda. I was using it quite a bit for another project a few months ago; it worked perfectly, and I could still do work on my Windows machine while the VM was running in the background.

Thanks crazycourier,

I’ve actually removed the dual-boot and installed Lubuntu onto an old laptop, and I manage project files and rendered frames with S3 Browser on my main computer. I did consider using something like that, but I didn’t like the idea of it using system resources, and I wasn’t sure if I would have to install Brenda etc. every time I started it.

Sorry to keep bringing up perceived limitations, but here is another one, maybe someone knows a workaround: S3 doesn’t support subfolders (does it? I didn’t get it to work). So I would like to render, say, tasks 1-100 to bucket 01 and the rest to bucket 02. But because RENDER_OUTPUT is a variable read by brenda-run and not brenda-work, they will all end up in the same bucket, correct?

As I said, I tried writing file-out nodes to a subfolder, but they weren’t showing up on S3.

Edit: I thought of an example to clarify:
Let’s say you have a studio environment with some artists working on several shots and a boss (who is in charge of billing). The boss would decide how many instances he would like running, and the artists would submit their tasks to the queue but not start or stop instances. Like in a render farm, the nodes would pick up whatever comes down the queue.

Right now, when the queue is empty and I submit a new set of tasks it gets sent to the bucket the instances were started with.

That’s why I would like the output bucket defined on the job side, not on the server-side, because these different shots would all get mixed up and the lack of folders in S3 makes this even more difficult.

Of course I can open up a second work queue but then I submit a shot with 15 frames and the instances are running idle while they could pick up more jobs from queue one.
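One note on the "subfolders" point: S3 keys are actually flat, but a `/` inside the key name is displayed as a folder by most S3 browsers, so per-shot prefixes can do the separating. Whether brenda preserves such prefixes in its output keys is a separate question; the helper name below is hypothetical.

```python
# S3 has no real directories, but a '/' in the key name acts as a pseudo-
# folder in bucket listings. This hypothetical helper routes each frame
# under a per-shot prefix so shot01 and shot02 don't mix in one bucket.
def output_key(shot, frame, ext="exr"):
    """Build an S3 key like 'shot01/frame_000042.exr'."""
    return "%s/frame_%06d.%s" % (shot, frame, ext)

keys = [output_key("shot01", f) for f in (1, 2)] + [output_key("shot02", 1)]
```

If the frame script controlled the key prefix per task, the output-bucket problem described above would shrink to a naming convention.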

And while I’m at it, another question: if I run 25 nodes for 15 minutes, then the task is finished and they are shut down. That means I will be billed for the whole hour. If I fire up another 25 nodes now, how does that work? Will I still be billed for 1x25 instances or 2x25 instances?

How does AWS know what to do?
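Assuming classic per-instance-hour billing (partial hours rounded up, and each freshly launched instance starting its own billing hour, which is how EC2 billed at the time), the arithmetic works out like this:

```python
# Back-of-envelope for classic EC2 billing: each instance is billed per
# started hour, partial hours round up, and a newly launched instance
# starts a fresh billing hour even if an earlier one just stopped.
import math

def instance_hours(num_instances, minutes):
    return num_instances * int(math.ceil(minutes / 60.0))

first_batch  = instance_hours(25, 15)   # 25 nodes, 15 min -> 25 instance-hours
second_batch = instance_hours(25, 15)   # fresh instances -> another 25
total = first_batch + second_batch      # 2x25, not 1x25
```

So shutting nodes down after 15 minutes and launching new ones doubles the cost; keeping the first batch alive to drain a second queue would use the hour already paid for.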

Here is my question – I uploaded a tar.gz with my blend file to s3://
I uploaded a file called workQ to the same.
I submitted a run request, but it never asked for the name of the work file. Does AWS automatically realize that the file that is not a tarball is a work file?

AFAIK it HAS to be a tarball (or zip).

If the scene is set in stone and 100% finished, that is OK. But what if stuff is changing? If it’s just the main file that is changing, you can put textures and libraries on an EBS volume, snapshot it, and mount it as a subfolder. That’s a bit of work, but then you only have to update your main file. But if one of the libraries changes, then you have to redo the EBS as well. So, for example, with my current project I decided that uploading a 300 MB tarball a couple of times is faster than separating files and building a directory structure.

But I bet a lot of this can be scripted; there are scripts out there for automating EBS snapshots, but they are more geared towards backups.
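The tarball side of that scripting is straightforward with the standard library. A hedged sketch (paths and the helper name are illustrative): rebuild the project tarball only when some source file is newer than the last build, so unchanged projects skip the re-tar and re-upload entirely.

```python
# Sketch of scripting the re-upload step: re-create the project tarball
# only if any source file changed since the tarball was last built.
import os
import tarfile

def rebuild_if_stale(tarball, sources):
    """Return True if the tarball was (re)built, False if it was fresh."""
    if os.path.exists(tarball):
        built = os.path.getmtime(tarball)
        if all(os.path.getmtime(s) <= built for s in sources):
            return False                      # nothing changed, skip upload
    with tarfile.open(tarball, "w:gz") as tar:
        for s in sources:
            tar.add(s, arcname=os.path.basename(s))
    return True
```

The same mtime check could gate an EBS snapshot instead of a tarball, which would cover the library-file case too.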

If your whole project is in the tarball, what’s the reason for the workQ file?

Edit: Just had an idea: wouldn’t it be simpler to upload a couple of tarballs and be able to mount those as subdirs? Like textures.tar.gz, libraries.tar.gz, scene.tar.gz; then they can stay and you can update them as needed. It would be faster than EBS and more flexible than a single tarball.

Oh, I get it: the file to render is specified in the frame template. Say you have 3 blend files in the tarball (main.blend, lib01.blend, lib02.blend); you put

blender -b main.blend -F PNG -o $OUTDIR/frame_###### -s $START -e $END -j $STEP -t 0 -a

in the frame template.