So, today I realized I’m coming into a surplus of smartphones that are unused but not broken or completely obsolete. I was trying to think of some uses, and a smartphone-based render farm (no display, all command line) seems plausible given the speed of today’s and tomorrow’s smartphones…
I did some Google searches and came up with basically nothing. Would any smartphone/Android enthusiasts be willing to play devil’s advocate for me and explain why that is?
My first port of call in this thought exercise is the fact that Blender HAS been ported to Android tablets. But those projects haven’t really gone anywhere, and I think that’s because the “creative” portion of CGI is not well suited to tablets and smartphones. It also seems like a waste of a tablet’s or phone’s processor to put all its energy into drawing 3D objects in real time.
If we recall, Blender is also a command-line application. What is stopping someone from porting command-line Blender, or even JUST the renderer and its necessary file-storage structures, to a smartphone, and then using WiFi and a laptop/desktop/server as the “brain” to divide up and send out tasks? It’s not inconceivable to procure 50 smartphones rather cheaply (especially ones with bad ESNs/IMEIs that cannot easily be activated on any network; we only need WiFi) and have each render one frame.
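The “brain” side of that is simpler than it might sound: keep a queue of frame numbers and hand the next frame to whichever phone is idle. Here’s a minimal Python sketch of just that scheduling shape; the phone names and the `render_on_phone` call are placeholders I made up, standing in for the real “push scene over WiFi, wait for the image” step:

```python
from queue import Queue, Empty
from threading import Thread

def farm_frames(frame_numbers, phones, render_on_phone):
    """Hand each frame to the next idle phone; return {frame: phone}."""
    todo = Queue()
    for f in frame_numbers:
        todo.put(f)
    assignments = {}

    def worker(phone):
        while True:
            try:
                frame = todo.get_nowait()
            except Empty:
                return  # no frames left; this phone is done
            render_on_phone(phone, frame)  # placeholder for the WiFi round-trip
            assignments[frame] = phone

    threads = [Thread(target=worker, args=(p,)) for p in phones]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return assignments

# Fake renderer so the sketch runs with no phones attached.
def fake_render(phone, frame):
    pass

phones = ["phone-01", "phone-02", "phone-03"]
result = farm_frames(range(1, 11), phones, fake_render)
print(len(result))  # every frame ends up assigned to some phone
```

The nice property of a pull-based queue like this is that a slow phone simply takes fewer frames; no up-front partitioning is needed.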
If that worked…perhaps there is even the possibility of a dedicated Android ROM that strips everything unnecessary from the phone and devotes maximum resources to its function as a mini-renderer? I’m sure there ARE certain processing and memory limits, given that phones are less powerful than desktop machines. But I’ll tell you, my phone is generally faster than any computer I owned prior to 2009 or so…and I rendered MANY complicated scenes on those lower-powered machines.
If I had the chops…I would put my money where my mouth is…but I don’t really. I would love to facilitate that happening, though. I imagine that for an initial proof of concept, neither Blender nor Cycles is the place to start (although, again, we DO know porting them is possible)…
But perhaps this could be a place to begin: http://www.hxa.name/minilight/
MiniLight is a minimalist proof-of-concept global-illumination renderer that accepts specially formatted text files as scenes. Its author encourages rewrites into as many different languages as possible (about six of which are on that site). Conveniently for Android, it has already been ported to Java: https://github.com/ORBAT/MiniLightJava
Perhaps the proof of concept could start with this minimalist renderer: use a desktop server to farm out several different scenes to several different phones running a port of it. The farmed scene could then be raised in geometric complexity to verify that there isn’t an exceedingly low complexity ceiling. I am not sure about MiniLight’s capability for texturing, but for a simple proof of concept, the fact that it can output a REALLY GOOD physically lit Cornell box is as good a starting point as any?
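Since MiniLight scenes are plain text, a multi-frame animation test would just be several scene files that differ in one line (say, the camera position). A toy sketch of generating those per-frame scenes: the template below only gestures at the MiniLight text format (check the spec on the site for the real layout), and `CAMX` is a placeholder marker of my own invention, not part of any real scene file:

```python
def make_animation_scenes(base_scene, n_frames, dx=0.1):
    """Produce one scene string per frame by sliding the camera along x.
    Assumes the template marks the moving coordinate with a CAMX placeholder."""
    scenes = []
    for frame in range(n_frames):
        x = frame * dx
        scenes.append(base_scene.replace("CAMX", f"{x:.3f}"))
    return scenes

# Illustrative template only; a real MiniLight scene also carries the
# full triangle geometry of the Cornell box.
template = "#MiniLight\n10\n200 150\n(CAMX 0.75 -2) (0 0 1) 45\n"
scenes = make_animation_scenes(template, 5)
print(len(scenes))  # five frames, five scene files to farm out
```

Each resulting string would then be handed to a different phone, which is exactly the queue-and-dispatch job the desktop “brain” does.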
From there, if there is success…perhaps we could then look to Blender’s GUI-free command line, and to producing an open-source, freely distributed ROM and application devoted to Blender network rendering across smartphones…
In my own time I am certainly trying to learn the basics of writing simple Android applications…but if we have any expert Java/Android developers here…would anyone be willing to build a MiniLight proof of concept? I would be floored by a five-frame, five-smartphone-rendered Cornell box animation. And from there…
EDIT: It turns out there’s even a Blender exporter script to the MiniLight format on the page linked above. So the proof of concept can even start with Blender scenes, just in a simplified renderer, and then move on to Cycles for the coup de grâce…
TL;DR: Will someone please build an Android smartphone render farm, either with the MiniLight minimalist renderer or, ultimately, with Blender’s Cycles? If this is NOT possible or feasible…why not? Detailed explanations, please. Even if I don’t know enough to understand the reply, a detailed explanation gives me the opportunity to learn and COME to understand it.
Thank you