On 23/01/2014 at 11:51, xxxxxxxx wrote:
I wanted to float this out there and see if I get any responses. Has anyone set up a render farm for C4D in the AWS cloud before?
On 29/01/2014 at 08:13, xxxxxxxx wrote:
So I can tell by the lack of responses on this thread that no one has created a cloud-based render farm for C4D with AWS. I am actively trying to solve this problem; I will update this thread with my findings, and I'm writing a blog post about the project.
On 03/02/2014 at 01:00, xxxxxxxx wrote:
We've experimented with setting up C4D slaves on AWS, but it turned out to be quite expensive when you take into account both the cost and the time it takes to set up and sync assets between the studio and the cloud. We've ended up using Rebus Renderfarm when the in-house render farm is overbooked. I guess if you use AWS on a big scale and setup and syncing are fully automatic, you can probably compete with commercial cloud farms. But in our case, where we might just need 10-20 slaves for a day or two, it's way faster to use Rebus.
On 03/02/2014 at 07:58, xxxxxxxx wrote:
Interesting response. The reason I'm actually interested in using AWS for the cloud is the implementation through a common S3 bucket. I was working on a project that utilized a Linux-based render farm that mounted an S3 bucket on each render node and wrote all final renders to the same place.
I know that C4D's distributed render engine doesn't run on Linux, so I'd have to create the farm on Windows machines, but I can still use S3 as the common place to write files and then mount it on every render node.
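For what it's worth, the shared-output setup from that earlier Linux farm could look something like this on each node. This is a hedged sketch, not a tested config: the bucket name, mount point, and cache path are all placeholders, and it assumes the s3fs-fuse tool with credentials already in `~/.passwd-s3fs`:

```shell
#!/bin/sh
# Hypothetical setup: mount a shared S3 bucket on a Linux render node.
# Bucket name, mount point, and cache dir are placeholders; adjust to taste.
BUCKET="my-render-bucket"     # assumption: the common output bucket
MOUNT_POINT="/mnt/renders"    # every node writes final frames here

mkdir -p "$MOUNT_POINT"

# s3fs-fuse exposes the bucket as a regular filesystem.
# allow_other lets the render user (not just root) write to the mount;
# use_cache keeps a local disk cache to soften S3 latency.
s3fs "$BUCKET" "$MOUNT_POINT" -o allow_other -o use_cache=/tmp/s3fs
```

On the Windows slaves you'd need an equivalent S3 client or mapped-drive tool, since s3fs is Linux-only; this sketch only covers the Linux case that earlier farm used.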
One problem is the installation of NET Render on the Windows machines; if it were Linux, I could just use Opscode (Chef) and Ruby to assign a role to the server, and NET Render would automatically get installed. That may not be the case for Windows machines.
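On the Chef side, the recipe behind such a role could be roughly shaped like this. Everything here is a hypothetical sketch, not a tested cookbook: the installer URL, install path, and silent-install switch are invented placeholders you'd have to verify against the actual C4D installer:

```ruby
# Hypothetical Chef recipe sketch: unattended NET Render install on a Windows node.
# The download URL, paths, and /SILENT flag are assumptions, not verified values.

installer = "C:/chef-cache/c4d_netrender_installer.exe"

# Fetch the installer once and cache it locally on the node.
remote_file installer do
  source "https://example.com/c4d_netrender_installer.exe" # placeholder URL
  action :create_if_missing
end

# Run the installer silently, but only if NET Render isn't already present.
execute "install net render" do
  command "#{installer} /SILENT" # assumed silent-install switch
  not_if { ::File.exist?("C:/Program Files/MAXON/NET Render") }
end
```

Chef does run on Windows, so the pattern itself should carry over; the hard part is whether the installer supports a truly unattended mode.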
Do you have any thoughts on that, Bonsak?
On 05/02/2014 at 13:32, xxxxxxxx wrote:
I don't know much about Opscode, but on our studio farm we sync C4D installations to the slaves with a tiny rsync bash script. When we did our AWS tests we had one server and one client defined as snapshots, and then we just cloned instances from there. I think your S3 idea might work, but you still have to get your data up and down to S3. Speed is nice between EC2 and S3, but not so impressive getting data up and down to S3 from outside. We're currently using S3 only for weekly nearline backups of the production server.