TL;DR — I documented the steps that helped me set up my own rendering powerhouse on Google Compute Engine and produce over a dozen 20-megapixel interior renders in less than an hour. For free.


The Backstory

So here's my quick intro. About two months ago, I started 3D modeling with Blender. For a handful of beginner-level models, my current workhorse of a laptop, a 2018 Dell XPS 15 with an NVIDIA GTX 1050 Ti, was more than sufficient.

As I graduated to more complex models with high-res textures and intricate materials, my rendering times increased drastically. So much so that I spent most of the day just waiting for renders to finish. If I was going to get serious about 3D design, I needed something much more powerful than my XPS, but I'm no Rockefeller and was not about to shell out $2k on a high-end GPU.

I should also point out that I'm not an engineer and my coding knowledge is fairly limited. However, after a couple of days of sifting through dozens of forum threads, I managed to compile a guide (with command snippets) that gave me access to state-of-the-art rendering resources on the Google Cloud Platform (GCP).

Here's the kicker. New users on GCP receive $300 in credit to get started. In other words, you can render on something like an NVIDIA Tesla P100 effectively for free for a very, very long time.

Fine, but what happens when the free credit runs out?

Well, I'm glad you asked. Depending on your configuration, you can continue churning out massive renders for just pennies per render, or $1–2 per hour of Virtual Machine (VM) use.


Build Your Virtual Rendering Workhorse

To get started, set yourself up with a Google Cloud account. Start a new Google Compute Engine project and create a New Instance; this will become your VM. Here are some sample specs I used. You can play around with the configuration, run with 2 GPUs, or even go CPU-only.

Even though you are getting $300 credit, keep an eye on the cost estimate as you configure your VM; you should be able to stay in the $1–2/hr ballpark. Don’t mind the exorbitant monthly estimate as you will only use your VM for minutes at a time and power it down while not in use.

You may see the default image set to Debian, but let's change it to Ubuntu 16.04 LTS, and be sure to tick the "Allow HTTP traffic" checkbox.
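If you prefer working from a terminal, the same instance can be created with the gcloud CLI instead of the web console. This is just a sketch: the instance name, zone, machine type, and disk size are placeholder values I chose for illustration, the http-server tag is the CLI equivalent of ticking "Allow HTTP traffic", and GPU instances need their host maintenance policy set to TERMINATE.

gcloud compute instances create render-vm \
    --zone=us-central1-c \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=50GB \
    --tags=http-server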

Your machine should spin up in a few minutes.

Here I made 2 instances: One using just CPUs and the other using NVIDIA’s P100 GPU

If you receive an error, it's possible that your configuration is either not available in the geographic zone you selected (change the zone and try again) or that you've exceeded the default resource Quota for your account, for instance 1 GPU max or 24 CPUs max. You can edit your Quota fairly easily for a specific zone or a specific GPU type / CPU count, and the requests usually get fulfilled in minutes.

It's best to use the filters at the top of the Quotas page to edit quotas for exactly the zones and resources that you need.

Up and Running

Alright, now click on the SSH button of your active VM to pull up the command line UI in a separate window and let’s dive right in.

First things first: make sure you have the latest Ubuntu packages by running the following two commands, one at a time.

sudo apt update
sudo apt upgrade

Install GPU Drivers

Only follow the steps below if you wish to render with a GPU and you’ve configured your VM with at least one GPU. Otherwise, skip ahead to Installing Blender. If you plan on rendering with just CPUs, you should still review considerations at the end of this article.

Run the five commands below to download and install the NVIDIA GPU drivers:

curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb

sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb

sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub

sudo apt-get update

sudo apt-get install cuda

Installing may take a few minutes, so go make yourself some coffee. Once complete, you can verify the driver install and check GPU status with nvidia-smi.

You can optionally review the full documentation on getting the GPUs up and running on: https://cloud.google.com/compute/docs/gpus/install-drivers-gpu#ubuntu-driver-steps


Installing Blender

Run sudo apt install xz-utils to get the utility needed to extract the Blender install files.

Download the latest stable Blender release for Ubuntu/Linux. In my case this happens to be Blender 2.82a, which is also the same version I am using on my XPS laptop to create and save the .blend file I’ll be rendering.

wget 'https://download.blender.org/release/Blender2.82/blender-2.82a-linux64.tar.xz'

Extract and install Blender with…

sudo tar -xvf blender-2.82a-linux64.tar.xz

Then remove Blender install files…

rm blender-2.82a-linux64.tar.xz 

… and shorten the directory name to something relevant (e.g. blender282a)

mv blender-2.82a-linux64 blender282a

Lastly, install a few necessary libraries, one at a time, to make sure everything runs smoothly when you start rendering.

sudo apt-get install libglu1 

sudo apt-get install libxi6 libgconf-2-4

sudo apt-get install libxrender1

To stay organized, let's create a folder with mkdir blends to put your .blend files into. Then go into the new directory with cd blends, add another folder for the finished renders with mkdir renders, and return to the home directory with cd ~.
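Run in sequence, that housekeeping is just:

mkdir blends
cd blends
mkdir renders
cd ~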

And finally, to save yourself the hassle of battling permission errors later on when uploading your .blend file, let's loosen the permissions on the new blends directory. Running stat -c "%a %n" [your directory path] prints a permission code such as 775 or 644; depending on which user your SFTP client logs in as, that code can mean the client isn't allowed to write new files into the directory. Be sure to replace [your directory path] with the actual path of the blends directory. If you are not sure what your directory path is, you will see it in the following step when you connect to your VM over SFTP. In my case, I ran stat -c "%a %n" /home/bt/blends and got 775. Change the directory permission to 777 with sudo chmod 777 [your directory path], which will allow us to upload the .blend file via SFTP.


Connecting with SFTP

To upload your .blend file to the VM, you will need an SFTP client like FileZilla and an SSH key generator like PuTTYgen. If you are unfamiliar with these tools, don't worry; there are plenty of short YouTube videos, like this one by One Page Zen, that can quickly get you up to speed on how to work with PuTTYgen and how to use an SFTP client to connect to your VM.

Drag and drop your PACKED .blend file into the blends directory. If you get a permission error when trying to do so, it's likely that your blends directory permission needs to be changed. Run the sudo chmod 777 [your directory path] command from above, replacing [your directory path] with the path that appears in FileZilla when you click on the blends folder.

When your .blend file uploads successfully, you are ready to start rendering!
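As a side note, if you already have the Google Cloud SDK installed locally, you can skip the SFTP client entirely and copy the file over SSH with gcloud. The local path, instance name, and zone below are placeholders:

gcloud compute scp /path/to/FILENAME.blend render-vm:~/blends/ --zone=us-central1-c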


Rendering Using The CPU

At first I rendered with CPUs only (I started with the max, which was 96), since it's quicker to set up and requires less code to run. Remember to replace filename.blend with your .blend file name and renderingname## with the rendering file name of your choice (the ## is replaced with numbers starting at 01).

Now let’s run the following command to fire up the rendering.

blender282a/blender -b blends/FILENAME.blend -E CYCLES -o blends/renders/RENDERINGNAME## -f 1

Essentially it tells Blender to run in the background, use your uploaded .blend file in the blends directory, render using the Cycles engine (vs. Eevee), place the rendered image into the renders directory, and render only the first frame (relevant only when you are animating). If you'd like to learn more, you can read about Command Line Rendering here.
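For quick reference, here is what each part of that command does (these are Blender's standard command-line options):

blender282a/blender                  # the Blender binary we extracted earlier
-b blends/FILENAME.blend             # run headless ("background") and load this .blend file
-E CYCLES                            # render with the Cycles engine
-o blends/renders/RENDERINGNAME##    # output path; each # becomes a digit of the frame number
-f 1                                 # render only frame number 1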

Once complete, you should see the rendered image in the renders folder and can download it via your SFTP client (aka FileZilla).

Although rendering with 96x CPUs may get the job done, you’ll see below that it’s certainly not cost-effective and doesn’t make the best use of the resources at your disposal with GCP.


Rendering Using The GPU

If you've configured your VM with at least one GPU and installed the GPU drivers using the steps above, let's put it all to work.

First, you will need to create a small custom Python script to tell Blender to use all the GPUs available in your VM. The last thing you want is to pay for a GPU rendering powerhouse and not utilize it fully, or at all!

Open up any simple text editor like Notepad and copy/paste the code below. Save it as set_gpu.py (you can save as set_gpu.txt and rename the extension from .txt to .py afterwards).

Note that this code may change with future updates to the Blender API.

import bpy
prefs = bpy.context.preferences.addons['cycles'].preferences
devices = prefs.get_devices()
prefs.compute_device_type = 'CUDA'
devices[0][0].use = True # enable first GPU
devices[0][1].use = True # enable second GPU / device (if you have one)
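If you'd rather not hard-code device indices (which can fail or pick the wrong device when the detected list doesn't match your hardware), here is a variant that simply enables every device Cycles detects. I haven't benchmarked this version myself, so treat it as a sketch against the 2.8x API:

import bpy
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # refresh the list of detected devices
# enable everything Cycles found (the GPUs, plus the CPU entry if one is listed)
for device in prefs.devices:
    device.use = True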

Now upload set_gpu.py via your SFTP client to the VM, placing it in your home directory. You can also place the .py script elsewhere on your VM, as long as you reference it properly when running the render command below.

To initiate the GPU rendering, run the following code. Remember to replace the filename.blend and renderingname## with your .blend file and rendering file name of your choice.

blender282a/blender -P set_gpu.py -b blends/FILENAME.blend -E CYCLES -o blends/renders/RENDERINGNAME## -f 1

The major difference from the CPU command is the added -P set_gpu.py, which tells Blender to read our custom Python script in the home directory and use our GPUs for rendering.

One big advantage of GPU rendering is that it tends to be cheaper: a single GPU chip packs hundreds or thousands of (much simpler) cores, so for a highly parallel job like path tracing it can do the work of several dozen CPU cores.


The Final Step

Always, always remember to power down your VM after rendering. The last thing you want to see at the end of the month is a bill for $1k because you left your VM running after your last render.
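You can stop the instance from the Compute Engine console, or, if you have the gcloud CLI handy, with a one-liner (instance name and zone are placeholders):

gcloud compute instances stop render-vm --zone=us-central1-c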


Important Considerations

  • Note that you are saving a PACKED .blend file to ensure all your linked texture images, etc. are available during render
  • Make sure your blend files are saved with the same release of Blender as the one you are using on GCP to render. This is important to prevent any snags with unsupported features between releases.
  • Make sure the Render Properties (particularly Device and Tile Size) in your .blend file are set for your GCP resources. In other words, if you are using CPU-only rendering on GCP, your Render Device should be set to CPU and the optimal tile size is around 16×16 or 32×32. For GPU rendering, tile size should be closer to 128×128 or 256×256 (see the snippet after this list for one way to force these settings from the command line). Blender Guru explains tile sizes in detail here.
  • Future changes to the Blender API (after 2.82a) may affect/break your set_gpu.py Python script and prevent you from using your GPUs so it’s possible you may need to eventually make some tweaks to it.
  • Add sufficient resources to your VM. Even if you are rendering on a GPU, you may still need at least 4+ vCPUs and 13+ GB of memory if you don't want your machine to crash during resource-intensive tasks like compositing/denoising at the end of the rendering process.
  • You may not be able to request 96 vCPUs or even 1 GPU out of the box without first requesting a Quota increase. This can take as little as a few minutes or as long as 1–2 business days.
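On the render-device and tile-size point above: rather than relying on whatever was saved in the .blend file, you could drop a few lines like these into a second script (say force_settings.py, a name I made up) and pass it with -P after the .blend file on the command line, since scene settings only stick when the script runs after the file has loaded. The property names are from the Blender 2.8x Python API, so treat this as an untested sketch:

import bpy
scene = bpy.context.scene
scene.cycles.device = 'GPU'   # or 'CPU' on a CPU-only VM
scene.render.tile_x = 128     # larger tiles suit GPUs; try 16 or 32 for CPU rendering
scene.render.tile_y = 128

You would then run something like blender282a/blender -P set_gpu.py -b blends/FILENAME.blend -P force_settings.py -E CYCLES -o blends/renders/RENDERINGNAME## -f 1.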

So What’s The Verdict?

I won’t bore you too much with the stats, but if you are curious if any of this is truly worth the time, effort and $$$, here are a couple benchmarks from my first not-so-scientific rendering experiment. Of course, there are dozens of variables that contribute to rendering times, so please take these results with a grain of salt.

Here’s the rendering itself. It’s a 5000×4000 pixel image configured to render at 128 samples paired with a Denoiser node. To speed things up, I only used 2 light bounces, which for me showed barely any difference compared to the default 12. Lastly, let’s assume the tile sizes I used (CPU: 16×16 | GPU: 128×128) are optimal for the respective rendering methods.

Rendering on my home machine (Dell XPS 15), I got the following results:

CPU (Intel i7 8th Gen, 2.20 GHz): 42 min
GPU (NVIDIA GTX 1050 Ti Max-Q): 33 min

Now for our virtual rendering farm! Below are the three VMs I set up using the steps in this article: the first running solely on CPUs and the other two configured with one GPU and different CPU counts, along with their corresponding GCP cost and render times.

VM1: 96x N1 vCPUs, 86.4 GB memory, no GPU, $2.38/hr, 3:07 min per render ($0.12)
VM2: 8x N1 vCPUs, 30 GB memory, 1x NVIDIA Tesla P100, $1.29/hr, 4:09 min per render ($0.09)
VM3: 12x N1 vCPUs, 16 GB memory, 1x NVIDIA Tesla P100, $1.35/hr, 3:47 min per render ($0.09)
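(In case you're curious where the per-render costs come from, they appear to be the hourly rate prorated over the render time; for VM1, for example, that's $2.38/hr × (3.12/60) hr ≈ $0.12.)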

Pretty compelling, huh?


If you enjoyed this article, found it helpful or thought-provoking, please don't hesitate to drop a comment below. I'm also open to hearing about ways to improve this guide and about what others are doing in the realm of cloud rendering, especially those with limited coding knowledge. Cheers!