CORTEX in the cloud (experimental - running CORTEX on Amazon G2 instances)

Over the past year or so, a number of folks have asked about running CORTEX on virtual machines or in the cloud. The application’s dependency on NVIDIA’s CUDA libraries has always been the sticking point: most virtual machines I’ve seen can’t expose the underlying GPU hardware in a way that lets the application use it efficiently, if they can see it at all.

But today I spent a little time playing with the G2 instances on Amazon’s EC2 cloud compute infrastructure.

Officially, we require a Windows 7 Professional 64-bit operating system, but hey, none of this is really qualified yet, so I went ahead and chose a Windows Server 2012 Base AMI to get started:

Then I chose the G2 instance type:

After that, I gave it a name and followed some instructions to get connected using Remote Desktop.

I installed Google Chrome, grabbed an eval license for CORTEX, and downloaded the latest 1.5 beta installer. Installation went without a hitch, but when I first attempted to start CORTEX I got an error saying I didn’t have a sufficient GPU.

After a little googling, I came across this article, which states two things:

  1. You [must] download NVIDIA drivers from Advanced Driver Search | NVIDIA. Select a driver for the NVIDIA GRID K520 (G2 instances)
  2. In order to access your GPU hardware, you must use a different remote access tool, such as VNC.

I got the drivers installed, then tried getting RealVNC working, but I didn’t have enough time to sort out the IP address setup needed to connect, so I decided to go with TeamViewer instead.

After installing TeamViewer, I closed Remote Desktop and reconnected with TeamViewer. I launched CORTEX and it started up with no complaints.

Next, I downloaded some test footage and ran some quick and dirty tests.

Here are my results:

ProRes422HQ to DNxHD36: 45 fps
ARRIRAW to DNxHD36:     36 fps
SonyF55 RAW to DNxHD36: 24 fps
RED Epic 4K to DNxHD36: 19 fps
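To put these rates in context, they can be converted into a real-time factor. Assuming 24 fps source footage (an assumption on my part; actual source frame rates vary), anything above 1.0x transcodes faster than real time:

```python
# Convert the measured transcode rates into a real-time factor,
# assuming 24 fps source footage (actual source frame rates vary).
SOURCE_FPS = 24.0

results = {
    "ProRes422HQ to DNxHD36": 45,
    "ARRIRAW to DNxHD36": 36,
    "SonyF55 RAW to DNxHD36": 24,
    "RED Epic 4K to DNxHD36": 19,
}

for job, fps in results.items():
    factor = fps / SOURCE_FPS  # > 1.0 means faster than real time
    print(f"{job}: {factor:.2f}x real time")
```

By that measure the ProRes pass runs at about 1.88x real time, while the RED Epic 4K pass runs at about 0.79x, i.e. slightly slower than real time.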

Not nearly as fast as running on a real Z820 machine with a GeForce 780ti, but not too shabby either.

Aside from the speed, there was one other awkward thing. Running through TeamViewer, I was only able to set the display resolution to a maximum of 1280x1024. CORTEX is really designed to work best at 1920x1080 or higher. It was workable for these tests, but certainly not ideal.

That said, for running the underlying CohogRender command line transcoding application, it seems like it could work well.
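Since CohogRender is a command-line tool, renders could in principle be queued from a small script on the instance. This is only a sketch: the flag names below are hypothetical placeholders (I haven’t verified CohogRender’s actual options); the point is the batch-scripting pattern, not the exact CLI:

```python
# Sketch of driving a command-line transcoder in a batch loop on a
# headless instance. NOTE: the "CohogRender" flags below are
# hypothetical placeholders -- only the scripting pattern is the point.
import subprocess  # used by the commented-out run line below

def build_render_cmd(src, dst, codec="DNxHD36"):
    # Hypothetical argument names; substitute CohogRender's real options.
    return ["CohogRender", "--input", src, "--output", dst, "--codec", codec]

jobs = [
    ("clip001.ari", "clip001.mxf"),
    ("clip002.ari", "clip002.mxf"),
]

for src, dst in jobs:
    cmd = build_render_cmd(src, dst)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment where CohogRender is installed
```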

Definitely not yet a qualified configuration, but worth playing around with more!


Can you also set up a shared database used by two instances of CORTEX running in the cloud?
Could I have a physical machine (like my laptop) with CORTEX Enterprise working with a shared database located on an Amazon server?

Very good questions.

I imagine this is possible, but I need to research how different instances can mount shared storage for the media.

The second scenario (a local machine working against a shared database on an Amazon server) would probably be a much bigger challenge, so probably not.


Could you share some more info about how you “downloaded some test footage”? Just connected to the MTI FTP server and downloaded files to a local drive (“local” on the Cloud side)? What kind of download speeds on the Cloud machine?

Could the footage be hosted on a secure, private cloud (like Rackspace or some other cloud storage)?

I know this kind of configuration is not currently ideal and seems to be cumbersome to operate, but who knows - maybe in the future all dailies will be done this way? No big iron locally, just upload stations and consoles?

For this test, that is exactly what I did.

Hmm… didn’t make a note of that unfortunately, but I think the speeds were reasonable. I don’t remember being surprised one way or another.

I believe so. The natural choice for my tests would be to try it with Amazon’s cloud storage since I was testing on an Amazon instance. I imagine you would want your compute and storage in the same ‘cloud’.