XOCN Footage slow transcode vs Resolve

Hi, I’m a DIT working in the British film industry. Cortex isn’t part of my daily routine, but I have recently been running some tests that I thought would interest MTI.

I thought it worth bringing to MTI’s attention that transcoding from XOCN currently appears to be hobbled by the Sony debayer. Sony may have told MTI they have to use it, but not all companies appear to be in the same boat.

I have recently been using Assimilate Scratch, which is where this issue came to light. My findings, in short, are below:

(All results at 1/2-res debayer, Sony 6K XOCN ST 17:9 25fps > DNxHD 36 25fps)

Windows: [7960X, 64GB RAM, GTX 1080 Ti 11GB, storage: 2GB/s+]

  • DaVinci Resolve: 108fps
  • Assimilate Scratch: 50.12fps
  • MTI Cortex: 50fps

OSX: [MacBook Pro 2.7GHz i7, 16GB RAM, Radeon Pro 455 2GB, storage: 2GB/s+]

  • DaVinci Resolve: 28fps
  • Pomfort Silverstack: 26fps
  • Assimilate Scratch: 7fps
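
To put those rates in context, here is a rough back-of-the-envelope for a hypothetical 10-minute 25fps reel (the reel length is just an illustrative assumption; the throughput figures are the ones measured above):

    # Transcode wall-clock time for a hypothetical 10-minute 25fps reel
    frames = 10 * 60 * 25  # 15,000 frames
    for name, fps in [("Resolve, Windows", 108),
                      ("Cortex, Windows", 50),
                      ("Scratch, OSX", 7)]:
        print(f"{name}: {frames / fps / 60:.1f} min")
    # Resolve, Windows: 2.3 min
    # Cortex, Windows: 5.0 min
    # Scratch, OSX: 35.7 min

In other words, the slower debayer roughly doubles transcode time on the Windows workstation, and on the laptop it means running at well under a third of realtime (7fps against a 25fps source).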

When I spoke to Assimilate about this, they said that Sony has stated they must use the Sony SDK, which prevents them from speeding up the debayer process as they usually would. I was initially comparing against Resolve only, and assumed that Blackmagic must simply have more weight with Sony and thus be in the unfair position of being allowed to use their own decoder. However, the tests on my MBP (not the best platform, but still…) suggest that Pomfort have also been able to use their own implementation.
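
For anyone wondering why a half-res debayer should be so much cheaper in the first place: each 2x2 Bayer cell already contains one red, two green, and one blue sample, so a half-res decode can simply bin them with no spatial interpolation at all. A minimal sketch in plain NumPy (purely illustrative, assuming an RGGB mosaic with even dimensions; this is obviously not what Sony’s SDK or any vendor’s GPU path actually does):

    import numpy as np

    def half_res_debayer(raw):
        # Half-resolution debayer of an RGGB mosaic (illustrative only).
        # Each 2x2 cell collapses to one RGB pixel, so no spatial
        # interpolation is needed; hence half-res modes being cheap.
        r  = raw[0::2, 0::2].astype(np.float32)
        g1 = raw[0::2, 1::2].astype(np.float32)
        g2 = raw[1::2, 0::2].astype(np.float32)
        b  = raw[1::2, 1::2].astype(np.float32)
        return np.stack([r, (g1 + g2) * 0.5, b], axis=-1)

(The real decoders also have to decompress the XOCN bitstream before any debayering happens, which may well be where the SDK-imposed cost actually sits.)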

I am admittedly far from an experienced Cortex user, and this is based on the trial version, but I experimented with the debayer settings in the Cortex colour tab (both the MTI GPU Debayer and the Sony one) without managing to improve the results in any significant way.

Assimilate have been met with a typical Sony response of “No one else is complaining, so it’s not a priority”. My aim in bringing this to MTI’s attention is to perhaps twist Sony’s arm into giving a more even-handed response to their software partners.

Dylan

Thank you for your help and interest. I wonder if you could do a quick test? In the configuration that you used there is a setting called Source Decoding Quality. Can you tell us what it is set to? And, if you have the time, set it to “Faster” and run the test again. The default is normally “Optimized”. Your input is valued,

Larry