We have a beta build that will support the new GTX 1080 GPUs. To provide that support we have to upgrade to CUDA 8, which is only a release candidate right now; when the final version is available we will include it in Cortex. The GTX 1080 has 8 GB of memory, which is more than Cortex currently uses. The only deliverable currently encoded on the GPU is JPEG 2000. Other than that, we use the GPU for image processing and for debayering the camera formats that need it.
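If it helps, here is a minimal sketch of the kind of startup check this involves. It is not Cortex code, just an illustration of detecting a CUDA 8 runtime and a Pascal-class card (the GTX 1080 is compute capability 6.1) with the standard CUDA runtime API:

    // check_gpu.cu -- illustrative only, not Cortex's actual startup code.
    // Verifies the installed CUDA runtime is at least 8.0 and reports the
    // compute capability and memory of the first device.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int runtimeVer = 0;
        cudaRuntimeGetVersion(&runtimeVer);          // e.g. 8000 for CUDA 8.0
        if (runtimeVer < 8000) {
            std::printf("CUDA runtime version %d is older than 8.0; no GTX 1080 support\n",
                        runtimeVer);
            return 1;
        }

        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, 0);           // query the first GPU
        std::printf("GPU: %s, compute %d.%d, %.1f GB\n",
                    prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }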
Copy that. So GPU for image processing and debayer. Will stick with the 980s for now. They are on sale anyway…
How much memory does Cortex actually take advantage of now? Each nVidia GPU family comes in various memory sizes, and I'd like to pick the right one.
On a slight tangent: with the release of the new Sony SDK version that supports XOCN, rumor is that XAVC decoding may have gotten a boost as well, possibly by allowing the GPU to be used for processing. Any idea whether Cortex makes use of GPU decode for XAVC? Either way, what kind of boost?
RED is the most GPU-memory-hungry format we handle, since it decodes directly to GPU memory and we keep a number of frames cached there before they go through image processing, so for RED at full 4K resolution I would say at least 4 GB. For other formats, or for regular HD, you could get by with 2 GB. Most current NVidia GPUs have at least 4 GB, but they do still make some with only 2 GB or even 1 GB. We do debayer Sony RAW on the GPU, but that doesn't require a lot of memory like RED does.
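As a rough back-of-the-envelope, full-resolution 4K frames held on the card add up quickly. The sketch below is purely illustrative; the 16-bit RGBA layout and the cache depth are assumptions, not Cortex's actual internals, and cudaMemGetInfo just shows how you could check what a given card has free:

    // vram_estimate.cu -- rough sizing sketch; the frame format and the
    // 32-frame cache depth are assumptions for illustration only.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const size_t width = 4096, height = 2160;    // full-res 4K
        const size_t bytesPerPixel = 4 * 2;          // assume RGBA, 16 bits per channel
        const size_t cacheDepth = 32;                // assumed frames held on the GPU
        const size_t needed = width * height * bytesPerPixel * cacheDepth;

        size_t freeMem = 0, totalMem = 0;
        cudaMemGetInfo(&freeMem, &totalMem);         // current free/total device memory

        std::printf("Estimated cache footprint: %.2f GB (free: %.2f / %.2f GB)\n",
                    needed / 1e9, freeMem / 1e9, totalMem / 1e9);
        return 0;
    }

That cache alone comes to roughly 2.3 GB under these assumptions, before any decode working buffers or image-processing intermediates, which is why 4 GB is a sensible floor for full-res RED.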
As far as XAVC decoding goes, the Sony SDK doesn't make use of the GPU for decoding, so I don't know offhand whether the newest release is any faster with XAVC than the earlier ones. We use the Sony SDK to read the frames from the file and the MainConcept decoder to decode the XAVC. That decoder is CPU-based, and I know the newest release has been optimized to better handle many-core machines.
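To give a sense of what "optimized for many-core machines" usually means in practice, here is a small illustrative sketch; decodeFrame is a hypothetical stand-in, not the MainConcept API, and this is not how the actual decoder is structured, just the general pattern of fanning CPU-bound frame decoding out across all available cores:

    // cpu_decode_fanout.cpp -- illustrative only; decodeFrame() is a stand-in,
    // not the MainConcept API. Shows spreading CPU-based frame decoding
    // across all hardware threads with a simple static partition.
    #include <algorithm>
    #include <thread>
    #include <vector>
    #include <cstdio>

    // Hypothetical placeholder for a CPU-bound XAVC frame decode.
    void decodeFrame(int frameIndex) {
        std::printf("decoded frame %d\n", frameIndex);
    }

    int main() {
        const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        const int totalFrames = 240;

        std::vector<std::thread> workers;
        for (unsigned t = 0; t < cores; ++t) {
            workers.emplace_back([=] {
                // Each worker takes every cores-th frame.
                for (int f = static_cast<int>(t); f < totalFrames; f += static_cast<int>(cores))
                    decodeFrame(f);
            });
        }
        for (auto& w : workers) w.join();
        return 0;
    }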