False Color by T/Stops

Inspired by Ed Lachman’s EL Zone, using the camera as a spot meter is really interesting for us DITs.

I had a couple of issues with the EL Zone:
– The 14-stop dynamic range (I wanted the clipping points)
– The color scheme (not a fan of green for underexposed)

What’s the base for it?

To turn our picture into stops, we will base our work on the DCTL made by Diode Film. Here is his nice tutorial on it: https://www.youtube.com/watch?v=iXrR0CMFNZY
Everything is very well explained in his video; we just need to add a CST from our camera color space to linear, followed by the DCTL. Once this is done, we should have a nice False Color coded by T-stop.
We can (and will!) modify this DCTL to add stops above and below, so our clipping points land where we want them depending on the camera we’re using.
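The gist of what the DCTL does (this is my own rough sketch in Python, not Diode Film’s actual code; the mid-grey anchor, the stop range and the colors are placeholder assumptions): take the scene-linear value, convert it to stops relative to 18% grey with a log2, then paint each stop band with a flat color. Adding stops above or below is just extending the table.

```python
import math

# Placeholder palette: one RGB color per stop relative to 18% grey.
# These are NOT Diode Film's values, just an illustration of the mapping.
STOP_COLORS = {
    -3: (0.0, 0.0, 1.0),   # 3 stops under: blue
    -2: (0.0, 0.5, 1.0),
    -1: (0.3, 0.3, 0.3),
     0: (0.5, 0.5, 0.5),   # 18% grey
     1: (0.7, 0.7, 0.7),
     2: (1.0, 0.5, 0.0),
     3: (1.0, 0.0, 0.0),   # 3 stops over: red
}

def stops_from_linear(value: float, mid_grey: float = 0.18) -> float:
    """Scene-linear value -> stops above/below mid grey."""
    return math.log2(max(value, 1e-6) / mid_grey)

def false_color(value: float):
    """Quantize to the nearest stop band and return its flat color."""
    stop = round(stops_from_linear(value))
    stop = max(min(stop, max(STOP_COLORS)), min(STOP_COLORS))  # clamp to our range
    return STOP_COLORS[stop]

print(false_color(0.18))  # mid grey -> (0.5, 0.5, 0.5)
print(false_color(0.72))  # +2 stops -> (1.0, 0.5, 0.0)
```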

1D LUT VS 3D LUT

Why is a 1D LUT better than a 3D LUT in this case?
A 1D LUT (we’re going with 10-bit) is just 1024 values mapped to 1024 other values, with no interpolation. This gives us crisp edges between the stop bands, compared to a 3D LUT, which interpolates between values.
3D LUT interpolation also doesn’t work great on greys: interpolating across the cube can push neutral values slightly off axis and introduce color shifts.
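To make that concrete, here’s a tiny Python sketch of what a 1D LUT lookup amounts to, assuming a 10-bit (1024-entry) table: each code value maps straight to one stored output, nothing is interpolated.

```python
# A 1D LUT is just an array of 1024 output values, indexed by the input code value.
lut_1d = [i / 1023 for i in range(1024)]  # identity LUT as a placeholder

def apply_1d_lut(code_value_10bit: int) -> float:
    """Direct lookup: one input value -> one output value, no interpolation."""
    return lut_1d[code_value_10bit]

print(apply_1d_lut(512))  # ~0.5
```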

The input image also needs to be black and white; if it’s not, you will get color artefacts, as shown in the ISSUES section of this post.
All of this is nicely explained in the Pomfort False Color creator documentation (which uses a 1D LUT system): https://kb.pomfort.com/livegrade/color-grading-features/false-color-mode-in-livegrade/

Here’s a quick comparison of a 3D LUT vs a 1D LUT, on a shot with well-separated brightness levels. In real conditions, a 3D LUT is really hard to use for false color.

Changing the DCTL for our cameras:

This is where testing needs to be done. The base DCTL gives us 6 stops over and 6 stops under; we’re going to extend that range so our clipping points show up in our false color.
Here’s the classic dynamic range distribution for our most-used cameras (at base ISO!):

Sony Venice 1: +6 / -9
Arri ALEV3 sensors: +7.5 / -7.0
Arri ALEV4 sensors: +9.3 / -7.5

LOG SONY VENICE 1

DCTL WITH STOPS -9/+6

Sony Venice classic False color


We can see that with our modified DCTL, we’ve got our high clipping points. The low ones aren’t as precise, due to the linear encoding (there aren’t many code values at the bottom), but we can adjust our bottom stops to match the official false color if needed.

How to set it up in Resolve

If you’ve never used a DCTL before: copy the .dctl files into your Resolve LUT folder (/Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT), then restart Resolve.
Add an OpenFX DCTL and select your DCTL in the list.

NODE #1
Monochrome

NODE #2
CST to Linear
Input: Camera Colorspace / Gamma (LOG)
Output: Camera Colorspace / Linear
Tone mapping: None

NODE #3
DCTL (OpenFX)

How to export a 1D LUT?

Resolve unfortunately can’t export a 1D LUT natively, and we can’t convert a 3D LUT into a 1D LUT without losing all our colors and precision (even with tools like Lattice). Thankfully, Thatcher Freeman coded some nice Fusion Fuses that let us export a 1D LUT from the Fusion page.
Here’s how we do it. From his repository we need 2 fuses:
– LUT Cube Creator 1D : https://github.com/thatcherfreeman/utility-dctls/blob/main/Fuses/LUT%20Cube%20Creator%201D.fuse
– LUT Cube Analyzer 1D : https://github.com/thatcherfreeman/utility-dctls/blob/main/Fuses/LUT%20Cube%20Analyzer%201D.fuse
Copy the .fuse files here: ~/Library/Application Support/Blackmagic Design/DaVinci Resolve/Support/Fusion/Fuses

In Fusion, press Shift+Space to search for tools, and add the following nodes:

NODE #1
LUTCubeCreator1D: 1024

NODE #2
CST to Linear
Input: Camera Colorspace / Gamma (LOG)
Output: Camera Colorspace / Linear
Tone mapping: None

NODE #3
[[DCTL]]

NODE #4
LUTCubeAnalyzer1D: Export 1D LUT


The LUT Cube Creator generates a 1 × 1024 pixel image with a ramp from black to white, the CST converts that ramp from our camera color space to linear, our DCTL applies a color per stop of brightness, and then we export with the LUT Cube Analyzer. In the right viewer you can see the visual representation of our false color.
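For reference, the exported 1D LUT is a very simple text file: a header plus one R G B triple per entry. Here’s a rough Python sketch of writing one by hand, assuming the standard 1D .cube layout (the identity ramp is just a stand-in for the real false color values):

```python
# Minimal sketch of a 1D .cube file: a header line plus one "R G B" triple per entry.
# We write an identity ramp here; the real table would hold the false-color values.
SIZE = 1024

with open("false_color_1d.cube", "w") as f:
    f.write(f"LUT_1D_SIZE {SIZE}\n")
    for i in range(SIZE):
        v = i / (SIZE - 1)
        f.write(f"{v:.6f} {v:.6f} {v:.6f}\n")
```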

In Fusion, we can now clearly see the 1D LUT color scheme:

Links

Here’s a link to all the DCTLs/files I made. Terribly sorry for the badly written article and badly organised files.
https://drive.google.com/drive/folders/1730gMiqiSJwkM9T-QsAcB8GgY-R2t07h?usp=sharing

ISSUES:

BLACK AND WHITE INPUT
Without a black and white input, some weird color artefacts appear. You can see pink creeping into some of the colors.

What’s next?

First, try all this theory on set. Thanks to Tom from Omniscope, version 1.10.139 accepts 1D LUT imports, which can make a project like this very useful for DITs.

Making a script to streamline all this would be nice too, instead of having to edit the code and add RGB values for every stop manually.
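A first pass at that script could simply generate the per-stop branches of the DCTL from a table. This is only a sketch of the idea; the (sparse) stop table, the `stop` variable and the generated branch structure are my assumptions, not Diode Film’s actual code:

```python
# Sketch: generate the per-stop color branches of a DCTL from a simple table,
# instead of typing every RGB value by hand. The table below is a sparse placeholder.
stops = [
    # (stop relative to mid grey, (R, G, B))
    (-9, (0.50, 0.00, 0.50)),
    (-2, (0.00, 0.00, 1.00)),
    ( 0, (0.50, 0.50, 0.50)),
    ( 2, (1.00, 0.50, 0.00)),
    ( 6, (1.00, 0.00, 0.00)),
]

lines = []
for stop, (r, g, b) in sorted(stops):
    # Each entry becomes one branch testing the computed stop value.
    lines.append(
        f"    if (stop < {stop + 0.5}f) return make_float3({r}f, {g}f, {b}f);"
    )

print("\n".join(lines))  # paste into the DCTL where the stop-to-color mapping lives
```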

LENS INFO

Mamiya 645s:
– Very nice glass
– Comes either as original glass + Speed Booster, or rehoused by TLS with the speed booster already built in
– With the speed booster: quite soft edges, very dreamy (especially the 35mm and 45mm). 4–5.6 is a good spot; you can go down to 3.5 since you’re only using the good part of the glass. The 35 and 45 (1st generation, wide front element) are pretty terrible lenses.
– Without the speed booster: much better performance, very clean, but not as many lenses on the wide end. Sweet spot around 5.6; you can go to 4 depending on the lens.
– To match lenses: be careful between the Sekor C and Sekor CN lenses, they are different. Also, some lenses were made later (e.g. the 150 and the 200 2.8), so they can look quite different from the older glass.

PROJECT: LG/RESOLVE WORKFLOW

SCENE/TAKE FROM LG TO RESOLVE

On set we put the scenes/takes in the metadata; the lab or the data manager has to add that manually in LG.

This should change with LG6, which allows a CSV export to be imported into Resolve.
This CSV export has to be based on the clip name.
This system should be simple if the metadata is imported from the camera directly into LG (the right Teradeks + the right LUT box); the CSV can then match the clip name (REEL_CLIP_XXXX) without issues.
UPDATE: IT IS NOT. LG encodes its CSVs completely differently from Resolve.

AFTER TESTING (03/10/22):
Clip name doesn’t work without the metadata, but it is possible to add a ‘Reel Name’ column in Resolve by modifying the source name: in Pattern Reel Name, enter “*/%R?????????”, with ‘?’ x9 because we need to remove the ‘XXXXX’ + ‘.mxf’ from the video file name to keep just ‘REELCLIP’.
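Just to show where the 9 comes from (‘XXXXX’ is 5 characters, ‘.mxf’ is 4), here is the same trimming done in Python; the example filename is made up:

```python
import os

def reel_clip_from_source(path: str) -> str:
    """Drop the last 9 characters ('XXXXX' + '.mxf') to keep just the REELCLIP part."""
    return os.path.basename(path)[:-9]

# hypothetical file name, just to illustrate the trimming
print(reel_clip_from_source("A001C012_XXXXX.mxf"))
```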
Next test:
– On Treason, Szymon’s CSV files had the ‘shot’, ‘scene’, etc. columns: how do we add them to the Resolve CSV file? On the Bridgerton exports it didn’t work. Is that because the column was empty? (‘Only metadata fields that have been populated for at least one clip are exported and listed in this header; unused metadata fields in the Metadata Editor or Media Pool are ignored.’ cf. DR Ref manual p.355)
– Importing the CSV didn’t work when the ‘Reel Name’ column was in the same position as in the LG CSV. Since CSV files are just values separated by commas, tabs or slashes, it would make sense to match the column positions.
– Resolve CSV file = UTF-16 encoding, LG CSV file = UTF-8 encoding

TO DO:
– Manage to export a CSV file from Resolve with the SHOT, SCENE, TAKE, etc. columns
– Build a CSV file that matches the positions of REEL NAME, SHOT, SCENE, TAKE. Import it, or try to match via: https://extendsclass.com/csv-diff.html#result

PYTHON PROGRAM:
Import the LG CSV
Remove the APOSTROPHES from the CSV (otherwise matching is impossible)
Move the columns around to match the Resolve CSV template
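A minimal Python sketch of that program, using only the standard csv module; the column names, the Resolve template order and the file names are placeholders to adapt to the real files:

```python
import csv

# Column order expected by the Resolve CSV template (assumed, adjust to the real template).
RESOLVE_COLUMNS = ["File Name", "Reel Name", "Scene", "Shot", "Take"]

def convert_lg_csv(lg_path: str, out_path: str) -> None:
    # LG exports UTF-8; Resolve's own CSVs are UTF-16 (see the note above).
    with open(lg_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    with open(out_path, "w", newline="", encoding="utf-16") as f:
        writer = csv.DictWriter(f, fieldnames=RESOLVE_COLUMNS, extrasaction="ignore")
        writer.writeheader()
        for row in rows:
            # Strip the apostrophes that break the match in Resolve,
            # and keep only the columns the Resolve template knows about.
            cleaned = {k: v.replace("'", "") for k, v in row.items() if k in RESOLVE_COLUMNS}
            writer.writerow(cleaned)

convert_lg_csv("livegrade_export.csv", "resolve_import.csv")  # placeholder file names
```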


CDL FROM LG TO RESOLVE

Two options, both of which depend on the first part of this article working:

Option 1:
– If the clip name is the same, we can export the CDLs with the clip name, then ColorTrace in Resolve by clip name

Option 2:
– The CDL metadata is embedded with the clips if the CSV import works; in that case, is there a way to apply that data in Resolve as a grade? (see Python program?)