Towards Better Colors: Finalizing Pilots

This is going to be a relatively short post, as the architecture of the test was laid out in the previous one, and I’m not planning to change it very much.

I am considering using custom libraries to produce chromaticity readouts.

But not now. Resolve produces good enough charts for qualifying statements, and getting Python to work for me takes far more effort than I’m willing to put in right now.

 

Streamlining Testing Processes

A major issue I overlooked in the early pilot tests was color management. I let the raw processor do its job converting to TIFF files, which I naturally assumed carried no color space, much like the raw files themselves. They do. Apparently my raw processor outputs TIFF images in Adobe RGB.

I effectively let Resolve do its standard relative colorimetric projection from Adobe RGB to CIE XYZ.

Don’t get the early pilots wrong, though. Since they all went through the exact same processing pipeline, the results were precise in qualifying the differences, but not accurate enough for quantifying statements.

In Rev. 1.0A, it is quite obvious that a good number of the pixels sat outside the CIE 1931 horseshoe. That is not supposed to happen, usually. There are exceptions where even correctly managed profiles allow imaginary colors to exist, such as ACES AP0 and ProPhoto RGB, whose primaries sit outside the spectral locus. Either way, that should not be the case for Adobe RGB.

I later switched to ACES AP0, since it wraps the horseshoe more closely, as shown in figure 2, or Rev. 1.1A. I still used a relative colorimetric projection, so a number of pixels, especially in the blue and cyan region, still peaked at the edges.

It was not until a week or two later that I realized the TIFF files did in fact have a specific color space.

So Rev. 2.1 looked much better with a proper color space transformation applied, and all the primaries stayed inside the Adobe RGB triangle as they should.
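For the curious, the transform itself is not complicated. Below is a minimal Python sketch of the Adobe RGB to CIE xy conversion that Rev. 2.1 relies on, using the published Adobe RGB (1998) matrix and gamma. The real work happens inside Resolve, so treat this as an illustration, not the actual pipeline.

```python
import numpy as np

# Adobe RGB (1998) -> CIE XYZ (D65) matrix, from the Adobe RGB specification.
ADOBE_RGB_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])

def adobe_rgb_to_xy(rgb):
    """Map encoded Adobe RGB values (0..1) to CIE 1931 xy chromaticity."""
    rgb = np.asarray(rgb, dtype=np.float64)
    linear = rgb ** (563 / 256)              # Adobe RGB gamma, roughly 2.2
    xyz = linear @ ADOBE_RGB_TO_XYZ.T        # per-pixel matrix multiply
    s = xyz.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                          # avoid dividing by zero on black pixels
    return xyz[..., :2] / s                  # x = X/(X+Y+Z), y = Y/(X+Y+Z)

# Sanity check: the Adobe RGB green primary should land at about (0.21, 0.71).
print(adobe_rgb_to_xy([0.0, 1.0, 0.0]))
```

The printed green primary landing at about (0.21, 0.71) is the quick check that the matrix is wired up the right way around.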

I did consider using ProPhoto RGB to mitigate clipping, though I don’t think it is an immediate need; I’m planning to implement it in Process Revision 2.2. Most photographic scenes contain no pure spectral colors and are well contained within the Adobe RGB range, and the ColorChecker (Classic or Digital SG) is mainly crafted for Adobe RGB anyway. Of course switching to ProPhoto does not cost much, but for this article it just does not render enough points (pun unintended).

I tried to let dcraw and RawDigger output CIE XYZ directly. It did not work, unfortunately. I don’t foresee this being implemented until I can get my custom libraries running.

 

Switching light sources produced a small but perceivable effect. Incandescent tungsten light corrected to roughly 5500K with a Full CTB gel, paired with a 6000K xenon flash, produces an effectively full illumination spectrum and sits close enough to the D55 white point. That is pretty much the fullest illumination you can get for a reasonable price right now. There are some incrementally better options, like HMIs or short-arc xenon tubes. Their prices, though, are not very incremental.

In effect, it does raise the saturation of the green primaries compared to the results from imperfect LEDs (even the “high CRI” ones).

There are some more bug fixes and efficiency improvements that most of you would not care to hear about, and I did not care to write about here.

Testing is a tedious process; writing, too.

 

Solving a Few Mysteries

So about the spreading of the chroma primaries.

First of all, they should.

Most of what we see, color targets included, are not perfect bandpass filters. That means they pass a range of nearby frequencies around their target ones, and in most cases the perceived color remains unaltered. Photosensitive receivers, eyes and CMOS sensors alike, do not care where a photon lands on the spectrum, as long as it provokes the same amount of electrical signal in a given channel. Of course it may take more photons at the edge of each filtered channel to excite the same response, but you get the point.

Here is a picture I stole from Wikipedia. Say you wanted to see “yellow.” You could get a band of spectral signals centered around 580nm or so, but the same response can be provoked by a pair of somewhat more powerful rays at 650nm and 500nm. When they add up, you won’t be able to tell the difference.

That is metamerism.
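For anyone who wants to poke at the numbers, here is a small Python sketch using the colour-science package (my choice for convenience, not part of the test pipeline; any CIE 1931 CMF table would do). It finds the 650nm plus 500nm mixture that best reproduces the 580nm tristimulus response and prints both chromaticities. A monochromatic color sits right on the edge of the horseshoe, so a two-line mixture can only approximate it, but the exercise shows how the detector only ever sees three integrated numbers, never the spectrum itself.

```python
import numpy as np
import colour  # colour-science package, used here only for the CIE 1931 CMFs

# Tristimulus response of narrowband stimuli (CIE 1931 2-degree observer)
target = colour.wavelength_to_XYZ(580)                    # the "yellow" we want
basis = np.column_stack([colour.wavelength_to_XYZ(650),   # a red ray
                         colour.wavelength_to_XYZ(500)])  # a green-cyan ray

# Least-squares weights for the two-ray mixture that best reproduces the
# 580 nm response: a different spectrum feeding the same three channels.
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
mixture = basis @ weights

def xy(XYZ):
    """CIE 1931 xy chromaticity from XYZ tristimulus values."""
    return XYZ[:2] / XYZ.sum()

print("580 nm ray      xy:", xy(target))
print("650+500 mixture xy:", xy(mixture), "weights:", weights)
```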

So yes, the more “perfect” an illuminant is, the more likely you are to see a controlled spectral spread on the chromaticity chart. Panic not.

Obviously there are cases where it goes beyond spectral spreading, and oh man, those look distinct.

Spectral spreading is usually a nice Gaussian falloff from a center point. Aberration-induced spreads are not.

Most often, it is spherical aberration, chromatic aberration and bad anti-glare coatings being naughty. All three encourage mixing with neighboring rays, and you start to lose fidelity. Some, like bad anti-glare, affect a wide range; some, like bad sphericals, tend to stay local but have a stronger effect. Lateral stretching of the spread pattern tends to be caused by CA, which usually separates a point into two opposite colors, most often purple and green, or in rarer cases cyan and orange. These are fairly uniform across the frame, hence a lateral spread.

I have found no way, though, to quantify them. I could calculate a standard deviation around each primary’s peak, but Resolve does not spit out how many pixels exist at a given point; it only tells you whether there is information there or not. Another way of measuring spread could be assessing the height of each peak: the more concentrated a blob is, the smaller the spread, and vice versa.
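If (when) the custom libraries happen, the math itself is the easy part. Here is a minimal sketch, assuming per-pixel xy chromaticities are already available as a NumPy array, which is exactly what Resolve will not hand over; the 0.08 cutoff radius is an arbitrary number for illustration.

```python
import numpy as np

def spread_around_primary(xy, center, radius=0.08):
    """RMS spread of chromaticity samples around one primary's peak.

    xy     : (N, 2) array of per-pixel CIE 1931 xy coordinates
    center : (x, y) location of the peak to analyse
    radius : cutoff for which pixels belong to this primary (arbitrary here)
    """
    xy = np.asarray(xy, dtype=np.float64)
    center = np.asarray(center, dtype=np.float64)
    dist = np.linalg.norm(xy - center, axis=1)
    nearby = dist[dist < radius]
    if nearby.size == 0:
        return float("nan"), 0
    # Small RMS distance = concentrated blob; large = visible spread.
    return float(np.sqrt(np.mean(nearby ** 2))), int(nearby.size)

# Toy usage: fake samples scattered around the Adobe RGB green primary.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[0.21, 0.71], scale=0.01, size=(10_000, 2))
print(spread_around_primary(samples, (0.21, 0.71)))
```

The returned pixel count doubles as a crude stand-in for peak height; the RMS distance is the spread itself.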

At this point I feel like the rest is pretty straightforward. Shifts and desaturation are caused by underrepresented parts of the spectrum: once a pure color is gone, it’s gone for good, and the patch under it ends up desaturated compared to the rest.

 

What’s Next

The post pipelines should be fully ready for production at this point.

Further small adjustments are certainly possible, but I don’t see them making a huge impact either way at this point. Process Revision 2.1 is already full-fledged for good testing purposes.

I’ll just need more lens samples for a demonstration-of-concept test, which shouldn’t be too hard.
