It seems to me that, excepting JPEG shooters, Nikon has completely missed the boat on metering in the digital age. Even for JPEG the current approach is foolish: the camera could maximize exposure to minimize noise, then bias the brightness toward the prescribed 18% gray metric after that optimal exposure is made.
For example, a white or a black cat on a pile of coal will be metered as 18% gray, more or less. But the exposure ought to be as much as the sensor can handle (to minimize noise). For JPEGs, the saved image would be pulled down to the proper brightness using the 18% gray reference. For raw, the reference offset could be stored in the file, and raw processing programs could use it as the default pull value. That is the right way to do it in the digital age, or at least that behavior ought to be offered.
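The pull idea is simple arithmetic: if the camera exposed N stops over the 18%-gray metered baseline, the rendering (or the raw converter's default) divides the linear values back down by 2^N. A minimal sketch, with hypothetical names (this is not any Nikon or raw-converter API):

```python
def render_pulled(raw_linear: float, extra_exposure_stops: float) -> float:
    """Pull a linear raw value back down by the extra exposure applied
    over the 18%-gray metered baseline: halve once per stop.

    `extra_exposure_stops` is the stored reference offset proposed in
    the text; both names here are illustrative assumptions.
    """
    return raw_linear / (2.0 ** extra_exposure_stops)

# Exposed 2 stops over the metered baseline, so 18% gray landed at a
# linear value of 0.72; pulling back 2 stops restores 0.18:
print(render_pulled(0.72, 2.0))  # 0.18
```

The shadows gain the full noise benefit of the higher exposure, while the rendered brightness still matches what conventional metering would have produced.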
I always shoot raw. I do not want or need the camera to consult a database to make some exposure judgment based on the non-optimal premise of an 18% average luminance, an idea invented a century ago and still embedded in today’s digital cameras. That is nuts. I wonder if anyone at Nikon has consciously thought about the right way to do it—often inertia drives design, and “obvious” improvements are invisible to those inside the anachronism bubble.
That said, the “Live View metering” callout makes me wonder if there is something useful going on with respect to optimal ETTR (expose to the right) exposure.
I want the camera to offer a mode that maximizes the sensor dynamic range. It is already displaying 2+ megapixels on the rear LCD about 60 times a second. Check those pixels in real time, and give me a maximum exposure that does not blow out more than 0.05% (or my choice) of the pixels—that is, an ETTR exposure.
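The computation the camera would need is trivial given the live-view pixel data: find the brightness level that all but the clip budget (0.05% here) of pixels sit below, and the headroom to full scale is the exposure increase to apply. A minimal sketch under those assumptions (the function name and 8-bit scale are illustrative, not a camera API):

```python
import numpy as np

def ettr_adjustment_stops(pixels, full_scale=255, clip_budget=0.0005):
    """Suggest an exposure increase in stops such that no more than
    `clip_budget` (0.05% by default, per the text) of pixels would
    clip at `full_scale`. `pixels` is a flat array of live-view
    luminance samples; all names here are hypothetical.
    """
    # Level below which (1 - clip_budget) of pixels lie; only pixels
    # above this level are allowed to clip after the adjustment.
    threshold = np.quantile(pixels, 1.0 - clip_budget)
    if threshold <= 0:
        return 0.0  # degenerate all-black frame: no sensible suggestion
    # Stops of headroom between that level and full scale.
    return float(np.log2(full_scale / threshold))

# Example: a dim frame whose brightest pixels sit near 64/255 has
# roughly two stops of unused headroom.
rng = np.random.default_rng(0)
frame = rng.integers(0, 65, size=100_000)
print(round(ettr_adjustment_stops(frame), 2))
```

At 60 frames a second this is well within reach of the camera's existing image pipeline, which already computes histograms for the live-view display.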
It is just fine if auto-ETTR can only be done in single-frame mode, or even only in Live View mode: just make it work right already and solve this time-waster of a problem for me. That is what a good camera does: it eliminates sources of user error—hence the many years spent improving the AF system (or even having AF at all). Why does that principle not apply to image quality in the digital age, that is, to minimizing noise by maximizing exposure?
Calling this “optimal exposure” is ludicrous, given that the Nikon D810 regularly gives up 1.5 to 2.5 stops of headroom in matrix metering mode in real field conditions (based on years of experience). Consulting a database doesn’t cut it for an ETTR exposure, particularly when the color gamut approaches the limits of detail-destroying sRGB (in some cases) or even AdobeRGB.
And that leads me to yet another gripe: still no wide-gamut color space, and that means a mangled histogram whenever the color gamut is too wide for AdobeRGB.