Sony Pixel Shift: Outstanding Results, but Botched Design for Motion Correction Ignores Benefits of Frame Averaging, Increases Noise —  OOPS

re: Sony pixel shift

See pages on frame averaging and automated script in Making Sharp Images.

I’ve written before on the use of 4-shot Sony pixel shift as the simplest and fastest way to achieve outstanding frame averaging results for ultra low noise, e.g. the equivalent of ISO 25 noise from ISO 100 exposures. Half the noise is a huge difference to the eye, and greatly extends the robustness of images in post-processing.
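That halving of noise is just the statistics of averaging: N uncorrelated frames cut random noise by a factor of sqrt(N), so 4 frames give half the noise. A quick numpy sketch with synthetic numbers (a flat patch plus Gaussian noise; real sensor noise is more complex):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 4 identical exposures of a flat patch:
# constant signal plus Gaussian noise.
signal, sigma = 1000.0, 50.0
frames = signal + rng.normal(0.0, sigma, size=(4, 512, 512))

# Averaging N uncorrelated frames cuts random noise by sqrt(N),
# so 4 frames give half the noise of a single frame.
average = frames.mean(axis=0)
print(round(frames[0].std()))  # ~50
print(round(average.std()))    # ~25
```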

Noise really begins to matter in large prints, and particularly in images with smooth areas: noise makes areas that ought to be smooth look coarse, pixellated, and discolored.

Frame averaging is technically trivial to do in-camera, yet Sony not only fails to offer in-camera frame averaging, but fails to do so in Sony computer software. It boggles the mind that such low-hanging fruit, technically trivial to implement, rots unharvested.

Botched motion correction

UPDATE: the above holds, but not in the way I was thinking.

Sony does not offer frame averaging in any shape or form, though Nikon offered it years ago in the D850 and other cameras, albeit manually via “multiple exposures”. One way to achieve it on Sony is to take N frames manually (e.g. with a remote release so there is no camera movement), debayer them, then average them; there is no tool I know of that simply averages N raw files without debayering. A pixel shift capture allows something of this form with the minor extra step of aligning the frames before averaging, but it too requires debayering first. And there is no way to get the Sony camera to take N (unshifted) frames and then average them into a single raw file in-camera.

To rejigger the idea: a many-capture averaged pixel shift, whereby each of the R/G/B values is built from multiple pixel shift captures all averaged into a single raw file. Take P pixel shift captures (4 shots each), then average them to produce a single true-color RGB raw file. The repeated exposures serve to average out motion and avoid artifacts while also producing ultra low noise. With enough exposures, motion issues disappear, just as with longer exposures. And the faster it can be done, the better.
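The averaging step itself is computationally trivial, which is part of why the omission grates. A minimal numpy sketch, assuming the captures have already been debayered and aligned into same-shape RGB arrays (the function name and workflow are illustrative, not any Sony API):

```python
import numpy as np

def average_captures(frames: list[np.ndarray]) -> np.ndarray:
    """Average N aligned, debayered RGB frames into one low-noise frame.

    Accumulating in float64 avoids clipping and rounding error
    before the final result is written out.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy example: four 2x2 RGB "captures" with values 100..103.
frames = [np.full((2, 2, 3), 100.0 + i) for i in range(4)]
print(average_captures(frames)[0, 0])  # [101.5 101.5 101.5]
```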

... original post...

You can and do get outstanding results with Sony 4-shot pixel shift on the Sony A7R V.

Yet in processing another comparison series vs the Fujifilm GFX100S, what I am seeing is ugly pixellation from noise with a 2X enlargement in areas that have motion correction, such as in moving water.

The Sony Viewer software, when assembling an ARQ file from the pixel shift frames, fills in areas of motion with a single frame. This needlessly crude approach makes little sense: the noise will be 2X greater than in areas without motion correction, so the eye picks up a discontinuity, which is bad in itself, even ignoring the noise. Moreover, with 4 frames taken, no single frame is authoritative; an average is more consistent with a longer exposure, more natural.

While it takes some effort, I can dramatically improve the noisy area by painting in a frame-averaged result, using the same 4 frames that generate the pixel shift ARQ file; such a frame is generated in seconds using my automated script. Sony could do this easily in software, but fails to do so. Why? While a single frame might be preferable in some cases (for its particular rendering of motion vs some average), an option to choose a single frame vs an average would make a lot of sense.
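The paint-in step amounts to a masked substitution: wherever motion was flagged, use the frame-averaged pixels instead of the single-frame fill. A sketch of the idea in numpy (the mask source and names are hypothetical, not Sony's actual pipeline):

```python
import numpy as np

def fill_motion_with_average(assembled: np.ndarray,
                             frame_average: np.ndarray,
                             motion_mask: np.ndarray) -> np.ndarray:
    """Replace motion-flagged pixels with the frame average.

    assembled:     the pixel-shift result (single-frame fill in motion areas)
    frame_average: average of the same 4 frames, aligned to `assembled`
    motion_mask:   boolean, True where motion correction kicked in
    """
    out = assembled.copy()
    out[motion_mask] = frame_average[motion_mask]
    return out

# Toy example: one motion-flagged pixel gets the averaged value.
assembled = np.zeros((2, 2, 3))
avg = np.full((2, 2, 3), 7.0)
mask = np.zeros((2, 2), dtype=bool)
mask[0, 1] = True
result = fill_motion_with_average(assembled, avg, mask)
print(result[0, 1])  # [7. 7. 7.]
print(result[0, 0])  # [0. 0. 0.]
```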

SUMMARY: instead of using a frame averaging approach to fill in the areas of motion, Sony paints over such areas using a single frame. Not only does this mean an arbitrary fill with just one of the frames (maybe not the best one), the result is 2X noisier and creates a discontinuity in the look of the image: low noise to high noise. And that can be very noticeable under enlargement, particularly in smooth areas such as moving water.

Why does Sony botch pixel shift support in so many ways? There is no technical reason, and it is easily fixed both in camera firmware and in the computer software.

Dare we hope that Sony will wake up and fix all the counterproductive limitations and omissions?

Example

This area is not particularly noisy, but I chose it because of the artifacting as seen in PixelShift2DNG; Sony Viewer is obliged to paint-in the single-shot frame over much of this crop. If one likes the single-shot dynamism over the frame average, that is fine, but Sony should make it an option. In other cases, the noise rises too high with single-shot, and in still other cases, the frame averaging goes too soft, such as in very dark areas. Point is, there should be a choice.

Below, four results from one 4-shot pixel shift capture (4 frames as taken by the Sony A7R V).

Four results from one 4-shot pixel shift capture

Copyright © 2022 diglloyd Inc, all rights reserved.