See my Sony wishlist at B&H Photo.
The new Sony A7R IV has many appealing improvements, with Sony once again leading the technology race while also improving the ergonomics/haptics. Just how competitors can respond without still lagging behind is unclear, especially with Sony controlling the sensor market.
I will be buying the Sony A7R IV as it is a compelling upgrade in many ways. But I fear that the one feature that most interests me might turn out to be dead on arrival for field shooting: the Sony 16-Shot High-Res Mode feature.
The use of SDXC cards in the Sony A7R IV is disappointing; I vastly prefer the XQD cards used in the Nikon Z7 and Panasonic S1R. SDXC cards are just not robust, with several of my cards disintegrating with use, and the locking pin is frequently a problem too (self-locking upon insertion all too often). There are now cards without the idiotic locking pin, which are all I will buy in the future for SDXC.
How Sony 16-Shot High-Res Mode works as I understand it
Sony has done a poor job of explaining just how the 16-Shot High-Res Mode feature works. The video is a nerd thing showing how pixels and sub-pixels overlap, and it says nothing about shooting speed, inter-frame time gap (if any), card-write time, or post-processing issues such as how motion is handled. Nor whether Sony’s required computer software processing is better than its prior toy-grade implementation.
Here is what I gather, and I hope I am mistaken about some of these points:
- The Sony A7R IV makes 16 exposures using pixel shift in various ways to in effect quadruple the number of pixels captured, and with each having full RGB true-color information. In theory, wonderful quality. This is somewhat different than Panasonic S1R, which makes eight (8) exposures. So presumably Sony’s peak quality will be superior.
- The Sony A7R IV apparently stores sixteen (16) separate files on the camera card, for a whopping gigabyte or more for a single capture. That’s a striking difference in two ways: 16 separate files and 1000+ megabytes versus the single 342MB raw file of the Panasonic S1R (plus optional single-shot frame). Sony’s roughly 3X larger footprint is a huge nuisance and waste of space, and you'd better have a very fast SDXC card.
- Sony apparently does zero in-camera processing for multi-shot. Compare that to Panasonic which uses sophisticated in-camera processing to produce a single convenient raw file usable immediately in any raw converter.
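My mental model of the capture and reassembly is the following toy sketch. This is my own illustration, not Sony’s actual algorithm: it uses a grayscale stand-in and shows only the four half-pixel shifts that quadruple resolution, ignoring the Bayer color part (where four more one-pixel shifts per position would gather full RGB at each site, bringing the total to 16 frames).

```python
import numpy as np

# Toy "scene" at the final doubled resolution (grayscale to keep it simple).
rng = np.random.default_rng(0)
scene = rng.random((8, 8))  # 8x8 of "true" detail

# The sensor sees only every other sample: a 4x4 grid. Four captures, each
# shifted by half a sensor pixel (one scene sample) in x and/or y, sample
# the four interleaved sub-grids of the scene.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
captures = [scene[dy::2, dx::2] for dy, dx in shifts]

# Reassembly in software: interleave the four captures back into a
# double-resolution grid (4X the pixel count of a single capture).
out = np.empty_like(scene)
for (dy, dx), cap in zip(shifts, captures):
    out[dy::2, dx::2] = cap

assert np.array_equal(out, scene)  # perfect reconstruction -- IF nothing moved
```

The final assertion is the whole problem in miniature: the interleave is only exact if the scene holds perfectly still across every frame; any motion between captures lands in only some of the interleaved sub-grids, which is exactly the checkerboarding artifact discussed below.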
What appears to be unworkable with Sony’s implementation
With those assumptions, here are my concerns, which have many undesirable implications for both capture and processing.
1. Exposure time
Sony’s current pixel shift on the Sony A7R III interposes a minimum 1/2 second delay between exposures (adding 1.5 seconds to the total time required). In the field as a practical matter, this guarantees motion/lighting artifacts resulting in godawful checkerboarding, disgusting me so many times that I gave up on pixel shift on the Sony A7R III. And that is only 4-shot pixel shift! It is why I lauded the Panasonic S1R, which exposes its 8 frames as fast as the exposure time allows and has sophisticated in-camera generation of a single raw file.
The chance of motion/lighting artifacts for 16 vs 8 exposures is in practice not just twice as likely. It is probably 10X higher under many field conditions. I might be able to time an 8-frame exposure on the Panasonic S1R such that wind or lighting issues are minimized, but it’s drastically harder to do so over twice as long a time; lulls in wind and steady lighting do not wait for photographers. Already on the Sony A7R III, 4-frame pixel shift has proven troublesome even under ideal conditions.
With sixteen frames instead of four, any delay adds up to a lot of time in terms of subject changes, which include shadows/lighting as the sun or clouds move!
Even assuming zero inter-frame delay for 16 frames, sixteen exposures is a lengthy duration in which motion artifacts and lighting changes guarantee problems, even under ideal conditions.
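Back-of-envelope arithmetic makes the concern concrete. My own numbers: a 1/8 second exposure is assumed for illustration; the A7R III’s ~1/2 second delay comes from above; the A7R IV’s actual inter-frame delay is unknown, so both best and worst cases are shown.

```python
def total_capture_time(frames, exposure_s, delay_s):
    """Wall-clock time for a multi-shot capture: the exposures themselves
    plus the inter-frame gaps between them."""
    return frames * exposure_s + (frames - 1) * delay_s

# Sony A7R III 4-shot, ~1/2 s forced delay between frames:
print(total_capture_time(4, 1/8, 0.5))    # 2.0 s for only 0.5 s of exposing
# Panasonic S1R 8-shot, as fast as the exposure time allows:
print(total_capture_time(8, 1/8, 0.0))    # 1.0 s
# Sony A7R IV 16-shot, best case (zero inter-frame delay):
print(total_capture_time(16, 1/8, 0.0))   # 2.0 s
# Sony A7R IV 16-shot, if the A7R III's delay carries over:
print(total_capture_time(16, 1/8, 0.5))   # 9.5 s
```

Even in the best case the 16-shot window is twice as long as the S1R’s; if the A7R III’s delay carries over, it is nearly ten times longer, with every second inviting subject and lighting changes.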
Therefore, unless the Sony processing software has exceptionally sophisticated ways of dealing with motion/lighting artifacts that are far superior to the Panasonic S1R, I see no hope for field use.
2. Lack of in-camera processing, no raw file, huge file count
Sony’s 4-frame pixel shift on the Sony A7R III is a huge hassle: if I want to delete the single capture, I have to delete 4 separate files one-by-one. With 16-shot mode, will I have to delete 16 frames one-by-one?
Storage needs are problematic. The Panasonic S1R’s 342MB finished raw files are one thing, but 1000+ megabytes scattered across 16 separate files is quite another.
Perhaps 120 captures per 128GB card means buying a pocketful of expensive storage cards; fast SDXC UHS-II cards are not cheap, and that speed is important.
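The rough math behind that card-capacity estimate, with my own assumption of ~68MB per 61MP compressed raw frame (which lines up with the 1000+ megabytes per capture noted above):

```python
def captures_per_card(card_gb, frames, mb_per_frame):
    """Multi-shot captures that fit on a card (cards are sold in decimal GB)."""
    capture_mb = frames * mb_per_frame
    return int(card_gb * 1000 // capture_mb), capture_mb

# Sony A7R IV 16-shot, assuming ~68MB per frame (my guess):
print(captures_per_card(128, 16, 68))   # (117, 1088): ~117 captures, ~1.1GB each
# Panasonic S1R, one 342MB finished raw per capture:
print(captures_per_card(128, 1, 342))   # (374, 342)
```

So a 128GB card that holds roughly 374 S1R high-res captures holds only about 117 of Sony’s, each splintered into 16 files.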
Because there is no in-camera raw file produced as with the Panasonic S1R, there is no way to determine whether the final assembled image is OK in terms of focus and depth of field—this cannot be done satisfactorily at normal resolution but that’s all the A7R IV has—16 separate images at standard resolution.
I know from extensive field experience with the Panasonic S1R that I have to be able to check the full-resolution multi-shot image. With the S1R, the shot discipline demands are the hardest of anything I have ever done with any camera, and the Sony A7R IV increases that substantially over the S1R (16% more resolution, linearly). Yet Sony defeats any ability to verify the essentials of focus and depth of field by not providing any high-res image in-camera.
Not being able to review the full resolution image in camera as with the Panasonic S1R is a very serious field-usage flaw with the Sony A7R IV, bad enough on its own.
But consider that Sony’s solution is to require usage of Sony imaging software. That’s a serious headache for workflow, versus just importing into Lightroom or Photoshop. And it might be iterative, meaning, do and redo and redo and redo in the Sony software with various parameters to fix artifacts. Assuming it even can.
In the best case, Sony’s imaging software will perform brilliantly, using all CPU cores and GPU power, with sophisticated handling of the motion/lighting artifacts that accrue from field use. I am not feeling hopeful on any of those points.
It’s also possible that Adobe could do something useful, but that never happened for 4-shot pixel shift, so it seems doubtful.