Matt G writes:
You touched on this a bit in your post today. I think by far the biggest problem these days is that while lens sharpness and sensor resolution just keep getting better, focus accuracy and usability hover around the same level. This means the percentage of shots that are as sharp as they can be decreases with each successive generation of cameras.
Phase detection hardware may be operating near the limit of what can be achieved, but there are some really simple software fixes that would go a long way toward alleviating focus problems. Quality software engineering comes at a price, but once the price is paid these improvements can be rolled out across umpteen-thousand units at no additional cost. A few ideas off the top of my head, which have no doubt occurred to others previously:
* Focus bracketing - works just like exposure bracketing but for the focus distance. It would be nice if you could choose the number of shots, but I'd accept three. Memory is cheap, and this would do the job in many situations where you need to shoot quickly.
* Post-shot focus confirmation. Use a method similar to focus peaking to give a quick auditory notification if the AF system missed, so you can reshoot immediately instead of having to take the camera away from your eye, turn on the rear LCD, slowly zoom in to 100%, then scroll to find the part of the image where the focus point was supposed to be. It could detect camera shake as well.
* Auto calibration of phase-detect systems using live view. Shoot an appropriate target with PDAF, then compare that result against the camera's own closed-loop contrast-detect AF system to detect systematic errors. Ideally this could be performed at different distances, with different lenses, and under different lighting conditions.
* Picture-in-picture live view. Choose two points of the image and display them magnified side by side. Great for checking depth of field, working with tilt-shift lenses, checking field curvature, or simply shooting portraits of two people.
The last one might depend on how the hardware reads the image sensor, but everything else could be accomplished with existing units if manufacturers were willing to take the risk, or even to open up the platform for third-party development (think of how much more Magic Lantern could have achieved given access to the right documentation)!
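The auto-calibration idea amounts to simple statistics. The sketch below is purely illustrative (the function name, data, and "step" units are my own assumptions, not any camera's firmware): it treats the closed-loop contrast-detect result as ground truth and estimates the systematic PDAF error from a series of paired test shots.

```python
from statistics import mean, stdev

def estimate_af_fine_tune(pdaf_positions, cdaf_positions):
    """Estimate a systematic PDAF error from paired test shots.

    Each pair is (PDAF focus position, CDAF focus position) for the
    same target; CDAF is treated as ground truth because it is a
    closed loop off the imaging sensor.  Units are arbitrary
    (e.g. lens focus-motor steps).
    """
    errors = [p - c for p, c in zip(pdaf_positions, cdaf_positions)]
    offset = mean(errors)   # systematic (correctable) error
    spread = stdev(errors)  # random error: no calibration can remove this
    return offset, spread

# Ten hypothetical test shots of the same target:
pdaf = [102, 98, 105, 101, 99, 103, 100, 104, 97, 101]
cdaf = [100] * 10  # CDAF converges to the same point each time
offset, spread = estimate_af_fine_tune(pdaf, cdaf)
print(f"correction: {-offset:+.1f} steps; residual scatter: {spread:.1f}")
```

Note that only the average offset is correctable; the shot-to-shot scatter remains, which is exactly the accuracy-vs-precision distinction DIGLLOYD raises below.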
DIGLLOYD: I agree that certain things could be done, particularly in the DSLR world where room for improvement is ample, but the suggestions above are limited in scope of applicability, and all of them have drawbacks or limitations.
The entire imaging and usability chain has to be involved: this is a hardware issue first and foremost:
- If there is no spot autofocus point, one cannot even designate the desired point of focus properly. Consider a blurred iris and sharp eyebrows because the camera sees the eye, eyebrows, and eyelashes, and chooses to focus on the eyebrows. The Nikon D800E drives me crazy this way; its focus points are sloppy. With some Nikon DSLR bodies, the AF point is at a slightly different position than indicated (focus at or near the edge of a wall to test this on your own camera).
- EVF/LCD resolution, image zoom: as an extreme case, the Leica MM and Leica M9 have such poor-quality screens and such a low-res zoomed-in JPEG preview (embedded in the DNG) that it is impossible to determine whether an image is sharp even after taking it.
- LCD/EVF resolution: resolution should be retina grade: twice the linear resolution needed for actual pixels, so that the display does not impair the evaluation of the image quality.
- Live View quality: the mangled (subsampled and jagged) Live View of the D800E makes judging sharpness much harder than it ought to be.
- Manual focus “throw”: most AF lenses are difficult to operate in manual focus mode; small movements make large changes.
- Most f/1.4 lenses have low micro contrast: asking the autofocus system (or your own eye) to judge peak focus is inherently difficult (sometimes a slight defocus actually looks better due to color errors!). Lens quality goes a long way toward enabling precision. This is one reason (my assumption at least) why f/2.8 lenses generally prove satisfactory with camera AF calibration. Or highly corrected f/1.4 lenses with high contrast wide open, such as the Sigma 35/1.4 DG HSM.
- Focus shift: DSLRs focus wide open, so unless and until the camera builds in compensation for focus shift (Hasselblad does this), focus can be perfect as focused wide open, then degrade over the next few stops until DoF catches up.
- Precision: Every Nikon body I’ve used lacks precision with fast f/1.4 lenses: a fixed target (e.g. LensAlign) produces different results over a sequence of shots. Poor precision. Calibration can only center the results around the correct area (accuracy); one still obtains a scattershot grouping.
Software can do its best only once the hardware problems are all addressed. Anything else tends to be a Band-Aid, a workaround.
As a case in point, the Zeiss 135mm f/2 APO-Sonnar is a joy to focus. Why? Because its wide open performance is so high that there is no ambiguity in ideal focus. That is the fundamental impediment to focusing most lenses: low micro contrast wide open. And the APO-Sonnar has excellent focus throw to make it physically possible to focus precisely.
Autofocus algorithm: we don’t get a choice of focusing algorithms: focus fast, or focus absolutely accurately? Phase detect is assumed to have to happen ASAP. For contrast detect (focusing off the sensor), what I see in Canon is an incremental stepping algorithm that tends to be highly accurate. What I see with Nikon CDAF is a rapid-fire, back-and-forth, herky-jerky adjustment that often is off just a little and probably suffers from hysteresis too. So it is my belief that the choice of focusing algorithm exerts an influence, and that can be addressed in software. But to be fair to Nikon, let’s see Canon AF perform as well on a 36+ megapixel camera as it does on a 22-megapixel one.
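The incremental stepping described for Canon is, at heart, a hill climb on the contrast reading. A minimal sketch, assuming an idealized single-peak contrast function (a real sensor readout is noisy and the lens motor has backlash, which is where herky-jerky behavior and hysteresis come in; the step sizes here are invented for illustration):

```python
def cdaf_hill_climb(contrast_at, start=0.0, step=1.0, min_step=0.05):
    """Incremental-stepping contrast-detect AF: walk the focus
    position in one direction while contrast improves; when it
    drops, reverse direction and halve the step; stop once the
    step is below the precision threshold.

    `contrast_at(pos)` stands in for a sensor contrast readout;
    here it can be any function with a single peak.
    """
    pos = start
    best = contrast_at(pos)
    direction = 1
    while step > min_step:
        candidate = pos + direction * step
        c = contrast_at(candidate)
        if c > best:
            pos, best = candidate, c  # keep climbing
        else:
            direction = -direction    # overshot the peak:
            step /= 2                 # reverse and refine
    return pos

# Hypothetical contrast curve peaking at focus position 3.7:
focus = cdaf_hill_climb(lambda x: -(x - 3.7) ** 2, start=0.0)
print(f"converged at {focus:.2f}")
```

The final precision is bounded by `min_step`: a smaller threshold means more iterations (slower focus), which is exactly the speed-versus-accuracy trade-off we are not given a choice about.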
Focus bracketing might have limited uses and therefore some value, but it is problematic for many reasons: I already have too many images to sort; the moment is missed if a facial expression changes; low shutter speeds mean some images are no good even if focus is spot-on; framing varies a little. And if one cannot designate the desired focusing spot precisely in the first place, how is the camera to make an evaluation for “best shot”? Does the camera account for focus shift? (No.)
Lens calibration can help for an accuracy problem (move the average result right onto the center of the bullseye), but does not solve precision issues (tight repeatable grouping).
For Nikon AF in particular, my conclusion is that precision has been the issue, at least with the low contrast of fast f/1.4 lenses; even the eye has trouble with such lenses. But precision is not addressed by lens calibration.
As per Wikipedia, accuracy and precision:
The accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value.
The precision of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.
A measurement system can be accurate but not precise, precise but not accurate, neither, or both.
John W writes:
Great thoughts in today's post on autofocus accuracy, to which I'll add this:
Especially when compared to ad-hoc focus solutions such as focus bracketing, cameras based on computational photography show another way forward. Consider the Lytro camera (www.lytro.com) as currently released. Stated generously, it's best viewed as a technology preview. The sensor size and final rendered pixel counts are low by digital still camera standards, and I think the form-factor and handling leave much to be desired.
But this reconfiguration of sensor, optics, and processing can be viewed as a way to trade the complexity of autofocus for final image quality in a way that leapfrogs conventional downsampling. From comments in public interviews, I guesstimate that Lytro's camera has a 10-to-1 ratio of full-color image sites to final pixels. If we're approaching 140 MP full-frame sensors, then imagine having a 14-effective-megapixel camera which can't miss the desired focus and depth of field, because those attributes are no longer fixed at shot time. A camera where the as-shot DOF and focus point are, like RAW white balance, merely convenient annotations for the photographer.
DIGLLOYD: agreed in general, but given the very important stylistic differences in lens rendering, I am not so sure that a light field camera will be nearly so appealing, even with 500 megapixels of sensing yielding 50 megapixels of output. Another tool is always welcome, however.