
OpenAI Image Generation — the Death of Photography? +Reader Comment

Is OpenAI to be the death of photography? Or a new form of creativity, driven by natural language descriptions? Or the new tool for insecure people to generate artificially attractive images of themselves? And a million other things.

For example, watch the how-to-make-a-portrait-of-a-young-woman video and be prepared to be astounded at how good it is overall, and yet at how many small but problematic issues there are. Then see the community showcase. Assume that the power of the technology will double every year, so that 10 years from now it will be 1024 times more powerful and the kinks will be worked out.

You should not have been trusting any images or videos for years now. This will only get worse and worse, once deep fake videos are readily creatable at the push of a button for anyone. Reality and artificial imagery will merge and you will not be able to tell the difference. Already most people can’t with the good deep fakes.

First try — fail

https://labs.openai.com

Tigger is a mackerel tabby cat, a natural born killer good for ~300 rodents per year. Keeps the entire neighborhood free of vermin. Almost daily cleanup job in the garage. He once killed and ate 5 rabbits in 2.5 days—the whole neighborhood is de-rodented much to the delight of most of my neighbors. We took him in as a feral hissing beast at ~1 year old, healthy and in his prime.

Dall-E refuses to generate a predator/prey image

AI is still like a nanny with an IQ of 63 when it comes to judgment. Or maybe it’s just woke programming, which is even scarier. My first attempt at an AI image failed.

“mackerel tabby stalking rabbit in field” is unacceptable, but “mackerel tabby hunting rabbit in field” is a go.

AI technology is the new good, and the new evil*. This below captures everything you need to know about wet robots (you and I), and how you will be caged by all the technology around you. AI is going to change everything so fast your head will spin and the world will be radically different within 10 years.

* Killer AI drones are already here, flying and quadruped. Humanoid next. The Terminator won’t be science fiction much longer.

Impressive that an AI can turn “mackerel tabby hunting rabbit in field” into these images at all.

But they are bad mashups, with the perspective and behavior all wrong. AI will presumably improve. And yes, I know that properly phrased queries can deliver pretty good results. It takes skills. I just wanted to see what it could do.

Dall-E: “mackerel tabby hunting rabbit in field”

Not like real life, but interesting: “mackerel tabby cat devouring rabbit”.

Sorry, but “mackerel tabby cat devouring bloody headless rabbit” is forbidden by the AI nanny.

Dall-E: “mackerel tabby cat devouring rabbit”

Wrong summit hut, wrong placement, wrong perspective. But the third one is a decent start. Variations a failure.

Dall-E: “mackerel tabby cat flying over Mt Whitney summit hut”

Reader Jason W writes:

Event Photography —  Zero impact. AI can't generate images of your wedding or the birth of your child, nor would you want it to.

[DIGLLOYD: Why not? Kauai Hawaii seems perfect: “wedding photos in Kauai Hawaii at the ZZZ Hotel courtyard, the beach area, the top of XYZ, boudoir photos of my wife enhanced in suitable places, .... Wife in wedding dress like <famous person>, husband dressed like <whatever>, including Fred, Tom, Joe, Emily, Charlotte and their kids ....". ]

Stock Photography —  This medium is already f*cked by the immoderate heap of real image content already available; that you can use AI to generate already near-valueless photography hardly seems destabilizing.

[DIGLLOYD: Shutterstock already uses AI and its use will grow exponentially I’d bet]

Product Photography — AI can't generate images of things it has never seen. Got a new product? You need to take a series of photos of it to get it into the system. After that maybe you can have AI generate images since it has learned what the product looks like, but there's still an initial requirement. For stuff like cars, a lot of it is already 3D imagery to begin with.

[DIGLLOYD: many product images are already computer generated from models, no need for a picture at all. AI will only improve upon that including placing things in scenes]

Portrait Photography —  Again, same problem as stock photography. You need an actual image of yourself first. Maybe you can do this with an iPhone app and then generate stock photos of yourself, but people might object to a machine generating images of them because they know it isn't real.

[DIGLLOYD: lots of existing images—of course AI cannot start from zero... so what? And AI can make you younger or older or anything you like.]

Fine Art Photography —  The value of art photography depends on placement and the artist. In this area, AI is a tool, not the artist. It'll come down to what attracts people and is original. Overall, AI currently has a style that is identifiable even when it is imitating, and I think that's off putting. It also still needs an original artist to generate like-kind imagery.

[DIGLLOYD: AI has already won art contests. Maybe it will win them all before long. Art is a set of rules along with a style that are not that complicated. It's just a matter of the input, and maybe AI will be more creative than humans, since it can understand perception and persuasion more than any human can, and thus manipulate those brain functions.]

Landscape Photography —  Most landscape photography is travel photography. It's people documenting a time and place on their personal journey, and there's no point generating fake versions. For high-end art landscape photography, it's fine art and how much people like it, and from personal experience I know people "default to truth" and assume a fine art landscape print is effectively an accurate representation. When they learn it isn't, they tend to not like it or re-categorize it. This is why Epson has altered and unaltered categories for the Pano Awards.

[DIGLLOYD: maybe real photographs might turn into a 'thing' like platinum prints. Most people like fake landscapes already; just look online—hyper-fake "reality" that never existed, with super-saturated color and enhancements.]

DIGLLOYD: my comments inline, above. I don’t presume to imagine where this is all going, but I’d bet it’s going much faster and much farther than my imagination can easily contemplate.

OTOH, with Dall-E people are often grotesquely rendered, location and environment way off, etc. I am sure there are way better AI rendering systems out there, and indeed there are, at least for people, as in the intro.

For example, just about everything but the crudest details are wrong here. More like a lame science fair project than anything persuasive. But maybe I just lack the right AI and the skills for proper input.

Dall-E: “Mercedes Sprinter van with open sliding door, titanium mountain bicycle leaning against it, top of White Mountain Peak, red head woman straddling bike”

Not too bad...

Dall-E: “two mackerel tabby cats perched on chair wearing a party hat and yawning”

Reader Comments

Michael E writes:

I’ve been using Midjourney AI pretty much since its inception. For my own use, I don’t imagine I’m making “Art” with it, but rather I use it for illustration. It marks the death of stock photo sites that don’t include AI output as part of their offerings, IMO.

I blog to some 11,000 people each day, and I use Midjourney to Illustrate the stories or articles I write. It not only saves money from licensing graphics, it allows me to tweak images until they reflect the emotional content of the blogs and articles I produce.

I saw the same thing that will happen to stock photos when typesetting vanished back in the late 1960s-1970s, as word processors and graphic software arose.

I saw the same thing when cellphone photos took the heart out of many professional photographers, and left them gnashing their teeth and cursing fate, or when Spotify and the like morphed the music industry.

IMO, AI graphics like Midjourney mark a sea change that will sweep through the world of graphics like a field fire, perhaps the most powerful graphic tool ever imagined, where a picture is not only worth a thousand words, but can be created with carefully arranged words and some patience.

I already use it every day for illustrating. And yes, it will threaten those who do not recognize AI graphics as the tool it is and learn to program and use it. It is a liberator and tool, not the end of art.

DIGLLOYD: it’s always hard to anticipate what disruptive technology will change.

Chris R writes:

Following on from your article the other day regarding AI taking over and virtually rendering mainstream photographers out of future business, here is an image and article just confirming what you have been discussing on your blog.

https://www.thesun.co.uk/tech/21265059/sunset-photograph-wins-contest-scary-twist/

DIGLLOYD: nice "shot". I would have been suspicious of it at the outset, at least as a heavily manipulated image.


Sony 16-shot Pixel Shift Works in the Field, Trounces the Fujifilm GFX100S, Might Match the PhaseOne IQ4 150

re: Sony pixel shift

I wondered whether Sony 16-shot pixel shift was usable in the field. Would it be better than 4-shot mode, and would it pan out as an ultra-high-resolution medium format solution?

So I went and shot my Alpine Creek scene against the Fujifilm GFX100S tonight.

The crop below is at an image size of 180 megapixels. The 60-megapixel sensor of the Sony A7R V with 16-shot makes short work of the GFX100S. And of its own 4-shot mode. Not 240 megapixels worth of detail (lens performance and diffraction impose limits), but 180MP is about right, or you can just cut it in half to 120MP and be blown away by what you get.

This test was not with some easy scene to make it look good; it was 16 frames at 0.8 seconds each along with moving water and foliage that was not entirely still.

There will be practical situational limitations of course eg 16 frames at 4 seconds each might not fly and any wind will be a problem (camera shake), but this first effort makes me wonder whether I might have the equivalent of a PhaseOne IQ4 150 in terms of resolution, and with vastly lower noise*. Something I cannot apply fairly often, but something I can apply quite a lot.

The storage used is annoyingly huge with 16 uncompressed raw files for a single capture. All fixable if Sony gets its act together, via lossless compressed and compressed DNG using Sony motion correction right in camera. With options to save only the stuff I want, and no need whatsoever to do any 'post' other than use the raw file straightaway. It’s such low-hanging fruit that I am baffled why Sony has not already done it. Free money! What’s the holdup, Sony?

Perhaps field shooting will turn up other issues. Perhaps some scenes won’t work out, but since you cannot lose no matter what, it’s a no-brainer for many situations. Given what I am seeing, it appears that I have an about 180 megapixel camera on my hands that I can apply to quite a few real outdoor shooting situations.

I even shot a 2-frame focus stack with the Sony FE 12-24mm f/2.8 GM at 12mm with impressive detail capture. No other camera on the planet can make such an image (that wide, with resolution that high).

* Using 16 frames, the light capture is 16X greater, for 4X lower noise; noise improves with the square root of the increase in light captured. Put another way, ISO 100 is like ISO 6.
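The footnote's arithmetic can be checked in a few lines (a back-of-envelope model, not a sensor simulation; the function names are my own):

```python
import math

def noise_reduction_factor(frames: int) -> float:
    """Shot noise scales with the square root of the light gathered,
    so combining N equal frames cuts noise by a factor of sqrt(N)."""
    return math.sqrt(frames)

def noise_equivalent_iso(base_iso: float, frames: int) -> float:
    """ISO whose single-shot noise roughly matches the combined capture."""
    return base_iso / frames

print(noise_reduction_factor(16))     # 4.0 -> "4X lower noise"
print(noise_equivalent_iso(100, 16))  # 6.25 -> "ISO 100 is like ISO 6"
```

So the "ISO 100 is like ISO 6" claim is the 16X light advantage expressed as an equivalent ISO, while visible noise improves by the square root, 4X.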

Sony A7R V + Voigtlander FE 35mm f/2 APO-Lanthar at f/5.6 vs
Fujifilm GFX100S + Fujifilm GF 35-70mm f/4.5-5.6 @ f/7.1

Actual pixels crop from 180 megapixel image


All Too Often: the Garbage Image Quality of Camera Phones with HEIC or JPEG

re: computational photography

What I refer to as computational photography is all the good stuff that improves image quality (resolution, noise, whatever) without losing anything. For example, Adobe Camera Raw Enhance Details, frame averaging, focus stacking, deconvolution (blur removal), etc.

This article below perfectly demonstrates why I keep stating that iPhone image quality (HEIC or JPG) is total freaking garbage. Unless you shoot RAW*. Or unless you pretend it’s fine by looking at it on a small screen (eg iPhone or iPad) whose extreme pixel density hides reality from you.

I keep trying to get my kids to shoot RAW, but since all they see photos on is their iPhone, they keep shooting crap-quality JPEGs on their phones and do not believe me that the photos suck in terms of detail.

For Apple (and presumably other phone vendors), it’s all about minimizing image size while making images look good to the eye at a very crude/coarse level (eg a tiny display).

HEIC is the newest gaslighting joke as it does next to nothing for quality vs JPEG, saving a little more but producing the same trash. For the most part, all fine details are discarded, and this is plain to see for anyone that actually looks. There are exceptions that meet the compression model’s expectations but skin, cloth, any kind of landscape, etc are all mutilated, typically cutting real resolution by a factor of 4X to 10X.

* TIP: You can shoot your camera phone using RAW. This sidesteps all the issues discussed here. There is nothing inherently bad about a camera phone sensor; all the nasty badness comes in the processing pipeline to HEIC/JPEG. Which need not be so—it is possible to create very high quality JPEG files.

The Limits of "Computational Photography"

Jan 2023, by Will Yager

I was recently discussing laser etching with an engineer/font-designer friend of mine, and I wanted to show him a picture of some really good laser etching on a particular piece of optical equipment.

No problem, I figured - I would just snap a quick picture on my phone and send him a text. Unfortunately, I ran into an unexpected problem; my phone simply could not manage to take a picture of the text. This article is a bit of a rant spurred by this annoyance, so please forgive any hand-waving and epistemic sloppiness I’m engaging in so that I can pound this out before I stop being annoyed.

Every time I tried to take a picture of the engraved text, the picture on my phone looked terrible! It looked like someone had sloppily drawn the text with a paint marker. What was going on? Was my vision somehow faulty, failing to see the rough edges and sloppy linework that my iPhone seemed to be picking up?

   

What is going on here? Well, I noticed that when I first take the picture on my iPhone, for a split second the image looks fine. Then, after some processing completes, it’s replaced with the absolute garbage you see here. Something in the iPhone’s image processing pipeline is taking a perfectly intelligible and representative (if perhaps slightly blurry) image and replacing it with an “improved” image that looks like crap.

...

Significantly more objectionable are the types of approaches that impose a complex prior on the contents of the image. This is the type of process that produces the trash-tier results you see in my example photos. Basically, the image processing software has some kind of internal model that encodes what it “expects” to see in photos. This model could be very explicit, like the fake moon thing, an “embodied” model that makes relatively simple assumptions (e.g. about the physical dynamics of objects in the image), or a model with a very complex implicit prior, such as a neural network trained on image upscaling. In any case, the camera is just guessing what’s in your image. If your image is “out-of-band”, that is, not something the software is trained to guess, any attempts to computationally “improve” your image are just going to royally trash it up.

DIGLLOYD: the example shown is obvious in its failures, though it’s a little unfair in having different size/resolution. Still, the total mutilation of all detail except the coarsest edges is self evident in the phone picture and it is precisely what the iPhone does to just about everything.

Real life is far worse: an iPhone picture of my skin makes me look like I have some nasty disease, skies are banded/stepped, textural detail of just about everything is smeared to oblivion.



Sony Pixel Shift: Why Sony 16-shot pixel shift is a Can’t-Lose Proposition

re: Sony pixel shift

Readers following my recent coverage have seen how Sony 4-shot pixel shift not only matches but outperforms the 100-megapixel Fujifilm GFX100S in every comparison I’ve presented, and with different lenses and focal lengths, even with moving water and foliage*. The potential is truly awesome.

In the past I have dismissed Sony 16-shot pixel shift as not being worthwhile. At least for still photography and static images, I now revoke that assessment for the Sony A7R V, based on yesterday’s demonstration—16-shot has merit, provided that it is properly sharpened during raw conversion and flawless shot discipline is achieved including focusing and aperture choices.

* The issue with wind is primarily its disturbance of the camera eg wind vibrating the tripod which wiggles the camera even a few microns—with 3.76 micron pixels, moving 2 microns (2 millionths of a meter) is unacceptable.

Why Sony 16-shot pixel shift is a can’t-lose proposition

Here I of course speak of images/subjects worthy of the effort. And of course a tripod.

And ruling out any disturbance to the camera such as wind. So long as the camera is not disturbed (totally still), Sony motion correction takes care of the subject motion and what little it misses can be painted-in from any of the single frames.

4 in 1 deal

What I had not realized is that Sony 16-shot pixel shift can be processed as either a 240-megapixel high-res image, or four 4-shot 60MP pixel shift images.

With a single press of the shutter, not just one but four 4-shot pixel shift images are captured. If there are any issues with wind, it is very likely that one of those four may prove viable, even if the other three have issues. Furthermore, it becomes possible to use frame averaging of those four images, albeit at the cost of some sharpness losses**.
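Why averaging those four frames helps can be shown with a toy pure-Python simulation (my own function names; a constant signal plus Gaussian noise stands in for real sensor data):

```python
import random
import statistics

def simulate_frame(signal, sigma, n, rng):
    """One simulated frame: a constant signal plus Gaussian noise per pixel."""
    return [signal + rng.gauss(0.0, sigma) for _ in range(n)]

def average_frames(frames):
    """Pixel-wise mean across equally sized frames (frame averaging)."""
    return [sum(pixels) / len(frames) for pixels in zip(*frames)]

rng = random.Random(7)
frames = [simulate_frame(100.0, 8.0, 20_000, rng) for _ in range(4)]

single = statistics.stdev(frames[0])
stacked = statistics.stdev(average_frames(frames))
print(round(single / stacked, 1))  # close to 2.0 = sqrt(4)
```

Averaging four frames cuts the noise by roughly sqrt(4) = 2X, the same square-root law as above, which is why a 16-shot capture that yields four usable 4-shot frames still has value even when motion correction struggles.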

Your worst case? Use any of the single-shot frames, possibly with Enhance Details. If it’s windy, one of them is likely to have less subject motion than others.

You cannot lose.

Well, other than space usage on card and computer: until and unless Sony gets their act together by making lossless compressed captures for pixel shift, a 16-shot pixel shift capture is a massive storage bloat. It can be partially addressed via Adobe DNG Converter.

All that sounds good, but will I actually use 16-shot in the field, is it worth the trouble, etc? I don’t yet know, but I will be finding out. I expect to do so sparingly, in the appropriate cases when for example I want every little detail on a granite cliff or tall peak. And maybe Sony motion correction will fail miserably with 16-shot, again, I do not yet know.

* Use at least a 2-second self timer with a gentle press, and/or a wireless remote. A wireless remote is best, as no physical contact is made and a magnified Live View can be monitored on the rear LCD for any vibration up to the moment of capture.

** With a half-pixel differential between frames, upsample each frame by 2x linearly before alignment, since it is not possible to align to the half-pixel otherwise.
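The half-pixel footnote can be illustrated with a minimal 1-D sketch (assumed names; nearest-neighbor upsampling for simplicity, where a real converter would interpolate more carefully):

```python
def upsample_2x(row):
    """Nearest-neighbor 2x upsample of a 1-D row of pixels.
    After doubling, one sample spans half an original pixel, so a
    half-pixel offset between frames becomes an integer shift."""
    out = []
    for value in row:
        out.extend([value, value])
    return out

def shift_right(row, samples):
    """Integer shift with edge clamp; samples=1 is half an original pixel."""
    return row[:1] * samples + row[:-samples] if samples > 0 else row[:]

reference = upsample_2x([10, 20, 30, 40])
aligned = shift_right(reference, 1)  # align a half-pixel-offset frame
print(reference)  # [10, 10, 20, 20, 30, 30, 40, 40]
print(aligned)    # [10, 10, 10, 20, 20, 30, 30, 40]
```

On the original grid there is no integer shift that corresponds to half a pixel, which is exactly why the frames must be upsampled 2x linearly before alignment.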

Reader Jason W writes:

On your comments on 4-shot vs 16-shot pixel shift, the primary issue is doing long exposures at twilight or with rapidly changing conditions.

16 shots pins you to one shot for an extended period of time, while 4 shots or one shot gets you an image and then time to reposition for another perspective.

DIGLLOYD: agreed that any exposure with variable light and/or too-long exposure is an issue for pixel shift. There, the higher resolution sensors are much more practical.

But under variable conditions, 4-shot might be as marginal as 16-shot (which is after all four 4-shot captures), given that light can be fading by the second at dusk, thus causing checkerboarding. And the whole idea is that you have nailed the composition and now want the best possible execution. Perhaps if multiple compositions are desired, then even 4-shot is best skipped.

Field experience needs to chime in here, and I do expect to see practical limits on viability.

Reader Comment on 16-Shot Sony Pixel Shift: “360 MP images looks terrific... but does this matter?”

re: Sony pixel shift

Regarding Sony A7R V Pixel Shift: 16-Shot vs 4-Shot vs Single-Shot: Dolls, reader Vincent L writes:

360 MP images look terrific, thanks for highlighting Sony’s multi-shot benefits.

But, in the end, does this matter? I mean, if you print this big, like at least 1 meter wide, how does it compare to the naked eye, seen at a comfortable distance, to the regular 61 MP shot? Thanks for investigating this.

DIGLLOYD: good question. It all comes down to purpose and context. And what of the subjective factors like the uglies of color aliasing, stepping/banding, chroma noise, etc? Most people won’t notice and don’t care. The only metric that matters to you is your own.

Just about anyone (including me) would agree that an HEIC or JPG from a current iPhone looks terrific (usually*)—on the iPhone or iPad. Probably 99.99% of images are viewed on a handheld device these days.

The opposite extreme: an 8K or future 16K television of wall size.... say 9 X 5 feet = 45 square feet. Versus the 0.09 square feet of an iPhone—500 times more area. Does it matter, to you? It will to me, and yet if it’s a family photo, not so much.

Do you perhaps enjoy savoring image details that your eye could not discern even while there? I sometimes like to explore places I’ve visited, such as the detail of our route up “The Notch” near the Mt Whitney summit, as seen from miles away. There may be other reasons for me and for you.

If it’s buy one get two more free, who refuses that offer? Who says it has to be rational?

Vincent L continues:

Are we sure a photographer like you, viewing the image at a reasonable viewing distance, would really see the difference? And also, do we have a good enough printer/print shop able to make a difference with a 360 MP image vs 61 MP at reasonable cost?

I was in the same boat; however, it may come at one expense: time. Sometimes, and I’m surely guilty of that, we’re focusing too much on that great shot while there’s a better one a few yards away, so we may lose those opportunities too. I had several instances where a second-thought shot turned out better than the "great shot" but with worse light, due to too much time focusing on the first.

Sure, 4/16 shots are not that much longer, but they’re longer, especially as you may want to redo an exposure due to wind or movement (yes, Sony compensates, but not perfectly).

DIGLLOYD: if the image is compelling, all other aspects tend to fall away. That seems implicit in the comment, and I agree with it.

60MP is 9504 pixels across, which is good for 40 inches wide = 102cm at 240 DPI—quite sharp. A more demanding standard is 300 DPI ~= 32 inches = 0.8 meters, perhaps too demanding and unnecessary/not really visible, as Vincent is pointing out.

If we restrict it to technical quality only, and discount the resolution to 2/3 of the 9504*2 of 16-shot (fair if f/8 is used), that gives 12672 very high quality pixels (few to no artifacts) across, which yields a 53 in = 135 cm print at 240 DPI.
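The print-size arithmetic above, as a quick check (the helper name is mine):

```python
def print_width_inches(pixels_across: int, dpi: int) -> float:
    """Maximum print width in inches for a pixel count at a target DPI."""
    return pixels_across / dpi

# 60MP single shot: 9504 pixels across
print(round(print_width_inches(9504, 240), 1))  # 39.6 in (~102 cm)
print(round(print_width_inches(9504, 300), 1))  # 31.7 in (~0.8 m)

# 16-shot, discounted to 2/3 of the doubled linear resolution
pixels_16shot = 9504 * 2 * 2 // 3               # 12672 px across
print(round(print_width_inches(pixels_16shot, 240), 1))  # 52.8 in (~135 cm)
```

At a 240 DPI standard, the discounted 16-shot figure supports a print roughly a third wider than the 60MP single shot, which is the 53 in = 135 cm number cited above.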

On my wall is a 6 X 4 ft = 1.83 meters wide print from a Sigma DP Merrill with its low-resolution 4800 X 3200 15.3MP true-color sensor. It looks great, and the impression of detail is strong. How much better would it have been had it been shot in 16-shot pixel shift? I don’t know, but I can look at it and feel pleasure, even if the fine details are not really quite there. So I agree that the argument for “more is better” can have its weaknesses in real prints.

Is good good enough?

Many of us feel we need a full-frame mirrorless camera, that an iPhone is not good enough. Which is little different than saying sometimes we need a car, not a bicycle. And sometimes a moving van.

With pro-grade cameras, is there a “good enough” such that anything more is of no practical value?

Some of the elite master photographers go to great lengths for the very best image quality and sharpness, such as a PhaseOne IQ4 150 system costing $75K. And obviously the smallish medium format market (44 X 33mm) sensor has become quite popular, though a niche in the overall market. This is all about wanting more sharpness and better a/b/c/d. Albeit a tiny fraction of the market in terms of pictures taken.

For me, it comes down to realism: if the foliage looks smeared or the granite looks like plastic, it degrades my sense of the place, my close direct observations—I do not like the conflict. Cameras like the Fujifilm GFX100S teeter on the edge of getting it right.

For those of us who think the $7K range is still an outrageous cost, or those of us who do not want to lug around a clunky PhaseOne system, an option that matches or beats those systems (at least under the right conditions) for $5K feels pretty damn good. We might not need it, but why not?

Value proposition

Some people spend considerable time and money to travel to photograph. I do it on the cheap, but I still spend considerable time and physical effort to photograph.

No matter what camera I make the image with, my expenditure of time and effort is a sunk cost. After all that effort, do I want to take the best possible image, or a pretty good one?

Why would I not use the Sony A7R V in 16-shot mode (or at least 4-shot mode) and press one button for a particularly special image, perhaps one I’ve tried for, for a day or a week or longer?

For that stated situation ("special image"), the only reason I can think of is if/when motion correction for 16-shot works poorly in outdoor conditions and/or if variable lighting will make a mess of things. Also and well worth noting: 16-shot can alternately be processed to yield four 4-shot pixel shift frames. If one of those four minimizes subject movement more than the others, you can pick the best of the four for your 4-shot frame. In other words, 16-shot can never be worse than 4-shot mode, and might be better under iffy conditions.

Why would I press that same button and get an inferior result, when I cannot lose? Worst case, I can use one of the single frames (possibly with Enhance Details) or any of the four 4-shot pixel shift frames; best case, I have a stunningly detailed 16-shot frame.

* Closer examination of all too many iPhone HEIC/JPG images reveals impressive manipulation of human perception, with poor fine detail rendition (almost none), stepping/banding/smearing, etc. But this is hidden from us at even iPad size, due to the extremely high pixel density and a camera resolution just enough to fill it.

Below, this is an actual pixels crop from a 360-megapixel image made with 16-shot Sony pixel shift (240 megapixels upsampled to 360 megapixels).

Actual pixels crop from 360 megapixel image
f5.6 @ 1/4 sec pixel shift16, ISO 50; 2023-01-29 15:30:42
Sony A7R V + Voigtlander FE Macro APO-Lanthar 65mm f/2 Aspherical @ 97mm equiv (65mm) RAW: SmartSharpen{45,1.4,20,0}


Dr S writes:

For the affordability of the A7R5 and for a tad more a 44x33 sensor cam, I would challenge your assertion, "is good good enough," and say, "Is Great, and even Fantastic, good enough?"

Even without Pixel Shift these cams yield superb results, especially in the right hands. We are in a great time for outstanding imagery. It can get better, but current tech is damn good!

DIGLLOYD: indeed, we are in the golden age of photography, so far as cameras go. Except for the garbage grade quality of iPhone images (unless one shoots RAW), but that is an intentionally destructive design choice.



Extreme Resolution, Ultra Low Noise: Sony A7R V Pixel Shift: 16-Shot vs 4-Shot vs Single-Shot: Dolls

re: Sony pixel shift

Can Sony pixel shift deliver additional detail in 16-shot vs 4-shot pixel shift using a high-grade lens, at least with static subjects under controlled lighting?

Sony A7R V Pixel Shift: 16-Shot vs 4-Shot vs Single-Shot: Dolls

This page looks at f/4, f/5.6, f/8 performance on colorful highly detailed static subject matter using 16-shot, 4-shot, single shot, and single-shot enhanced images at an equivalent enlargement of 360 megapixels.

Whether such results could be obtained outdoors remained to be seen at the time I prepared this page. But the results on static subjects are incredible.

Lighting was held constant (indoors and at night) using a 24 X 12 inch 15500 lux LUXLI Taiko remote phosphor LED at 5800°K. The light quality is fantastic and the total light output is astonishing, taking only 250 watts at full output, which drowns out every light in my house by a huge factor.

Below, this is an actual pixels crop from a 360-megapixel image made with 16-shot Sony pixel shift (240 megapixels upsampled to 360 megapixels).

Actual pixels crop from 360 megapixel image
f5.6 @ 1/4 sec pixel shift16, ISO 50; 2023-01-29 15:30:42
Sony A7R V + Voigtlander FE Macro APO-Lanthar 65mm f/2 Aspherical @ 97mm equiv (65mm) RAW: SmartSharpen{45,1.4,20,0}


 


Sony A7R V Pixel Shift: Diffraction Effects, Pixel Shift vs Single Shot, Enhance Details

re: Sony pixel shift

This page looks at f/4 through f/22 with 4-shot Sony pixel shift versus single-shot and single-shot enhanced using artificial black and white imagery including a Siemens star and resolution bars.

  • How does pixel shift compare to single shot?
  • At which aperture does diffraction start cutting image sharpness in an unrecoverable and/or unacceptable way?
  • How does Adobe Camera Raw Enhance Details hold up versus pixel shift?

Sony A7R V Pixel Shift: Diffraction Effects, Pixel Shift vs Single Shot, Enhance Details

Includes images from f/4 through f/22 with pixel shift, single-shot, and single-shot Enhance Details.

 



Sony Pixel Shift on Sony A7R V: Notched “Pac Man” Bites on Edges of Objects

re: Sony pixel shift

What’s causing these “Pac Man” bites out of the edges of Sony pixel shift images?

I first noticed these notched “Pac Man” edges when a reader sent me some images he had taken, of similar black and white targets; he was testing pixel shift vs single shot. I am pretty sure they occur on all other Sony mirrorless cameras with pixel shift, though I have not gone back and checked.

This example compares against a 16-shot pixel shift image. However, I see the same effect with 4-shot images in some cases also, though it is harder to see. Repeatable, every time. And I’ve seen hints of this issue in outdoor images and with colored pixels (eg green).

The notched edges look like movement issues, but that makes no sense by any theory I can think of—how could any movement make a notch exactly every other pixel, and both horizontally and vertically? And why are color effects involved too (those yellow/purple tinges, see the “60” bars for example)?
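One way to see how a sub-pixel registration error could nonetheless produce exactly this geometry is a toy merge, purely illustrative and assuming nothing about Sony's actual pipeline: the two green frames of a 4-shot capture interleave in a checkerboard, so displacing just one of them notches any edge at alternating pixels, and displaces only one color plane (hence color fringes):

```python
# Toy model: a one-pixel misregistration in one of the four pixel-shift frames
# notches an edge every other pixel (illustration only, not Sony's actual pipeline).
H, W = 8, 12
scene = [[1 if c >= W // 2 else 0 for c in range(W)] for r in range(H)]

# The same frame with a 1-pixel horizontal registration error.
bad = [[scene[r][c - 1] if c > 0 else 0 for c in range(W)] for r in range(H)]

# Bayer geometry interleaves the two green frames in a checkerboard;
# each merged pixel takes green from either the good or the displaced frame.
merged = [[scene[r][c] if (r + c) % 2 == 0 else bad[r][c] for c in range(W)]
          for r in range(H)]

for row in merged:
    print("".join(str(v) for v in row))
# Down the edge column, values alternate 1/0: a notch every other pixel, in both
# axes if the error has a vertical component too, with fringing because only
# one color plane is displaced.
```

This does not explain *why* a frame would be misregistered on a vibration-free concrete floor, only that a half-frame error is one mechanism that yields an exactly-every-other-pixel notch plus color tinges.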

Note that Adobe Camera Raw Enhance Details does a phenomenal job, delivering results nearly identical to 4-shot pixel shift, though it cannot do as well with real-world image detail, particularly textures. Still—kudos.

Lighting was held constant (indoors and at night) using a 24 x 12 inch, 15500 lux LUXLI Taiko remote phosphor LED at 5800K. The same findings are seen in natural daylight, shot on a concrete floor in the garage—no possible vibration.

The 4-shot pixel shift and single shot frames have been enlarged 2X here for direct comparison.

Yes, 16-shot pixel shift does resolve a lot more detail than 4-shot—on resolution bars. On real images, it can look worse due to loss of contrast. The resolution is so extreme that it can resolve the increasing diffraction blurring at f/5.6, f/8, f/11, etc; it looks like a widening double-image effect (not shown here).

Sony pixel shift 16-shot compared: f5.6 @ 1/4 sec, ISO 50
Sony A7R V + Voigtlander FE Macro APO-Lanthar 65mm f/2 Aspherical

Below, same thing just shown 2X larger, linearly.

Alex Tutubalin of LibRaw writes:

It definitely looks like movement between frame artifacts (this may be caused by a wrong movement compensation or a wrong vibration reduction: camera is stable, but sensor moves to compensate wrong detected movement)

DIGLLOYD: I agree it looks like movement, but I am certain that it is not. I can repeat the test in natural light on a concrete floor in my garage, where there is no air movement and no traffic anywhere nearby. Not that my house has such disturbances either.

IBIS was disabled (and shouldn’t the camera do the right thing anyway?), and the effect is repeatable every time and at f/4, f/5.6, f/8. Here the exposures were at 1/4 second but I see the same thing at 1/8 at f/4 and 1/2 at f/8.

Jason W writes:

This is the same nonsense seen on the Fujifilm GFX pixel shift we were talking about where it's never stable even under ideal circumstances.

Do we need to start talking about magnetic anomalies or external forces influencing movement of the sensor?

DIGLLOYD: Unstable seems most likely, I would agree; it looks like checkerboarding, which stems from lighting changes (none in this case) or subject motion (none in this case).

I cross checked the findings in my garage on a concrete floor and absolutely still air, diffuse light from a crystal clear blue sky, just no possibility of anything changing. Same findings.

I am hoping that Sony can tell me, but the odds are low of getting a serious technical answer, due to a company culture in which some parts of the company are willing and others wary of saying anything anytime about anything.



Sony A7R V: Sony Pixel Shift Exposure Bug

re: Sony pixel shift

I’ve added an important note on an exposure bug with Sony pixel shift which can result in defective pixel shift captures. See the exposure note on this page, Step 0:

Sony A7R V Pixel Shift: Capture and Workflow



Sony A7R V: Sony Pixel Shift Gotchas, but Foolproof Ways to Optimize

re: Sony pixel shift

Sony 4-shot pixel shift is terrific, but pixel shift can be inferior to single-shot mode in localized areas due to the way Sony motion correction replaces motion-affected areas. Moreover, the use of Adobe Camera Raw Enhance Details can bridge the gap between single-shot Bayer array demosaicing issues and pixel shift and thus easily outperform pixel shift in motion-affected areas.

This page looks at pixel shift vs single shot vs single-shot-enhanced results in a scene strongly affected by various types of subject motion. It shows how to fully optimize the image, obtaining the best sharpness, no matter the conditions, no matter the subject matter, motion or not.

Sony A7R V: Sony Pixel Shift Gotchas, but Foolproof Ways to Optimize

Bottom line: by shooting with Sony 4-shot pixel shift you cannot lose.
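In code terms, the layer-mask workflow amounts to: use the pixel-shift image as the base, and reveal the Enhance Details single-shot wherever motion corrupted the merge. A simplified per-pixel sketch; the automatic difference mask and its threshold are illustrative assumptions, since in the actual Photoshop workflow the mask is painted by hand:

```python
# Simplified sketch of the layer-mask blend: keep the pixel-shift pixel unless it
# disagrees strongly with the enhanced single-shot (a crude stand-in for a
# hand-painted motion mask in Photoshop).
def blend_motion_areas(pixel_shift, enhanced_single, threshold=0.15):
    out = []
    for ps_row, ss_row in zip(pixel_shift, enhanced_single):
        out.append([ss if abs(ps - ss) > threshold else ps
                    for ps, ss in zip(ps_row, ss_row)])
    return out

# Tiny example: the middle "pixel" disagrees strongly (simulated motion artifact),
# so it is taken from the single-shot layer instead.
ps = [[0.10, 0.50, 0.90]]
ss = [[0.12, 0.80, 0.91]]
print(blend_motion_areas(ps, ss))
```

Either way the principle is the same: the pixel-shift detail wins everywhere it is trustworthy, and the single-shot fills in only the motion-affected areas.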

Adding a Layer Mask in Photoshop

 



Earthquake Swarms

re: earthquake

Last night around 23:09 we had two sharp shocks; I slept through them.

Earthquake map

Just now at around 16:55, we had a small shock, just noticeable, then another sharp snap (magnitude 2.9). The epicenter of both was very close to where my Alpine Creek photos are taken (it’s actually Corte Madera Creek); that creek follows the fault line. And it is about 1 mile from my home, west of Palo Alto, CA.

These little shocks are oddballs in my experience: sharp like a gunshot, a short snap and that’s it. Tigger says hardly worth cocking an ear.

There are lots and lots of small tremors all the time, but in my 40 years of living in this area, I don’t recall shocks like this.

And I’m not worried about them, having a wood-frame house with a bolted-on foundation and plywood sheathing on the roof, but a really strong one I suppose could damage the house. Still, I’ll take 'em over hurricanes any day.

Sony A7R V: Pixel Shift Capture and Workflow and Post Processing Steps

re: Sony pixel shift

This page covers workflow for Sony pixel shift particularly for Sony mirrorless cameras supporting Sony motion correction.

However, the workflow steps also apply to any camera with pixel shift, so long as you have access to at least one of the single-shot frames of the multi-shot capture.

UPDATE: link was bad, fixed.

Sony A7R V: Pixel Shift Capture and Workflow and Post Processing Steps



Apple MacBook Pro M2 Max

re: Apple MacBook Pro
re: Apple Silicon

If I can get a loaner, I will review the new Apple MacBook Pro M2 Max.

There is a lot to like with the Apple MacBook Pro M2 Max, price excepted, which will keep me from buying one anytime soon.

See my summary of the Apple MacBook Pro M2 Max over at MacPerformanceGuide.com.



Voigtlander FE 65mm f/2 Macro APO-Lanthar Aperture Series: Riparian Creek (Sony A7R V)

This series from f/2 to f/8 evaluates the Voigtlander 65mm f/2 Macro APO-Lanthar on an intensely detailed scene at medium range, using 4-shot Sony pixel shift on the Sony A7R V.

Includes images up to full 60MP camera resolution plus a 120 megapixel variant at f/8 in both color and grayscale, upscaled via Gigapixel AI.

Voigtlander FE 65mm f/2 Macro APO-Lanthar Aperture Series: Riparian Creek


The Voigtlander FE 65mm f/2 APO is a reference-grade must-have lens, unless manual focus puts you off. The value proposition is incredibly favorable. You cannot buy a lens this good in this focal length range from Sony at any price. It is THE lens to have, along with the 50/2 APO and 35/2 APO.



Riparian Creek
f8 @ 1/4 sec electronic shutter pixel shift, ISO 100; 2023-01-08 15:58:19
Sony A7R V + Voigtlander FE Macro APO-Lanthar 65mm f/2 Aspherical
ENV: Alpine Creek, altitude 550 ft / 168 m, 62°F / 16°C
RAW: LACA corrected, vignetting corrected, +10 Whites, +10 Clarity, diffraction mitigating sharpening




Sony FE 20-70mm f/4G: Appealing Zoom Range, How Good Will It Be?

Sony FE 20-70mm f/4 G

I’ll be testing the new Sony FE 20-70mm f/4 G just as soon as I can get hold of one. Gotta love that zoom range, highly appealing.

Note the sheer number of advanced features in the lens as compared to lenses of only half a decade back—see the full description further below. An engineering marvel, including those radically fast Sony XD Linear AF motors, two of them, allowing up to 30 fps shooting.

Makes me wonder how viable the Sigma lens offerings are these days. There are few reasons left to consider non-Sony AF lenses on Sony.

Get Sony FE 20-70mm f/4 G at B&H Photo starting Jan 18 7 AM Pacific...


The use of the f/4 aperture increases the odds that performance will be reasonably good across the zoom range; f/2.8 would have been far too challenging across that wide zoom range.

I am not enthused about the stated design goals, but it might turn out well for landscape and focus stacking given that some of the design traits are ideal for landscape also:

Designed to accommodate new trends in shooting styles popular among vloggers and content creators...

...effectively eliminates focus breathing, focus shift, and axial shift to prevent unwanted movements and angle of view variations during movie recording.

What Sony means by “axial shift” is unclear, but it might refer to the outer-zone focus shift resulting in a change in focal length with stopping down, which I have previously termed aperture breathing.

Sharpness

What I like to see in a lens is (a) no focus shift, (b) minimal field curvature, (c) edge-to-edge sharpness. This is what the Voigtlander APO lenses give me (a bit of focus shift aside), and what zooms rarely if ever deliver in all three aspects. And it is why the Sony FE 24-70mm f/2.8 GM II was a big disappointment.

Claimed MTF for Sony FE 20-70mm f/4 G, 10/30 lp/mm
diffraction-free computed fantasy MTF, e.g. impossible at f/8

Lens performance is likely to be strong, but with a huge caveat—distortion and the resulting sharpness losses from distortion correction. Thus it might be quite weak even in the face of a strong MTF chart.

In my experience, the fantasy MTF charts from Sony have invariably failed to match what real lenses actually deliver, so claims of high performance need to be met with some skepticism.

The only company with credible MTF charts (measured charts, multiple apertures, white light) was/is Zeiss—that is, charts credibly representing the performance you will actually see in a real lens. I’ve personally seen how it is done. But even with Zeiss there is variation, though much less so with the Zeiss Otus line. Leica does a nice job with its MTF charts in offering 3 apertures, but AFAIK Leica does not measure MTF on real lenses, or even account for losses from ray angle with M lenses—a gross distortion of reality, so poor credibility there.

As a rule, Sony (and most brands’) MTF charts fail to describe the performance of actual lenses you can buy, being wildly optimistic given manufacturing challenges. Worse, they mislead by showing optical performance without accounting for distortion correction, which can greatly reduce micro contrast by stretching pixels. Since Sony does not publish distortion charts (why?!), I can conclude nothing about the potential lens performance from the MTF charts.

Specifications for Sony FE 20-70mm f/4 G
Focal length: 20-70mm (nominal)
Aperture range: f/4 - f/22
Focusing range: 8.3in = 21cm
Angle of view: 94° - 34°
Number of elements/groups: 16 elements in 13 groups
Diaphragm: 9 blades, rounded
Magnification: 0.39X = 1:2.56
Filter thread: 72mm
Weight: 17.2 oz = 488g (nominal)
Dimensions: Approx. 3.1 X 3.9" (78.7 X 99mm)
Street price: about $1098
Supplied with:

Front Lens Cap
Rear Lens cap
Lens Hood
Limited 1-Year Warranty
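As a sanity check on the table, the magnification figure converts directly to the reproduction ratio and to the subject field covered at closest focus (sensor dimensions assumed to be the standard full-frame 35.7 x 23.8 mm):

```python
# Relationship between magnification and reproduction ratio, plus the
# subject field covered at maximum magnification (sensor size assumed).
mag = 0.39                           # from the spec table
ratio = 1 / mag                      # ~2.56, i.e. the 1:2.56 in the table
sensor_w, sensor_h = 35.7, 23.8      # mm, full-frame (assumed)
field_w, field_h = sensor_w / mag, sensor_h / mag
print(f"1:{ratio:.2f} -> subject field {field_w:.0f} x {field_h:.0f} mm")
```

So at 0.39X the lens frames a subject area of roughly 92 x 61 mm, which is genuinely useful close-up capability for a standard zoom.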

Description

Sony web page for 20-70/4G...

From ultra-wide 20 mm to 70 mm

With an unprecedented ultra-wide 20 mm to short-telephoto 70 mm zoom range, this versatile lens is an excellent choice for a wide range of movie and still subjects. The ultra-wide 20 mm end of the zoom range maintains a wide angle of view even when shooting movies in the 16:9 or 2.35:1 aspect ratios, which crop the frame and reduce the view angle.
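That claim is easy to check numerically: a 16:9 movie crop keeps the full sensor width but trims the height, shrinking the diagonal angle of view. A quick sketch assuming a 36 x 24 mm frame:

```python
import math

# Diagonal angle of view at 20mm for the full 3:2 frame vs a 16:9 movie crop
# (sensor assumed 36 x 24 mm; 16:9 keeps full width, crops height to 20.25 mm).
def diag_aov_deg(width_mm, height_mm, focal_mm):
    diag = math.hypot(width_mm, height_mm)
    return math.degrees(2 * math.atan(diag / (2 * focal_mm)))

full_32 = diag_aov_deg(36, 24, 20)            # ~94 deg, matching the spec table
crop_169 = diag_aov_deg(36, 36 * 9 / 16, 20)  # narrower diagonal
print(f"3:2 {full_32:.1f} deg, 16:9 {crop_169:.1f} deg")
```

The full 3:2 frame at 20mm comes out near the 94° in the spec table, and the 16:9 crop gives up a few degrees diagonally, which is why starting at 20mm rather than 24mm matters for video.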

Outstanding G Lens image quality

With two AA (advanced aspherical) elements, one aspherical element, three ED (Extra-low Dispersion) glass elements, and one ED aspherical element, this state-of-the-art lens design corrects both chromatic and spherical aberration for high optical performance right out to the image edges, from ultra-wide 20 mm to 70 mm.

Excellent close-up performance

AA (advanced aspherical) elements help to achieve a minimum focus distance of 0.3 meters (wide) to 0.25 meters (telephoto) when using AF, and 0.25 meters throughout the zoom range when focusing manually. Maximum magnification is 0.39x for detailed close-ups. Close focus makes it easy to maneuver around small subjects to find the ideal angle.

Constant F4 maximum aperture

A constant F4 maximum aperture offers greater flexibility in choosing apertures at any focal length. The selected aperture remains constant throughout the zoom range. There is no need to use extreme ISO settings at the telephoto end of the range, and high shutter speeds can be used as needed to achieve the desired effect.

Peak AF performance for stills and movies

Two XD (extreme dynamic) Linear Motors achieve fast, quiet, smooth focus drive with up to 60% higher AF speed and up to two times better tracking performance for reliable, precise tracking. Moving subjects are reliably tracked when shooting continuous stills at up to 30 fps, and high-frame-rate movie capture is smooth.

Compact, lightweight, mobile design

This compact, lightweight lens combines state-of-the-art optical design with Sony’s high-thrust XD (extreme dynamic) Linear Motor technology. Used with a compact mirrorless body, it can deliver superb image quality in any situation at focal lengths from ultra-wide to short telephoto. A great choice for shooting with minimum gear.

Refined movie imagery

Advanced technology reduces focus breathing as well as focus and axial shift when zooming so that distracting image shifts are minimised. XD Linear Motors and a newly developed aperture unit dramatically reduce noise and vibration when shooting movies. The breathing compensation function provided in compatible α series bodies is also supported.

Aperture ring with click ON/OFF switch

An independent aperture ring offers direct, intuitive aperture control. A click ON/OFF switch allows the aperture click stops to be turned off when smooth, seamless operation is required.

Active Mode supports ultra-wide images

Active Mode image stabilisation in α series cameras works effectively at the ultra-wide 20 mm end of the zoom range, providing excellent image stabilisation performance while maintaining a wide angle of view. This can be particularly useful for shooting smooth, stable handheld footage while walking.

Focus hold buttons and focus mode switch

Customisable focus hold buttons are provided in two locations for easy access and control. Other functions can be assigned to the focus hold button from the body menus to match shooting needs. A focus mode switch makes it possible to switch between autofocus and manual focus on the fly, to quickly adapt to changing shooting conditions.

Dust and moisture resistant design

A dust and moisture resistant design provides extra reliability for outdoor use in challenging conditions.

Fluorine front element coating

The front lens element features a fluorine coating that repels fingerprints, dust, water, oil, and other contaminants, while making it easier to wipe off any contaminants that do become attached to the lens surface.

Summary

A groundbreaking standard zoom lens for everyday use, the ultra-wide Sony FE 20-70mm f/4 G spans a popular and versatile range of focal lengths in a compact and lightweight form. Designed to accommodate new trends in shooting styles popular among vloggers and content creators, this ultra-wide to portrait-length zoom provides excellent edge-to-edge sharpness and outstanding image quality for all your imaging needs. Offering an expansive field of view to suit selfie, landscape, cityscape, and other wide-angle applications, this lens also offers fast, precise, and quiet autofocus performance as well as excellent close-up performance at all focal lengths.

G Lens image quality

  • Constant f/4 maximum aperture maintains consistent illumination throughout the entire zoom range.
  • Three ED elements, two AA elements, one ED aspherical element, and one standard aspherical element to effectively eliminate chromatic and spherical aberrations while also rendering excellent edge-to-edge sharpness and color fidelity.
  • Rounded 9-blade aperture provides deep, soft bokeh.

Video and autofocus improvements:

  • Two XD (extreme dynamic) linear motors provide smooth, silent, and precise autofocus performance as well as improved tracking performance for photos and videos.
  • Internal focusing system maintains the overall length of the lens and does not rotate the lens barrel while focusing, enhancing the use of circular polarizers and variable ND filters.
  • Updated optical design effectively eliminates focus breathing, focus shift, and axial shift to prevent unwanted movements and angle of view variations during movie recording.
  • Aperture ring with On/Off click switch and iris lock switch prevents the aperture ring from being accidentally moved between the f/4 and f/22 settings.
  • Optimized for filming, the lens supports active mode image stabilization and micro-step aperture control for smooth and seamless video capture.
  • Two programmable focus hold buttons and a focus mode switch for intuitive focus control.
  • Minimum focusing distance of 9.8" with excellent close-up performance at all focal lengths.

Handling and Design

  • Hybrid construction uses both metal and plastic components to maximize both durability and a low weight design.
  • Fluorine coating on front lens element repels water, dust, fingerprints, and other contaminants while also making it easier to clean the front of the lens.
  • Dust and moisture-resistant design includes a rubber ring to seal the lens mount for working in inclement weather conditions.
 




diglloyd Inc. | FTC Disclosure | PRIVACY POLICY | Trademarks | Terms of Use
Contact | About Lloyd Chambers | Consulting | Photo Tours
RSS Feeds | Twitter
Copyright © 2022 diglloyd Inc, all rights reserved.