Upscaling/Uprezzing Images: Gigapixel AI Stuns Versus Photoshop Upscaling

I was comparing the Sony A7R IV to the Sony A7R III to see how much (if any) additional resolving power there is. Prepare to be surprised by what I show. Anyway, I was not happy with the Photoshop upscaling results. Photoshop upscaling is very good for what it is, but that's the problem: aging technology.

Update: Gigapixel AI has a bug: if you feed it a grayscale image (e.g., Gray Gamma 2.2), it converts the output to sRGB but fails to take black point compensation into account. The result is a file that does not match the original in tonality. The workaround is to convert the image properly to sRGB yourself, upscale that, then convert back to Gray Gamma 2.2, in 16-bit of course (not 8-bit).
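The tonality mismatch is easiest to see with a toy model of what black point compensation does. This is a numeric sketch only; the black-point values are made up, and it is not Gigapixel AI's or Photoshop's actual color management code:

```python
def with_bpc(v, src_black=0.02, dst_black=0.0):
    """Black point compensation: linearly remap the source tone range
    [src_black, 1.0] onto the destination range [dst_black, 1.0]."""
    return dst_black + (v - src_black) * (1.0 - dst_black) / (1.0 - src_black)

def without_bpc(v):
    """No compensation: tones pass through unchanged, so the deepest
    source gray never reaches the destination black point."""
    return v

# The darkest gray the source profile can represent:
print(with_bpc(0.02))     # maps to true black in the destination
print(without_bpc(0.02))  # stays at 0.02, so shadows no longer match
```

Hence the workaround: do the profile conversion in a tool that applies black point compensation, and let Gigapixel AI only scale pixels.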

But never mind the Photoshop upscaling details; you're wasting your time with any of its image resizing methods (I tried several; the best was Preserve Details (enlargement)).

Brian K recommended Topaz Labs Gigapixel AI to me at some point, but I didn't pay proper attention. OMG! It does a fantastic job; my jaw just dropped when I saw the results. Fine lines and detail are much finer and better defined, with clean edges, where the Photoshop upscaling looks coarse. This may be harder to see on a Retina display, however*.

Crops below are from a 60MP Sony A7R IV image upsampled to 121 megapixels (13508 x 8994). To a certain extent, the coarseness of the Photoshop upscaling fools the eye with false contrast when viewed improperly*. Not persuaded? See the 243MP upscaling example further below.

* Zooming in on a Retina display will blur the image and make the effect less clear.
Click to view at actual pixels, preferably at 110 dpi or less—the difference is dramatic when viewed properly.

Actual pixels from 120 megapixel image, upscaled from 60MP Sony A7R IV image
f4 @ 1/100 sec pixel shift, ISO 100; 2019-09-17 18:48:27
Sony A7R IV + Voigtlander MACRO APO-LANTHAR 65mm F2 Aspherical


Below, actual-pixels crops from a 243 megapixel upscaling (19104 x 12720).
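The megapixel figures quoted here follow directly from the stated pixel dimensions (width x height / 1e6); a quick arithmetic check:

```python
# Dimensions as stated in the article.
examples = {
    "2x upscale (from 60MP A7R IV)": (13508, 8994),
    "larger upscale":                (19104, 12720),
}

for name, (w, h) in examples.items():
    mp = w * h / 1e6
    print(f"{name}: {w} x {h} = {mp:.0f} MP")  # 121 MP and 243 MP
```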


Actual pixels from 243 megapixel image, upscaled from 60MP Sony A7R IV image
f4 @ 1/100 sec pixel shift, ISO 100; 2019-09-17 18:48:27
Sony A7R IV + Voigtlander MACRO APO-LANTHAR 65mm F2 Aspherical


Computing power for Gigapixel AI

Consult with Lloyd on a high-performance computing system for photographers or video.

You will want at least 8 CPU cores, and preferably more (though I can't verify that CPU scaling would help). Topaz Labs says 2 million operations per pixel; it takes a huge amount of computing power, meaning 30 minutes or so for a single 60MP image. I didn't time it, so that's a rough guess, but it's a long time on a 2019 iMac 5K with an 8-core 3.6 GHz Intel Core i9 CPU. No other computing task I do comes close, and no other Mac today with 8 cores is faster.
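Taking the Topaz figure at face value, a back-of-envelope estimate lands in the same ballpark. The effective-throughput number below is my assumption, not a measured figure:

```python
pixels = 60e6            # one 60MP Sony A7R IV frame
ops_per_pixel = 2e6      # figure quoted by Topaz Labs
total_ops = pixels * ops_per_pixel   # 1.2e14 operations

# Assumed effective throughput for an 8-core desktop CPU on this
# workload (a rough guess for illustration only).
effective_ops_per_sec = 1e11

minutes = total_ops / effective_ops_per_sec / 60
print(f"{total_ops:.1e} ops -> ~{minutes:.0f} minutes")
```

At that assumed throughput the job takes on the order of 20 minutes, consistent with the rough half-hour observed.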

The GPU is little used by Gigapixel AI by default (at least on macOS). With a slow GPU, it's all about CPU speed and CPU cores, as is true with most things, even in Photoshop.

Given the seriously long time it takes to scale, I have several suggestions for Topaz Labs:

  • Offer a cloud computing tie-in, so that 30-minute upscaling jobs could be done in perhaps a minute or less.
  • Add support for distributing the upscale across locally networked computers set up to do so.
  • Utilize both the CPU cores and the discrete GPU; right now it is either/or, which roughly doubles the time required.
  • Show previews at various noise/blur settings; it's hard to figure out which settings are best, and it could take days running variants of an image.

Update Sept 24: a simple trick allows using both the GPU and all CPU cores: duplicate the Topaz Gigapixel AI application, start one going without GPU usage, then start the other app going with GPU usage—both run simultaneously, one using the CPU cores, and the other using the GPU! This works on macOS, but whether it works on Windows I don’t know.

CPU usage of Topaz Labs Gigapixel AI on 2019 iMac 5K 8-core CPU

Preferences can be set to use the GPU, but responsiveness of the Mac becomes so poor that I deem it useful only for batch processing while not using the computer for other things.

Topaz Labs Gigapixel AI Preferences
CPU usage of Topaz Labs Gigapixel AI on 2019 iMac 5K 8-core CPU

Jason W writes:

Holy shit. This is mindblowing. Best upscale I've seen.

For the past two weeks, you've done nothing but post the most compelling image quality I've seen to date. Well done.

DIGLLOYD: the crops below are actual pixels from a 2X upscaling using Gigapixel AI to 183 megapixels (16576 X 11040).

Actual pixels from 183 megapixel image, upscaled from 45 megapixel Nikon D850 monochrome image
f5.6 @ 10.0 sec, ISO 31; 2019-09-19 19:23:51
NIKON D850 + Zeiss Otus 55mm f/1.4 APO-Distagon RAW: resampled 200% linearly

Actual pixels from 183 megapixel image, upscaled from 45 megapixel Nikon D850 monochrome image
f5.6 @ 10.0 sec, ISO 31; 2019-09-19 19:23:51
NIKON D850 + Zeiss Otus 55mm f/1.4 APO-Distagon RAW: resampled 200% linearly

Actual pixels from 183 megapixel image, upscaled from 45 megapixel Nikon D850 monochrome image
f5.6 @ 10.0 sec, ISO 31; 2019-09-19 19:23:51
NIKON D850 + Zeiss Otus 55mm f/1.4 APO-Distagon RAW: resampled 200% linearly


Lefteris K writes:

Two small crops (details) of a landscape image, in which Topaz:

a) In the image “stucco”, it turned the shadow of the window frames into stucco. It also placed a roof on the left lower side that doesn’t exist in real life. The two little circles on the upper part look like someone used a painting brush inside them.

b) In the image “Window”, it painted over the window in the small structure (with brush strokes), and rendered roof tiles as tiles that don’t exist in real life. It also changed the vegetation in the back. The left is Topaz, the right is PS upscaling.

In some areas of the image it did a good job (where the image information was enough and clear), in most it just used its own wild imagination.

My conclusion is that it’s a fun toy – not a prosumer or professional tool. Unless one shoots a flat target where all the details are already well-lit and sharp. After trying 4 images, I uninstalled it for lack of time and interest. Even if we lived to be 1000, there wouldn’t be enough time to try every attempt of programmers to sell “enhanced Photoshop presets”.

DIGLLOYD: I admit to limited experience with Gigapixel AI. I also think there are likely to be issues in how things are scaled—AI is a long way from human brain perception and ability to see obvious errors in pattern or form.

There is indeed some false detail introduced in example (b), almost as if the clone tool in Photoshop had been used and had chosen the wrong source. But the example shows different scales (500% for Gigapixel AI and 400% for Photoshop), so it's not helpful in determining whether cloning could solve the issue in that area.

From what I see, certain objects may be scaled out of proportion to each other in small areas, as in the window in example (a). Definitely room for improvement with Gigapixel AI, but overall the perceived quality of both examples is vastly higher with Gigapixel AI.

As for “toy”: all tools have limitations. If there are issues with a particular image, use a different tool. Such is the case with a lot of tools: if pixel shift causes checkerboarding (far worse than anything described above), that doesn't make it a toy; rather, the single-shot frame can be used instead. As I've said many times, pixel shift is usually useless in the field because of subject movement or lighting changes, but my recent mosaic images for lens evaluations made excellent use of it.

Per K writes:

Very interesting to see your work with Topaz Gigapixel AI and now "averaging"! It's like "how to get my A7R2 to A7R4 IQ-wise on a shoestring"!:-)

With Gigapixel I made a 2x upscale, then compared it to the original using a diagonal line (in the original), enlarged on screen until the line became jagged. I then up-ressed the Gigapixel version to the same scale, as measured on screen. The jagged diagonal was now not the least bit jagged anymore. That is great. However, Gigapixel does things to the perceived IQ: lower saturation and increased (micro)contrast. It means post-processing is needed after Gigapixel. I have used averaging too, with pleasant results, as I often want my images a bit soft, with much tonality, yet sharp where relevant. Now I will try combining these two steps into a process; I will start with averaging and then Gigapixel, but could the opposite be better?

DIGLLOYD: actually, I'm not able to show (yet) that the A7R IV provides meaningful gains in sharpness over the A7R III, so don't rush to upgrade unless you want the other improvements too. I'm holding off on comparing the two until I can show overwhelming evidence for that claim. Already one credible person has shot a Siemens chart to prove me wrong, except that I already did that myself and can show the A7R IV is sharper on the chart. Yet on real images with the best lenses, I am having a damned hard time showing that the A7R IV actually captures more detail in pixel shift mode.

I don't know the answer to the last question, but my guess is that scaling first would generate slightly different shapes, possibly unpredictable, that would then not average well. I say that because Gigapixel AI clearly does some variable shaping on the subject matter as per the comments from Lefteris K, above.
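Whatever the ordering turns out to be, the statistical benefit of frame averaging itself is easy to demonstrate: noise falls roughly as the square root of the frame count. A small simulation follows; the signal level, noise sigma, seed, and frame count are mine, purely for illustration, not from either reader's workflow:

```python
import random
import statistics

random.seed(42)

def noisy_frame(signal, sigma=10.0, n=10_000):
    # One simulated exposure: true signal plus Gaussian sensor noise.
    return [signal + random.gauss(0, sigma) for _ in range(n)]

def average(frames):
    # Pixel-wise mean across frames; noise drops roughly as sqrt(N).
    return [sum(px) / len(frames) for px in zip(*frames)]

single = noisy_frame(128)
stacked = average([noisy_frame(128) for _ in range(16)])

print(statistics.stdev(single))   # ~10
print(statistics.stdev(stacked))  # ~2.5, i.e. 10 / sqrt(16)
```

This says nothing about what Gigapixel AI's shape-dependent synthesis does to averaged versus un-averaged input; it only shows why averaging before any further processing hands the upscaler cleaner pixels to work with.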


Copyright © 2019 diglloyd Inc, all rights reserved.