Samsung caught faking zoom photos of the Moon

A Samsung smartphone identified a blurry photo of the Moon and added detail to create a sharp image. Image: u/ibreakphotos
A viral Reddit post has revealed just how much processing the company’s cameras apply to photos of the Moon, further blurring the line between real and fake imagery in the age of AI.

For years, Samsung “Space Zoom”-capable phones have been known for their ability to take incredibly detailed photos of the Moon. But a recent Reddit post showed in stark terms just how much computational processing the company is doing, and, given the evidence supplied, it feels like we should go ahead and say it: Samsung’s pictures of the Moon are fake.

But what exactly does “fake” mean in this scenario? It’s a tricky question to answer, and one that’s going to become increasingly important and complex as computational techniques are integrated further into the photographic process. We can say for certain that our understanding of what makes a photo fake will soon change, just as it has in the past to accommodate digital cameras, Photoshop, Instagram filters, and more. But for now, let’s stick with the case of Samsung and the Moon.

The test of Samsung’s phones conducted by Reddit user u/ibreakphotos was ingenious in its simplicity. They created an intentionally blurry photo of the Moon, displayed it on a computer screen, and then photographed the screen using a Samsung Galaxy S23 Ultra. The image on the screen showed no detail at all, but the resulting picture showed a crisp and clear “photograph” of the Moon. The S23 Ultra added details that simply weren’t present before. There was no upscaling of blurry pixels and no retrieval of seemingly lost data. There was just a new Moon: a fake one.
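Why does this test settle the question? Because a sufficiently blurred image genuinely destroys fine detail rather than merely hiding it, so nothing a phone could honestly recover remains. Here’s a minimal sketch of that premise, using a synthetic “Moon” (a bright disc whose random speckle stands in for crater detail) rather than a real photo and screen, and assuming Pillow and NumPy are available:

```python
import numpy as np
from PIL import Image, ImageFilter

# Synthetic stand-in for a sharp Moon photo: a bright disc with random
# speckle acting as "crater detail". (Illustrative only; u/ibreakphotos
# used a real Moon photo displayed on a real screen.)
rng = np.random.default_rng(0)
size = 512
y, x = np.ogrid[:size, :size]
r2 = (x - size / 2) ** 2 + (y - size / 2) ** 2
disc = r2 < (size * 0.4) ** 2            # the Moon's face
inner = r2 < (size * 0.3) ** 2           # measurement region, away from the rim
moon = np.where(disc, 150 + rng.integers(0, 80, (size, size)), 0)
sharp = Image.fromarray(moon.clip(0, 255).astype(np.uint8))

# Shrink and blur, as the Reddit test did, so the detail is truly destroyed.
blurred = sharp.resize((170, 170)).filter(ImageFilter.GaussianBlur(radius=4))

# No honest upscaler can bring the speckle back: its variance collapses,
# because that information no longer exists in the pixels.
restored = np.asarray(blurred.resize((size, size)), dtype=float)
print(round(np.asarray(sharp, dtype=float)[inner].std(), 1))  # detail present
print(round(restored[inner].std(), 1))                        # detail nearly gone
```

The pixel variance inside the disc, a rough proxy for surface detail, drops by an order of magnitude after the blur and never comes back through resizing alone. Any crisp craters in the phone’s output therefore had to come from somewhere other than the captured light.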

This is not a new controversy. People have been asking questions about Samsung’s Moon photography ever since the company unveiled a 100x “Space Zoom” feature in its S20 Ultra in 2020. Some have accused the company of simply copying and pasting prestored textures onto images of the Moon to produce its photographs, but Samsung says the process is more involved than that. 

In 2021, Input Mag published a lengthy feature on the “fake detailed moon photos” taken by the Galaxy S21 Ultra. Samsung told the publication that “no image overlaying or texture effects are applied when taking a photo” but that the company uses AI to detect the Moon’s presence and “then offers a detail enhancing function by reducing blurs and noises.” 

The company later offered a bit more information in a blog post (translated from Korean by Google). But the core of the explanation, the description of the vital step that takes us from a photograph of a blurry Moon to a sharp Moon, is couched in obfuscatory terms. Samsung simply says it uses a “detail improvement engine function” to “effectively remove noise and maximize the details of the moon to complete a bright and clear picture of the moon” (emphasis added). What does that mean? We simply don’t know.

The generous interpretation is that Samsung’s process captures blurry details in the original photograph and then upscales them using AI. This is an established technique that has its problems (see: Xerox copiers silently substituting digits when compressing scanned documents), and I don’t think it would make the resulting photograph fake. But as the Reddit tests show, Samsung’s process is more intrusive than this: it doesn’t just improve the sharpness of blurry details, it creates them. It’s at this point that I think most people would agree the resulting image is, for better or worse, fake.

The difficulty here is that the concept of “fakeness” is a spectrum rather than a binary. (Like all categories we use to divide the world.) For photography, the standard of “realness” is usually defined by the information received by an optical sensor: the light captured when you take the photo. You can then edit this information pretty extensively, much as professional photographers tweak RAW images, adjusting color, exposure, contrast, and so on; the end result is still not fake. In this particular case, though, the Moon images captured by Samsung’s phone seem less the result of optical data and more the product of a computational process. In other words: it’s a generated image more than a photo.
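The distinction can be made concrete: a traditional edit computes every output pixel purely from captured pixel values, adding no information the sensor never saw. A toy sketch, with made-up numbers standing in for linear sensor readings:

```python
import numpy as np

# Traditional "editing": each output value is a pure function of captured
# sensor values. Exposure and contrast tweaks reshape the data but add
# nothing new. (Values are illustrative, not from any real sensor.)
captured = np.array([0.10, 0.20, 0.40, 0.80])             # linear sensor readings
exposed = np.clip(captured * 2.0, 0.0, 1.0)               # +1 stop of exposure
graded = np.clip((exposed - 0.5) * 1.2 + 0.5, 0.0, 1.0)   # mild contrast curve
print(graded)  # every number traces back to light the sensor actually recorded
```

However aggressive the curve, each result remains a deterministic function of the captured values. Samsung’s Moon pipeline breaks that chain: the crater detail in the output cannot be computed from the blurry input pixels at all.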

Some may not agree with this definition, and that’s fine. Drawing this distinction is also going to become much trickier in the future. Ever since smartphone manufacturers started using computational techniques to overcome the limits of smartphones’ small camera sensors, the mix of “optically captured” and “software-generated” data in their output has been shifting. We’re certainly heading to a future where techniques like Samsung’s “detail improvement engine” will become more common and applied more widely. You could train “detail improvement engines” on all sorts of data, like the faces of your family and friends to make sure you never take a bad photo of them, or on famous landmarks to improve your holiday snaps. In time, we’ll probably forget we ever called such images fake.

But for now, Samsung’s Moon imagery sticks out, and I think this is because it’s a particularly convenient application for this sort of computational photography. For a start, the Moon is visually amenable to the technique: it looks more or less the same in every picture taken from Earth (ignoring librations and rotational differences), and while it has detail, it doesn’t have depth. That makes AI enhancements relatively straightforward to add. Second, Moon photography is marketing catnip, because a) everyone knows phones take bad pictures of the Moon and b) everyone can test the feature for themselves. That has made it an easy way for Samsung to demonstrate the photographic prowess of its phones.
