iPhone 11 and Pixel 4 cameras' secret sauce: Why computational photography matters

15:27, 10 October 2019. Source: cnet.com

When Apple marketing chief Phil Schiller detailed the iPhone 11's new camera abilities in September, he boasted, "It's computational photography mad science." And when Google debuts its new Pixel 4 phone on Tuesday, you can bet it'll be showing off its own pioneering work in computational photography.

The iPhone 11 Pro has three cameras. (Óscar Gutiérrez/CNET)

The reason is simple: Computational photography can improve your camera shots immeasurably, helping your phone match, and in some ways surpass, even expensive cameras.

But what exactly is computational photography?

In short, it's digital processing to get more out of your camera hardware -- for example, by improving color and lighting while pulling details out of the dark. That's really important given the limitations of the tiny image sensors and lenses in our phones, and the increasingly central role those cameras play in our lives.

Heard of terms like Apple's Night Mode and Google's Night Sight? Those modes that extract bright, detailed shots out of difficult dim conditions are computational photography at work. But it's showing up everywhere. It's even built into Phase One's $57,000 medium-format digital cameras.

First steps: HDR and panoramas

One early computational photography benefit is called HDR, short for high dynamic range. Small sensors aren't very sensitive, which makes them struggle with both bright and dim areas in a scene. But by taking two or more photos at different brightness levels and then merging the shots into a single photo, a digital camera can approximate a much higher dynamic range. In short, you can see more details in both bright highlights and dark shadows.
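
To see the core idea in code, here's a minimal sketch of bracketing and merging using OpenCV's Mertens exposure fusion. It's a stand-in for the general technique rather than any phone maker's actual pipeline, and the file names are placeholders.

```python
import cv2
import numpy as np

# Three frames of the same scene at different brightness levels
# (placeholder file names for a bracketed burst).
frames = [cv2.imread(p) for p in ("dark.jpg", "normal.jpg", "bright.jpg")]

# Mertens exposure fusion weights each pixel by how well exposed it is in
# each frame, then blends them so highlights and shadows both keep detail.
merged = cv2.createMergeMertens().process(frames)

# The fused result comes back as floats roughly in [0, 1]; rescale to save.
cv2.imwrite("merged.jpg", np.clip(merged * 255, 0, 255).astype("uint8"))
```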

There are drawbacks. Sometimes HDR shots look artificial. You can get artifacts when subjects move from one frame to the next. But the fast electronics and better algorithms in our phones have steadily improved the approach since Apple introduced HDR with the iPhone 4 in 2010. HDR is now the default mode for most phone cameras.

Google took HDR to the next level with its HDR Plus approach. Instead of combining photos taken at dark, ordinary and bright exposures, it captured a larger number of dark, underexposed frames. Artfully stacking these shots together let it build up to the correct exposure, but the approach did a better job with bright areas, so blue skies looked blue instead of washed out.
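
A toy version of that stacking idea looks something like the sketch below. It assumes the dark frames are already aligned with one another, which Google's real HDR Plus pipeline handles with considerable extra work.

```python
import numpy as np

def stack_dark_frames(frames, gain=4.0):
    """frames: list of HxWx3 uint8 arrays, all deliberately underexposed
    and assumed to be already aligned with one another."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    mean = stack.mean(axis=0)   # averaging many frames cuts random noise
    brightened = mean * gain    # then push the exposure back up
    return np.clip(brightened, 0, 255).astype(np.uint8)
```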

Apple embraced the same idea, Smart HDR, in the iPhone XS generation in 2018.

Panorama stitching, too, is a form of computational photography. Joining a collection of side-by-side shots lets your phone build one immersive, superwide image. When you consider all the subtleties of matching exposure, colors and scenery, it can be a pretty sophisticated process. Smartphones these days let you build panoramas just by sweeping your phone from one side of the scene to the other.
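
OpenCV's high-level stitcher goes through roughly the same steps a phone performs: find matching features in overlapping frames, warp them onto a common surface and blend the seams. It's a sketch of the technique, not any phone's code, and the file names are placeholders.

```python
import cv2

# Overlapping shots swept from one side of a scene to the other.
images = [cv2.imread(p) for p in ("left.jpg", "middle.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)  # match, warp and blend

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```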

Seeing in 3D

Another major computational photography technique is seeing in 3D. Apple uses dual cameras to see the world in stereo, just like you can because your eyes are a few inches apart. Google, with only one main camera on its Pixel 3, has used image sensor tricks and AI algorithms to figure out how far away elements of a scene are.
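
For a rough sense of how two offset views turn into depth, here's a classic block-matching sketch in OpenCV. It's not what Apple or Google actually ship, and the input images are placeholders.

```python
import cv2

# Grayscale views from two horizontally offset cameras (placeholder files).
left = cv2.imread("left_cam.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_cam.png", cv2.IMREAD_GRAYSCALE)

# Block matching measures how far each patch shifts between the two views;
# a larger shift (disparity) means the object is closer to the camera.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)
```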

Google Pixel phones offer a portrait mode to blur backgrounds. The phone judges depth with machine learning and a specially adapted image sensor. (Stephen Shankland/CNET)

The biggest benefit is portrait mode, the effect that shows a subject in sharp focus but blurs the background into that creamy smoothness -- "nice bokeh," in photography jargon.

It's what high-end SLRs with big, expensive lenses are famous for. What SLRs do with physics, phones do with math. First they turn their 3D data into what's called a depth map, a version of the scene that knows how far away each pixel in the photo is from the camera. Pixels that are part of the subject up close stay sharp, but pixels behind are blurred with their neighbors.
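
As a toy illustration of that depth-map step, the sketch below applies a hard near/far cutoff; real portrait modes grade the blur with distance to mimic a lens. The depth map here is simply assumed to exist, coming from stereo or machine learning as described above.

```python
import cv2
import numpy as np

def portrait_blur(image, depth, near_threshold=0.4):
    """image: HxWx3 uint8; depth: HxW float32 in [0, 1], 0 = near, 1 = far."""
    blurred = cv2.GaussianBlur(image, (31, 31), 0)   # stand-in for bokeh
    far = (depth > near_threshold).astype(np.float32)[..., None]
    # Keep near pixels from the sharp image, take far pixels from the blur.
    return (image * (1.0 - far) + blurred * far).astype(np.uint8)
```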

Portrait mode technology can be used for other purposes. It's also how Apple enables its studio lighting effect, which revamps photos so it looks like a person is standing in front of a black or white screen.

Depth information also can help break down a scene into segments so your phone can do things like better match out-of-kilter colors in shady and bright areas. Google doesn't do that, at least not yet, but it's raised the idea as interesting.

Night vision

One happy byproduct of the HDR Plus approach was Night Sight, introduced on the Google Pixel 3 in 2018. It used the same technology -- picking a steady master image and layering on several other frames to build one bright exposure.
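
The "steady master image" part of that recipe can be approximated by scoring each frame of a burst for sharpness, as in the sketch below; the winner would then be aligned with and averaged against the rest, as in the earlier stacking sketch. This is an illustration of the idea, not Google's actual selection logic.

```python
import cv2

def pick_master(frames):
    """Pick the steadiest frame from a burst: blurrier (shakier) frames
    have less high-frequency detail, so they score lower."""
    def sharpness(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(frames, key=sharpness)
```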

Apple followed suit in 2019 with Night Mode on the iPhone 11 and 11 Pro phones.

With a computational photography feature called Night Sight, Google's Pixel 3 smartphone can take a photo that challenges a shot from a $4,000 Canon 5D Mark IV SLR. The Canon's larger sensor outperforms the phone's, but the phone combines several shots to reduce noise and improve color. (Stephen Shankland/CNET)

These modes address a major shortcoming of phone photography: blurry or dark photos taken at bars, restaurants, parties and even ordinary indoor situations where light is scarce. In real-world photography, you can't count on bright sunlight.

To overcome the limitations of smaller camera sensors and lenses, smartphone manufacturers are leaning hard into computational innovation. One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more.

Night modes have also opened up new avenues for creative expression. They're great for urban streetscapes with neon lights, especially if you've got helpful rain to make roads reflect all the color. Night Mode can even pick out stars.

Super resolution

One area where Google lagged Apple's top-end phones was zooming in to distant subjects. Apple had an entire extra camera with a longer focal length. But Google used a couple of clever computational photography tricks that closed the gap.

The first is called super resolution. It relies on a fundamental improvement to a core digital camera process called demosaicing. When your camera takes a photo, it captures only red, green or blue data for each pixel. Demosaicing fills in the missing color data so each pixel has values for all three color components.
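
In code, the basic version of demosaicing is a one-liner; the Bayer layout, file name and sensor size below are assumptions, since they vary from camera to camera.

```python
import cv2
import numpy as np

# A raw mosaic: every pixel holds just one colour sample, arranged in a
# Bayer pattern (placeholder file name, size and BG layout).
bayer = np.fromfile("sensor_dump.raw", dtype=np.uint8).reshape(3000, 4000)

# Demosaicing interpolates the two missing colour values at every pixel.
rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)
cv2.imwrite("demosaiced.png", rgb)
```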

Google's Pixel 3 counted on the fact that your hands wobble a bit when taking photos. That lets the camera figure out the true red, green and blue data for each element of the scene without demosaicing. And that better source data means Google can digitally zoom in to photos better than with the usual methods. Google calls it Super Res Zoom. (In general, optical zoom, like with a zoom lens or second camera, produces superior results to digital zoom.)
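
A naive sketch of the multi-frame idea: if you know the tiny offset of each frame in a burst, you can scatter their samples onto a finer grid and average. This is the concept only, not Super Res Zoom itself, and the offsets are assumed to be known rather than estimated from the hand wobble.

```python
import numpy as np

def naive_super_res(frames, shifts, scale=2):
    """frames: list of HxW float arrays from a burst; shifts: the (dy, dx)
    sub-pixel offset of each frame, assumed known here."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Each frame's samples land on slightly different high-res cells,
        # so the burst gradually fills in genuine extra detail.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(hits, (hy, hx), 1.0)
    # Cells no frame happened to hit stay empty; real pipelines interpolate.
    return acc / np.maximum(hits, 1e-6)
```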

On top of the super resolution technique, Google added a technology called RAISR to squeeze out even more image quality. Here, Google computers examined countless photos ahead of time to train an AI model on what details are likely to match coarser features. In other words, it's using patterns spotted in other photos so software can zoom in farther than a camera can physically.

iPhone's Deep Fusion

New with the iPhone 11 this year is Apple's Deep Fusion, a more sophisticated variation of the same multiphoto approach in low to medium light. It takes four pairs of images -- four long exposures and four short -- and then one longer-exposure shot. It finds the best combinations, analyzes the shots to figure out what kind of subject matter it should optimize for, then marries the different frames together.
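
A heavily simplified, hypothetical version of that per-pixel marriage might weight a short, sharp frame by its local contrast and fall back to the cleaner long exposure elsewhere. Apple hasn't published Deep Fusion's internals, so treat the sketch below purely as an illustration of the idea.

```python
import cv2
import numpy as np

def toy_fuse(short_frame, long_frame):
    """Both frames: HxWx3 uint8, assumed already aligned."""
    gray = cv2.cvtColor(short_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Local contrast of the short (sharp) frame acts as a detail weight.
    detail = np.abs(gray - cv2.GaussianBlur(gray, (0, 0), 3))
    w = np.clip(detail / (detail.max() + 1e-6), 0, 1)[..., None]
    # High-detail pixels come from the sharp frame, the rest from the
    # cleaner long exposure.
    fused = w * short_frame + (1.0 - w) * long_frame
    return fused.astype(np.uint8)
```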

The Deep Fusion feature is what prompted Schiller to boast of the iPhone 11's "computational photography mad science." But it won't arrive until iOS 13.2, which is in beta testing now.

Where does computational photography fall short?

Computational photography is useful, but the limits of hardware and the laws of physics still matter in photography. Stitching together shots into panoramas and digitally zooming are all well and good, but smartphones with more capable camera hardware have a better foundation for computational photography.

That's one reason Apple added new ultrawide cameras to the iPhone 11 and 11 Pro this year and the Pixel 4 is rumored to be getting a new telephoto lens. And it's why the Huawei P30 Pro and Oppo Reno 10X Zoom have 5X "periscope" telephoto lenses.

You can do only so much with software.

Laying the groundwork

Computer processing arrived with the very first digital cameras. It's so basic and essential that we don't even call it computational photography -- but it's still important, and happily, still improving.

First, there's demosaicing to fill in missing color data, a process that's easy with uniform regions like blue skies but hard with fine detail like hair. There's white balance, in which the camera tries to compensate for things like blue-toned shadows or orange-toned incandescent lightbulbs. Sharpening makes edges crisper, tone curves make a nice balance of dark and light shades, saturation makes colors pop, and noise reduction gets rid of the color speckles that mar images shot in dim conditions.
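
A few of those "invisible" steps fit in a short sketch: gray-world white balance, a simple tone curve and an unsharp mask. Real camera pipelines are far more elaborate, so this is only a rough approximation of the stages described above.

```python
import cv2
import numpy as np

def basic_pipeline(img):
    """img: HxWx3 uint8, fresh from demosaicing."""
    f = img.astype(np.float32)

    # Gray-world white balance: scale channels so their averages match.
    means = f.reshape(-1, 3).mean(axis=0)
    f *= means.mean() / means

    # Simple gamma-style tone curve to balance dark and light shades.
    f = 255.0 * np.power(np.clip(f / 255.0, 0.0, 1.0), 1 / 1.8)

    # Unsharp mask: add back high-frequency detail to crispen edges.
    blur = cv2.GaussianBlur(f, (0, 0), 2)
    f = f + 0.5 * (f - blur)

    return np.clip(f, 0, 255).astype(np.uint8)
```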

Long before the cutting-edge stuff happens, computers do a lot more work than film ever did.

But can you still call it a photograph?

In the olden days, you'd take a photo by exposing light-sensitive film to a scene. Any fiddling with photos was a laborious effort in the darkroom. Digital photos are far more mutable, and computational photography takes manipulation to a new level far beyond that.

Google brightens the exposure on human subjects and gives them smoother skin. HDR Plus and Deep Fusion blend multiple shots of the same scene. Stitched panoramas made of multiple photos don't reflect a single moment in time.

So can you really call the results of computational photography a photo? Photojournalists and forensic investigators apply more rigorous standards, but most people will probably say yes, simply because it's mostly what your brain remembers when you tapped that shutter button.

And it's smart to remember that the more computational photography is used, the more of a departure your shot will be from one fleeting instant of photons traveling into a camera lens. But computational photography is getting more important, so expect even more processing in years to come.
