Inside the Google Pixel 6 cameras’ bigger AI brains and upgraded hardware

The Google Pixel 6 Pro’s camera bar features, from left to right, a 25mm wide-angle main camera, 16mm ultra-wide, 104mm telephoto lens, and flash.

Stephen Shankland / CNET

At this week’s Pixel 6 launch event, Google showed off a handful of AI-based photography tricks built into its new phones: erasing photobombers from backgrounds, removing blur from smeared faces and rendering darker skin tones more accurately. Those features, however, are just the tip of an artificial intelligence iceberg designed to produce the best possible images for phone owners.

The $599 Pixel 6 and $899 Pixel 6 Pro use machine learning, or ML, a type of artificial intelligence, in dozens of ways when you take a photo. These uses might not be as flashy as face unblurring, but they show up in every photo and video, workhorses that touch everything from focus to exposure.

The Pixel 6 cameras are AI engines as much as imaging hardware. AI is so ubiquitous that Pixel product manager Isaac Reynolds, in an exclusive interview about the inner workings of the Pixel cameras, had to pause to tally all the ways it’s used.

“It’s a tough list because there are about 50 things,” Reynolds said. “It’s actually easier to describe what is not based on learning.”

All that AI is possible thanks to Google’s new Tensor processor. Google designed the chip by combining a variety of Arm processor cores with its own AI acceleration hardware. Many other chip designs accelerate AI, but Google’s approach paired its AI experts with its chip engineers to build exactly what it needed.

Camera bumps become a mark of pride

With photos and videos so central to our digital lives, cameras have become an essential component of smartphones. A few years ago, phone designers strove for the sleekest designs possible, but today a big camera signals to consumers that a phone has high-end hardware. That’s why flagship phones proudly display their big camera bumps or, in the case of the Pixel 6, a camera bar running across the back of the phone.

AI is invisible from the outside but no less important than the camera hardware. The technology moves beyond the limits of traditional programming, which for decades has been an exercise in if-this-then-that determinism. For example: if the user has dark mode turned on, give the website white text on a black background.

With AI, data scientists instead train a machine learning model on a huge collection of real-world data, and the system learns the rules and patterns itself. To convert speech to text, for example, an AI model learns from countless hours of audio accompanied by the corresponding text. That lets AI cope with real-world complexity that’s very difficult to capture with traditional programming.
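To make that contrast concrete, here’s a deliberately tiny sketch, not anything from Google’s code: the first function is classic if-this-then-that logic like the dark-mode example above, while the second “learns” its decision boundary from a handful of labeled examples instead of having the rule written by hand.

```python
import numpy as np

# Traditional programming: a person writes the rule explicitly.
def text_color(dark_mode: bool) -> str:
    return "white" if dark_mode else "black"

# Machine learning, in miniature: no rule is written. A toy 1D "model"
# fits a threshold from labeled examples of background brightness.
brightness = np.array([0.10, 0.20, 0.30, 0.70, 0.80, 0.90])  # inputs
wants_white_text = np.array([1, 1, 1, 0, 0, 0])              # labels

# "Training" here is just picking the midpoint between the two classes.
threshold = (brightness[wants_white_text == 1].max()
             + brightness[wants_white_text == 0].min()) / 2

def learned_text_color(background_brightness: float) -> str:
    return "white" if background_brightness < threshold else "black"

print(learned_text_color(0.25))  # "white", inferred from data, not a rule
```

Real models have millions of learned parameters rather than one threshold, but the division of labor is the same: the data, not the programmer, determines the behavior.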


The AI technology is the newest direction in Google’s years of work on computational photography, which marries digital camera data with computer processing for improvements such as noise reduction, portrait mode’s blurred backgrounds and better dynamic range in high-contrast scenes.

Google’s AI camera

Some of the ways the Pixel 6 uses AI:

  • With a model called FaceSSD, AI recognizes faces in a scene to set focus and exposure, among other uses. The phones also geometrically correct face shapes so people near the edges of wide-angle shots don’t end up with the oblong heads such lenses commonly produce.
  • For video, the Pixel 6 uses AI to track subjects.
  • AI stabilizes video to counter the camera movement of shaky hands.
  • When taking a photo, AI segments a scene into different regions so each can be processed differently. For example, it recognizes the sky so the camera can expose it properly and scrub away noise speckles (see the sketch after this list).
  • The Pixel 6 recognizes who you’ve photographed before to improve future shots of those people.
  • Google’s HDRnet technology applies the company’s AI-based still-photo processing to video frames, improving attributes such as exposure and color.
  • For Night Sight shots, a low-light technique developed by Google that can even photograph the stars, hardware-accelerated AI judges the best balance between long exposure times and sharpness.
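As a rough illustration of the segmentation step above, here’s a minimal sketch, not Google’s pipeline: it assumes some ML segmenter has already produced a boolean sky mask, then denoises and re-exposes only the masked region. The 3x3 box blur and the 0.85 gain are arbitrary placeholders for much more sophisticated processing.

```python
import numpy as np

def process_sky_region(image, sky_mask):
    """Denoise and re-expose only the pixels a segmenter flagged as sky.

    image:    float32 array, shape (H, W, 3), linear values in [0, 1]
    sky_mask: bool array, shape (H, W), True where the model found sky
    """
    out = image.copy()
    h, w = image.shape[:2]

    # Crude stand-in for noise reduction: a 3x3 box blur.
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(image)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= 9.0

    # Apply the blur only inside the mask, then pull sky exposure down
    # slightly so bright highlights don't clip. Both choices are arbitrary.
    out[sky_mask] = np.clip(blurred[sky_mask] * 0.85, 0.0, 1.0)
    return out

# Hypothetical usage: the mask would come from a segmentation model.
img = np.random.rand(480, 640, 3).astype(np.float32)
mask = np.zeros((480, 640), dtype=bool)
mask[:200, :] = True  # pretend the top of the frame is sky
result = process_sky_region(img, mask)
```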

Every year, Google expands its uses of AI. Examples from earlier years include portrait mode, which blurs backgrounds, and Super Res Zoom, which magnifies distant subjects.

On top of that are the new AI-powered photo and video features in the Pixel 6: Real Tone to render the skin tones of people of color more accurately; Face Unblur to sharpen faces otherwise smudged by movement; Motion mode to add blur to moving elements of a scene, such as trains or waterfalls; and Magic Eraser to remove distracting elements from a scene.

Better camera hardware too

To make the most of the debut of its first Tensor phone processor, Google also invested in better camera hardware. That should produce better raw image quality, the foundation for all the processing that follows.

The Pixel 6 and Pixel 6 Pro have the same main wide-angle and ultra-wide cameras. The Pro adds a 4x telephoto camera. In traditional camera terms, the three have focal length equivalents of 16mm, 25mm and 104mm, said Alex Schiffhauer, the Pixel product manager who oversees the hardware. In the camera interface, they show up as 0.7x, 1x and 4x.

The main camera, the phone’s workhorse, has a sensor 2.5 times the size of last year’s Pixel 5 sensor for better light gathering. It’s a 50-megapixel sensor, but it produces 12-megapixel images because Google combines each 2×2 group of pixels into a single effective pixel to reduce noise and improve color and dynamic range.
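The arithmetic of that binning step is simple, as this numpy sketch shows. Real quad-Bayer binning averages same-colored pixels within the color filter mosaic; this simplification ignores the filter layout and just averages 2×2 blocks of a single color plane, and the 8000×6250 readout size is a hypothetical round number, not the actual sensor’s dimensions.

```python
import numpy as np

def bin_2x2(plane):
    """Average each 2x2 block of a single color plane into one pixel,
    trading resolution for lower noise and better dynamic range."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Hypothetical 50-megapixel readout: 8000 x 6250 bins down to 12.5 MP.
raw = np.random.rand(6250, 8000).astype(np.float32)
binned = bin_2x2(raw)
print(binned.shape)  # (3125, 4000)
```

Averaging four samples per output pixel cuts random noise roughly in half, which is why the 50-megapixel count buys image quality rather than resolution.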

The ultra-wide camera has a better lens than last year’s model, Schiffhauer said. It doesn’t have as wide a field of view as the iPhone’s 13mm ultra-wide camera, because Google wanted to avoid the peculiar perspective distortion that becomes more noticeable as a lens’s focal length gets shorter. Some photographers like that look, but for Google, “We want things to look natural when you photograph them,” Schiffhauer said.

The most unusual camera is the Pixel 6 Pro’s telephoto, a “periscope” design for squeezing a relatively long focal length into a slim phone. In most phone cameras, light travels straight to an image sensor lying flat within the phone’s body, but the 4x camera first uses a prism to bend light 90 degrees so the relatively long optical path that telephoto lenses require can run sideways through the phone.

The 4x camera is bulky, but it’s actually the exceptionally large main camera sensor that accounts for the thickness of the Pixel 6’s camera bar, Schiffhauer said.

Hardware, software and AI

Traditional Google software is also getting an upgrade in the new Pixel phones.

For example, Super Res Zoom, which uses both AI and more traditional computational techniques to zoom in digitally, gets an upgrade this year. In previous years, the technology squeezed extra detail out of a scene by comparing differences among multiple frames, but now it also varies exposure across those frames for better detail, Schiffhauer said.
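The general multi-frame idea can be sketched in a few lines. This toy version is not Google’s algorithm: it assumes the sub-pixel shifts between burst frames are already known (in practice they come from hand shake and must be estimated) and simply drops each frame onto a finer grid with nearest-neighbor placement.

```python
import numpy as np

def merge_subpixel_frames(frames, shifts, scale=2):
    """Toy multi-frame merge: place low-res frames onto a finer grid
    using known sub-pixel shifts, then average the contributions.

    frames: list of 2D arrays, all shape (H, W)
    shifts: list of (dy, dx) offsets in pixels, e.g. from hand shake
    scale:  upsampling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)

    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest cell on the fine grid for each shifted source pixel.
        ys = np.round((np.arange(h) + dy) * scale).astype(int)
        xs = np.round((np.arange(w) + dx) * scale).astype(int)
        vy = (ys >= 0) & (ys < h * scale)
        vx = (xs >= 0) & (xs < w * scale)
        acc[np.ix_(ys[vy], xs[vx])] += frame[vy][:, vx]
        weight[np.ix_(ys[vy], xs[vx])] += 1.0

    # Cells no frame landed on stay zero; real pipelines interpolate them.
    return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)

# Four frames offset by half a pixel fill a 2x finer grid completely.
base = np.random.rand(120, 160)
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [base for _ in shifts]  # stand-in; real burst frames differ
merged = merge_subpixel_frames(frames, shifts)
print(merged.shape)  # (240, 320)
```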

The fundamental process of taking a typical photo on a Pixel hasn’t changed. With a process called HDR Plus, the camera captures up to 12 frames, most of them significantly underexposed, then blends them together. That lets a relatively small image sensor overcome its shortcomings in dynamic range and noise, so you can photograph a person in front of a bright sky.

It’s a good technique, one Apple has adopted for its iPhones. But Google also mixes in up to three longer exposure frames for better shadow detail.
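Here’s a minimal sketch of that merge logic, assuming linear-light frames that are already aligned. The straight averaging, the fixed gain and the shadow-weighting curve are illustrative stand-ins, not Google’s actual HDR Plus math, which aligns and weights frames far more carefully.

```python
import numpy as np

def merge_hdr_burst(short_frames, long_frames, short_gain=4.0):
    """Toy HDR merge: average many underexposed frames to cut noise,
    brighten the result, and borrow shadow detail from long exposures.

    short_frames: linear-light arrays underexposed by roughly short_gain
    long_frames:  longer exposures, normalized to the same brightness scale
    """
    # Averaging N frames cuts random noise by roughly sqrt(N).
    base = np.clip(np.mean(short_frames, axis=0) * short_gain, 0.0, 1.0)

    if len(long_frames) > 0:
        longs = np.mean(long_frames, axis=0)
        # Trust the long exposures most in the shadows, where short
        # frames are noisiest; this falloff curve is arbitrary.
        shadow_weight = (1.0 - base) ** 2
        base = shadow_weight * longs + (1.0 - shadow_weight) * base

    return np.clip(base, 0.0, 1.0)

# Hypothetical burst: 12 short frames plus 3 long ones, as in the text.
shorts = [np.random.rand(480, 640) * 0.25 for _ in range(12)]
longs = [np.random.rand(480, 640) for _ in range(3)]
result = merge_hdr_burst(shorts, longs)
```

The underexposed frames protect highlights like that bright sky, while the long exposures supply cleaner shadows; blending the two is what gives the final image its extended dynamic range.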

With HDR Plus, AI, and enhanced camera modules, the phones embody Google’s three-pronged strategy, Schiffhauer said, “to innovate at the intersection of hardware, software and machine learning.”

