With the Pixel 6 series, Google takes selfie portrait mode to the next level

One of the strengths of Google’s smartphones, ever since the first-generation Pixel, has been photographic quality. Despite hardware that often trails the competition, the American giant has managed to extract the best from every available pixel, drawing on its experience in computational photography, a field in which it is firmly at the forefront.


With the Pixel 6 series, which should soon arrive in Italy as well, Google has taken a further step forward thanks to a new portrait mode for selfies that can recognize even individual strands of hair, delivering far better results than before.

A new series of models

Emulating a large lens and sensor isn’t always easy, and a high-quality software model is needed to deliver appreciable improvements. To that end, Google went back to work on a new series of models that improve the recognition of the finest details, aided by the performance of its Tensor chip.

Training a model correctly requires a dataset that is up to the task, with shots from every angle and under different lighting sources, in order to generate a more accurate mask than in the past. Google therefore dusted off the light-stage sphere used for the Pixel 5, made up of hundreds of LEDs, depth sensors and cameras, to capture a large number of samples with a perfect ground-truth mask, precisely separating the subject from the background.

Put that way it sounds (almost) simple, but in reality further steps were needed before reaching the magic of computational photography. From the captured shots, several sets of photographs were generated, relighting them to match real-world scenes using depth data, ray tracing and a simulation of optical distortion, in order to obtain realistic results.

Thousands of photos were then taken in real-world settings, with an accurate model extracting the corresponding masks and a visual inspection keeping only the highest-quality samples. The two datasets were then fed to the machine-learning pipeline to train the model to recognize a wide range of scenes, poses and people.
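The assembly of the two datasets described above could be sketched like this (the `Sample` structure, field names and quality threshold are illustrative assumptions, not Google's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Sample:
    photo: str        # identifier of the shot
    mask: str         # identifier of its ground-truth mask
    quality: float    # score from visual inspection (0..1)

def build_training_set(synthetic, real, min_quality=0.9):
    """Keep only the highest-quality real samples, then merge both
    sources so the model sees varied scenes, poses and lighting."""
    inspected = [s for s in real if s.quality >= min_quality]
    return synthetic + inspected
```

Filtering only the real-world captures reflects the visual inspection step: the light-stage samples already come with near-perfect masks, while masks extracted from real settings need vetting.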

Low and high-resolution masks

At this point you might think the job is done, but further steps are needed to produce an excellent selfie. While most smartphones simply capture the image and apply a background mask in order to blur it, the Google Pixel 6 and Google Pixel 6 Pro do much more.

Both the photo and the initial, decidedly coarse mask are passed to the previously trained model, which generates a more defined but low-resolution mask. The model then performs an upsampling step, guided by the original photo and the first mask, to raise the resolution. The final result is a much higher-quality, high-resolution mask that is applied to the image to keep the subject sharp while blurring the background.
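The two-stage pipeline can be sketched roughly as follows, with plain NumPy stand-ins replacing the learned components (the refinement and upsampling functions here are illustrative placeholders, not Google's actual networks):

```python
import numpy as np

def box_blur(img, k=7):
    """Simple box blur used as a stand-in for the background defocus."""
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    acc = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def refine_mask_lowres(image, coarse_mask, scale=4):
    """Placeholder for the learned model: it sees the photo and the coarse
    mask and returns a cleaner mask at reduced resolution (here, just a
    downsample followed by a small mean filter)."""
    small = coarse_mask[::scale, ::scale].astype(np.float64)
    k, pad = 3, 1
    p = np.pad(small, pad, mode="edge")
    return sum(p[dy:dy + small.shape[0], dx:dx + small.shape[1]]
               for dy in range(k) for dx in range(k)) / (k * k)

def upsample_mask(low_mask, shape):
    """Nearest-neighbour upsampling back to full resolution; the real
    pipeline uses learned, image-guided upsampling instead."""
    h, w = shape
    ys = np.arange(h) * low_mask.shape[0] // h
    xs = np.arange(w) * low_mask.shape[1] // w
    return low_mask[np.ix_(ys, xs)]

def portrait_blur(image, mask):
    """Composite: keep the subject where the mask is 1, blur elsewhere."""
    alpha = mask[..., None]
    return alpha * image + (1.0 - alpha) * box_blur(image)
```

Running refinement at low resolution keeps the model cheap to evaluate; in the real pipeline, the image-guided upsampling step is what recovers fine detail such as individual strands of hair.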

Amazing results

Here, then, is how more accurate selfies are born: a convincing bokeh effect that, while not yet matching a traditional camera, pays much greater attention to even the smallest details. The mask's greater precision makes it possible to blur around even the fine details in curls of hair, as the sample below shows, which is certainly not easy to handle.

The new model developed by Google also handles different skin tones and hairstyles well, ensuring more accurate and realistic results for everyone, regardless of skin color or hair type. There is still room for improvement, but an important step has certainly been taken in the right direction, making the smartphone camera ever more useful.

After all, the smartphone is an object we almost always carry in our pockets, and we expect it to always be ready to capture the essence of what we see.

