Best Take is a nifty superpower for the family photographer. But it also uses AI to create photographs of scenes that never actually happened, at least not all at exactly the same time. First we had alternative facts – now, alternative faces.
While testing this technology, my mind wavered between two questions: How far can AI go to save bad photos? And also: is this a line we want to cross?
We've had Photoshop, beauty filters and even face-swap filters for years, but Best Take gives us something new to wrap our brains around. As much as I enjoyed using it, it's hard to get comfortable letting AI edit the faces in the smartphone photos we rely on to archive our memories. It lets AI help normalize ideas of what happiness looks like – an escalation of the cultural pressure we already face on social media to present smiling faces and perfect places that don't always reflect reality.
To use Best Take today, you need a new Pixel 8 phone, though it can also edit older photos taken with other cameras if they meet certain criteria. Google wouldn't comment on future plans, but I wouldn't be surprised if it eventually expands Best Take to other Google Photos users – or if other companies launch their own face-correcting AI.
Here's how it works: Best Take's AI doesn't actually invent smiles or other expressions. Instead, the software combs through all the photos you've taken over a span of several seconds and comes up with a few alternatives for each face it can identify. Once you pick one, it lifts the face from that alternative shot and uses AI to blend it into your original. It's an instant, AI-powered version of using Photoshop to cut someone's head out of one photo and paste it onto another.
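Google hasn't published how Best Take actually works under the hood, but the pipeline described above – gather each person's face alternatives from a burst, let the user pick one, then blend the pick into the original – can be sketched in toy form. Everything here is an illustrative assumption: the names, the flat pixel lists standing in for image crops, and the naive alpha blend standing in for production-grade seamless compositing.

```python
from dataclasses import dataclass

# Hypothetical toy model of a photo burst: each entry is one crop of
# one detected face from one frame. Not Google's actual design.

@dataclass
class FaceCrop:
    person_id: str
    frame_index: int
    pixels: list[float]  # flattened grayscale crop, values 0.0-1.0

def alternatives_for(person_id: str, burst: list[FaceCrop]) -> list[FaceCrop]:
    """Gather every crop of one person across the burst (the 'menu of faces')."""
    return [crop for crop in burst if crop.person_id == person_id]

def blend(base_region: list[float], chosen: FaceCrop, alpha: float = 0.85) -> list[float]:
    """Composite the chosen crop over the original photo's face region.
    A simple alpha blend stands in for real seamless compositing."""
    return [alpha * c + (1 - alpha) * b for b, c in zip(base_region, chosen.pixels)]

burst = [
    FaceCrop("kid", 0, [0.2, 0.2, 0.2]),  # frame 0: eyes closed
    FaceCrop("kid", 1, [0.9, 0.8, 0.9]),  # frame 1: smiling
]
menu = alternatives_for("kid", burst)      # the alternatives shown to the user
final = blend([0.2, 0.2, 0.2], menu[1])    # user picks the smile from frame 1
```

The key constraint the article describes is visible even in this sketch: the menu can only contain expressions the person actually made during those few seconds of the burst.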
As an Instagram dad, I felt an emotional reward both from seeing my son and his friends looking as adorable as possible, and a sort of power trip from choosing, from a menu of faces, exactly the right one for that moment.
(I wondered: Is this another way for Google to harvest our data? Google says Best Take does not store faces for any purpose, including AI training.)
But the system has some quirks. Since Best Take relies on photos taken around the same time, you should pretend you're at a fashion photo shoot and keep snapping to increase your options. Unfortunately, it won't use the camera's video function to continuously take photos for you. And if you want a smile in your final photo, your subject needs to have actually smiled in at least one shot.
And, too bad, Best Take doesn’t work on pets.
Sometimes in my testing, Best Take’s results were spectacularly bad, replacing heads in a way that made faces too big or cut off hands and glasses. Once or twice it turned heads the wrong way, “Exorcist” style.
"Best Take may not work, or may only partially work, if there is too much variation in pose, including varying distance between the subject and the camera," said Lillian Chen, a Google product manager, in an email.
These issues aside, Best Take essentially does what it claims. So now the question is: what should we think of this?
Let’s be clear: we already take fake photos. The algorithms in our smartphones brighten eyes and teeth, smooth skin, enhance a sunset, and artfully blur backgrounds. It’s not reality, it’s beauty.
As recently as 2018, smartphones couldn't really take good photos in the dark. Then Google launched another AI technology, called Night Sight, that combines a whole bunch of individual shots into one that appears fully lit, with candy-colored details no human eye could have seen at the time. Other phone makers quickly followed with their own night modes.
Your phone is wearing really high-tech beer goggles, I wrote at the time.
This isn't necessarily a bad thing. In the past, photography like this required specialized skills. I myself swapped my own face in the holiday card I sent last year, cutting out my better-lit head from one shot and pasting it into another. But that required access to Photoshop – and knowing how to use it.
So what makes me uncomfortable about face swapping coming to phone cameras? It's the power we're entrusting to AI over something as fundamental as our memories.
Many people – especially women – are already rightly fed up with society telling them to “smile more.” Now a computer can help decide which faces are worth changing and which faces are worth keeping.
Google’s Chen said automated face suggestions in Best Take are “based on desires we’ve heard from users, including open eyes, looking at the camera, and expression.” She noted that users are always offered a choice of which expression they wish to apply.
Google also argues that the photos created by Best Take are not entirely fake. The faces in the final product are all expressions those people actually made within seconds of one another – a sort of safeguard that the final image reflects something close to the original context of the moment. "At a high level, the main goal is to capture the moment the user thought they captured," Chen said.
Still, Google's overall approach feels like a slippery slope. Despite its promise to watermark AI-generated images, Google says it does nothing to flag Best Take images. They simply live in your photo collection and get shared like any other, alongside the originals they replace.
Google was quick to release other AI-powered photo editing tools, such as Magic Eraser, which can remove people and entire objects from photos.
What's to stop a Best Take 2 from drawing on faces captured at any time, rather than just those few seconds? People have filled their Google Photos collections with years of source material. And how much harder would it be for Google to offer fully synthetic versions of the people in your photos, as you can already do with AI selfie apps like Lensa? Next stop: "Hey Google, make everyone in this photo look more in love/surprised/happy."
Lost along the way: what is a photograph, after all? If it’s not a record of a moment, then maybe we need to find a way to stop treating it like a memory.