Algorithms are digital currency: the companies with the best ones usually make the most money. The better the algorithm, the better the results, the faster they’re delivered, the more predictive the system becomes, and so on. It makes sense. And by pairing intuitive algorithms with powerful neural networks, machine learning and artificial intelligence (AI), those systems reach a whole new level. Given how powerful these pairings are, researchers are applying them to all sorts of use cases, and one of the newest really caught my eye (pun intended).

As it turns out, Google may just be a better photographer than you (and quite possibly, most anybody else).

Researchers from Google and MIT recently unveiled a machine learning algorithm that can retouch photos to professional-photographer quality. Using a process we’ll explain in a moment, you snap a photo, the neural network identifies how it can be made to look better, and it applies those changes in less than a second, whether that’s increasing exposure, bringing down highlights, correcting color, you name it.

Believe it or not, the algorithmic computations happen so fast that you can see the retouched result in the viewfinder before you even take the photo. That’s according to Michael Gharbi, an MIT doctoral student and lead author of the paper, as told to Wired.

We all know image recognition was one of the first tasks humans set machine learning and AI to; along with speech recognition, it’s one of the most practical everyday applications of AI, so again, it made sense. But identifying faces in an image and retouching an image to professional quality are different tasks altogether.

The problem with most image filters is that they apply whatever effect they’re tasked with to the entire image. All the pixels get the same bump or nudge in a specific direction (more exposure, greater vibrance, more saturation, you get the idea). But by building a more nuanced understanding of images into the algorithm, Gharbi’s process can act with far more subtlety.

Elizabeth Stinson from Wired explains:

Most filters apply editing techniques to the entire image, regardless of whether it needs it. Gharbi’s algorithm can pinpoint specific features within an image and apply the appropriate improvements. “Usually every pixel gets the same transformation,” he says. “It becomes more interesting when you have images that need to be retouched in specific areas.” The algorithm might learn, for example, to automatically brighten a face in a selfie with a sunny background. You could train the network to increase the saturation of water or bump up the green in trees when it recognizes a landscape photo.
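To make that distinction concrete, here’s a toy sketch in NumPy contrasting a global filter (every pixel gets the same exposure bump) with a content-aware one that only brightens the dark region. This is purely illustrative; the actual system learns far richer, per-region transformations rather than a simple threshold mask.

```python
import numpy as np

# A toy 4x4 grayscale "image": dark left half, bright right half.
img = np.array([[0.2, 0.2, 0.8, 0.8]] * 4)

# Global filter: every pixel gets the same exposure bump.
global_adjusted = np.clip(img + 0.1, 0.0, 1.0)

# Content-aware filter: brighten only the dark region,
# leaving already-bright pixels untouched.
dark_mask = img < 0.5
local_adjusted = img.copy()
local_adjusted[dark_mask] = np.clip(img[dark_mask] + 0.3, 0.0, 1.0)

print(global_adjusted[0])  # all pixels shifted equally
print(local_adjusted[0])   # only the dark pixels changed
```

The global version washes out the already-bright pixels, while the local version lifts only the shadows; that selectivity is the whole point of Gharbi’s approach.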

So how did the neural network get so smart? By feeding it images that had already been touched up by experts. Those, paired with the “before” versions, allowed the algorithm to learn what a “good” image looks like compared to a “bad” one. Furthermore, if you fed the system a specific artist’s work, it could start to mimic his or her creative style. So, instead of correcting for simple “good” or “bad” as defined by a photography textbook, the algorithm could instead mimic your preferred style and whatever rules that style tends to follow.
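The before/after training idea can be sketched in miniature. In this hypothetical example the “expert style” is just a fixed exposure gain, and we fit a single scalar by least squares; the real network learns a vastly more expressive mapping, but the supervision signal is the same: pairs of originals and retouched versions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: "before" photos and expert-retouched
# "after" versions. Here the expert's style is simply a 1.5x exposure gain.
before = rng.uniform(0.1, 0.6, size=(100, 8, 8))
after = np.clip(before * 1.5, 0.0, 1.0)

# Fit the single gain that best maps before -> after (least squares).
x = before.ravel()
y = after.ravel()
gain = (x @ y) / (x @ x)

print(round(gain, 2))  # recovers the expert's 1.5x style
```

Swap in a different artist’s before/after pairs and the fitted “style” changes accordingly, which is exactly the personalization the paper describes.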

That’s impressive in and of itself. But to be truly useful, the algorithm had to be small and portable enough to run on a mobile phone, while still being fast enough to process photos in near real time. So Gharbi et al. devised a solution that completely changes the way a computer evaluates an image:

“The key to making it fast and run in real time is to not process all the pixels in an image,” he says. Instead of analyzing millions of pixels in any given photo, Gharbi’s algorithm processes a low-resolution version of the photo and decides which parts to retouch. The algorithm estimates how to adjust the color, luminosity, saturation, and more based on rules established in the neural network; it makes the changes, then converts the image back to high resolution. Because it’s not processing a full image every time, the system can operate at speeds beyond a phone’s computational abilities. “We’ve found a more efficient way to process an image,” he says.
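The low-resolution trick described above can be sketched as a three-step pipeline: downsample, decide on adjustments from the thumbnail, then apply them at full resolution. The `estimate_adjustment` function below is a stand-in for the neural network (here it just picks a brightness offset); the actual system predicts much richer, spatially varying transformations.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool the image by `factor` in each dimension."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def estimate_adjustment(low_res):
    """Stand-in for the network: choose a brightness offset that
    pushes mean luminance toward 0.5."""
    return 0.5 - low_res.mean()

# Full-resolution "photo" (a uniform dark 256x256 frame for simplicity).
full_res = np.full((256, 256), 0.3)

# 1. Analyze a cheap low-res copy instead of every pixel.
thumb = downsample(full_res, 16)      # 16x16 thumbnail
offset = estimate_adjustment(thumb)

# 2. Apply the estimated adjustment back at full resolution.
retouched = np.clip(full_res + offset, 0.0, 1.0)

print(thumb.shape, round(retouched.mean(), 2))
```

The expensive analysis touches only 256 thumbnail pixels instead of 65,536 full-resolution ones, which is the efficiency gain that makes real-time viewfinder previews plausible on a phone.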

This application of AI might not be earth-shattering or vitally important to the human race, but it’s still an amazing bit of problem solving and ingenuity. And, given how prevalent digital cameras have become in our lives, there’s no doubt about its potential usefulness should the algorithm live up to the hype. This type of thinking shows just how useful and creative we can be with the application of machine learning in our everyday lives.

Just think of what we can think up for you.





Jeff Francis

Jeff Francis is a veteran entrepreneur and co-founder of Dallas-based digital product studio ENO8. Jeff and his business partner, Rishi Khanna, created ENO8 to empower companies of all sizes to design, develop and deliver innovative, impactful digital products. With more than 18 years working with early-stage startups, Jeff has a passion for creating and growing new businesses from the ground up, and has honed a unique ability to assist companies with aligning their technology product initiatives with real business outcomes.
