Google is clearly one of the leading forces in artificial intelligence; it seems every other week some new research on the topic comes out of the company. This time around, it's an AI that can figure out whether humans would like a certain picture or not.
Google has been relying on artificial intelligence for many of its tools, including Photos, where you can search for "blue hat," for instance, and have the app comb through all your uploaded pictures and return the ones where you are wearing a blue hat — or someone else is, or something merely looks like one. It's hit and miss sometimes. This time around, however, the team used a new approach called Neural Image Assessment (NIMA), which uses deep learning to train a convolutional neural network to hand out image ratings.
The paper the team published claims they've managed to create a way for the AI to score images reliably, with high correlation to human perception. What's even cooler is that the same scores can be used to guide and optimize photo editing and enhancement algorithms.
So, what could this be useful for? Well, regular users could see this feature in Photos, for instance: open up a photo you took, tap "edit," and check out what adjustments the system advises to make your photo more likeable. Professional photographers, on the other hand, could benefit even more, by having the software pick out the best shot from a string of 20-50 pictures, or by perfecting the basic adjustments with a single button tap.
Take another example. You get a new puppy. In the first hour after bringing it home, you take maybe 100 photos, maybe more. But then, which ones will you post on Facebook, or Instagram, or Twitter? Well, the AI can help you pick the best ones that will quite likely earn you the most likes from your friends. Or maybe you’re into taking loads of selfies and you can’t decide which one to post.
The NIMA model uses a ten-point rating scale, handing out scores that reflect both pixel-level technical quality and the overall look of the image. This also means the AI won't just be useful to humans in the ways mentioned above, but could eventually stand on its own: the technology could help an AI try its "hand" at getting a bit creative. We already have AIs composing music and reinventing Christmas tracks, so it wouldn't be too far off to have AIs putting together some kind of visual art in the future.
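To make the ten-point scale concrete: rather than spitting out a single number, NIMA predicts a probability distribution over the ratings 1 through 10, and the image's overall score is the mean of that distribution. Here's a minimal sketch of that final scoring step — the probability values below are made up for illustration, and in the real system they come from a trained convolutional neural network, not a hand-written list.

```python
def nima_score(rating_probs):
    """Mean score from a probability distribution over ratings 1..10.

    `rating_probs[i]` is the predicted probability that humans would
    rate the image (i + 1) out of 10.
    """
    assert len(rating_probs) == 10
    assert abs(sum(rating_probs) - 1.0) < 1e-6
    # Expected value of the rating: sum of rating * probability.
    return sum(rating * p for rating, p in enumerate(rating_probs, start=1))

# Hypothetical output for one image: a distribution peaked around 7.
probs = [0.0, 0.0, 0.02, 0.05, 0.10, 0.20, 0.35, 0.20, 0.06, 0.02]
print(round(nima_score(probs), 2))  # mean score of 6.75 on the 10-point scale
```

Predicting the whole distribution rather than one number is what lets the model distinguish a photo everyone finds average from a divisive one that some people love and others hate, even when both average out to the same score.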
This new work from Google's own labs is also a step toward AI that understands humans better, although that goal is still a long way off.