Google’s RAISR Algorithm Makes Your Low-Quality Pictures Look Sharp and Lively

Back in the day, when displays maxed out at 800×600, an image measuring 640×480 wasn’t a big deal; in fact, it was considered pretty big. Fast forward to today, with monitors reaching up to 5K, and a 640×480 image looks almost unrecognizably small and blurry on a huge screen.

The Internet is full of low-res images, whether due to the limitations of a camera or device, or because images were deliberately downscaled for faster page loading. Enlarging an image without turning it into a mess of pixels is something we’ve only seen on TV shows and perhaps in some movies. But Google has developed a technique that uses machine learning to produce high-quality versions of low-resolution images.


The company calls this method RAISR, which stands for Rapid and Accurate Image Super-Resolution, and it uses machine learning to upscale images seamlessly.

Upscaling, the process of producing a larger image with significantly more pixels and higher image quality from a low-quality original, has been around for quite a while. The common approaches are linear methods, which fill in new pixel values using simple, fixed combinations of the nearby existing pixel values.
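To make that “simple, fixed combinations” idea concrete, here is a minimal sketch of bilinear interpolation, the most common linear method. Every new pixel is a fixed, weighted blend of its four nearest neighbors in the source image; nothing here is Google’s code, just a from-scratch NumPy illustration for grayscale images:

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a grayscale float image by an integer factor
    using bilinear interpolation (a fixed, linear filter)."""
    h, w = img.shape
    out_h, out_w = h * factor, w * factor
    # Map each output coordinate back to a fractional source coordinate.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights, shape (out_h, 1)
    wx = (xs - x0)[None, :]   # horizontal blend weights, shape (1, out_w)
    # Blend the four surrounding source pixels with fixed weights.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Because the blend weights depend only on pixel position, never on image content, edges and textures get averaged the same way as flat regions, which is exactly why linear upscaling tends to look soft.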


According to Google, existing upscaling methods are fast but not particularly effective, because the vivid details of the higher-resolution image are lost and we end up with a blurry-looking photo. With RAISR, Google uses machine learning to learn filters that recreate each high-resolution pixel as closely as possible to the original. Essentially, Google trains RAISR on 10,000 pairs of images (one low quality, one high) to create filters that recreate details closely approximating the original images. There is a far more technical explanation at the source link below, but that’s the general idea.
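Google’s actual pipeline is considerably more involved (its filter lookup hashes local gradient angle, strength, and coherence), but the core “learn one filter per patch type, then look it up at runtime” loop can be sketched roughly as below. Everything here is a simplified assumption for illustration, not Google’s implementation: the patch size, the bucket count, and the angle-only hash are all placeholders, and images are assumed to be grayscale float arrays of matching size (the low-res input having already been cheaply upscaled to the target resolution):

```python
import numpy as np

PATCH = 7        # filter size; an assumption for illustration
N_BUCKETS = 64   # number of hash buckets; an assumption

def bucket_of(patch: np.ndarray) -> int:
    # Hypothetical hash: quantize the patch's dominant gradient angle.
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / np.pi * N_BUCKETS) % N_BUCKETS

def learn_filters(pairs):
    """pairs: list of (cheaply_upscaled, ground_truth) image pairs,
    same shape. Learns one least-squares filter per bucket."""
    A = [[] for _ in range(N_BUCKETS)]   # patches per bucket
    b = [[] for _ in range(N_BUCKETS)]   # target pixels per bucket
    r = PATCH // 2
    for low, high in pairs:
        for y in range(r, low.shape[0] - r):
            for x in range(r, low.shape[1] - r):
                patch = low[y - r:y + r + 1, x - r:x + r + 1]
                k = bucket_of(patch)
                A[k].append(patch.ravel())
                b[k].append(high[y, x])
    # Solve min ||A w - b|| per bucket; empty buckets fall back to
    # a pass-through filter that just copies the center pixel.
    return [np.linalg.lstsq(np.array(A[k]), np.array(b[k]), rcond=None)[0]
            if A[k] else np.eye(PATCH * PATCH)[PATCH * PATCH // 2]
            for k in range(N_BUCKETS)]

def apply_filters(low: np.ndarray, filters) -> np.ndarray:
    """Sharpen a cheaply upscaled image by applying, at each pixel,
    the learned filter whose bucket matches the local patch."""
    out = low.copy()
    r = PATCH // 2
    for y in range(r, low.shape[0] - r):
        for x in range(r, low.shape[1] - r):
            patch = low[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = patch.ravel() @ filters[bucket_of(patch)]
    return out
```

The key contrast with the bilinear sketch above is that the filter weights are learned from data and chosen per patch, so edges and textures can be reconstructed with filters trained specifically on patches that look like them.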

[Image: RAISR processing flowchart]

This means the end result can be an image that looks as close to the real thing as possible, but at a higher resolution. So what does Google hope to accomplish with this method? For starters, restoring images taken by low-resolution cameras, as well as potentially improving the pinch-to-zoom gesture used on mobile devices to zoom in and out of images, websites, and more.

Source: Google Research Blog
