Google AI turns pixel mush into sharp photos


The Google AI can turn blurry photos into sharp images. (Photo: Emre Akkoyun / Shutterstock)

Low-resolution photos are not particularly attractive. Google has trained an AI that can transform pixelated images back into detailed, sharp ones.

When pictures are sent via messaging apps, the quality often suffers: the photos arrive at the recipient in a lower resolution than the original. Using machine learning, Google researchers have developed a model that converts a low-resolution image into a detailed, high-resolution one. Such "super-resolution" can be applied in a wide variety of areas, with tasks ranging from restoring old family portraits to improving medical imaging systems.

In mid-July, researcher Jonathan Ho and developer Chitwan Saharia presented on Google's AI blog two interrelated approaches that push the boundaries of image synthesis quality for diffusion models. One of them is super-resolution via repeated refinement, known as SR3. SR3 is a diffusion model that takes a low-resolution image as input and builds a corresponding high-resolution image out of pure image noise, that is, out of pixels whose color and brightness values are initially random.



SR3 runs the process in reverse

The model is trained on an image corruption process in which noise is gradually added to a high-resolution image until only pure noise remains. The AI then learns to reverse this process: starting from pure noise, it gradually removes the noise until a high-resolution image emerges. In this way, SR3 can enhance faces and natural images step by step. Starting from images of just four to eight pixels, through 64 by 64 and 256 by 256, the model can even scale photos up to 1024 by 1024.
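The corruption process described above can be sketched in a few lines. The sketch below is a toy NumPy version of the forward noising schedule only; the step count and noise rate are illustrative assumptions, not SR3's actual schedule, and the learned denoising network that reverses the process is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(image, num_steps=200, beta=0.05):
    """Toy forward process: repeatedly blend Gaussian noise into the image.

    Each step keeps sqrt(1 - beta) of the current signal and mixes in
    sqrt(beta)-scaled fresh noise, so after many steps only noise is left.
    SR3 trains a neural network to run this process in reverse, conditioned
    on the low-resolution input image. (num_steps and beta here are made-up
    values for illustration.)
    """
    x = image.copy()
    for _ in range(num_steps):
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

# Stand-in 64x64 "image" with zero mean and unit variance.
image = rng.standard_normal((64, 64))
noised = forward_diffuse(image)

# After 200 steps the surviving signal fraction is (1 - beta)^(200/2),
# well under 1 percent, so the result is statistically indistinguishable
# from pure noise.
signal_left = np.corrcoef(image.ravel(), noised.ravel())[0, 1]
```

Because each step preserves total variance while shrinking the original signal geometrically, the end state carries essentially no trace of the input, which is exactly why a separate learned model is needed to recover an image from it.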

“With SR3, we have pushed the performance of diffusion models to the state of the art on super-resolution and class-conditional ImageNet generation benchmarks. We are excited to further test the limits of diffusion models for a wide variety of generative modeling problems,” the two researchers write in their blog entry. “Computer, enhance,” familiar from film and television, is now a reality.
