Believe it or not, there is a grain of truth to the software employed by geeky technicians in TV shows and movies that can seemingly reconstruct a high-resolution crime scene from a woefully pixelated source image or video. There really is a way of increasing, in software, the image quality of enlarged images.
The technique is called super-resolution, and there are two basic approaches. The first approach takes a bunch of similar images of the same object, and then uses an algorithm to create a single image with the best/sharpest bits from each. The second approach is slightly more magical. In any given image, the same pattern of pixels usually appears multiple times — tiles on a floor, bricks on a wall, wrinkles on a face, spots on a butterfly. In each case, though, because we live in a 3D world, these patterns are slightly different sizes, and each pattern has a slightly different subpixel shift. If you group together enough of these pixel patterns, and take the best subpixels from each, you can work out how that pattern actually looks in reality.
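The first, multi-image approach can be boiled down to "shift and add": each low-resolution frame samples the scene at a slightly different subpixel offset, so if you know those offsets you can scatter every frame's pixels onto a finer grid and average. The sketch below is a deliberately naive NumPy version (the function name, the rounding-to-nearest-cell placement, and the assumption that the shifts are already known are all simplifications; real pipelines estimate the shifts and deblur afterwards):

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive multi-frame super-resolution by shift-and-add.

    frames: list of 2-D arrays, all with the same low-res shape.
    shifts: per-frame (dy, dx) subpixel offsets, in low-res pixels.
    scale:  integer upscaling factor for the output grid.
    """
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to the nearest cell on the fine grid,
        # taking the frame's subpixel shift into account.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, w * scale - 1)
        hi[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    # Average wherever at least one frame contributed a sample.
    filled = weight > 0
    hi[filled] /= weight[filled]
    return hi
```

With four frames shifted by half a pixel in each direction and `scale=2`, the frames between them populate every cell of the fine grid, which is exactly the intuition behind the technique: the detail was spread across the frames all along.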
In short, it’s possible to take a blurry or low-resolution image, and gain image quality by enlarging it with super-resolution techniques. As you can see above, and in the examples below, super-resolution can produce some startling results.
These images, which were created using a mix of both super-resolution approaches, come from a Weizmann Institute of Science research paper titled “Super Resolution From a Single Image.” Rather than layering together multiple low-resolution images, the Weizmann technique basically involves turning a single image into lots of tiny images (say, 5×5 pixels each), and then comparing each of these blocks to see if there are any matches. If any matches are made, they can then be combined to create a sharper version. The process isn’t perfect and can create artifacts (check the last line of the eye chart), but in almost every case it can tease a little more detail out of an enlarged image.
The two main uses of super-resolution are obvious — commercial enlargement of images, and crime fightin’ — but a third option, compression, might prove to be an even better use. For example, you can use JPEG compression to turn a 100KB image into a 20KB image without much loss of detail. But imagine if you applied compression and reduced the image’s dimensions, and then used super-resolution to display the image. We could be talking about a very efficient way of reducing our smartphone traffic bills, or bridging the gap between normal and Retina displays.
The only real problem with super-resolution is that it’s computationally expensive. In the Weizmann Institute research paper, there isn’t a single mention of just how long it takes to create each super-resolution image, which suggests that the algorithm is very slow. Some research groups have reported that real-time super-resolution is possible with GPU acceleration, though. It’s also worth pointing out that super-resolution isn’t always the best solution for enlargement: in the case of line art or old-school computer game emulation, a vectorizing algorithm might be a better choice.
If you want to play with super-resolution yourself, Supreme is an open-source implementation written in Python. Commercial products such as Perfect Resize and PhotoAcute also implement super-resolution features similar to those described here. As for why the kingpin of image manipulation, Photoshop, is stuck with boring ol’ bicubic enlargement… who knows.