Results of Pickup's SR code applied to synthetic data. The images in the top figure are the low-quality input images; the bottom figure shows the outputs of several SR techniques alongside the original high-quality image. "Huber" refers to a prior over high-resolution images, making the bottom-left image the result of a MAP technique, in contrast to the ML technique shown in the upper right.
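To make the ML/MAP distinction concrete: the Huber potential is quadratic near zero and linear in the tails, so a Huber prior on image gradients smooths noise without flattening edges, and ML is just the MAP objective with the prior term dropped. The sketch below is my own illustration of that idea, not Pickup's actual interface; the parameter names (alpha, nu) and the generic forward(hr, k) callback are my notation.

```python
import numpy as np

def huber(x, alpha=0.05):
    """Huber potential: x**2 for |x| <= alpha, 2*alpha*|x| - alpha**2 beyond.
    An edge-preserving penalty commonly placed on image gradients in MAP SR."""
    x = np.asarray(x, dtype=float)
    quad = x ** 2
    lin = 2 * alpha * np.abs(x) - alpha ** 2
    return np.where(np.abs(x) <= alpha, quad, lin)

def neg_log_posterior(hr, lowres_stack, forward, noise_var, nu=0.04, alpha=0.05):
    """MAP objective: Gaussian-likelihood data term plus Huber gradient prior.
    `forward(hr, k)` maps the high-res estimate to the k-th low-res frame.
    Setting nu = 0 recovers the ML objective (no prior)."""
    data = sum(np.sum((forward(hr, k) - y) ** 2)
               for k, y in enumerate(lowres_stack))
    dx = np.diff(hr, axis=1)  # horizontal finite-difference gradients
    dy = np.diff(hr, axis=0)  # vertical finite-difference gradients
    prior = np.sum(huber(dx, alpha)) + np.sum(huber(dy, alpha))
    return data / (2 * noise_var) + nu * prior
```

Minimizing this over the high-resolution pixels (e.g. with conjugate gradients) gives the MAP estimate shown in the bottom-left panel above.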
I've read most of Pickup's thesis and played with the code on her site. The figures above were generated by applying her super-resolution code to synthetic data. The code she supplies, written in Matlab with some MEX C files, performs super resolution given a priori knowledge of all the image-generation parameters: the geometric and photometric registrations and the blur kernel. Unfortunately, this means that before her code can be used, all of these parameters have to be independently inferred and fixed. This two-step process appears to produce results inferior to those of simultaneous approaches, though they may still be good enough for this project.
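Those image-generation parameters correspond to the standard multi-frame generative model, in which each low-resolution frame is a warped, blurred, decimated, photometrically adjusted copy of the high-resolution scene plus noise. Here is a toy NumPy sketch of that forward model (my own simplification, not Pickup's code: the warp is an integer translation rather than a subpixel homography, and the photometric model is a single global gain and offset):

```python
import numpy as np

def forward_model(hr, shift, gain, offset, psf, zoom, noise_std=0.0, rng=None):
    """Toy generative model for one low-resolution frame:
    warp -> blur -> decimate -> photometric gain/offset -> additive noise."""
    # geometric registration: integer (dy, dx) translation (simplification;
    # real SR uses subpixel warps)
    warped = np.roll(hr, shift, axis=(0, 1))
    # blur with the point-spread function
    blurred = convolve2d_same(warped, psf)
    # decimate by the zoom factor
    lo = blurred[::zoom, ::zoom]
    # photometric registration: global gain and offset
    lo = gain * lo + offset
    if noise_std > 0:
        rng = rng or np.random.default_rng(0)
        lo = lo + rng.normal(0.0, noise_std, lo.shape)
    return lo

def convolve2d_same(img, kernel):
    """Minimal same-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel[::-1, ::-1])
    return out
```

The two-step pipeline described above amounts to estimating shift, gain, offset, and psf separately for each frame, then handing them to the SR solver as fixed constants; a simultaneous method would refine them jointly with the high-resolution image.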
At this point, there are at least two options for the future of this project: 1) I could write or find code that learns the model parameters and feed those parameters to Pickup's SR code, or 2) I could write the code for a simultaneous algorithm from scratch. The second approach would be more fun but has a lower chance of working, so I'll likely go with the first option.
Oscar Beijbom, a student of David Kriegman, is also interested in SR, so we may end up developing the code together.
The reading I've done suggests that SR has become quite a bit more sophisticated than it was when Capel wrote his thesis. In particular, Bishop and others have adopted fully probabilistic techniques in which image registration is accounted for in the generative model. This allows registration to be learned concurrently with the super-resolution image, and even allows the registration parameters to be marginalized over, so that they never have to be explicitly computed. Lyndsey Pickup's thesis seems to be a great survey of these recent techniques, and I am in the process of reading it.
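In my notation (which may differ from Pickup's), the marginalization works out as follows: with x the high-resolution image, y_k the low-resolution frames, and theta_k the per-frame registration and blur parameters, the posterior over x alone is

```latex
p(\mathbf{x} \mid \{\mathbf{y}_k\})
  \;\propto\; p(\mathbf{x}) \prod_k
  \int p(\mathbf{y}_k \mid \mathbf{x}, \boldsymbol{\theta}_k)\,
       p(\boldsymbol{\theta}_k)\, d\boldsymbol{\theta}_k ,
```

so the theta_k are integrated out rather than estimated and fixed, which is exactly what distinguishes these methods from the two-step pipeline Pickup's released code assumes.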
This is a research blog for CSE 190A. The goal of this quarter-long project is to arrive at an understanding and implementation of multi-view super-resolution that could be of use to Serge Belongie's research group. This project is a follow-up to previous 190A work.
Super-resolution (SR) describes techniques for taking one or more low-quality images of a scene and producing a high-quality image of the same scene. There are at least two general approaches. In the hallucination approach, prior information about what the high-quality image is likely to look like is used to construct a high-quality approximation from a single low-quality image. In the multi-view approach, minimal prior information is assumed; instead, complementary information from multiple photos of the same scene is combined to construct a single high-quality photo. Multi-view SR appears to be a well-developed field, as I was able to find a number of commercial implementations, including an iPhone app. From here on, SR will always refer to multi-view SR. My proposal has more details.
The image was taken from S. Farsiu, M. Elad, and P. Milanfar, “A Practical Approach to Super-Resolution”, Invited paper, Proc. of the SPIE Conf. on Visual Communications and Image Processing, San Jose, January 2006.