An original image, taken with my iPhone.
SR from 5 images.
SR from 10 images.
SR from 19 images. I had to assume a sharper-than-optimal blur kernel for all the images, especially this last one, because my machine didn't have enough memory for a wider kernel with the current code.
I switched from Harris corners to SIFT descriptors, using the VLFeat toolbox. SIFT keypoints fire on lots of things, not just corners, and have let me throw the test card in the wastebin. SIFT also appears to reduce raw homography estimation error by about 75% compared to Harris corners. Above we have the current system running on photos from the 4th floor CSE rec room.
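For reference, once the SIFT matches are in hand, the homography itself can be estimated with the standard direct linear transform (DLT). Here's a minimal numpy sketch of that step; the function name and code are mine for illustration, not the actual pipeline code (which in practice would also wrap this in RANSAC to reject bad matches):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (n, 2) arrays of matched point coordinates, n >= 4.
    Each match contributes two rows to a linear system A h = 0;
    h is the right singular vector of A with the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale so H[2, 2] = 1
```

(For real data, Hartley-style normalization of the coordinates before the SVD noticeably improves conditioning; it's omitted here for brevity.)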
The current system also uses the bundle adjustment from Vincent's toolbox. Vincent's code reduces my BA step from an hour-long process to something that runs in under a second. (What an inefficient approach I had!) Unfortunately, it appears to return slightly inferior registrations as-is; I'm guessing the code minimizes L2 error, whereas (as I mentioned in a previous post) L1 error has worked better for me.
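To illustrate why the choice of norm matters here: L2 squares the residuals, so a few badly registered matches can drag the whole solution off, while L1 grows only linearly and shrugs them off. A common way to get L1-like behavior out of an off-the-shelf least-squares solver is iteratively reweighted least squares (IRLS). This toy line-fitting sketch (my own example, not Vincent's code) shows the effect with a few gross outliers in the data:

```python
import numpy as np

def fit_line_l2(x, y):
    """Plain least-squares fit of y = a*x + b; returns (a, b)."""
    A = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def fit_line_l1(x, y, iters=50, eps=1e-8):
    """Approximate L1 fit via IRLS: repeatedly solve a weighted L2
    problem with weights ~ 1/|residual|, so large residuals count less."""
    A = np.column_stack([x, np.ones_like(x)])
    p = fit_line_l2(x, y)  # start from the L2 solution
    for _ in range(iters):
        r = A @ p - y
        w = 1.0 / np.sqrt(np.abs(r) + eps)  # sqrt: weights multiply squared residuals
        p = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    return p

x = np.arange(20.0)
y = 2.0 * x + 1.0          # true line: a = 2, b = 1
y[[3, 8, 15]] += 50.0      # three gross outliers, like bad feature matches

p_l2 = fit_line_l2(x, y)   # pulled toward the outliers
p_l1 = fit_line_l1(x, y)   # stays close to (2, 1)
```

The same idea carries over to reprojection error in BA: swapping the L2 objective for a robust (L1 or Huber-style) one mostly changes how hard bad correspondences can pull on the registration.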