Photomerge code - November 2013

Programmer

Project for 15-463 Computational Photography at Carnegie Mellon University, Fall 2013

For the first part of this project, I warped photos together using homographies recovered from manually selected point correspondences and then blended them into panoramas. Images were shot with my Nikon D80, and an inverse warp was used to produce the warped images before blending. Blending for this part was simple linear blending; no pyramids were used.
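The course starter code was MATLAB (the dist2 function mentioned below comes from it), so purely as an illustration, here is a minimal Python/NumPy sketch of recovering a homography from clicked correspondences and applying an inverse warp. The function names are mine, and nearest-neighbor sampling stands in for proper interpolation.

    import numpy as np

    def compute_homography(src, dst):
        # Estimate the 3x3 homography H with dst ~ H * src via the
        # standard DLT linear system; src and dst are (N, 2) arrays of
        # corresponding (x, y) points, N >= 4.
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        A = np.asarray(rows, dtype=float)
        # The solution is the right singular vector belonging to the
        # smallest singular value of A.
        _, _, Vt = np.linalg.svd(A)
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def inverse_warp(img, H, out_shape):
        # Inverse warp: for every pixel of the output canvas, apply
        # H^-1 and sample the source image there (nearest neighbor).
        Hinv = np.linalg.inv(H)
        h, w = out_shape
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        sx, sy, sw = Hinv @ pts
        sx = np.round(sx / sw).astype(int).reshape(h, w)
        sy = np.round(sy / sw).astype(int).reshape(h, w)
        inside = (sx >= 0) & (sx < img.shape[1]) & \
                 (sy >= 0) & (sy < img.shape[0])
        out = np.zeros((h, w) + img.shape[2:], dtype=img.dtype)
        out[inside] = img[sy[inside], sx[inside]]
        return out

Inverse warping is the natural choice here because every output pixel is sampled exactly once, avoiding the holes a forward warp can leave; linear blending then amounts to a per-pixel weighted average of the warped images where they overlap.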

For part 2, I implemented the specified feature descriptor computation on points returned by a Harris corner detector. Points from the detector were filtered with Adaptive Non-Maximal Suppression, which keeps the most dominant point in each region so the selected corners are spread across the image. The descriptor computed for each point was an 8x8 patch of the surrounding pixels, sampled from a downsampled 40x40 window. Features were matched using a simple distance metric (the provided dist2 function), and matches were thresholded on the ratio between the distances to the first and second nearest neighbors. I chose a threshold value of 0.5 for my implementation.
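Continuing the Python sketch with the same caveats (the sampling stride, the bias/gain normalization, and the parameter defaults below are my assumptions, not necessarily what the original MATLAB code did), ANMS, the patch descriptor, and ratio-test matching could look like:

    import numpy as np

    def anms(coords, strength, n_keep=500, c_robust=0.9):
        # Adaptive Non-Maximal Suppression: a point is suppressed by any
        # significantly stronger point; its radius is the distance to
        # the nearest such point, and the n_keep largest radii are kept,
        # which picks dominant corners that are also well spread out.
        # n_keep and c_robust are assumed defaults.
        d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        suppressed_by = strength[:, None] < c_robust * strength[None, :]
        radii = np.where(suppressed_by, d2, np.inf).min(axis=1)
        return coords[np.argsort(-radii)[:n_keep]]

    def descriptor(img, x, y):
        # 8x8 descriptor sampled from the 40x40 window around (x, y),
        # here by taking every 5th pixel; normalizing to zero mean and
        # unit variance (an assumption) adds bias/gain invariance.
        # Assumes a grayscale image and a point >= 20 px from borders.
        patch = img[y - 20:y + 20:5, x - 20:x + 20:5].astype(float)
        return ((patch - patch.mean()) / (patch.std() + 1e-8)).ravel()

    def match_features(desc_a, desc_b, ratio=0.5):
        # Squared distances between all descriptor pairs (what the
        # course's dist2 computes), then thresholding on the ratio of
        # nearest to second-nearest neighbor: 0.5 as in the writeup,
        # applied here to squared distances.
        d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
        matches = []
        for i, row in enumerate(d2):
            j1, j2 = np.argsort(row)[:2]
            if row[j1] < ratio * row[j2]:
                matches.append((i, j1))
        return matches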

The autostitched panoramas could not use as many input images as the manual ones, because my system adds images one at a time: feature points are recomputed on the already-merged result, and a new image overlaps with only a small fraction of the panorama's large set of features. For speed, the autostitched images were computed from half-size source images.
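For illustration, this sequential process can be written as a loop over the inputs, reusing the functions sketched earlier; scikit-image's corner_harris/corner_peaks stand in for the course's Harris detector, images are assumed grayscale with enough matches per step, and canvas growth plus the actual blending are elided.

    import numpy as np
    from skimage.feature import corner_harris, corner_peaks

    def detect(img):
        # Harris corners thinned with ANMS; corners too close to the
        # border for the 40x40 descriptor window are dropped first.
        resp = corner_harris(img)
        rc = corner_peaks(resp, min_distance=5)          # (row, col)
        h, w = img.shape
        rc = rc[(rc[:, 0] >= 20) & (rc[:, 0] < h - 20) &
                (rc[:, 1] >= 20) & (rc[:, 1] < w - 20)]
        return anms(rc[:, ::-1], resp[rc[:, 0], rc[:, 1]])  # as (x, y)

    def stitch_all(images):
        # Add images one at a time, re-detecting features on the merged
        # result each iteration (the source of the overlap problem
        # noted above). A robust fit such as RANSAC would normally
        # reject outlier matches before the homography; the plain
        # least-squares fit is kept here for brevity.
        pano = images[0]
        for img in images[1:]:
            pts_p, pts_i = detect(pano), detect(img)
            desc_p = np.array([descriptor(pano, x, y) for x, y in pts_p])
            desc_i = np.array([descriptor(img, x, y) for x, y in pts_i])
            pairs = match_features(desc_i, desc_p)       # img -> pano
            src = pts_i[[i for i, _ in pairs]].astype(float)
            dst = pts_p[[j for _, j in pairs]].astype(float)
            H = compute_homography(src, dst)
            warped = inverse_warp(img, H, pano.shape[:2])
            # Crude paste where the warp landed; a stand-in for the
            # linear blending described in part 1.
            pano = np.where(warped > 0, warped, pano)
        return pano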

Project Results Website