purpose of this assignment is to showcase one of our photographs and identify methods to improve the image using computational photography techniques. LEARNINGS This assignment served two purposes. Being the first course in the program, it helped me understand the nuances of assignment submission, the evaluation process and the logistics in a gentle manner. It also served as a great introduction to the course and to the amount of research being done in this field today. From the paper I cited, the "pixel-wise flow field motion" and "edge flow" methods identify the motion of each layer of a video and create layer masks marking the reflection/occlusion parts of the image; these could be used to improve my image by removing the occluding fence grid.
development environment setup using Vagrant and to understand simple arithmetic operations on images. This assignment also gives a great introduction to the NumPy library and its usage for image manipulation. LEARNINGS This assignment was my first attempt at using Python, and the exercises were simple enough to complete without any major impediments. Some of the learnings from this assignment are: • understanding the importance of vectorization • using operator broadcasting rather than iteration • built-in functions similar to fliplr, such as flipud, rot90 and many more, for image manipulation • the significance of datatypes and their implications for computational manipulations • the basic principles of how blending works Assignment # 2: Image I/O
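A minimal NumPy sketch of these learnings - vectorized arithmetic, broadcasting, the flip/rotate helpers and the uint8 overflow pitfall; the array values are made up for illustration:

```python
import numpy as np

# Toy grayscale "image"; real images load as uint8 arrays just like this.
img = np.array([[10, 20, 30],
                [40, 50, 60]], dtype=np.uint8)

# Vectorization / broadcasting: one operation touches every pixel, no loops.
# Widen the dtype first -- uint8 silently wraps around on overflow.
brighter = (img.astype(np.uint16) + 200).clip(0, 255).astype(np.uint8)

# Built-in manipulations alongside fliplr.
mirrored = np.fliplr(img)   # left-right mirror
flipped  = np.flipud(img)   # top-bottom flip
rotated  = np.rot90(img)    # 90-degree counter-clockwise rotation

# A naive 50/50 blend, again without iteration.
blend = ((img.astype(np.float64) + mirrored) / 2).astype(np.uint8)
```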
into black pixels, while the boat/sail were converted to white pixels. However, an interesting observation is the presence of black pixels in the water, even though the original image does not give the impression that those areas are dark grey. Figures: input images; conversion to black and white; average of two images; blended output image.
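A sketch of the two operations behind these results, assuming the assignment's convertToBlackAndWhite is a simple global threshold (the exact rule may differ):

```python
import numpy as np

def convert_to_black_and_white(gray, threshold=128):
    # Pixels at or above the threshold become white, the rest black.
    # Dark grey water pixels fall below 128, which explains the black
    # regions in the output even where the water looks mid-toned.
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

def average_two_images(img1, img2):
    # Average in float to avoid uint8 overflow, then cast back.
    return ((img1.astype(np.float64) + img2.astype(np.float64)) / 2).astype(np.uint8)
```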
assignment is to generate a novel image or artifact by combining images of the same scene or object while varying just one photographic parameter in the process. SETUP Inspired by this video, I chose to create a stop-motion photography artifact, where my epsilon is the location of the elements in the composition, varied slightly between images to create the novel imagery of "stop-motion photography". The images were of a real rose, with the location of each of its petals changing slightly across pictures. I took a total of 32 images, of which the last 8 were submitted for the assignment and used to build the final GIF.
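Assembling the frames into the final GIF takes only a few lines; a sketch using the imageio library, with hypothetical filenames standing in for my actual shots:

```python
import glob
import imageio

# Hypothetical file pattern; the real frames were numbered by the camera.
frames = [imageio.imread(path) for path in sorted(glob.glob('rose_*.jpg'))]

# duration is seconds per frame; shorter durations give the smoother,
# more fluid motion mentioned in the learnings.
imageio.mimsave('rose_stop_motion.gif', frames, duration=0.2)
```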
attempt at Epsilon photography, I was very proud of the result. • Testing the setup end to end with sample pictures and generating a test GIF gave me confidence in the plan before shooting the rose petals. • The setup with an external flash and a tripod worked well, eliminating any need for heavy post-processing for alignment. • I realized that the impression of motion comes from fluidity, which can only be achieved with more granular changes. For a better output, I would have made much smaller changes (roughly a third of what was performed in these images).
extra credit, I tried to perform a different type of Epsilon photography by changing the focal plane across images and then combining them into an all-in-focus image using the "focus stacking" technique. Figures: input images with varying focal planes; output image with all planes in focus.
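A naive focus-stacking sketch for intuition: for every pixel, keep the value from the frame with the strongest local Laplacian response (a common sharpness proxy). Real stacks also align the frames and smooth the selection map; none of that is shown here.

```python
import cv2
import numpy as np

def focus_stack(images):
    """images: list of aligned, same-sized BGR frames (uint8)."""
    stack = np.stack(images)                     # (n, H, W, 3)
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
        sharpness.append(np.abs(cv2.Laplacian(blur, cv2.CV_64F)))
    # Index of the sharpest frame at each pixel.
    best = np.argmax(np.stack(sharpness), axis=0)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]               # (H, W, 3)
```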
of this assignment is to implement different functions to investigate various filters and to create a rudimentary edge detection pipeline. • The second part is to investigate the effects of different filters such as the box filter, the Gaussian filter and some of the commonly used gradient filters (the Sobel filter). • The last part is to use the functions created earlier, together with the Sobel filter, to form an edge detection pipeline and compare its results against the Canny edge detector available in the OpenCV library.
the copyMakeBorder and np.pad functions can create effects such as vignetting, mirroring, etc. (a short sketch follows this list) • Different filter types - box, blur, emboss, edge, etc. - and their significance • The importance of cross-correlation and convolution • How sampling and downsizing of images work • How to use intensity changes and their direction to identify an edge • How edge detection can be used for feature matching • Doing this exercise taught me to appreciate how simple filter configurations - Sobel, box filter, etc. - can perform varying computations to generate novel images • Applications in feature detection, machine learning and artificial intelligence • Learnt the significance of eliminating noise and why the sharpness of an image matters for computer vision and computational photography
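A quick sketch of those border/padding calls; the input path is hypothetical and the border width is arbitrary:

```python
import cv2
import numpy as np

img = cv2.imread('input.jpg')  # hypothetical path

# OpenCV border handling: each mode changes what a filter "sees" at the edge.
mirrored = cv2.copyMakeBorder(img, 20, 20, 20, 20, cv2.BORDER_REFLECT)
framed   = cv2.copyMakeBorder(img, 20, 20, 20, 20,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))

# NumPy's equivalent is per-axis: pad rows and columns, not channels.
padded = np.pad(img, ((20, 20), (20, 20), (0, 0)), mode='edge')
```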
• Combine the x- and y-direction magnitudes using np.hypot, which provides the discrete gradient magnitude • A rudimentary way to get edges from the gradients is to apply the simple convertToBlackAndWhite operation to the gradient magnitude • However, a threshold of 128 gave very few details and missed critical ones, a threshold of 30 included a lot of noise, and the intermediate value of 90 was still not complete in its details. • Though this method was fast and provided some reliable results, it has disadvantages. FINAL EDGE DETECTION PIPELINE • Run the image through a Gaussian filter to remove high-intensity spikes and noise • Compute the Sobel gradients in the x and y directions • Combine them to find the discrete magnitude at each pixel • Perform non-maximum suppression using the gradient direction of the Sobel gradients • Now all pixels are suppressed other than those that are part of a local maximum (an edge). • This also "thins" the edges, which was not possible with the simple convertToBlackAndWhite operation. A sketch of this pipeline appears below.
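A compact sketch of the pipeline above, with cv2.Canny included for comparison; non-maximum suppression is omitted and the thresholds are illustrative:

```python
import cv2
import numpy as np

def simple_edge_pipeline(gray, low=30, high=90):
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # denoise first
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)   # x gradients
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)   # y gradients
    magnitude = np.hypot(gx, gy)                          # discrete magnitude
    magnitude = (magnitude / magnitude.max() * 255).astype(np.uint8)

    # Single global threshold -- the 30 / 90 / 128 trade-off described above.
    edges = np.where(magnitude >= high, 255, 0).astype(np.uint8)

    # OpenCV's Canny adds non-maximum suppression and hysteresis thresholding.
    canny = cv2.Canny(blurred, low, high)
    return edges, canny
```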
Figures: #1 input image; #2 Sobel filter, X direction; #3 Sobel filter, Y direction; #4 Sobel gradient magnitude; thinning of edges / local maxima; final image after thresholding (LO=0.1m, HI=0.2m).
then I had used pictures from the past, but this time I had to "create" one. • This assignment required a considerable amount of time for setup and preparation of the location. • My apartment, on the second floor of one of the best-lit buildings, made covering all the light sources in the room the biggest task. • Moving boxes, black plastic sheeting, Gorilla tape and a sharp knife were used for the setup. • The image captured is of my apartment complex as seen from my bedroom window, which was covered and used as the camera obscura. LEARNINGS / OUTCOME • This assignment reproduces one of the oldest image-capturing techniques, in use since around 400 BC. • Multiple images were stitched in Photoshop to create the final output image. • In "BULB" mode, AF (auto focus) should be turned off, and long exposures drain the battery quickly. Assignment # 5: Camera Obscura
steps necessary to build Gaussian and Laplacian pyramids and use them to blend two images with a pre-generated mask. • This exercise introduces pyramid building via the REDUCE and EXPAND algorithms and multiple other uses of Gaussian and Laplacian pyramids (a minimal sketch follows below). • Multiple functions which aid pyramid building and image blending were created as part of the assignment. • These functions are then used to demonstrate blending with the pyramids. LEARNINGS / OUTCOME • The information retained in the final blended image was a great learning exercise and gives a peek into how this could be useful for progressive transmission of image data. • The blended output image was extremely close to the output from Photoshop. • The resulting image demonstrated the clear purpose of the Laplacian pyramid through the minute details that were retained in the final output.
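A minimal pyramid-blending sketch, assuming same-sized float32 inputs and a mask of the same shape with values in [0, 1]; cv2.pyrDown/pyrUp stand in for the assignment's REDUCE and EXPAND:

```python
import cv2

def pyramid_blend(img1, img2, mask, levels=5):
    # Gaussian pyramids of both images and of the mask.
    gp1, gp2, gpm = [img1], [img2], [mask]
    for _ in range(levels):
        gp1.append(cv2.pyrDown(gp1[-1]))
        gp2.append(cv2.pyrDown(gp2[-1]))
        gpm.append(cv2.pyrDown(gpm[-1]))

    # Laplacian pyramids: each level minus the expanded next level,
    # with the smallest Gaussian level kept as the base.
    size = lambda a: (a.shape[1], a.shape[0])
    lp1 = [gp1[i] - cv2.pyrUp(gp1[i + 1], dstsize=size(gp1[i]))
           for i in range(levels)] + [gp1[-1]]
    lp2 = [gp2[i] - cv2.pyrUp(gp2[i + 1], dstsize=size(gp2[i]))
           for i in range(levels)] + [gp2[-1]]

    # Blend level by level with the mask's Gaussian pyramid, then collapse.
    blended = [m * a + (1 - m) * b for a, b, m in zip(lp1, lp2, gpm)]
    out = blended[-1]
    for level in reversed(blended[:-1]):
        out = cv2.pyrUp(out, dstsize=size(level)) + level
    return out
```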
investigating other blending algorithms available in OpenCV which can blend with better smoothing and better tonal retention. • Common ones are linear blends that use weighted or arithmetic operations to incorporate an opaque image into another image. • However, one of the more recent and widely used algorithms is Poisson blending (gradient-domain blending, named after the Poisson equation it solves), which retains tonal details better. Figures: image 1; image 2; blended image with Poisson blending.
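OpenCV exposes Poisson blending as cv2.seamlessClone; a sketch with hypothetical file paths and a whole-source mask:

```python
import cv2
import numpy as np

src = cv2.imread('image1.jpg')    # object to insert (hypothetical path)
dst = cv2.imread('image2.jpg')    # background scene (hypothetical path)
mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)   # use the whole source

center = (dst.shape[1] // 2, dst.shape[0] // 2)       # (x, y) target position
blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite('blended_poisson.jpg', blended)
```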
the function blocks to perform feature matching between different images of the same object under varying conditions of scale, lighting and rotation, and among other objects. • The second part of the assignment deals with experimenting with various parameters of the ORB algorithm used for feature matching and with their results. LEARNINGS / OUTCOME • This assignment opened up avenues for understanding how feature matching works and the factors influencing good and bad results. • Choosing a good template image is critical to ensure good results. • Image quality and closeness in scale and intensity are important factors for feature detection. • Pre-enhancing the image definitely helps feature detection. • A single pipeline cannot fit all types of images. • Multiple optimizations are possible at each step of the feature matching pipeline; a better descriptor, a faster detector, etc., can be utilized based on the needs of the pipeline.
Figures: input images; SIFT matches. • For my A&B, I investigated the other detectors and descriptors that are available and their results on my images (see the sketch below). • Tested brute-force matching with SIFT descriptors and FLANN matchers. • The results suggest that ORB is a very good and extremely fast alternative to the commercially available feature matchers.
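For reference, an ORB matching sketch with a brute-force Hamming matcher (the pairing used alongside the experiments above); file paths and the feature count are illustrative:

```python
import cv2

img1 = cv2.imread('template.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical paths
img2 = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)          # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking
# keeps only matches that agree in both directions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None)
cv2.imwrite('matches.jpg', vis)
```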
a pipeline which stitches multiple images from left to right, creating a panorama image. • The functions find features, calculate the homography, warp images from left to right, align them and then blend the images to create the final artifact (a homography sketch follows below). LEARNINGS / OUTCOME • Feature matching and alignment based on RANSAC worked perfectly, without any false positives. • Blending seemed to be a process that becomes increasingly complex as the need for quality grows. However, basic blending algorithms such as linear weighted and center weighted do give a great result. • The panorama pipeline is very consistent, with no specialization required for different image sets. • Even when stitching five images, the results were consistent and of high quality. • It is possible to achieve commercial-software-quality output with a simple pipeline. Assignment # 8: Panoramas
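A sketch of one stitching step - estimating the homography with RANSAC and warping the right image into the left image's frame. Keypoints and matches are assumed to come from a detector such as ORB, and the output canvas size is a simplification:

```python
import cv2
import numpy as np

def warp_right_onto_left(left, right, kp_left, kp_right, matches):
    src = np.float32([kp_right[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_left[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while fitting the homography.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = left.shape[:2]
    # Double-width canvas; a real pipeline computes exact bounds from
    # the warped corners of the right image.
    return cv2.warpPerspective(right, H, (w * 2, h))
```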
Figure: Gaussian pyramid blending. For my A&B, I experimented with different blending functions - linear weighted, center weighted and Gaussian pyramid blending - of which the linear and center weighted blends gave great results.
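A minimal linear-weighted (alpha-ramp) blend of the kind compared here, assuming the two images are already aligned and overlap horizontally by a known number of pixels:

```python
import numpy as np

def linear_blend(left, right, overlap):
    # Alpha ramps from 1 to 0 across the overlap, favoring the left
    # image near its side and the right image near its side.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    left_f = left.astype(np.float64)
    right_f = right.astype(np.float64)
    seam = alpha * left_f[:, -overlap:] + (1 - alpha) * right_f[:, :overlap]
    return np.hstack([left_f[:, :-overlap], seam,
                      right_f[:, overlap:]]).astype(np.uint8)
```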
code to combine images of varying exposures into a composite image which captures a wider dynamic range of irradiance. • The building-block functions compute the response curve and the radiance map, which are then used to form the final image using pixels from all the different exposure images (see the sketch below). LEARNINGS / OUTCOME • Choosing a scene for HDR needs preparation and proper bracketing techniques to capture the wide range of the spectrum. • As there was no tone mapping, the results were dull and did not exhibit the bright intensity of the original scene. Adding tone mapping via an algorithm such as Durand, Drago or Reinhard helps the image display better. • Adding a tone map for better visual display, trying different weighting algorithms and vectorizing the code would be possible enhancements to the pipeline.
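OpenCV ships the same building blocks; a sketch that recovers the response curve, merges a bracket into a radiance map and tone-maps it with Reinhard. File names and exposure times are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures and their exposure times in seconds.
images = [cv2.imread(p) for p in ('under.jpg', 'mid.jpg', 'over.jpg')]
times = np.array([1 / 125.0, 1 / 30.0, 1 / 8.0], dtype=np.float32)

# Response curve, then radiance map -- the two building blocks above.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone mapping compresses the radiance map into a displayable range,
# fixing the dull look described in the learnings.
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
cv2.imwrite('hdr_reinhard.jpg', np.clip(ldr * 255, 0, 255).astype(np.uint8))
```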
this assignment is to build a point cloud and a panorama using available software, to learn the usage and capabilities of the basics learnt earlier. • A point cloud is a set of data points in a three-dimensional coordinate system, used to represent three-dimensional objects or to capture a location in three dimensions. • The point cloud and panorama were captured in Central Park, Santa Clara, CA, a public park with a great water fountain filled with ducks and a trail around the fountain. LEARNINGS / OUTCOME • This assignment taught me the different projection systems for creating a panorama and helped me identify my attempt as a cylindrical panorama with a planar reprojection. • The point clouds, together with the related Microsoft Photosynth product, showed how multiple images can form a novel image and enable crowd-sourced image clouds.
images available here. Created using OpenSFM. • The images used to form the point cloud were captured in Central Park, Santa Clara, CA. I wanted to create the point cloud as a "walk" and capture various images along the path. • To provide a wider view of the scene, I took three overlapping pictures at every step, which is evident from the camera angles in the point cloud.
I attempted different spatial projections, as the earlier ones were only horizontal. • So I took images of the top and bottom sections of the view (vertical projections) and stitched them all together using Photoshop to a FOV of 170°. • The image is available at Google Drive and at Kuulu.
a part of a video which can generate an infinite-loop video with minimal or no visible break. • As part of this assignment, we build a pipeline to compute the similarity matrix between the frames of a video and use it to identify and create a video texture (a sketch of the similarity computation follows below). • The assignment focuses on finding the biggest loop possible from the similarity matrix and generating the infinitely looping video. LEARNINGS / OUTCOME • Being the first assignment using a video instead of images, this assignment helped me understand how multiple frames compose a video. • The building blocks of the assignment provided insight into how the pipeline creates a video texture. Assignment #11: Video Textures
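A vectorized sketch of the similarity matrix: pairwise sum-of-squared-differences between all frames, where low values mark visually similar frames and hence candidate loop transitions:

```python
import numpy as np

def similarity_matrix(frames):
    """frames: list of equal-sized frames (e.g. grayscale uint8 arrays)."""
    flat = np.stack([f.ravel().astype(np.float64) for f in frames])
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, evaluated for all pairs at once.
    sq = np.sum(flat ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (flat @ flat.T)
```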
few other video clips available from the internet:
• Source: https://www.youtube.com/watch?v=5lCK4jkqQxA - video texture GIF: http://imgur.com/a/OauHQ
• Source: https://www.youtube.com/watch?v=WE-nFpUScEU - video texture GIF: http://imgur.com/a/V2BgY
this paper (Whiteboard scanning and image enhancement), which discusses a method for extracting usable documentation from whiteboard images - a useful artifact for many groups. • The scope of this project is to take a picture of a whiteboard from any perspective and produce a perspective-corrected, trimmed and enhanced image, which makes it much easier to save and further process whiteboard images (a sketch of the perspective-correction step follows below). • As part of the project, I also included the capability to take in multiple images forming a panorama of the whiteboard and produce a corrected output. • I also ported the code to Android and created an app to perform the same activity as described.
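The perspective-correction step maps the whiteboard's four detected corners onto an upright rectangle; a sketch in which corner detection is assumed to have already happened and the output size is arbitrary:

```python
import cv2
import numpy as np

def correct_perspective(img, corners, out_w=1600, out_h=1200):
    """corners: four (x, y) points in the order top-left, top-right,
    bottom-right, bottom-left, found by an earlier detection stage."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, H, (out_w, out_h))
```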
Videos of the app usage: using the camera; using an existing image. • This was a great learning experience in using feature detection, customized to address specific image types, generate novel images and find useful data. • This project helped me learn about different contrast and image-data enhancement techniques.
- OMSCS • Prof. Irfan Essa, for the lectures and such an enriching course • The TAs, who were always available to discuss and help with our questions and thoughts on both Piazza and Slack • Stack Overflow, which helped with all the questions I had regarding NumPy operations, OpenCV errors, etc. Thanks!