Slide 1

Thresholding and Smoothing
Jeyaram Ashokraj

1. Fixed Thresholding

Fixed thresholding is the basic binarization technique. It compares each pixel intensity with a user-given threshold and sets the pixel to one of two levels (for example 0 and 1):

I_new(i, j) = 0 if I_old(i, j) < Threshold
I_new(i, j) = 1 otherwise

(Fig 1.1 Original image) (Fig 1.2 Thresholding applied to the entire image) (Fig 1.3 Binarized image with 2 ROIs) (Fig 1.4 Binarized image with 4 ROIs)
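
A minimal sketch of fixed thresholding, assuming an 8-bit grayscale image held in a NumPy array (the function name and example values are illustrative, not part of the assignment code):

import numpy as np

def fixed_threshold(image, threshold):
    """Binarize a grayscale image: 0 where intensity < threshold, 1 otherwise."""
    return np.where(image < threshold, 0, 1).astype(np.uint8)

# Example with an arbitrary test image and T = 120.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
binary = fixed_threshold(image, 120)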

Slide 2

(Fig 1.3: thres1 = 120, thres2 = 150) (Fig 1.4: thres1 = 120, thres2 = 150, thres3 = 60, thres4 = 200)

Some samples of fixed thresholding:
Fig 1.5a (T = 150), Fig 1.5b (T = 120, 100, 90), Fig 1.5c (T = 120), Fig 1.5d (T = 120, 120), Fig 1.5e (T = 120, 120, 100, 70, 90)

Slide 3

2. Variable Thresholding

Variable thresholding is an adaptive binarization technique. It is better than fixed thresholding since it factors in the brightness of the local neighbourhood, which helps to isolate the background better.

Threshold_new(i, j) = k1 * μ(i, j) + k2 * σ(i, j) + Threshold_old

Where k1 and k2 are user-supplied weights (in this assignment k1 = k2 = S), μ(i, j) is the mean of the neighbourhood window and σ(i, j) is the standard deviation of the neighbourhood window:

μ(i, j) = (1 / (m * n)) * Σ_a Σ_b I(i + a, j + b)
σ(i, j) = sqrt( (1 / (m * n)) * Σ_a Σ_b (I(i + a, j + b) - μ(i, j))^2 )

Where m and n are the window sizes and the sums run over the m x n neighbourhood window centered at (i, j).
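
A sketch of variable thresholding, assuming k1 = k2 = S and a square window of size W; it uses scipy.ndimage.uniform_filter to compute the local mean and variance (names and default values are illustrative):

import numpy as np
from scipy.ndimage import uniform_filter

def variable_threshold(image, base_threshold, S=0.5, window=7):
    """Adaptive binarization: per-pixel threshold = S*mean + S*std + base_threshold."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=window)              # local mean
    mean_sq = uniform_filter(img ** 2, size=window)      # local mean of squares
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))  # local standard deviation
    local_threshold = S * mean + S * std + base_threshold
    return (img >= local_threshold).astype(np.uint8)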

Slide 4

Fig 2.2a Original image; Fig 2.2b Variable thresholding (win_size = 7, thres = 70); Fig 2.2c Fixed thresholding (thres = 70)

Comparison of variable thresholding vs fixed thresholding: for the same threshold value, fixed thresholding does not perform well; the edges are not identified. Also, when the threshold for fixed thresholding is increased, the background mixes with the object. The images on the left show the fixed implementation with increased threshold values; in Fig 2.3b the background mixes with the object. Variable thresholding produces better results than fixed thresholding.

Fig 2.3a Fixed threshold (t = 120); Fig 2.3b Fixed threshold (t = 150)

Slide 5

Some samples of variable thresholding:
Fig 2.4a (S = 0.5, W = 7, T = 45); Fig 2.4b (S = 0.5, W = 7, T = 45, 45, 45); Fig 2.4c (S = 0.5, W = 7, T = 60); Fig 2.4d (S = 0.5, W = 7, T = 45, 45); Fig 2.4e (S = 0.5, W = 7, T = 30, 30, 30, 30, 30); Fig 2.4f (S = 0.5, W = 7, T = 45); Fig 2.4g (S = 0.5, W = 7, T = 45, 45)

Slide 6

3. Color Image Thresholding

A similar concept can be extended to color images. Color images are usually represented in a 3D color space; RGB is a frequently used representation. Here the implementation is based on the Euclidean distance between a user-defined color value and the pixels in the image. When the distance is less than the given threshold, the pixel is set to white, and to black otherwise.

distance = sqrt( (R_s - R_d)^2 + (G_s - G_d)^2 + (B_s - B_d)^2 )

Where R_s, G_s, B_s is the pixel in the source image and R_d, G_d, B_d is the color given by the user.

I(i, j) = 0 if distance < threshold in the RGB channels
I(i, j) = 1 otherwise

Fig 3.1a Original; Fig 3.1b Thresholded image (parameters: rgb = (60, 60, 60), threshold = 70)
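
A sketch of the Euclidean-distance color thresholding described above (the function name and the example parameters for Fig 3.1b are illustrative):

import numpy as np

def color_threshold(image_rgb, target_rgb, threshold):
    """Binarize an RGB image by Euclidean distance to a user-given color."""
    diff = image_rgb.astype(np.float64) - np.asarray(target_rgb, dtype=np.float64)
    distance = np.sqrt(np.sum(diff ** 2, axis=-1))
    # 0 where the pixel is within `threshold` of the target color, 1 otherwise.
    return np.where(distance < threshold, 0, 1).astype(np.uint8)

# Example parameters as in Fig 3.1b: target color (60, 60, 60), threshold 70.
# binary = color_threshold(image_rgb, (60, 60, 60), 70)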

Slide 7

Some samples of color thresholding (Figs 3.2a, 3.2b, 3.2c, 3.2d), with the parameter sets used:
T = 100, RGB = (60, 60, 60); T = 75, RGB = (50, 50, 50); T = 120, RGB = (45, 75, 45); T = 130, RGB = (12, 64, 60); T = 70, RGB = (60, 60, 60); T = 80, RGB = (155, 20, 60); T = 85, RGB = (70, 70, 70); T = 100, RGB = (70, 58, 120); T = 200, RGB = (23, 200, 60); T = 50, RGB = (30, 50, 90)

Slide 8

4. Gaussian Smoothing

Smoothing is used to reduce image noise. There are several filters available to implement smoothing (uniform smoothing, median smoothing, weighted smoothing, etc.). Gaussian smoothing is one such method for reducing noise. The kernel weights are calculated from the 1D Gaussian function, and the resulting kernel is convolved with the original pixels in the window:

f(x) = (1 / sqrt(2 * π * σ^2)) * e^( -(x - μ)^2 / (2 * σ^2) )

The weights are normalized so that they sum to 1. The weights are then convolved with the source image:

I_N(i, j) = Σ_{m = -k}^{k} Σ_{n = -k}^{k} a_mn * I_O(i + m, j + n)

Where I_N is the new pixel intensity, I_O is the original pixel intensity, m x n is the window size and a_mn is the Gaussian weight matrix of size m x n.

Fig 4.1a. Gaussian smoothing applied without edge preservation; the edges are blurred with this approach. (Sigma = 0.9)

5. Edge Preserving Gaussian Smoothing

Slide 9

Edge-preserving smoothing techniques retain the edges while smoothing the other areas of the image. This is implemented using a user-given threshold value:

I_NN(i, j) = I_N(i, j) if |I_N(i, j) - I_O(i, j)| < threshold
I_NN(i, j) = I_O(i, j) otherwise

Where I_NN is the new intensity after the edge-preserving operation, I_N is the pixel intensity after applying Gaussian smoothing, and I_O is the original pixel intensity.

Fig 5.1a. Gaussian smoothing with edge preservation; the edges are more visible now. (Sigma = 0.9 and threshold = 50)

Some samples of images with edge-preserving Gaussian smoothing applied: Fig 5.1b (left) (sigma = 0.9, threshold = ); Fig 5.1 (right) (sigma = 0.8, threshold = 40)

Original images (Kodak True Color Images: http://r0k.us/graphics/kodak/)
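
A sketch covering sections 4 and 5 together, assuming scipy.ndimage is available: a normalized Gaussian kernel is built from the 1D Gaussian function, the image is smoothed by convolution, and the smoothed value is kept only where it stays within the threshold of the original pixel (function names, window radius and defaults are illustrative):

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(sigma, radius):
    """2D Gaussian kernel built from the 1D Gaussian, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

def edge_preserving_smooth(image, sigma=0.9, radius=2, threshold=50):
    """Gaussian smoothing that falls back to the original pixel near strong edges."""
    original = image.astype(np.float64)
    smoothed = convolve(original, gaussian_kernel(sigma, radius))
    keep = np.abs(smoothed - original) < threshold   # |I_N - I_O| < threshold
    return np.where(keep, smoothed, original).astype(image.dtype)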

Slide 10

● These images were converted to grayscale (.pgm) and color (.ppm) image files using the ImageMagick convert utility.

Slide 11

Image Histogram Equalization
Jeyaram Ashokraj

1. Histogram Equalization

An image histogram represents the frequency of pixel intensities in the image. Image histogram equalization is a brightness transformation process through which a better contrast is obtained for the image. The aim is to obtain a uniformly distributed brightness over the whole brightness scale. Image source: http://en.wikipedia.org/wiki/Histogram_equalization

2. Grey Level Equalization Algorithm (a sketch follows the steps below):
1. Construct the image histogram g(p).
1.1. For every pixel in the M x N region, read the pixel and increase the count of the relevant intensity level (256 levels in this case) in g(p).
2. Construct the cumulative histogram c(p).
2.1. c(p) = c(p - 1) + g(p), where p = 0, 1, ..., 255.
3. Construct the equalized lookup table.
3.1. τ(p) = (255 / (M * N)) * c(p), where M * N is the total number of pixels.
4. Obtain the new pixel value by looking it up in the equalized table τ(p).
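
A minimal sketch of steps 1-4 above, assuming an 8-bit grayscale image in a NumPy array (the function name is illustrative):

import numpy as np

def equalize_gray(image):
    """Grey-level histogram equalization (steps 1-4 above) for an 8-bit image."""
    hist = np.bincount(image.ravel(), minlength=256)            # step 1: histogram g(p)
    cum = np.cumsum(hist)                                       # step 2: cumulative c(p)
    lut = np.round(255.0 * cum / image.size).astype(np.uint8)   # step 3: tau(p)
    return lut[image]                                           # step 4: lookup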


Slide 13

2.1 With Histogram Clipping: Before constructing the histogram, pixel intensities less than a given threshold are set to zero. (threshold: 60)

Sample Output:

3. Color Equalization

3.1 RGB Channels

Histogram equalization is performed on the individual RGB channels. This produces pseudocolor in the image. The process is the same as grey-level histogram equalization, but applied to each RGB channel separately (sketched after the channel figures below).

Red Channel:

Slide 14

Original / Equalized
Green Channel: Original / Equalized
Blue Channel: Original / Equalized
Sample Output:
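
A sketch of the two variants above, reusing equalize_gray from the earlier sketch: histogram clipping (section 2.1) and independent equalization of each RGB channel (section 3.1). Function names and the default threshold are illustrative:

import numpy as np

def equalize_with_clipping(image, clip_threshold=60):
    """Set intensities below the threshold to zero, then equalize (section 2.1)."""
    clipped = np.where(image < clip_threshold, 0, image).astype(np.uint8)
    return equalize_gray(clipped)

def equalize_rgb_channels(image_rgb):
    """Equalize each RGB channel independently (section 3.1); may produce pseudocolor."""
    return np.dstack([equalize_gray(image_rgb[..., c]) for c in range(3)])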

Slide 15

3.2 Intensity Channel: The image is converted to the HSI color space, and histogram equalization is then applied to the intensity channel.
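
A simplified sketch of intensity-channel equalization that avoids a full HSI round trip: the intensity is taken as the mean of R, G and B, equalized with equalize_gray from the earlier sketch, and the RGB values are rescaled by the ratio of new to old intensity so hue and saturation are approximately preserved. This is an approximation of the HSI-based method described above, not the exact conversion:

import numpy as np

def equalize_intensity(image_rgb):
    """Equalize only the intensity component of an RGB image (approximate HSI method)."""
    rgb = image_rgb.astype(np.float64)
    intensity = rgb.mean(axis=-1)                                   # I = (R + G + B) / 3
    equalized = equalize_gray(np.round(intensity).astype(np.uint8)).astype(np.float64)
    scale = equalized / np.maximum(intensity, 1e-6)                 # avoid division by zero
    return np.clip(rgb * scale[..., None], 0, 255).astype(np.uint8)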

Slide 16

Sample Output: Compared to RGB histogram equalization, equalizing the intensity channel in HSI produces the desired effect.

Slide 17

3.3 HSI Channels

Histogram equalization is performed on the individual HSI channels, which produces pseudocolor in the image. The image is converted from the RGB representation to the HSI model, histogram equalization is applied to each HSI channel, and the result is converted back to the RGB representation.

Sample Output: Histogram: Hue: Original / Equalized

Slide 18

Saturation: Original / Equalized

Slide 19

Intensity: Original / Equalized

Slide 20

Edge Detection & Hough Transforms
Jeyaram Ashokraj

Edge detection is one of the important dimension reduction / feature extraction steps in the preprocessing stage of the image processing pipeline. Edges carry important information about the objects and geometric shapes in the image. There are many operators for edge detection; the Sobel operator is the one used in this assignment. Other operators include Prewitt, Roberts, Canny, etc.

The Sobel operator has two 3 x 3 kernels, one for each of the x and y directions. Each kernel is convolved with the original image to find the gradient along the x and y directions respectively. The magnitude of the edge is given by G = sqrt(Gx^2 + Gy^2) and the gradient direction by θ = atan(Gy / Gx). Formula source: http://en.wikipedia.org/wiki/Sobel_operator

Gray scale edge detection: Several gray scale images were convolved with the Sobel operator to find the edges. a) Original Image b) Edge amplitude image
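
A sketch of Sobel edge detection computing the gradient magnitude and direction (the kernel constants follow the standard Sobel definition; names are illustrative):

import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edges(image):
    """Return gradient magnitude and direction (in radians) of a grayscale image."""
    img = image.astype(np.float64)
    gx = convolve(img, SOBEL_X)
    gy = convolve(img, SOBEL_Y)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    direction = np.arctan2(gy, gx)
    return magnitude, direction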

Slide 21

Angle Thresholding: a) Original Image b) Edge Amplitude c) Edge thresholded at 200 d) Angle thresholded at 45°

Other Samples: Original images and the thresholded amplitude images

Slide 22

Color edge detection:

1) On individual RGB channels: Edge detection on the individual channels does not provide the desired results. This is expected behaviour, since edges are defined by changes in brightness, and the individual RGB channels capture color information rather than overall intensity. a) Original Image b) Red channel amplitude image c) Green channel amplitude image d) Blue channel amplitude image

2) On the Intensity channel: The RGB color image is transformed to HSI and edge detection is performed on the Intensity channel. This provides the desired effect compared to edge detection on the RGB channels.
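
A short sketch of the intensity-channel approach, reusing sobel_edges from the earlier sketch and approximating the HSI intensity as the mean of R, G and B:

import numpy as np

def intensity_edge_amplitude(image_rgb):
    """Edge amplitude computed on the intensity channel of a color image."""
    intensity = image_rgb.astype(np.float64).mean(axis=-1)   # I = (R + G + B) / 3
    magnitude, _ = sobel_edges(intensity)
    return magnitude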

Slide 23

Edge detection on the intensity channel of the HSI model captures the edges quite well; variations in lighting are represented well by the intensity channel. Other samples: a) Original Image b) Edge amplitude image on the Intensity channel

Slide 24

Hough transform:

The Hough transform is a feature extraction technique used to find geometric shapes (lines, circles and ellipses); arbitrary shapes can be found using the randomized Hough transform. The idea is to transform the image points into a parameter space (a, b, r), where the equation of the circle is given by

(x - a)^2 + (y - b)^2 = r^2

Where x, y are image edge points, a, b is the center of the circle and r is the radius.

The Hough space is represented by an accumulator array. Each edge pixel (image point) corresponds to a set of circles in Hough space, and a voting technique is used: wherever these circles intersect, the accumulator array is incremented.

Algorithm (a sketch follows below):
● The 3D accumulator space [a][b][r] is set to 0.
● For each edge point (i, j) in the image: increment all accumulator cells (a, b, r) that could, according to the equation of a circle ((i - a)^2 + (j - b)^2 = r^2), be the center of a circle passing through that point.
● Search for the local maxima cells; these cells represent the circle centers.
● Using the local maxima (a, b, r), the circle is drawn in the original image using the parametric equations x = a + r * cos(θ), y = b + r * sin(θ).

Below is a sample of the Hough transform applied to an image to find circles with radius 73 to 76.
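
A brute-force sketch of the accumulator voting described above (the sample output follows on the next slide; function and parameter names are illustrative, and edge_points is assumed to be a list of (row, column) edge coordinates):

import numpy as np

def hough_circles(edge_points, shape, radii, vote_threshold):
    """Circle Hough transform over a 3D accumulator indexed by [a][b][r]."""
    height, width = shape
    accumulator = np.zeros((height, width, len(radii)), dtype=np.int32)
    thetas = np.deg2rad(np.arange(0, 360))
    for (i, j) in edge_points:
        for r_index, r in enumerate(radii):
            # Candidate centers of circles of radius r passing through (i, j).
            a = np.round(i - r * np.cos(thetas)).astype(int)
            b = np.round(j - r * np.sin(thetas)).astype(int)
            valid = (a >= 0) & (a < height) & (b >= 0) & (b < width)
            np.add.at(accumulator, (a[valid], b[valid], r_index), 1)
    # Cells above the vote threshold are taken as detected circle centers.
    peaks = np.argwhere(accumulator >= vote_threshold)
    return [(a, b, radii[k]) for a, b, k in peaks]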

Slide 25

a) Input Image b) Edge Intensity (Sobel) c) Hough space (accumulator array) d) Object detected (radius: 75)

Other Samples:

Conclusion:

Slide 26

The computation is expensive when a range of radii is given for the circles to be detected; if the exact radius is known, the accumulator space reduces to a two-dimensional matrix. Also, the threshold parameter for selecting the local maxima has to be tuned to get the desired results and the circles detected.

Slide 27

Discrete Fourier Transforms
Jeyaram Ashokraj

Discrete Fourier and Inverse Fourier Transforms: The Fourier transform converts an image from the spatial domain to the frequency domain, decomposing it into sine and cosine components. Fourier transforms have applications in image filtering and image compression.

The 2D Fourier transform is given by
F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) * e^( -j * 2π * (u*x/M + v*y/N) )

The 2D inverse Fourier transform is given by
f(x, y) = (1 / (M * N)) * Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u, v) * e^( j * 2π * (u*x/M + v*y/N) )

Original image / Fourier domain amplitude image
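
A sketch of the forward and inverse transforms using NumPy's FFT routines (the assignment itself uses FFTW; this is only an illustration):

import numpy as np

def fourier_amplitude(image):
    """Forward 2D DFT, shifted so the zero frequency is centered; log-scaled amplitude."""
    spectrum = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    amplitude = np.log1p(np.abs(spectrum))
    return spectrum, amplitude

def inverse_fourier(spectrum):
    """Inverse 2D DFT back to the spatial domain."""
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))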

Slide 28

Low Pass Filter: A low-pass filter is a smoothing filter; it allows low frequencies to pass through. This removes noise but also blurs the image and its edges. Low pass vs regular smoothing: an ideal low-pass filter does not produce a clean smoothing effect; it causes a ringing effect.

High Pass Filter: A high-pass filter allows high frequencies to pass through. Examples of high-frequency content are edges, where pixel intensities change rapidly. A high-pass filter alone is not sufficient for edge detection; edge detection is possible when the Fourier domain is multiplied by the transform of an edge detection kernel such as Sobel or Prewitt and the inverse Fourier transform is then applied.
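
A sketch of an ideal low-pass filter in the frequency domain, reusing fourier_amplitude and inverse_fourier from the sketch above (the cutoff radius is an illustrative parameter; inverting the mask gives the corresponding high-pass filter):

import numpy as np

def ideal_lowpass(image, cutoff):
    """Zero out all frequencies farther than `cutoff` from the center of the spectrum."""
    spectrum, _ = fourier_amplitude(image)
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    distance = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    mask = distance <= cutoff        # use `distance > cutoff` for a high-pass filter
    return inverse_fourier(spectrum * mask)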

Slide 29

Band Pass Filter: A band-pass filter allows a particular interval of frequencies to pass through; it keeps the medium frequencies.

Band Stop Filter: A band-stop filter cuts off a particular interval of frequencies and allows the remaining frequencies to pass through.

Color Images: Fourier transforms were applied to RGB images by first converting them to the HSI model. The Fourier transform was applied to the Intensity channel, and the result was converted back from HSI to RGB. Both the forward and inverse Fourier transforms were applied. Images obtained from a microscope were used.

Slide 30

Implementation: This assignment uses the fast Fourier transform implementation provided by http://www.fftw.org/, which computes the transform in O(N log N) time.

References:
1. http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
2. http://www.fftw.org/
3. http://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm