# Digital Image Processing, Assignment 3: Edge Detection

Image Processing & Computer Vision

Exercise 3 Edge Detection and Morphological Operations

Zhongli Liu S1018531 M-EMSYS

Address: Calslaan 26-004 Email: z.liu@student.utwente.nl 28-2-2010



Part I: Edge-based image segmentation

The original image is shown in Figure 1.

Figure 1

First, the image is converted to double type and its data rescaled for the subsequent processing:

```matlab
img = imread('im_test.png');
img = im2double(img);
```

1. Use the function fspecial to create a Gaussian PSF-matrix h with a scale parameter (the width of the Gaussian) of σ. The size of h should be such that the truncation error of the tails of the Gaussian is negligible. Apply the filter to the test image with four different values of σ, namely σ = 1, 3, 5 and 9, and explain the results.

To keep the truncation error of the tails of the Gaussian negligible, the kernel half-sizes should be chosen according to K ≥ 3σx/Δ and L ≥ 3σy/Δ. With σ = 1, 3, 5, 9 and Δ = 1, for σ = i the size of h must satisfy (2K+1)×(2L+1) ≥ (6i+1)×(6i+1).

```matlab
sigma = [1 3 5 9];   % fill array with sigma values
for i = 1:4
    % create the PSF, K = L >= 3*sigma(i), size(h) = (2K+1) x (2L+1)
    h = fspecial('gaussian', 6*sigma(i)+1, sigma(i));
    imfiltered = imfilter(img, h, 'replicate');   % apply the filter
    figure(1), subplot(2,2,i);
    imshow(imfiltered, []);
    title(['sigma = ' num2str(sigma(i))]);
end
```

The Gaussian filter generates a scale-space representation of the given image and suppresses some of the obvious noise. The results are shown in Figures 2.1–2.4. As σ increases, the image is smoothed with an ever larger filter, which removes more and more of the detail the image contains. When the scale is too small, the image keeps many irrelevant details and stays sensitive to noise; when the scale is too large, adjacent objects start to mix and more detail is lost.
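As an illustrative cross-check (not part of the MATLAB assignment), the same truncation rule can be sketched in Python with scipy.ndimage.gaussian_filter, whose truncate parameter plays the role of the K ≥ 3σ bound; the test image here is a hypothetical stand-in for im_test.png:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical stand-in for the test image: a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

results = {}
for sigma in (1, 3, 5, 9):
    # truncate=3 cuts the kernel at 3*sigma, matching the K >= 3*sigma rule
    # (a (6*sigma + 1)-sized kernel in the MATLAB version).
    results[sigma] = gaussian_filter(img, sigma=sigma, truncate=3.0,
                                     mode='nearest')
    # Larger sigma smooths more: the square's peak value drops as its
    # energy is spread over a wider area.
```

The monotone drop of the peak value with σ is exactly the "more and more detail removed" behaviour described above.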



Figure 2.1

Figure 2.2

Figure 2.3

Figure 2.4

2. In scale-space theory it has been argued that the best low-pass filter needed for differentiation is the Gaussian:

$$h(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

- Give an expression for hx(x, y).
- Create a PSF-matrix hx that contains a sampled version hx(nΔ, mΔ) of hx(x, y). Implement this with σ = 7Δ (or equivalently σ = 7 and Δ = 1).
- Use psf2otf and mesh to obtain a 3D graph of the OTF (hint: use the appropriate arguments).
- Apply the convolution with hx to the test image.
- Explain the results.

The expression for hx(x, y) is obtained by differentiating h(x, y) with respect to x:

$$h_x(x, y) = -\frac{x}{\sigma^2} \cdot \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right) = -\frac{x}{\sigma^2}\, h(x, y)$$

The PSF matrix hx is built as below; it is easily obtained by applying the factor $(-x/\sigma^2)$

to each column of the matrix h. After filtering with hx, the image shown in Figure 3 is obtained. The generated image contains the Gaussian-smoothed (scale σ = 7) derivatives of the grey levels of the original image: where the grey level increases in the x-direction the corresponding pixel values are positive, and where it decreases they are negative.

```matlab
h = fspecial('gaussian', 6*7+1, 7);   % 2K+1 = 2L+1 = 43, K = L = 21
h_dx = (-1/49) * (meshgrid(-21:21, -21:21) .* h);   % -x/sigma^2 times h
figure(2), imshow(imfilter(img, h_dx, 'replicate'), []);
```

Figure 3

The conversion of the PSF matrix hx to its OTF is implemented as below; the magnitude of the OTF, after applying fftshift, is shown in Figure 4.

```matlab
h_dx_OTF = fftshift(psf2otf(h_dx, [250 200]));
figure(3), mesh(abs(h_dx_OTF)), axis on
```
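As a side check (an illustrative Python sketch, not from the assignment), the derivative-of-Gaussian PSF can be built directly from the formula and its DFT inspected; a derivative filter must have zero response at DC:

```python
import numpy as np

# Build the derivative-of-Gaussian PSF h_x = -(x / sigma^2) * h(x, y)
# for sigma = 7, with half-size K = L = 3*sigma = 21.
sigma = 7
k = 3 * sigma
x = np.arange(-k, k + 1)
X, Y = np.meshgrid(x, x)                 # X varies along the columns
h = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
h_dx = (-X / sigma**2) * h               # each column of h scaled by -x/sigma^2

# The OTF is the 2-D DFT of the PSF; zero-padding to the image size
# [250 200] is what psf2otf does (up to a circular shift).
otf = np.fft.fft2(h_dx, s=(250, 200))
```

The zero at DC explains the appearance of the filtered image: constant regions map to zero, and only grey-level transitions in x survive.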

Figure 4


Other derivatives, like f_y(n, m), f_xx(n, m), f_yy(n, m), and f_xy(n, m), can be obtained in a likewise manner¹. The function ut_gauss is an easy-to-use implementation for that. Apply these operations to the test image. Use σ = 7. Describe and explain.

The results are shown in Figures 5.1–5.5: filtering in the horizontal direction, in the vertical direction, or in a combination of the two. The first derivative of the Gaussian works as described above (hy is analogous to hx, but in the vertical direction). The second derivative is the straightforward extension of the first: it responds to how fast the grey levels of the first derivative change. In effect, a second derivative (dxdx or dydy) coarsely marks an edge and the opposite edge of an object in the image, which is exploited in the following steps.
```matlab
img_dx = ut_gauss(img, 7, 1, 0);
img_dy = ut_gauss(img, 7, 0, 1);
img_dxdx = ut_gauss(img, 7, 2, 0);
img_dydy = ut_gauss(img, 7, 0, 2);
img_dxdy = ut_gauss(img, 7, 1, 1);
```
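For readers without ut_gauss, the same Gaussian derivatives can be sketched in Python with scipy.ndimage.gaussian_filter and its order argument; the step-edge image below is a hypothetical stand-in for the test image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical test image: a vertical step edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0

sigma = 7
# order=(dy, dx): derivative order per axis (axis 0 is y/rows, axis 1 is x/columns)
img_dx   = gaussian_filter(img, sigma, order=(0, 1))
img_dy   = gaussian_filter(img, sigma, order=(1, 0))
img_dxdx = gaussian_filter(img, sigma, order=(0, 2))
img_dydy = gaussian_filter(img, sigma, order=(2, 0))
img_dxdy = gaussian_filter(img, sigma, order=(1, 1))
# On this vertical edge the x-derivative peaks at the edge, while the
# y-derivatives are zero everywhere (nothing changes along y).
```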

Figure 5.1

Figure 5.2

Figure 5.3

Figure 5.4

Figure 5.5

¹ It is assumed that Δ = 1 as the default value.



3. Using the results from question 2, calculate the gradient magnitude and the Laplacian of the test image. Don't use for-loops. Use σ = 7. Describe and explain.

The gradient magnitude and the Laplacian of the image are calculated and shown in Figure 6 and Figure 7 respectively. The gradient magnitude gives, for each pixel, the largest grey-level change (as an absolute value) over all possible directions. White areas in the generated image therefore mark object edges, where the gradient magnitude is large, while dark areas mark locations where the grey level barely changes, i.e. most likely the inside or the outside of an object. The Laplacian combines the second derivatives in the x- and y-directions and is used for searching zero-crossing points in the following.

```matlab
gradi_mag = sqrt((img_dx .* img_dx) + (img_dy .* img_dy));
```

Figure 6

```matlab
laplacian = img_dxdx + img_dydy;
```
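The same two quantities can be sketched in Python (illustrative only; the disc image is a hypothetical stand-in for the test image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical test image: a bright disc on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32)**2 + (yy - 32)**2 < 12**2).astype(float)

sigma = 7
img_dx   = gaussian_filter(img, sigma, order=(0, 1))
img_dy   = gaussian_filter(img, sigma, order=(1, 0))
img_dxdx = gaussian_filter(img, sigma, order=(0, 2))
img_dydy = gaussian_filter(img, sigma, order=(2, 0))

grad_mag  = np.sqrt(img_dx**2 + img_dy**2)   # large on the disc's rim
laplacian = img_dxdx + img_dydy              # changes sign near the rim
```

The gradient magnitude is maximal on the rim and (by symmetry) zero at the disc centre, while the Laplacian is negative inside the smoothed disc, matching the description above.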

Figure 7

4. A cheap (but not very accurate) zero-crossing detection is accomplished by first finding areas with a positive Laplacian, and next finding the boundary of these areas. We apply this procedure to the Laplacian of the test image obtained in question 3:

- Create a binary image in which a pixel is set to 1 if the corresponding Laplacian for that pixel is positive. (Don't use for-loops!)
- Determine the boundary pixels of the binary image, and display it on the screen. (Hint: use bwmorph with the option 'remove'.)
- This method for finding the zero crossings has some disadvantages. Can you mention them?

The processing is implemented with the following code; the results of the first and second steps are shown in Figure 8.1 and Figure 8.2 respectively. The disadvantage is that the Laplacian pays no attention to the magnitude of the gradient, which indicates the contrast of the "objects" in the image: every object edge, including noise, is detected, as shown in Figure 8.2.

```matlab
img_lap = im2bw(laplacian, 0);
mask_lap = bwmorph(img_lap, 'remove');
```
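The same cheap zero-crossing procedure can be sketched in Python; binary_erosion is used here to mimic the effect of bwmorph's 'remove' option (illustrative, with a hypothetical disc image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_erosion

# Hypothetical test image: a bright disc on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32)**2 + (yy - 32)**2 < 12**2).astype(float)

sigma = 7
laplacian = (gaussian_filter(img, sigma, order=(0, 2)) +
             gaussian_filter(img, sigma, order=(2, 0)))

# Step 1: binary image of the positive-Laplacian areas (no for-loops).
pos = laplacian > 0
# Step 2: the boundary is the region minus its erosion, the same effect
# as bwmorph(img_lap, 'remove') in MATLAB.
boundary = pos & ~binary_erosion(pos)
```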

Figure 8.1

Figure 8.2



Figure 9

The code using a single threshold and using a combination of a large and a small threshold is shown below; the results are shown in Figure 10.1 and Figure 10.2 respectively. The threshold of 0.008 is suitable for the detection: no edges are missing and there is almost no noise (except for some noise at the image boundary that connects to the detected edge). The threshold 0.005 produces many false edges, while the threshold 0.015 causes some weak edges to be missed. However, reconstruction combining the two cases generates an acceptable edge map, as shown in Figure 10.2.

```matlab
img_edge1 = im2bw(gra_masked, 0.008);
marker = im2bw(gra_masked, 0.015);
mask = im2bw(gra_masked, 0.005);
img_edge2 = imreconstruct(marker, mask);
```
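The reconstruction step can be sketched in Python with scipy.ndimage.binary_propagation, which performs morphological reconstruction by dilation in the same way as imreconstruct; the gradient values below are toy numbers chosen to suit the 0.015/0.005 thresholds:

```python
import numpy as np
from scipy.ndimage import binary_propagation

# Toy gradient-magnitude values (hypothetical), standing in for gra_masked.
grad = np.array([[0.001, 0.006, 0.020, 0.006, 0.001],
                 [0.001, 0.006, 0.016, 0.006, 0.001],
                 [0.006, 0.002, 0.003, 0.002, 0.001]])

marker = grad > 0.015   # strong edges (high threshold, keeps no noise)
mask   = grad > 0.005   # weak edges  (low threshold, keeps everything)

# binary_propagation grows the marker within the mask: weak pixels survive
# only if they are connected to at least one strong pixel.
edges = binary_propagation(marker, mask=mask)
```

The weak pixel at (2, 0) is in the mask but not connected to any strong seed, so reconstruction discards it while keeping the connected weak edge pixels.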

Figure 10.1

Figure 10.2

6. Edge detection

- Describe in one sentence the main difference between the localization criterion of the Marr-Hildreth edge detector and that of the Canny edge detector.


Answer: The main difference is that the Canny edge detector localizes edges at the extremes of the first derivative of the Gaussian-filtered image, whereas the Marr-Hildreth edge detector localizes them at the zero-crossings of the Laplacian of the Gaussian-filtered image.

- Apply ut_edge to the test image in order to get an edge map in which the edge elements form two non-fragmented edge segments. Select the options of ut_edge as follows:
  - Set the option 'c', that is, apply Canny. Later we will also use 'm' (Marr-Hildreth).
  - Use the option 's' to set the scale. For now use σ = 7. Later we will vary this parameter.
  - Use the option 'h' (hysteresis thresholding). The thresholds are specified as two additional parameters. Read the help of the function to understand how these parameters are defined. Select them interactively such that two non-broken edge segments arise. Perhaps you need some iterations to establish the right thresholds.
  - Display the resulting edge map. Describe the results.

The edge maps generated by the Canny edge detector and the Marr-Hildreth edge detector are shown in Figure 11.1 and Figure 11.2 respectively. The larger threshold is chosen as 0.1 and the smaller one as 0.0035. Comparing the two figures, the Canny edge detector performs better: the detected edge is smoother and closer to the real situation.
```matlab
img_edge3 = ut_edge(img, 'c', 's', 7, 'h', [0.1, 0.0035]);
img_edge4 = ut_edge(img, 'm', 's', 7, 'h', [0.1, 0.0035]);
```
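For comparison, a Canny detector with hysteresis thresholds is also available as skimage.feature.canny; the sketch below is illustrative, with a synthetic image and thresholds tuned to it rather than the assignment's values:

```python
import numpy as np
from skimage.feature import canny

# Hypothetical stand-in for the test image: a noisy horizontal band,
# roughly mimicking the vessel.
rng = np.random.default_rng(0)
img = np.zeros((100, 200))
img[40:60, :] = 1.0
img += 0.02 * rng.standard_normal(img.shape)

# sigma sets the scale and the two thresholds drive the hysteresis,
# mirroring ut_edge(img, 'c', 's', 7, 'h', [high, low]); the threshold
# values here are tuned to this synthetic image.
edges = canny(img, sigma=7, low_threshold=0.01, high_threshold=0.03)
```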

Figure 11.1

Figure 11.2

Part II: Estimation of the nominal diameter and the diameter of the stenose

7. Determine for each x (column) in the edge map the two boundary pixels of the vessel. You may use the knowledge that the vessel is horizontally aligned, and centred in the y-direction. Next, determine for each column the diameter (expressed in pixels). The result should be an array diam such that diam(c) yields the diameter at the c-th column. Create and display a plot of diam.

In this step img_edge3, obtained with the Canny edge detector, is used, and the measurement is implemented as below; L1 indicates the row of the first white pixel in a column and L2 the row of the second white pixel. The width of the vessel is the


absolute value of L1 minus L2. The plot in Figure 12 shows that the diameter is around 32 pixels.

```matlab
[V1 L1] = max(img_edge3(1:125, 1:200));
[V2 L2] = max(img_edge3(126:250, 1:200));
L2 = 125 + L2;
diam = abs(L1 - L2);
figure(9), plot(diam), axis([1 200 0 100]);
```
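The per-column measurement can be sketched in Python, where argmax plays the role of MATLAB's max-with-index (the edge map below is a toy stand-in for img_edge3):

```python
import numpy as np

# Toy edge map standing in for img_edge3: the two boundary lines of a
# horizontally aligned vessel, centred in the y-direction.
edge = np.zeros((250, 200), dtype=bool)
edge[109, :] = True   # upper vessel boundary
edge[141, :] = True   # lower vessel boundary

# argmax returns the row of the first True per column, so splitting the
# image at row 125 yields the two boundary positions L1 and L2.
L1 = np.argmax(edge[:125, :], axis=0)
L2 = 125 + np.argmax(edge[125:, :], axis=0)
diam = np.abs(L2 - L1)   # per-column diameter in pixels
```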

Figure 12

8. Estimate the nominal diameter by calculating the mean of the diameters obtained in 7. Use the function mean.

Using the mean() function, the nominal diameter is 30.9150 pixels, which equals 1.2366 mm.

```matlab
diam_mean = mean(diam);
diam_mean_mm = diam_mean * 1.5 / 37.5;   % 37.5 pixels per 1.5 mm
```
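The pixel-to-millimetre conversion is plain arithmetic; a quick check (the 1.5/37.5 scale factor is taken from the code above):

```python
# Pixel-to-millimetre conversion: 1.5 mm per 37.5 pixels = 0.04 mm/pixel.
diam_mean = 30.9150                      # nominal diameter in pixels
diam_mean_mm = diam_mean * 1.5 / 37.5    # nominal diameter in mm
```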



Appendix: Matlab code (used in Matlab 7.1)
```matlab
% zhongli, 12-3-2010
% step 0
img = imread('C:\Users\lenovo\Desktop\courses\Image Processing and Computer Vision\dbb_exercises\im_test.png');
img = im2double(img);
figure(10), imshow(img);

% step 1
sigma = [1 3 5 9];   % fill array with sigma values
for i = 1:4
    % create the PSF, K = L >= 3*sigma(i), size(h) = (2K+1) x (2L+1)
    h = fspecial('gaussian', 6*sigma(i)+1, sigma(i));
    imfiltered = imfilter(img, h, 'replicate');   % apply the filter
    figure(1), subplot(2,2,i);
    imshow(imfiltered, []);
    title(['sigma = ' num2str(sigma(i))]);
end

% step 2
h = fspecial('gaussian', 6*7+1, 7);   % 2K+1 = 2L+1 = 43, K = L = 21
h_dx = (-1/49) * (meshgrid(-21:21, -21:21) .* h);
figure(2), imshow(imfilter(img, h_dx, 'replicate'), []);
h_dx_OTF = fftshift(psf2otf(h_dx, [250 200]));
figure(3), mesh(abs(h_dx_OTF)), axis on
img_dx = ut_gauss(img, 7, 1, 0);
figure(4); subplot(2,3,1); imshow(img_dx, []), title('dx'); axis on
img_dy = ut_gauss(img, 7, 0, 1);
subplot(2,3,2); imshow(img_dy, []), title('dy'); axis on
img_dxdx = ut_gauss(img, 7, 2, 0);
subplot(2,3,3); imshow(img_dxdx, []), title('dxdx'); axis on
img_dydy = ut_gauss(img, 7, 0, 2);
subplot(2,3,4); imshow(img_dydy, []), title('dydy'); axis on
img_dxdy = ut_gauss(img, 7, 1, 1);
subplot(2,3,5); imshow(img_dxdy, []), title('dxdy'); axis on

% step 3
gradi_mag = sqrt((img_dx .* img_dx) + (img_dy .* img_dy));
laplacian = img_dxdx + img_dydy;
figure(5); subplot(1,3,1), imshow(laplacian, []);
```



