
Digital Image Processing Assignment 3: Edge Detection


Image Processing & Computer Vision

Exercise 3 Edge Detection and Morphological Operations

Zhongli Liu S1018531 M-EMSYS

Address: Calslaan 26-004 Email: z.liu@student.utwente.nl 28-2-2010



Part I: Edge-based image segmentation

The original image is shown in Figure 1.

Figure 1

Before further processing, the image is converted to double type, which rescales the grey values for the steps that follow.

img = imread('im_test.png');
img = im2double(img);
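As a side note, for a uint8 input im2double simply divides the grey values by 255, so an equivalent conversion (shown only for illustration) is:

img = double(imread('im_test.png')) / 255;   % grey values now in [0, 1]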

1. Use the function fspecial to create a Gaussian PSF-matrix h with a scale parameter (the width of the Gaussian) of σ. The size of h should be such that the truncation error of the tails of the Gaussian is negligible. Apply the filter to the test image with four different values of σ, that is: σ = 1, 3, 5 and 9. Explain the results.

To keep the truncation error of the Gaussian filter negligible, the half-sizes K and L of the kernel should be chosen according to KΔ ≥ 3σx and LΔ ≥ 3σy. With Δ = 1 and the required values σ = 1, 3, 5, 9, this means the size of h must satisfy (2K+1)×(2L+1) ≥ (6σ+1)×(6σ+1).

sigma = [1 3 5 9];                                % fill array with sigma values
for i = 1:4
    % create the PSF with K = L = 3*sigma(i), so h is (6*sigma(i)+1) square
    h = fspecial('gaussian', 6*sigma(i)+1, sigma(i));
    imfiltered = imfilter(img, h, 'replicate');   % apply the filter
    figure(1), subplot(2,2,i);
    imshow(imfiltered, []);
    title(['sigma = ' num2str(sigma(i))]);
end

The Gaussian filter generates a scale-space representation of the given image and suppresses some of its obvious noise. The results are shown in Figures 2.1–2.4. As σ increases, the image is smoothed with a larger and larger filter, which removes more and more of the detail the image contains. When the scale is too small, the image retains many irrelevant details and stays noise sensitive; when the scale is too large, adjacent objects begin to mix and more detail is lost.
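As a quick sanity check on the 3σ rule (my addition, not part of the exercise): the weight of a 1-D Gaussian outside ±3σ is only about 0.27%, so truncating there is indeed negligible.

sigma_t = 3;                                           % example scale
g = fspecial('gaussian', [1, 12*sigma_t+1], sigma_t);  % oversized 1-D kernel
c = ceil(numel(g)/2);                                  % centre index
tail = 1 - sum(g(c-3*sigma_t : c+3*sigma_t));          % weight outside +/-3 sigma
fprintf('weight outside +/-3 sigma: %.4f\n', tail);    % prints ~0.0027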



Figure 2.1

Figure 2.2

Figure 2.3

Figure 2.4

2. In the scale space theory it has been argued that the best low-pass filter needed for differentiation is the Gaussian:

$$h(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

o Give an expression for hx(x, y).
o Create a PSF-matrix hx that contains a sampled version hx(nΔ, mΔ) of hx(x, y). Implement this with σ = 7Δ (or equivalently σ = 7 and Δ = 1).
o Use psf2otf and mesh to obtain a 3D graph of the OTF (hint: use the appropriate arguments).
o Apply the convolution with hx to the test image.
o Explain the results.

The expression for hx(x, y) follows by differentiating h(x, y) with respect to x:

$$h_x(x, y) = \frac{\partial h}{\partial x} = -\frac{x}{\sigma^2} \cdot \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right) = -\frac{x}{\sigma^2}\, h(x, y)$$

The PSF-matrix hx is therefore easy to obtain: multiply the factor −x/σ²



with each column of the matrix h. After filtering with hx, the image is shown in Figure 3. The generated image contains the grey-level derivatives in the X direction of the original image, smoothed by a Gaussian with scale σ = 7. At locations where the grey level increases in the X direction the generated pixel values are positive, and at locations where the grey level decreases in the X direction they are negative.

h = fspecial('gaussian', 6*7+1, 7);                % 2K+1 = 2L+1 = 43, so K = L = 21
h_dx = (-1/49) * (meshgrid(-21:21, -21:21) .* h);  % multiply each column by -x/sigma^2
figure(2), imshow(imfilter(img, h_dx, 'replicate'), []);

Figure 3

The conversion of the PSF hx to its OTF is implemented as below; the magnitude of the OTF, shown in Figure 4, is displayed after applying fftshift.

h_dx_OTF = fftshift(psf2otf(h_dx, [250 200]));
figure(3), mesh(abs(h_dx_OTF)), axis on

Figure 4
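The shape of this surface can be predicted with the derivative theorem of the Fourier transform (this derivation is my addition, not part of the exercise text): the OTF of hx is

$$H_x(u, v) = j\,2\pi u \,\exp\!\left(-2\pi^2\sigma^2\,(u^2 + v^2)\right)$$

Its magnitude is zero along u = 0, band-pass in the u direction and low-pass in the v direction, which matches the two symmetric lobes visible in Figure 4.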


Other derivatives, like fy(n, m), fxx(n, m), fyy(n, m), and fxy(n, m), can be obtained in a similar manner¹. The function ut_gauss is an easy-to-use implementation for that. Apply these operations to the test image. Use σ = 7. Describe and explain.

The results are shown below (Figures 5.1–5.5): filtering in the horizontal direction, the vertical direction, or a combination of the two. The first derivative of the Gaussian filter works as described above (hy is similar to hx but operates in the vertical direction). The second derivative is the straightforward extension of the first derivative: it responds to the rate of change of the grey-level slope. In effect, a second derivative (dxdx or dydy) roughly picks out an edge of an object together with the opposite edge of that object, which is exploited in the following steps.
img_dx = ut_gauss(img,7,1,0);     % fx
img_dy = ut_gauss(img,7,0,1);     % fy
img_dxdx = ut_gauss(img,7,2,0);   % fxx
img_dydy = ut_gauss(img,7,0,2);   % fyy
img_dxdy = ut_gauss(img,7,1,1);   % fxy
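ut_gauss is part of the course toolbox. For readers without it, the following hypothetical helper (my own sketch, saved as gauss_deriv.m; the name and the sign convention are assumptions) illustrates what such a Gaussian-derivative filter computes for derivative orders up to two:

function out = gauss_deriv(img, sigma, ox, oy)
% GAUSS_DERIV Gaussian-smoothed derivative of order (ox, oy), each in 0..2.
    K = 3 * sigma;                                         % 3-sigma truncation
    x = -K:K;
    g0 = exp(-x.^2 / (2*sigma^2)) / (sqrt(2*pi) * sigma);  % 1-D Gaussian
    g1 = -(x / sigma^2) .* g0;                             % first derivative of g0
    g2 = ((x.^2 - sigma^2) / sigma^4) .* g0;               % second derivative of g0
    d = {g0, g1, g2};
    % the 2-D kernel is separable: row pass for the x-order, column pass for the y-order
    out = imfilter(img, d{ox+1}, 'replicate');             % horizontal pass
    out = imfilter(out, d{oy+1}.', 'replicate');           % vertical pass
end

With this, img_dx would be approximately gauss_deriv(img, 7, 1, 0); note that imfilter performs correlation, so the odd-order results may differ in sign from a convolution-based implementation such as ut_gauss.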

Figure 5.1

Figure 5.2

Figure 5.3

Figure 5.4

Figure 5.5

¹ It is assumed that Δ = 1 as the default value.



3. Using the results from question 2, calculate the gradient magnitude and the Laplacian of the test image. Don't use for-loops. Use σ = 7. Describe and explain.

The gradient magnitude and the Laplacian of the image are calculated and shown in Figure 6 and Figure 7 respectively. The gradient magnitude of a pixel, $|\nabla f| = \sqrt{f_x^2 + f_y^2}$, is the largest rate of change of the grey level at that pixel when every possible direction is taken into account. White areas in the generated image therefore indicate object edges, where the gradient magnitude is large, while dark areas indicate locations where the grey level hardly changes, i.e. pixels that are more likely inside or outside an object. The Laplacian, $\nabla^2 f = f_{xx} + f_{yy}$, combines the second derivatives in the X and Y directions and is used for searching zero-crossing points in the next question.

gradi_mag = sqrt((img_dx .* img_dx) + (img_dy .* img_dy));

Figure 6
laplacian = img_dxdx + img_dydy;

Figure 7

4. A cheap (but not very accurate) zero crossing detection is accomplished by first finding areas with a positive Laplacian, and next finding the boundary of these areas. We apply this procedure to the Laplacian of the test image obtained in question 3:
o Create a binary image in which a pixel is set to 1 if the corresponding Laplacian for that pixel is positive. (Don't use for-loops!)
o Determine the boundary pixels of the binary image, and display it on the screen. (Hint: use bwmorph with option 'remove').
o This method for finding the zero crossings has some disadvantages. Can you mention them?

The processing is implemented with the following code; the results of the first and second steps are shown in Figure 8.1 and Figure 8.2 respectively. One disadvantage is that the Laplacian ignores the magnitude of the gradients, which indicates the contrast of the "objects" in the image, so every object's edge, including noise, is detected, as shown in Figure 8.2. Another disadvantage is that the detected boundary consists only of pixels on the positive side of the zero crossing, so the edge is systematically displaced by up to a pixel.

img_lap = im2bw(laplacian, 0);          % 1 where the Laplacian is positive
mask_lap = bwmorph(img_lap,'remove');   % keep only the boundary pixels

Figure 8.1

Figure 8.2
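A more symmetric zero-crossing test can reduce both problems. The following is a hedged sketch (my addition, not part of the exercise; the tolerance value is an assumption): a pixel is marked when its 3×3 neighbourhood contains both a clearly positive and a clearly negative Laplacian value.

lmax = imdilate(laplacian, ones(3));   % local maximum in a 3x3 window
lmin = imerode(laplacian, ones(3));    % local minimum in a 3x3 window
tol = 1e-4;                            % assumed noise tolerance
zc = (lmax > tol) & (lmin < -tol);     % sign change with sufficient swing
figure, imshow(zc);

Requiring a minimum swing suppresses the spurious contours that the cheap method produces in flat, noisy regions.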

5. Marr-Hildreth
o Determine the gradient magnitude, as in question 3. Mask this image by the Marr-Hildreth zero crossings obtained in question 4. That is, all gradient magnitude pixels that are not a zero crossing should be made zero, while all other pixels are retained.
o Compare the masked gradient magnitude image against a suitable threshold T, and show an edge map consisting of zero crossings for which the gradient magnitude is larger than T. Show the result on the screen. You can interactively select the threshold by visually evaluating the edge map.
o Apply a threshold operation with a threshold that is a bit larger than the threshold T. Name the resulting image marker.
o Apply a threshold operation with a threshold that is a bit smaller than the threshold T. Name the resulting image mask.
o Propagate the marker image into the mask image, and show the result. Describe and discuss the results. (Tip: use the function imreconstruct).

The first step is implemented as below and the result is shown in Figure 9. The generated image shows that, by masking the gradient magnitude with the zero crossings, the most obvious edges are kept as high grey values while blurry edges, including noise, become darker.
gra_masked = mask_lap .* gradi_mag;   % keep the gradient magnitude only at zero-crossing pixels



Figure 9

The code using a single threshold, and using the combination of a large and a small threshold, is shown below. The results are shown in Figure 10.1 and Figure 10.2 respectively. The threshold of 0.008 is suitable for the detection: no edges are missing and there is almost no noise (except for some noise at the boundary of the image that connects to the detected edge). By contrast, the threshold 0.005 produces many false edges, and the threshold 0.015 loses some weak edges. However, reconstruction combining the two thresholds generates an acceptable edge map, as shown in Figure 10.2: imreconstruct propagates the strong-edge marker through the weak-edge mask, so weak edge pixels are kept only where they connect to a strong edge, which is exactly hysteresis thresholding.

img_edge1 = im2bw(gra_masked, 0.008);      % single threshold T
marker = im2bw(gra_masked, 0.015);         % a bit larger than T
mask = im2bw(gra_masked, 0.005);           % a bit smaller than T
img_edge2 = imreconstruct(marker, mask);   % propagate marker into mask

Figure 10.1

Figure 10.2
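An equivalent view of this reconstruction (my own sketch, not required by the exercise): keep every 8-connected component of the weak-edge mask that contains at least one strong-edge (marker) pixel. This is what imreconstruct computes here, since the marker lies inside the mask.

cc = bwlabel(mask, 8);                    % label the weak-edge components
hit = unique(cc(marker));                 % component labels under the marker
img_edge2b = ismember(cc, hit(hit > 0));  % same result as imreconstruct(marker, mask)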

6. Edge detection
o Describe in one sentence the main difference between the localization criterion of the Marr-Hildreth edge detector and the one of the Canny edge detector.


Answer: The main difference is that the Canny edge detector localizes an edge at the extremes of the first derivative of the Gaussian-smoothed image, whereas the Marr-Hildreth edge detector localizes it at the zero crossings of the Laplacian of the Gaussian-smoothed image.

o Apply ut_edge to the test image in order to get an edge map in which the edge elements form two non-fragmented edge segments. Select the options of ut_edge as follows:
  - Set the option 'c', that is, apply Canny. Later we will also use 'm' (Marr-Hildreth).
  - Use option 's' to set the scale. For now use σ = 7. Later we will vary this parameter.
  - Use the option 'h' (hysteresis thresholding). The thresholds are specified as two additional parameters. Read the help of the function to understand how these parameters are defined. Select them interactively such that two non-broken edge segments arise. Perhaps you need some iterations to establish the right thresholds.
o Display the resulting edge map. Describe the results.


The edge maps generated by the Canny and the Marr-Hildreth edge detectors are shown in Figure 11.1 and Figure 11.2 respectively. The larger threshold is chosen as 0.1 and the smaller one as 0.0035. Comparing the two figures, the Canny edge detector performs better here: the detected edges are smoother and closer to the real contours.
img_edge3 = ut_edge(img,'c','s',7,'h',[0.1,0.0035]);   % Canny
img_edge4 = ut_edge(img,'m','s',7,'h',[0.1,0.0035]);   % Marr-Hildreth

Figure 11.1

Figure 11.2

Part II: Estimation of the nominal diameter and the diameter of the stenosis

7. Determine for each x (column) in the edge map the two boundary pixels of the vessel. You may use the knowledge that the vessel is horizontally aligned, and centred in the y-direction. Next, determine for each column the diameter (expressed in pixels). The result should be an array diam such that diam(c) yields the diameter at the c-th column. Create and display a plot of diam.

In this step the edge map img_edge3, detected by the Canny edge detector, is used, and the measurement is implemented as below: L1 holds the row of the first edge pixel in each column of the top half of the image, and L2 the row of the first edge pixel in the bottom half. The vessel width in a column is then the absolute difference |L1 − L2|.


The plot in Figure 12 shows that the diameter is around 32 pixels.

[V1 L1] = max(img_edge3(1:125,1:200));    % row of first edge pixel per column, top half
[V2 L2] = max(img_edge3(126:250,1:200));  % row of first edge pixel per column, bottom half
L2 = 125 + L2;                            % convert to full-image row coordinates
diam = abs(L1 - L2);                      % vessel diameter per column, in pixels
figure(9), plot(diam), axis([1 200 0 100]);
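A hedged robustness note on the code above (my addition): for a column that contains no edge pixel, max returns value 0 and index 1, which would corrupt diam. Such columns can be excluded using the returned maximum values:

valid = (V1 > 0) & (V2 > 0);   % columns where both vessel walls were found
diam_valid = diam(valid);      % keep only the reliable columns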

Figure 12

8. Estimate the nominal diameter by calculating the mean of the diameters obtained in 7. Use the function mean.

Using the mean() function, the nominal diameter is 30.9150 pixels, which equals 1.2366 mm.

diam_mean = mean(diam);
diam_mean_mm = diam_mean*1.5/37.5;   % 37.5 pixels correspond to 1.5 mm
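The pixel-to-millimetre conversion in the last line works out as follows (the scale factor 1.5/37.5 is taken from the code above):

$$30.9150\ \text{px} \times \frac{1.5\ \text{mm}}{37.5\ \text{px}} = 30.9150 \times 0.04 = 1.2366\ \text{mm}$$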



Appendix: Matlab code (used in Matlab 7.1)
% zhongli, 12-3-2010
% step 0
img = imread('C:\Users\lenovo\Desktop\courses\Image Processing and Computer Vision\dbb_exercises\im_test.png');
img = im2double(img);
figure(10), imshow(img);
% step 1
sigma = [1 3 5 9];                                  % fill array with sigma values
for i = 1:4
    h = fspecial('gaussian', 6*sigma(i)+1, sigma(i));   % PSF with K = L = 3*sigma(i)
    imfiltered = imfilter(img, h, 'replicate');     % apply the filter
    figure(1), subplot(2,2,i);
    imshow(imfiltered, []);
    title(['sigma = ' num2str(sigma(i))]);
end
% step 2
h = fspecial('gaussian', 6*7+1, 7);                 % 2K+1 = 2L+1 = 43, K = L = 21
h_dx = (-1/49) * (meshgrid(-21:21, -21:21) .* h);   % multiply each column by -x/sigma^2
figure(2), imshow(imfilter(img, h_dx, 'replicate'), []);
h_dx_OTF = fftshift(psf2otf(h_dx, [250 200]));
figure(3), mesh(abs(h_dx_OTF)), axis on
img_dx = ut_gauss(img,7,1,0);
figure(4); subplot(2,3,1); imshow(img_dx,[]), title('dx'); axis on
img_dy = ut_gauss(img,7,0,1);
subplot(2,3,2); imshow(img_dy,[]), title('dy'); axis on
img_dxdx = ut_gauss(img,7,2,0);
subplot(2,3,3); imshow(img_dxdx,[]), title('dxdx'); axis on
img_dydy = ut_gauss(img,7,0,2);
subplot(2,3,4); imshow(img_dydy,[]), title('dydy'); axis on
img_dxdy = ut_gauss(img,7,1,1);
subplot(2,3,5); imshow(img_dxdy,[]), title('dxdy'); axis on
% step 3
gradi_mag = sqrt((img_dx .* img_dx) + (img_dy .* img_dy));
laplacian = img_dxdx + img_dydy;
figure(5); subplot(1,3,1), imshow(laplacian,[]);



% step 4
img_lap = im2bw(laplacian, 0);           % 1 where the Laplacian is positive
subplot(1,3,2), imshow(img_lap);
mask_lap = bwmorph(img_lap,'remove');    % keep only the boundary pixels
subplot(1,3,3), imshow(mask_lap);
% step 5
figure(6); subplot(1,2,1), imshow(gradi_mag,[]);
gra_masked = mask_lap .* gradi_mag;      % gradient magnitude at zero crossings only
subplot(1,2,2), imshow(gra_masked,[]);
figure(7);
img_edge1 = im2bw(gra_masked, 0.008);    % single threshold T
subplot(1,2,1), imshow(img_edge1,[]);
marker = im2bw(gra_masked, 0.015);       % a bit larger than T
mask = im2bw(gra_masked, 0.005);         % a bit smaller than T
img_edge2 = imreconstruct(marker, mask); % hysteresis by reconstruction
subplot(1,2,2), imshow(img_edge2,[]);
% step 6
img_edge3 = ut_edge(img,'c','s',7,'h',[0.1,0.0035]);   % Canny
figure(8), subplot(1,2,1), imshow(img_edge3);
img_edge4 = ut_edge(img,'m','s',7,'h',[0.1,0.0035]);   % Marr-Hildreth
subplot(1,2,2), imshow(img_edge4);
% step 7
[V1 L1] = max(img_edge3(1:125,1:200));    % first edge pixel per column, top half
[V2 L2] = max(img_edge3(126:250,1:200));  % first edge pixel per column, bottom half
L2 = 125 + L2;
diam = abs(L1 - L2);                      % vessel diameter per column, in pixels
figure(9), plot(diam), axis([1 200 0 100]);
% step 8
diam_mean = mean(diam);                   % nominal diameter in pixels
diam_mean_mm = diam_mean*1.5/37.5;        % 37.5 pixels correspond to 1.5 mm

