
Digital Image Processing Assignment 3: Edge Detection


Image Processing & Computer Vision

Exercise 3 Edge Detection and Morphological Operations

Zhongli Liu S1018531 M-EMSYS

Address: Calslaan 26-004 Email: z.liu@student.utwente.nl 28-2-2010


Part I: Edge based image segmentation

The original image is shown in Figure 1.

Figure 1

In advance, the image is converted to double type and the data is rescaled for the following processing.

img = imread('im_test.png');
img = im2double(img);

1. Use the function fspecial to create a Gaussian PSF-matrix h with a scale parameter (the width of the Gaussian) of σ. The size of h should be such that the truncation error of the tails of the Gaussian is negligible. Apply the filter to the test image. Apply this with four different values of σ. That is: σ = 1, 3, 5 and 9. Explain the results.

To ensure that the truncation error of the Gaussian filter is negligible, the half-widths K and L of the kernel should be chosen such that KΔ ≥ 3σx and LΔ ≥ 3σy. With σ = 1, 3, 5, 9 and Δ = 1, the size of h must satisfy (2K+1)×(2L+1) ≥ (6σ+1)×(6σ+1).

sigma = [1 3 5 9];                               % fill array with sigma values
for i=1:4
    % create the PSF, K=L>=3*sigma, h.size=(2K+1)*(2L+1)
    h = fspecial('gaussian', 6*sigma(i)+1, sigma(i));
    imfiltered = imfilter(img,h,'replicate');    % apply the filter
    figure(1), subplot(2,2,i);
    imshow(imfiltered,[]);
    title(['sigma = ' num2str(sigma(i))]);
end

The Gaussian filter generates a scale-space representation of the given image and reduces some of its obvious noise. The results are shown in Figures 2.1-2.4. As σ increases, the image is smoothed with a larger and larger filter, which removes more and more of the details the image contains. When the scale is too small, the image retains many irrelevant details and the result is noise sensitive; when the scale is too large, adjacent objects start to merge and more details are lost.
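As a sanity check, the same multi-scale smoothing can be sketched in Python/NumPy (an assumed equivalent, not part of the report's MATLAB; the test image is replaced by synthetic noise). The filter is truncated at 3σ, matching the (6σ+1) kernel size above.

```python
# Python/NumPy sketch of step 1: Gaussian smoothing at several scales.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((250, 200))              # stand-in for im_test.png

# truncate=3.0 cuts the kernel at 3*sigma, as in the MATLAB code above
smoothed = {s: gaussian_filter(img, sigma=s, truncate=3.0) for s in (1, 3, 5, 9)}

# Larger sigma removes more detail, so the image variance keeps dropping.
variances = [smoothed[s].var() for s in (1, 3, 5, 9)]
assert all(a > b for a, b in zip(variances, variances[1:]))
```

The strictly decreasing variance is the numerical counterpart of the visual observation that more and more detail is removed as σ grows.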


Figure 2.1

Figure 2.2

Figure 2.3
2.

Figure 2.4

In the scale space theory it has been argued that the best low-pass filter needed for differentiation is the Gaussian:
h(x, y) = 1/(2πσ²) · exp( −(x² + y²)/(2σ²) )

o Give an expression for hx(x, y).
o Create a PSF-matrix hx that contains a sampled version hx(nΔ, mΔ) of hx(x, y). Implement this with σ = 7Δ (or equivalently σ = 7 and Δ = 1).
o Use psf2otf and mesh to obtain a 3D graph of the OTF (hint: use the appropriate arguments).
o Apply the convolution with hx to the test image. Explain the results.

The expression for hx(x, y) is given below:

hx(x, y) = ∂h/∂x = ( −x/σ² ) · 1/(2πσ²) · exp( −(x² + y²)/(2σ²) )

The PSF matrix hx is built as below; since hx(x, y) = (−x/σ²) · h(x, y), the matrix is easily obtained by applying the factor (−x/σ²)


to each column of the matrix h. After filtering with hx the image is shown in Figure 3. The resulting image contains the derivatives of the grey levels of the original image after Gaussian filtering with scale σ = 7. At locations where the grey level increases in the x direction the generated pixel values are positive, and where the grey level decreases in the x direction they are negative.

h = fspecial('gaussian',(6*7+1),7);   % 2K+1=2L+1=43, K=L=21
h_dx = (-1/49) * ((meshgrid(-21:21, -21:21)) .* h);
figure(2), imshow(imfilter(img,h_dx,'replicate'),[]);
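The column-wise scaling can be checked in a short sketch (Python/NumPy, not part of the report's MATLAB; the Gaussian is written out analytically rather than via fspecial):

```python
# Build the x-derivative-of-Gaussian PSF as (-x / sigma^2) * h and
# verify it against the analytic expression hx(x, y).
import numpy as np

sigma, K = 7.0, 21
y, x = np.mgrid[-K:K+1, -K:K+1]      # x varies along columns

h = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
h_dx = (-x / sigma**2) * h           # the column-wise scaling from the text

hx_analytic = (-x / sigma**2) * \
    np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
assert np.allclose(h_dx, hx_analytic)

# The kernel is odd in x: positive response where grey level increases
# in x, negative where it decreases, and the total sum is zero.
assert np.allclose(h_dx, -h_dx[:, ::-1])
```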

Figure 3

The conversion of the PSF matrix hx to an OTF is implemented as below; the magnitude of the OTF, after applying fftshift, is shown in Figure 4.

h_dx_OTF = fftshift(psf2otf(h_dx,[250 200]));
figure(3), mesh(abs(h_dx_OTF)), axis on

Figure 4


Other derivatives, like fy(n, m), fxx(n, m), fyy(n, m), and fxy(n, m), can be obtained in a similar manner¹. The function ut_gauss is an easy-to-use implementation for that. Apply these operations to the test image. Use σ = 7. Describe and explain.

The results are shown below. We have filtering in the horizontal direction, the vertical direction, or a combination of the two. The first-derivative Gaussian filters work as described above (hy is similar to hx, but in the vertical direction). A second derivative is the straightforward extension of a first derivative: it captures the change, and the rate of change, of the grey levels seen in the first derivative. In effect, a second derivative (dxdx or dydy) coarsely highlights an edge and its opposite edge of an object in the image, which is exploited in the following steps.
img_dx = ut_gauss(img,7,1,0);
img_dy = ut_gauss(img,7,0,1);
img_dxdx = ut_gauss(img,7,2,0);
img_dydy = ut_gauss(img,7,0,2);
img_dxdy = ut_gauss(img,7,1,1);

Figure 5.1

Figure 5.2

Figure 5.3

Figure 5.4

Figure 5.5

¹ It is assumed that Δ = 1 as the default value.


3. Using the results from question 2, calculate the gradient magnitude and the Laplacian of the test image. Don't use for-loops. Use σ = 7. Describe and explain.

The gradient magnitude and the Laplacian of the image are calculated and shown in Figure 6 and Figure 7 respectively. The gradient magnitude at a pixel is the largest gradient (as an absolute value) at that pixel when every possible direction is taken into account. White areas in the generated image therefore indicate object edges, where the gradient magnitude is large, while dark areas indicate locations where the grey level hardly changes, i.e. pixels that are more likely inside or outside an object. The Laplacian combines the second derivatives in the x and y directions and is used for searching for zero-crossing points in the following.

gradi_mag = sqrt((img_dx .* img_dx) + (img_dy .* img_dy));

Figure 6
laplacian = img_dxdx + img_dydy;
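Both quantities can be sketched without loops in Python/SciPy as well (an assumed equivalent of the MATLAB above; `gaussian_filter` with an `order` argument plays the role of ut_gauss, and synthetic noise stands in for the test image):

```python
# Gradient magnitude and Laplacian from Gaussian derivatives, no loops.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = rng.random((250, 200))
s = 7

img_dx = gaussian_filter(img, s, order=(0, 1))    # d/dx (x = columns)
img_dy = gaussian_filter(img, s, order=(1, 0))    # d/dy (y = rows)
img_dxdx = gaussian_filter(img, s, order=(0, 2))
img_dydy = gaussian_filter(img, s, order=(2, 0))

gradi_mag = np.hypot(img_dx, img_dy)              # sqrt(dx^2 + dy^2)
laplacian = img_dxdx + img_dydy

assert gradi_mag.min() >= 0                       # magnitude is non-negative
```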

Figure 7

4. A cheap (but not very accurate) zero crossing detection is accomplished by first finding areas with a positive Laplacian, and next finding the boundary of these areas. We apply this procedure to the Laplacian of the test image obtained in question 3:
o Create a binary image in which a pixel is set to 1 if the corresponding Laplacian for that pixel is positive. (Don't use for-loops!)
o Determine the boundary pixels of the binary image, and display it on the screen. (Hint: use bwmorph with option 'remove').
o This method for finding the zero crossings has some disadvantages. Can you mention them?

The processing is implemented by the following code, and the results of the first and second steps are shown in Figure 8.1 and Figure 8.2 respectively. The disadvantage is that the Laplacian pays no attention to the magnitude of the gradients, which indicates the contrast of the "objects" in the image; as a result every object edge, including edges caused by noise, is detected, as shown in Figure 8.2.

img_lap = im2bw(laplacian, 0);
mask_lap = bwmorph(img_lap,'remove');
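The same two steps can be sketched in Python/SciPy (a hedged stand-in, not the report's code: thresholding at zero, then removing interior pixels via binary erosion in place of bwmorph(...,'remove')):

```python
# Cheap zero-crossing detection: positive-Laplacian mask, then its boundary.
import numpy as np
from scipy.ndimage import binary_erosion

laplacian = np.array([[-1., -1., -1., -1.],
                      [-1.,  2.,  3., -1.],
                      [-1.,  4.,  5., -1.],
                      [-1., -1., -1., -1.]])

img_lap = laplacian > 0                        # binary: positive Laplacian
mask_lap = img_lap & ~binary_erosion(img_lap)  # interior removed, boundary kept

# Every pixel of this small positive block touches the background,
# so the whole block survives as boundary.
assert mask_lap.sum() == img_lap.sum()
```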

Figure 8.1

Figure 8.2

5. Marr-Hildreth
o Determine the gradient magnitude, as in question 3. Mask this image by the zero crossings obtained in question 4. That is, all gradient magnitude pixels that are not a zero crossing should be made zero, while all other pixels are retained.
o Compare the masked gradient magnitude image against a suitable threshold T, and show an edge map consisting of zero crossings for which the gradient magnitude is larger than T. Show the result on the screen. You can interactively select the threshold by visually evaluating the edge map.
o Apply a threshold operation with a threshold that is a bit larger than the threshold T. Name the resulting image marker.
o Apply a threshold operation with a threshold that is a bit smaller than the threshold T. Name the resulting image mask.
o Propagate the marker image into the mask image, and show the result. Describe and discuss the results. (Tip: use the function imreconstruct).

The first step is implemented as below and the result is shown in Figure 9. The generated image shows that, after masking the gradient magnitude, the most obvious edges keep large grey values while blurry edges, including noise, become more or less dark.
gra_masked = mask_lap .* gradi_mag;


Figure 9

The code using a single threshold, and the combination of both a large and a small threshold, is shown below. The results are shown in Figure 10.1 and Figure 10.2 respectively. It is reasonable to say that the threshold of 0.008 is proper for the detection because no edge is missing and there is almost no noise (except for some noise at the boundary of the image, which connects to the detected edge). Meanwhile, the threshold 0.005 introduces a lot of false edges and the threshold 0.015 causes some weak edges to be missed. However, reconstruction combining the two cases generates an acceptable edge map, as shown in Figure 10.2.

img_edge1 = im2bw(gra_masked, 0.008);
marker = im2bw(gra_masked, 0.015);
mask = im2bw(gra_masked, 0.005);
img_edge2 = imreconstruct(marker,mask);
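For a binary marker and mask, the propagation performed by imreconstruct amounts to keeping every connected component of the mask that contains at least one marker pixel. A small Python/SciPy sketch of that idea (hypothetical data, not the report's images):

```python
# Binary morphological reconstruction: weak-edge pixels survive only
# if their connected component also contains a strong-edge pixel.
import numpy as np
from scipy.ndimage import label

gra = np.array([[0.016, 0.006, 0.006, 0.0, 0.006],
                [0.0,   0.0,   0.0,   0.0, 0.006]])

marker = gra > 0.015                 # strong edges only
mask = gra > 0.005                   # strong + weak edges

labels, _ = label(mask)              # connected components of the mask
keep = np.unique(labels[marker])     # components touched by a marker pixel
edge = np.isin(labels, keep) & mask  # marker propagated into the mask

assert edge[0, :3].all()             # weak run attached to a strong pixel survives
assert not edge[0, 4] and not edge[1, 4]   # isolated weak component is dropped
```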

Figure 10.1

Figure 10.2

6. Edge detection
o Describe in one sentence the main difference between the localization criterion of the Marr-Hildreth edge detector and the one of the Canny edge detector.


Answer: The main difference between the localization criteria is that the Canny edge detector localizes edges at the extremes of the first derivative of the Gaussian-filtered image, whereas the Marr-Hildreth edge detector localizes them at the zero crossings of the Laplacian of the Gaussian-filtered image.

o Apply ut_edge to the test image in order to get an edge map in which the edge elements form two non-fragmented edge segments. Select the options of ut_edge as follows:
  o Set the option 'c', that is, apply Canny. Later we will also use 'm' (Marr-Hildreth).
  o Use option 's' to set the scale. For now use σ = 7. Later we will vary this parameter.
  o Use the option 'h' (hysteresis thresholding). The thresholds are specified as two additional parameters. Read the help of the function to understand how these parameters are defined. Select them interactively such that two non-broken edge segments arise. Perhaps you need some iterations to establish the right thresholds.
o Display the resulting edge map. Describe the results.

The edge maps generated by the Canny edge detector and the Marr-Hildreth edge detector are shown in Figure 11.1 and Figure 11.2 respectively. The larger threshold is chosen as 0.1 and the smaller one as 0.0035. Comparing Figure 11.1 with Figure 11.2, the Canny edge detector seems to perform better, in that the detected edge is smoother and closer to the real situation.
img_edge3 = ut_edge(img,'c','s',7,'h',[0.1,0.0035]);
img_edge4 = ut_edge(img,'m','s',7,'h',[0.1,0.0035]);

Figure 11.1

Figure 11.2

Part II: Estimation of the nominal diameter and the diameter of the stenose

7. Determine for each x (column) in the edge map the two boundary pixels of the vessel. You may use the knowledge that the vessel is horizontally aligned, and centred in the y-direction. Next, determine for each column the diameter (expressed in pixels). The result should be an array diam such that diam(c) yields the diameter at the c-th column. Create and display a plot of diam.

In this step, img_edge3, obtained by Canny edge detection, is used and the measurement is implemented as below, in which L1 indicates the location of the first white pixel in a column and L2 the location of the second white pixel. The width of the vessel is then the


absolute value of L1 − L2. The plot in Figure 12 shows that the diameter is around 32 pixels.

[V1 L1] = max(img_edge3(1:125,1:200));
[V2 L2] = max(img_edge3(126:250,1:200));
L2 = 125 + L2;
diam = abs(L1-L2);
figure(9), plot(diam), axis([1 200 0 100]);
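The same per-column trick can be sketched in Python/NumPy (a hedged equivalent with a synthetic edge map, not the report's data): on a boolean column, argmax returns the index of the first True, i.e. the first edge pixel.

```python
# Per-column vessel diameter: first edge pixel in the top half minus
# first edge pixel in the bottom half of each column.
import numpy as np

edge_map = np.zeros((250, 200), dtype=bool)
edge_map[109, :] = True       # synthetic upper vessel wall
edge_map[141, :] = True       # synthetic lower vessel wall

L1 = edge_map[:125, :].argmax(axis=0)        # first white pixel, top half
L2 = 125 + edge_map[125:, :].argmax(axis=0)  # first white pixel, bottom half
diam = np.abs(L2 - L1)

assert diam.shape == (200,)
assert (diam == 32).all()     # 141 - 109 = 32 pixels in every column
```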

Figure 12

8. Estimate the nominal diameter by calculating the mean of the diameters obtained in 7. Use the function mean.

Using the mean() function, the nominal diameter is 30.9150 pixels, which equals 1.2366 mm.

diam_mean = mean(diam);
diam_mean_mm = diam_mean*1.5/37.5;
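A quick check of the pixel-to-mm conversion used above (assuming, as the code implies, a calibration of 1.5 mm per 37.5 pixels):

```python
# Verify the reported conversion: 30.9150 px * (1.5 mm / 37.5 px) = 1.2366 mm
diam_mean = 30.9150                      # pixels, value reported above
diam_mean_mm = diam_mean * 1.5 / 37.5    # 0.04 mm per pixel

assert abs(diam_mean_mm - 1.2366) < 1e-4
```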


Appendix: Matlab code (used in Matlab 7.1)
% zhongli, 12-3-2010
% step 0
img = imread('C:\Users\lenovo\Desktop\courses\Image Processing and Computer Vision\dbb_exercises\im_test.png');
img = im2double(img);
figure(10), imshow(img);

% step 1
sigma = [1 3 5 9];                                % fill array with sigma values
for i=1:4
    % create the PSF, K=L>=3*sigma, h.size=(2K+1)*(2L+1)
    h = fspecial('gaussian', 6*sigma(i)+1, sigma(i));
    imfiltered = imfilter(img,h,'replicate');     % apply the filter
    figure(1), subplot(2,2,i);
    imshow(imfiltered,[]);
    title(['sigma = ' num2str(sigma(i))]);
end

% step 2
h = fspecial('gaussian',(6*7+1),7);               % 2K+1=2L+1=43, K=L=21
h_dx = (-1/49) * ((meshgrid(-21:21, -21:21)) .* h);
figure(2), imshow(imfilter(img,h_dx,'replicate'),[]);
h_dx_OTF = fftshift(psf2otf(h_dx,[250 200]));
figure(3), mesh(abs(h_dx_OTF)), axis on
img_dx = ut_gauss(img,7,1,0);
figure(4); subplot(2,3,1); imshow(img_dx,[]), title('dx'); axis on
img_dy = ut_gauss(img,7,0,1);
subplot(2,3,2); imshow(img_dy,[]), title('dy'); axis on
img_dxdx = ut_gauss(img,7,2,0);
subplot(2,3,3); imshow(img_dxdx,[]), title('dxdx'); axis on
img_dydy = ut_gauss(img,7,0,2);
subplot(2,3,4); imshow(img_dydy,[]), title('dydy'); axis on
img_dxdy = ut_gauss(img,7,1,1);
subplot(2,3,5); imshow(img_dxdy,[]), title('dxdy'); axis on

% step 3
gradi_mag = sqrt((img_dx .* img_dx) + (img_dy .* img_dy));
laplacian = img_dxdx + img_dydy;
figure(5); subplot(1,3,1), imshow(laplacian,[]);


% step 4
img_lap = im2bw(laplacian, 0);
subplot(1,3,2), imshow(img_lap);
mask_lap = bwmorph(img_lap,'remove');
subplot(1,3,3), imshow(mask_lap);

% step 5
figure(6); subplot(1,2,1), imshow(gradi_mag,[]);
gra_masked = mask_lap .* gradi_mag;
subplot(1,2,2), imshow(gra_masked,[]);
figure(7);
img_edge1 = im2bw(gra_masked, 0.008);
subplot(1,2,1), imshow(img_edge1,[]);
marker = im2bw(gra_masked, 0.015);
mask = im2bw(gra_masked, 0.005);
img_edge2 = imreconstruct(marker,mask);
subplot(1,2,2), imshow(img_edge2,[]);

% step 6
img_edge3 = ut_edge(img,'c','s',7,'h',[0.1,0.0035]);
figure(8), subplot(1,2,1), imshow(img_edge3);
img_edge4 = ut_edge(img,'m','s',7,'h',[0.1,0.0035]);
subplot(1,2,2), imshow(img_edge4);

% step 7
[V1 L1] = max(img_edge3(1:125,1:200));
[V2 L2] = max(img_edge3(126:250,1:200));
L2 = 125 + L2;
diam = abs(L1-L2);
figure(9), plot(diam), axis([1 200 0 100]);

% step 8
diam_mean = mean(diam);
diam_mean_mm = diam_mean*1.5/37.5;


