Salman Bin Abdulaziz University
College of Applied Medical Sciences
Medical Equipment Technology Department

Computer Image Processing (BMTS 492)
Dr. Omar Alfarouk

Objectives
Upon completion of this course, students should be able to:
• Understand the basic concepts of digital image processing.
• Perform various image processing techniques such as image filtering, image segmentation, image enhancement, and image restoration.
• Use MATLAB for image processing.

Course Description
This course deals with digital image processing on the computer. It includes: statistics on the image, the notion of a pixel, value representation in gray-level images, color images, and operations on pixels for image enhancement. It also covers the application of convolution for different types of filters on images for noise reduction, enhancement using operations on histograms, linear and non-linear filters, image enhancement by histogram equalization, filtering based on Fourier space, and image restoration.

Topics to be Covered
List of Topics                                                      No. of Weeks   Contact Hours
Introduction to digital image processing, image representation          1               4
Digital Image Fundamentals                                               2               8
Image Transforms                                                         2               8
Image Enhancement and Restoration                                        2               8
Image Segmentation                                                       2               8
Representation and Description                                           1               4
Recognition and Interpretation                                           2               8
Image Compression                                                        2               8

1. Required Text(s)
An Introduction to Digital Image Processing with MATLAB, Alasdair McAndrew, ISBN 0534400116.

2. Essential References
1. Digital Image Processing, Rafael Gonzalez, Prentice Hall, 2001.
2. Digital Image Processing, William Pratt, John Wiley, 1991.
3. Digital Image Processing, Kenneth Castleman, Prentice Hall, 1996.

Chapter I: Basics of Image Analysis

Sampling and Quantization
• To be suitable for computer processing, an image function f(x,y) must be digitized both spatially and in amplitude.
• Digitization of the spatial coordinates (x,y) is called image sampling, and amplitude digitization is called gray-level quantization.
• All of the sampling-theorem concepts apply to the sampling of 2-D signals (images).
• In most real-life applications of imaging and image processing, it is not possible to estimate the frequency content of the images in advance.
• Adequate sampling frequencies therefore need to be established for each type of image or application, based upon prior experience and knowledge.

Figure: the same image at (a) 225 x 250 pixels, (b) 112 x 125 pixels, (c) 56 x 62 pixels, and (d) 28 x 31 pixels. All four images have 256 gray levels at 8 bits per pixel.
Figure: an 8-bit (256 gray levels) image and the same image reduced to 1 bit (2 gray levels).

• A digitized image function can be represented by an N x M matrix:

  f(x,y) ≈ [ f(0,0)     f(0,1)     ...  f(0,M-1)
             f(1,0)     f(1,1)     ...  f(1,M-1)
               ...        ...      ...     ...
             f(N-1,0)   f(N-1,1)   ...  f(N-1,M-1) ]

• Each element of the matrix represents a picture element, or pixel.
• The digitization process commonly uses powers of two: N = 2^n, M = 2^k, and the number of gray levels (pixel amplitudes) G = 2^m.
• The total number of bits required to store a digitized image is B = N x M x m; if N = M, then B = m x N^2.
• For example, a 512 x 512 image with 256 gray levels (m = 8) requires 2,097,152 bits, i.e. 262,144 bytes (256 kilobytes) of space.
• The resolution of an image depends very much on the number of pixels and the number of gray levels.
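• The matrix view and the storage formula can be tried directly in MATLAB. The following is a minimal sketch, assuming the Image Processing Toolbox demo image cameraman.tif is available; it simply illustrates the formulas above.

% A digitized image is an N x M matrix of pixel values.
% 'cameraman.tif' is a demo image shipped with the Image Processing Toolbox.
f = imread('cameraman.tif');     % 8-bit grayscale image, class uint8

[N, M] = size(f);                % spatial dimensions (rows x columns)
m = 8;                           % bits per pixel, so G = 2^m = 256 gray levels

B = N * M * m;                   % total storage in bits: B = N x M x m
fprintf('%d x %d pixels, %d gray levels -> %d bits (%.0f kilobytes)\n', ...
        N, M, 2^m, B, B/8/1024);

f(1, 1)                          % each matrix element is one pixel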
1.2 Brightness and Contrast
• The higher the amplitude (intensity) I(x,y) of a pixel, the brighter the pixel; the image as a whole looks brighter if a large number of pixels have high intensities. B = 0 identifies an empty (completely black) image.
• The image brightness B is defined as the average pixel value:

  B = (1/MN) Σ_{y=0..N-1} Σ_{x=0..M-1} f(x,y)

• The image contrast C is defined as the spread of pixel values about the average (the lower the spread, the lower the contrast):

  C = sqrt( (1/MN) Σ_{y=0..N-1} Σ_{x=0..M-1} [f(x,y) - B]^2 )

• At the pixel level, the contrast of a pixel is defined as the difference between its value I(x,y) and the average background intensity Ī, normalized by the full intensity range:

  C(x,y) = [ I(x,y) - Ī ] / ( I_max - I_min )

  (for an 8-bit image, I_min = 0 and I_max = 255).

1.3 Arithmetic Operations

Addition and Subtraction
• The most commonly required arithmetic operations for combining two separate images are (pixel-by-pixel) addition and subtraction:

  Addition:    I(x,y) = min[ I1(x,y) + I2(x,y), Imax ]
  Subtraction: I(x,y) = max[ I1(x,y) - I2(x,y), Imin ]

• For an 8-bit gray-level image, Imin = 0 and Imax = 255. The most important thing to be aware of is overflow and underflow, which lead to image clipping: a sum that exceeds Imax is set to Imax, and a difference that falls below 0 is set to 0.
• Image clipping can be avoided if the range of intensity values is rescaled before the images are combined. The maximal output sum (I1 + I2)max and the minimal output difference (I1 - I2)min are identified first and then used as scale factors:

  Modified addition:    I(x,y) = [ I1(x,y) + I2(x,y) ] x 255 / (I1 + I2)max
  Modified subtraction: I(x,y) = [ I1(x,y) - I2(x,y) + |(I1 - I2)min| ] x 255 / [ 255 + |(I1 - I2)min| ]

Example 1.0
Add the following 4-pixel images:

  Image (1) = [ 90  200 ]     Image (2) = [ 70  100 ]
              [ 70    0 ]                 [ 10   50 ]

Solution: by scanning the two images, (I1 + I2)max = 200 + 100 = 300, so

  Image (1) + Image (2) = [ (90+70) x 255/300    (200+100) x 255/300 ]
                          [ (70+10) x 255/300    (0+50) x 255/300    ]

                        = [ 136  255 ]
                          [  68   43 ]

Example 1.1
Subtract Image (2) from Image (1).

Solution: (I1 - I2)min = 0 - 50 = -50, so

  Image (1) - Image (2) = [ (90-70+50) x 255/(255+50)    (200-100+50) x 255/(255+50) ]
                          [ (70-10+50) x 255/(255+50)    (0-50+50) x 255/(255+50)    ]

                        = [ 59  125 ]
                          [ 92    0 ]

• Addition has applications in noise averaging, and subtraction is used in digital subtraction angiography.

Division
• Division of two images, with rescaling, is defined as:

  I(x,y) = [ I1(x,y) / (I2(x,y) + 1) ] x 255 / ( I1/(I2+1) )max

• Division is used in flat fielding, as in video microscopy, when images are recorded with cameras that exhibit nonlinear output characteristics. Boolean combination of images is also possible.
• A minimal MATLAB sketch of the rescaled addition and subtraction is given at the end of this section.

Figure: noise reduction by image addition (averaging).
Figure: digital subtraction image of the spinal column.
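• The following is a minimal MATLAB sketch of the rescaled (clipping-free) addition and subtraction, using the 2 x 2 images of Examples 1.0 and 1.1; it only reproduces the hand calculations above and is not meant as a general routine.

% Rescaled addition and subtraction (Section 1.3), applied to the 2 x 2
% images of Examples 1.0 and 1.1. Working in double precision keeps the
% intermediate sums and differences from being clipped to [0, 255].
I1 = [90 200; 70 0];
I2 = [70 100; 10 50];

% Modified addition: scale so that the largest sum maps to 255.
S = I1 + I2;
addIm = round(S * 255 / max(S(:)))                 % -> [136 255; 68 43]

% Modified subtraction: shift by |(I1 - I2)min| before scaling.
D = I1 - I2;
shift = abs(min(D(:)));                            % |(I1 - I2)min| = 50
subIm = round((D + shift) * 255 / (255 + shift))   % -> [59 125; 92 0]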
Types of Digital Images
There are four basic types of images:
• Binary. Each pixel is just black or white (see Fig 1.2b), 1 bit per pixel. Such images are very efficient in terms of storage. Text, fingerprints, and architectural plans are examples of binary images. The matrix of a binary image contains only ones and zeros.
• Grayscale. Each pixel is a shade of gray from 0 (black) to 255 (white) and needs an 8-bit representation (see Fig 1.2a). Grayscale images find application in medicine (e.g. x-rays).
• True color (RGB). Each pixel is a mixture of red, green, and blue amounts. If each component has a range of 0-255, this gives a total of 256^3 = 16,777,216 different possible colors, and each pixel requires 24 bits. An RGB image consists of a stack of three matrices.

Figure: the RGB color model.
Figure: a nerve cell (true-color image).

• Indexed. An indexed color image uses a colormap: the colormap assigns colors to the values stored in the image matrix. A colormap is an m-by-3 matrix of real numbers between 0.0 and 1.0; each row is an RGB vector that defines one color, and the k-th row of the colormap defines the k-th color.

Figure: an indexed image.

Image Formats
• The Tagged Image File Format (TIFF) is a particularly general format, because it allows binary, grayscale, RGB, and indexed color images, as well as different amounts of compression. TIFF is thus a good format for transferring images between different operating systems and environments. TIFF also allows more than one image per file.

1.4 Intensity Histogram
• A histogram is a plot showing the number of image pixels that take each of the possible discrete intensity values.
• The number of pixels is represented by the height of the histogram bin. If only one bin is occupied, the corresponding image is completely featureless (uniformly white, black, or gray).
• If all the bins are occupied, the image brightness is well spread and the contrast is high. For examples, see Fig 1.0 and Fig 1.1.

Fig 1.0: (a) An image of a liver tissue biopsy. (b) The histogram computed from the image in (a). Note that the larger peak represents the gray background and the smaller (plateau-like) peak represents the small black foreground regions.
Fig 1.1: (a) "Gaafer": three distinct regions give three histogram peaks. (b) The corresponding histogram.

2.0 Image Enhancement in the Spatial Domain

2.1 Histogram Expansion
• Low-contrast images (characterized by a histogram with a narrow peak) can result from limited dynamic range in the imaging system. The idea behind histogram expansion is to increase the dynamic range of the gray levels. It is a straightforward and conservative linear transformation.
• First, the histogram of the original image is examined to determine the occupied value limits (lower = a, upper = b) of the unmodified picture.
• Next, the limits over which the image intensity values will be extended are chosen. These lower and upper limits are called c and d, respectively (for standard 8-bit grayscale pictures they are usually 0 and 255).
• Then, for each pixel, the original value r is mapped to the output value s using the linear mapping (the equation of a straight line):

  s = (r - a) x (d - c) / (b - a) + c

Example 2.1
The image in Fig 2.1a is enhanced by histogram expansion with c = 0, d = 255, a = 7, and b = 120; a and b are found from the histogram shown in Fig 2.1b. Fig 2.1c is the resulting enhanced image and Fig 2.1d is its histogram. Note that the histogram expansion does not change the shape of the histogram.

Example 2.2
The image in Fig 2.2a, an MRI of the brain, is enhanced by histogram expansion with c = 0, d = 255, a = 60, and b = 188; a and b are found from the histogram shown in Fig 2.2b. Fig 2.2c is the resulting enhanced image and Fig 2.2d is its histogram. Again, the histogram expansion does not change the shape of the histogram.

Example
Suppose we have a 4-bit image with the histogram shown in Figure 2.3, given as a table of the number of pixels at each gray value r (with a total of 330 pixels). We can stretch out the gray levels in the center of the range by applying the linear mapping above with a = 5, b = 9, c = 2, and d = 14. This has the effect of stretching gray levels 5-9 to gray levels 2-14:

  r_k:  5   6   7   8   9
  s_k:  2   5   8  11  14
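• The mapping of this example can be checked with a short MATLAB sketch; the final commented line shows how the same kind of stretch would be requested for a full 8-bit image with the toolbox function imadjust, whose limits are given as fractions of 255.

% Linear histogram-expansion mapping s = (r - a)*(d - c)/(b - a) + c,
% applied to the 4-bit example: gray levels 5..9 stretched to 2..14.
a = 5; b = 9;        % occupied input limits, read off the histogram
c = 2; d = 14;       % desired output limits

r = 5:9;                                       % original gray levels
s = round((r - a) * (d - c) / (b - a) + c)     % -> [2 5 8 11 14]

% For an 8-bit image f, an equivalent toolbox call would be:
% g = imadjust(f, [a b]/255, [c d]/255);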
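• Histograms such as those used to find the limits a and b can be computed directly in MATLAB. This is a minimal sketch, assuming the low-contrast toolbox demo image pout.tif is available.

% Computing and displaying an intensity histogram (Section 1.4).
f = imread('pout.tif');          % low-contrast demo image

figure, imhist(f)                % plot: pixel count for each of the 256 gray levels
counts = imhist(f);              % the same bin heights as a 256 x 1 vector

a = min(f(:));                   % lowest occupied gray level (the a of Section 2.1)
b = max(f(:));                   % highest occupied gray level (the b of Section 2.1)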
• The effect of the stretching can be verified by computing the brightness and contrast before and after, using

  B = (1/MN) Σ Σ f(x,y)        C = sqrt( (1/MN) Σ Σ [f(x,y) - B]^2 )

• Before stretching, the brightness is B = 6.67 and the contrast is C = 1.31:

  Gray level r   No. of pixels n   n x r   n x (r - B)^2
       5               70           350        202.3
       6              110           660         53.9
       7               45           315          4.05
       8               70           560        118.3
       9               35           315        185.15
     Sum              330          2200        563.7

  B = 2200/330 = 6.67,   C = sqrt(563.7/330) = 1.31

• After stretching, the brightness is B = 7 and the contrast increases to C = 3.92:

  Gray level r   No. of pixels n   n x r   n x (r - B)^2
       2               70           140        1750
       5              110           550         440
       8               45           360          45
      11               70           770        1120
      14               35           490        1715
     Sum              330          2310        5070

  B = 2310/330 = 7,   C = sqrt(5070/330) = 3.92

(A MATLAB sketch reproducing these figures is given at the end of the chapter.)

• Histogram piecewise linear stretching can be effected by a transformation of the kind shown in Fig 2.5 (implemented by the histpwl MATLAB function). The locations of the points (r1, s1) and (r2, s2) control the shape of the transformation function. If r1 = s1 and r2 = s2, the transformation is a linear function that produces no change in gray levels. If r1 = r2, s1 = 0, and s2 = L-1, the transformation becomes a thresholding function that creates a binary image.

Fig 2.5: a piecewise linear stretching transformation.

Histogram Equalization
• A drawback of histogram stretching is that it requires user input (the limits a, b, c, and d).
• A better approach is provided by histogram equalization, which is an entirely automatic procedure.
• The idea is to change the histogram to one that is (approximately) uniform.
• Suppose our image has L different gray levels 0, 1, 2, ..., L-1, that gray level i occurs n_i times in the image, and that the total number of pixels is n. Histogram equalization then replaces gray level i by the scaled running total (cumulative histogram) of the pixel counts:

  s_i = round( (n_0 + n_1 + ... + n_i) x (L - 1)/n )

Example
Suppose a 4-bit grayscale image has the histogram shown in Figure 3.1, given as a table of the number n_i of pixels at each gray value (with n = 360 pixels in total). To equalize this histogram, we form the running totals of the n_i and multiply each by 15/360 = 1/24; rounding gives the new gray level for each original gray level. (A MATLAB sketch of this procedure is given at the end of the chapter.)

Intensity Transformations
• A non-linear stretching can be achieved by using a power-law (gamma) transformation, s = r^gamma, applied to intensities scaled to the range [0, 1]; gamma < 1 brightens the dark regions of an image, while gamma > 1 darkens them.

Thresholding
• A gray image is turned into a binary image (1 and 0, white or black) by choosing a gray level T and then setting each pixel to 1 if its gray value is greater than T and to 0 otherwise.

Figure: thresholding applied to an image of bacteria.
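• As a check on the two tables above, the brightness and contrast of the stretching example can be recomputed from the histogram counts; this minimal sketch reproduces B = 6.67, C = 1.31 and B = 7, C = 3.92.

% Brightness and contrast of the 4-bit stretching example, computed from
% the histogram counts rather than from a full image.
counts = [70 110 45 70 35];      % number of pixels at each occupied gray level
rOld   = [5 6 7 8 9];            % gray levels before stretching
rNew   = [2 5 8 11 14];          % gray levels after stretching
n      = sum(counts);            % 330 pixels in total

Bold = sum(counts .* rOld) / n                     % brightness before: 6.67
Cold = sqrt(sum(counts .* (rOld - Bold).^2) / n)   % contrast before:   1.31
Bnew = sum(counts .* rNew) / n                     % brightness after:  7.00
Cnew = sqrt(sum(counts .* (rNew - Bnew).^2) / n)   % contrast after:    3.92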
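• For histogram equalization, the sketch below carries out the "running totals times (L-1)/n" recipe. The counts n_i used here are illustrative only, since Figure 3.1 is not reproduced in these notes; the commented last line shows the equivalent Image Processing Toolbox call.

% Histogram equalization for a 4-bit image: replace gray level i by the
% running total of the pixel counts up to i, scaled by (L-1)/n.
% NOTE: these counts are illustrative; they are not the ones in Figure 3.1.
L  = 16;                                           % gray levels 0..15
ni = [15 0 0 0 0 70 110 45 70 35 0 0 0 0 0 15];    % pixels per gray level (sum = 360)
n  = sum(ni);                                      % total number of pixels

cdf = cumsum(ni);                  % running totals of the n_i
map = round(cdf * (L - 1) / n)     % new gray level for each old level 0..15

% For an image f, the toolbox provides the same operation:
% g = histeq(f, 16);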
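• A minimal sketch of the power-law (gamma) transformation follows; the image name and the value gamma = 0.5 are only illustrative choices.

% Power-law (gamma) intensity transformation s = r.^gamma on intensities
% scaled to [0, 1]. gamma < 1 brightens dark regions; gamma > 1 darkens them.
f = im2double(imread('pout.tif'));    % convert to double values in [0, 1]
gam = 0.5;                            % illustrative gamma value
g = f .^ gam;

figure, imshow(g)
% imadjust can apply the same mapping: g = imadjust(f, [], [], gam);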
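• Finally, a minimal sketch of thresholding as defined above; the image and the threshold T = 120 are illustrative, and the commented line shows how Otsu's method (graythresh) could choose T automatically.

% Thresholding: pixels brighter than T become white (1), all others black (0).
f  = imread('cameraman.tif');    % illustrative grayscale image
T  = 120;                        % illustrative threshold
bw = f > T;                      % logical (binary) image

figure, imshow(bw)
% Otsu's method picks the threshold automatically:
% bw = imbinarize(f, graythresh(f));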