Abstract
A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by numerical measures and visual inspection.

Index Terms—Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction.
Introduction
The application of fuzzy techniques in image processing is a promising research field [1]. Fuzzy techniques have already been applied in several domains of image processing (e.g., filtering, interpolation [2], and morphology [3], [4]), and have numerous practical applications (e.g., in industrial and medical image processing [5], [6]). In this paper, we will focus on fuzzy techniques for image filtering. Several fuzzy filters for noise reduction have already been developed, e.g., the well-known FIRE filter from [7]–[9], the weighted fuzzy mean filter from [10] and [11], and the iterative fuzzy control based filter from [12]. Most fuzzy techniques in image noise reduction mainly deal with fat-tailed noise like impulse noise. These fuzzy filters are able to outperform rank-order filter schemes (such as the median filter). Nevertheless, most fuzzy techniques are not specifically designed for Gaussian(-like) noise or do not produce convincing results when applied to handle this type of noise.
Therefore, this paper presents a new technique for filtering narrow-tailed and medium narrow-tailed noise by a fuzzy filter. Two important features are presented: first, the filter estimates a "fuzzy derivative" in order to be less sensitive to local variations due to image structures such as edges; second, the membership functions are adapted according to the noise level to perform "fuzzy smoothing." The construction of the fuzzy filter is explained in Section II. For each pixel that is processed, the first stage computes a fuzzy derivative. Second, a set of 16 fuzzy rules is fired to determine a correction term. These rules make use of the fuzzy derivative as input. Fuzzy sets are employed to represent the properties small, positive, and negative. While the membership functions for positive and negative are fixed, the membership function for small is adapted after each iteration. The adaptation scheme is extensively explained in Section III and can be combined with a statistical model for the noise. In Section IV, we present several experimental results. These results are discussed in detail, and are compared to those obtained by other filters. Some final conclusions are drawn in Section V.
Fuzzy Filter
The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, but simultaneously to take care of important image structures such as edges. (Other fuzzy filters, such as the smoothing fuzzy control based filter [12], also take care of edges, but after, instead of simultaneously with, the noise filtering.)
The main concern of the proposed filter is to distinguish between local variations due to noise and those due to image structure. In order to accomplish this, for each pixel we derive a value that expresses the degree to which the derivative in a certain direction is small. Such a value is derived for each direction corresponding to the neighboring pixels of the processed pixel by a fuzzy rule (Section II-A). The further construction of the filter is then based on the observation that a small fuzzy derivative is most likely caused by noise, while a large fuzzy derivative is most likely caused by an edge in the image. Consequently, for each direction we apply two fuzzy rules that take this observation into account (and thus distinguish between local variations due to noise and due to image structure), and that determine the contribution of the neighboring pixel values. The result of these rules (16 in total) is defuzzified and a "correction term" is obtained for the processed pixel value (Section II-B).

A. Fuzzy Derivative Estimation

Estimating derivatives and filtering can be seen as a chicken-and-egg problem: for filtering we want a good indication of the edges, while to find these edges we need filtering.
In our approach, we start by looking for the edges, and we try to provide a robust estimate by applying fuzzy rules. Consider the 3×3 neighborhood of a pixel (x, y) as displayed in Fig. 1(a). A simple derivative at the central pixel position (x, y) in the direction D (D ∈ dir = {NW, N, NE, E, SE, S, SW, W}) is defined as the difference between the pixel at (x, y) and its neighbor in the direction D. This derivative value is denoted by ∇_D(x, y).
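As an illustration only (not taken from the paper), the following sketch computes the eight simple derivatives of a pixel with respect to its 3×3 neighborhood; the names DIRECTIONS and simple_derivatives, the coordinate convention (rows increase downward), and the sign convention are our own assumptions.

```python
# Offsets (dy, dx) for the eight directions around a pixel;
# the coordinate convention is an assumption for illustration.
DIRECTIONS = {
    "NW": (-1, -1), "N": (-1, 0), "NE": (-1, 1), "E": (0, 1),
    "SE": (1, 1),   "S": (1, 0),  "SW": (1, -1), "W": (0, -1),
}

def simple_derivatives(img, y, x):
    """Simple derivative in each direction D for pixel (y, x) of a 2-D
    grayscale array: difference between the neighbor in direction D and
    the pixel itself (the sign convention is an assumption)."""
    return {D: float(img[y + dy, x + dx]) - float(img[y, x])
            for D, (dy, dx) in DIRECTIONS.items()}
```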
Next, the principle of the fuzzy derivative is based on the following observation. Consider an edge passing through the neighborhood of a pixel (x, y) in the SW–NE direction. The derivative value ∇_NW(x, y) will be large, but also the derivative values of neighboring pixels perpendicular to the edge's direction can be expected to be large. For example, in the NW-direction, we can calculate the corresponding derivative values at the two neighboring pixels perpendicular to that direction [see Fig. 1(b)]. The idea is to cancel out the effect of one derivative value which turns out to be high due to noise. Therefore, if two out of three derivative values are small, it is safe to assume that no edge is present in the considered direction. This observation will be taken into account when we formulate the fuzzy rule to calculate the fuzzy derivative values. In Table I, we give an overview of the pixels we use to calculate the fuzzy derivative for each direction. Each direction (column 1) corresponds to a fixed position (column 2); the set in column 3 specifies which pixels are considered with respect to the central pixel (x, y).

To compute the value that expresses the degree to which the fuzzy derivative in a certain direction is small, we make use of the fuzzy set small [see Fig. 2(a)], whose membership function is determined by an adaptive parameter K (see Section III). For example, the value of the fuzzy derivative ∇F_NW(x, y) for the pixel (x, y) in the NW-direction is calculated by applying the following rule: if two of the three simple derivative values considered for the NW-direction (the value ∇_NW at (x, y) and at the two neighboring pixels given in Table I) are small, then ∇F_NW(x, y) is small. Eight such rules are applied, each computing the degree of membership of the fuzzy derivatives ∇F_D(x, y), D ∈ dir, to the fuzzy set small. These rules are implemented using the minimum to represent the AND-operator, and the maximum for the OR-operator. A defuzzification is not needed since the second stage, i.e., the fuzzy smoothing, directly uses the membership degrees to small. The robustness we try to achieve by the fuzzy derivative is obtained by combining multiple simple derivatives around the pixel and by making the parameter K adaptive. The proper choice of K will be discussed later.
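A minimal sketch of this first stage follows. The triangular shape assumed for the membership function small and the helper names are our own; only the 2-out-of-3 rule with minimum (AND) and maximum (OR) is taken from the text above.

```python
def mu_small(value, K):
    """Degree to which a (crisp) derivative value is 'small'. A triangular
    membership function around zero is assumed here (Fig. 2(a) is not
    reproduced); K controls its width and is adapted per iteration."""
    return max(0.0, 1.0 - abs(value) / K) if K > 0 else 0.0

def fuzzy_derivative(d_center, d_perp1, d_perp2, K):
    """Membership degree to 'small' of the fuzzy derivative in one direction,
    combining the simple derivative at the pixel itself (d_center) with the
    derivatives at its two neighbors perpendicular to that direction
    (2-out-of-3 rule; minimum = AND, maximum = OR)."""
    m0, m1, m2 = mu_small(d_center, K), mu_small(d_perp1, K), mu_small(d_perp2, K)
    return max(min(m0, m1), min(m0, m2), min(m1, m2))
```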
B. Fuzzy Smoothing

To compute the correction term Δ(x, y) for the processed pixel value, we use a pair of fuzzy rules for each direction. The idea behind the rules is the following: if no edge is assumed to be present in a certain direction, the (crisp) derivative value in that direction can and will be used to compute the correction term. The first part (edge assumption) can be realized by using the fuzzy derivative value; for the second part (filtering), we have to distinguish between positive and negative values. For example, let us consider the direction NW. Using the values ∇F_NW(x, y) and ∇_NW(x, y), we fire the following two rules and compute their truthness degrees λ_NW and λ'_NW: IF ∇F_NW(x, y) is small AND ∇_NW(x, y) is positive THEN the correction is positive; IF ∇F_NW(x, y) is small AND ∇_NW(x, y) is negative THEN the correction is negative. For the properties positive and negative, we also use linear membership functions [see Fig. 2(b) and (c)]. Again, we implement the AND-operator and the OR-operator by the minimum and the maximum, respectively. This is done for each direction D ∈ dir.

The final step in the computation of the fuzzy filter is the defuzzification. We are interested in obtaining a correction term Δ(x, y), which can be added to the pixel value at location (x, y). Therefore, the truthness degrees of the rules, λ_D and λ'_D (so for all directions), are aggregated by computing and rescaling the mean truthness as follows:

Δ(x, y) = (L / |dir|) · Σ_{D ∈ dir} (λ_D(x, y) − λ'_D(x, y))   (4)

where dir contains the directions and L represents the number of gray levels. So, each direction contributes to the correction term Δ(x, y).
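A sketch of this second stage, under the same caveats as before: the exact ramps of the linear membership functions for positive and negative are assumptions, and the helper names (mu_positive, mu_negative, correction_term) are ours; the aggregation follows (4) and reuses DIRECTIONS from the earlier sketch.

```python
L = 256  # number of gray levels for 8-bit images

def mu_positive(value):
    """Assumed linear membership function for 'positive' (cf. Fig. 2(b))."""
    return min(max(value / (L - 1), 0.0), 1.0)

def mu_negative(value):
    """Assumed linear membership function for 'negative' (cf. Fig. 2(c))."""
    return min(max(-value / (L - 1), 0.0), 1.0)

def correction_term(fuzzy_deriv, crisp_deriv):
    """Fire the pair of rules per direction and aggregate them into the
    correction term of (4): the rescaled mean of the truthness degrees.
    fuzzy_deriv[D]: membership degree of the fuzzy derivative to 'small';
    crisp_deriv[D]: simple derivative in direction D."""
    total = 0.0
    for D in DIRECTIONS:                                              # eight directions
        lam_pos = min(fuzzy_deriv[D], mu_positive(crisp_deriv[D]))    # AND = minimum
        lam_neg = min(fuzzy_deriv[D], mu_negative(crisp_deriv[D]))
        total += lam_pos - lam_neg
    return L * total / len(DIRECTIONS)   # rescale by the number of gray levels, cf. (4)
```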
Adaptive Threshold Selection

Instead of making use of larger windows to obtain better results for heavier noise, we choose to apply the filter iteratively. The shape of the membership function small is adapted each iteration according to an estimate of the (remaining) amount of noise. The method assumes that a percentage of the image can be considered as homogeneous and as such can be used to estimate the noise density.

We start by dividing the image into small nonoverlapping blocks. For each block, we compute a rough measure of the homogeneity of this block by considering its maximum and minimum pixel values. This measure is commonly used in the context of fuzzy image processing [13]. Next, a histogram of the homogeneity values is computed, and the hypothesis comes in: the percentile of the most homogeneous blocks is determined. We assume this percentile is a measure for the homogeneity of "typical" noise in the image. Using a statistical model for the noise distribution, we will show that there is a linear relationship between the homogeneity and the standard deviation.

Assume n noise samples s_1, ..., s_n, independently and identically distributed, with probability density function (PDF) p(s) and cumulative distribution function (CDF) P(s). Since a change of the standard deviation rescales the PDF, the maximum and minimum of the n samples are scaled the same way. This establishes a linear relationship between the homogeneity and the standard deviation. This can also be derived more formally. We assume the expectation value to be zero and the variance to be σ². If we scale the PDF with a factor a, we obtain the following general result:

p_a(s) = (1/a) p(s/a)   (6)
P_a(s) = P(s/a)   (7)

Next, we define the maximum and minimum of the n samples as M = max_i s_i and m = min_i s_i, for which we can derive the CDFs as P_M(s) = [P(s)]^n and P_m(s) = 1 − [1 − P(s)]^n. Using (6) and (7), we can show that P_M and P_m are scaled according to the same factor a, i.e., P_{M,a}(s) = [P(s/a)]^n and P_{m,a}(s) = 1 − [1 − P(s/a)]^n.
Therefore, there is a linear relationship between the (expectation value of the) homogeneity h of the n samples and the standard deviation,

σ = f · h   (8)

where f is the slope.

The value of the factor f can be determined empirically. A large number of synthetic noise patches of a fixed size are generated. Each patch consists of noise with the presumed distribution. The effective noise level and the homogeneity of each patch are measured. The mean value and the standard deviation are calculated for the whole test set. This experiment is done for several noise levels, resulting in the relationship between the homogeneity and the noise level. Fig. 3 shows the result of 200 experiments for several noise levels; the errorbars indicate the standard deviation of the noise level estimates. (We also note that the standard deviation of the estimated homogeneity is very low.) We carried out this experiment for Gaussian, Laplacian, and uniform noise, obtaining an f of, respectively, 52.1, 41.8, and 75.2.

Next, we use the hypothesis that at least a percentage of the blocks were originally homogeneous (before the noise degradation). The histogram of the homogeneity of the blocks in the image is computed, and a percentile of the most homogeneous blocks is obtained. The value h_p of this percentile is related to our estimate for the noise level by the linear relationship we derived before. A final amplification factor γ (see later for its choice) is used to get the parameter

K = γ · f · h_p.   (9)
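The adaptation scheme can be sketched as follows. The block size (8×8), the normalized max-minus-min homogeneity measure, the 20% percentile, and the function name estimate_K are assumptions for illustration; f defaults to the Gaussian value from the experiment above.

```python
import numpy as np

def estimate_K(img, f=52.1, percentile=20.0, gamma=1.0, block=8, L=256):
    """Estimate the adaptive parameter K before an iteration (sketch).
    1. Split the image into nonoverlapping blocks and compute, per block, a
       homogeneity measure from its maximum and minimum pixel values (here:
       the gray-value range normalized by the number of gray levels).
    2. Take the value at the chosen percentile of the most homogeneous blocks.
    3. Convert it to a noise-level estimate via the slope f, cf. (8), and
       amplify by gamma to obtain K, cf. (9)."""
    h, w = img.shape
    values = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block].astype(float)
            values.append((b.max() - b.min()) / (L - 1))   # per-block homogeneity measure
    h_p = np.percentile(values, percentile)   # most homogeneous blocks have the smallest values
    return gamma * f * h_p                    # K = gamma * f * h_p
```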
This scheme is applied before each iteration to obtain the parameter K, which determines the shape of the membership function small. Compared to the direct calculation of the variance of (a part of) the image, the current scheme distinguishes between blocks containing mainly noise and blocks containing both image structure and noise. This is done by the sorting implied by the histogram operation on the homogeneity values. As a result, the estimate of the noise variance is based on smooth blocks only, for as long as the initial hypothesis remains true.

Results

The proposed filter is applied to grayscale test images (8-bit), after adding Gaussian noise of different levels. Such a procedure allows us to compare and evaluate the filtered image against the original.
Fig. 5 shows the normalized histogram of the homogeneity of "cameraman," for the original image, but also for the image corrupted with different noise levels. Using the 20% percentile and (8), the estimates for the noise levels are, respectively, 5.2, 9.4, and 17.7. For these noise levels, our filter is applied using different values for the amplification factor γ. To evaluate the results, we computed the mean squared error (MSE) between the original image and the filtered image.

Figs. 6 and 7 show a plot of the MSE as a function of the number of iterations for added noise of a low and a high level, respectively. Notice that for low noise levels (Fig. 6), one iteration is sufficient to efficiently remove the noise. Also, a low amplification factor gives the best results. The MSE of "cameraman" surprisingly increases with the number of iterations; this is mainly due to the image content, i.e., the grass is very similar to noise and gets increasingly filtered out. For other images, such as "boats," this increase does not occur. Therefore, images with low noise levels and containing fine textures should be treated carefully. For high noise levels (Fig. 7), the results of "cameraman" are much more stable. A few iterations (3–4) are sufficient to effectively smooth out the noise. Also, a somewhat higher value of γ gives better results.

Fig. 8 shows the parameter K for the "boats" test image. Since K depends on the estimate for the remaining noise level, we expect this curve to decrease as the iterations go on. Based on an estimate for the "natural" or "acceptable" amount of noise (depending on the application), we could use the estimate of K as a stop criterion once it gets sufficiently low. Another possible stop criterion could be a small change of K with respect to the previous iteration. The parameter γ affects the amount of smoothing which is applied by the filter. Based on our observations of the MSE curves, γ could also be determined using the noise level estimate: a high noise level corresponds to a higher value of γ, while a low noise level corresponds to a lower value of γ.

We also compared our fuzzy filter with several other filter techniques: the mean filter, the adaptive Wiener filter [14], the fuzzy median (FM) [15], the adaptive weighted fuzzy mean (AWFM1 and AWFM2) [10], [11], the iterative fuzzy control based filter (IFC), the modified iterative fuzzy control based filter (MIFC), and the extended iterative fuzzy control based filter (EIFC) [12]. Table II summarizes the results we obtained. Quite different results are obtained for "cameraman" and "boats." For "cameraman," the proposed filter performs very well; only the fuzzy median (FM) gives a better MSE in some cases. A closer inspection of Fig. 9 shows that the proposed filter better preserves details such as the grass (right side, just below the building) and the background (left side, small buildings). Also, the face is slightly sharper. The detail images in Fig. 10 confirm these results. Note that the grass is better preserved by the proposed filter than by using the fuzzy mean. The "boats" image provides a different result. For low noise levels, the proposed filter still performs best, but for higher noise levels, the AWFM2 filter gives the best results. Fig. 11 shows the filtered images. The detail images of Fig. 12 reveal that the AWFM2 filter is able to preserve the very small details (such as the narrow ropes). On the other hand, the proposed filter gives a more "natural" image without the "patchy look" of the adaptive Wiener filter.

Finally, we would like to show a practical application of the fuzzy filter. In particular, this image restoration scheme could be used to enhance satellite images. Of course, since the original image is already corrupted by noise, it is not possible to obtain a numerical measure which indicates how "good" the filtered image is. Fig. 13 shows the original image and the results after fuzzy filtering with different parameters.
Depending on the application (e.g., visual inspection, segmentation), one could prefer lighter or heavier filtering (by choosing the amplification factor γ correspondingly).
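Tying the earlier sketches together, one possible form of the overall iterative scheme is shown below. The layout of the perpendicular neighbors, the border handling, and the stop threshold on K are assumptions, and the whole driver reuses the helpers (DIRECTIONS, fuzzy_derivative, correction_term, estimate_K) introduced above.

```python
import numpy as np

def fuzzy_filter_iteration(img, K, L=256):
    """One pass of the two-stage filter (sketch); border pixels are left
    untouched for brevity."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            fuzzy_d, crisp_d = {}, {}
            for D, (dy, dx) in DIRECTIONS.items():
                d0 = float(img[y + dy, x + dx]) - float(img[y, x])
                # Derivatives at the two neighbors perpendicular to D
                # (an assumed layout of Table I / Fig. 1(b)).
                py, px = dx, -dy
                d1 = float(img[y + py + dy, x + px + dx]) - float(img[y + py, x + px])
                d2 = float(img[y - py + dy, x - px + dx]) - float(img[y - py, x - px])
                crisp_d[D] = d0
                fuzzy_d[D] = fuzzy_derivative(d0, d1, d2, K)
            out[y, x] = np.clip(img[y, x] + correction_term(fuzzy_d, crisp_d), 0, L - 1)
    return out

def fuzzy_filter(img, gamma=1.0, max_iter=5, K_stop=4.0):
    """Iterative application with a simple stop criterion on K (threshold assumed)."""
    for _ in range(max_iter):
        K = estimate_K(img, gamma=gamma)
        if K < K_stop:   # remaining-noise estimate low enough: stop
            break
        img = fuzzy_filter_iteration(img, K)
    return img
```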
Conclusion
This paper proposed a new fuzzy filter for additive noise reduction. Its main feature is that it distinguishes between local variations due to noise and those due to image structure, using a fuzzy derivative estimation. Fuzzy rules are fired to consider every direction around the processed pixel. Additionally, the shape of the membership functions is adapted according to the remaining amount of noise after each iteration. Experimental results show the feasibility of the new filter and of a simple stop criterion. Despite its relative simplicity and the straightforward implementation of the fuzzy operators, the fuzzy filter is able to compete with state-of-the-art filter techniques for noise reduction. A numerical measure, such as the MSE, and visual observation show convincing results. Finally, the fuzzy filter scheme is sufficiently simple to enable fast hardware implementations.
Dimitri Van De Ville (M'02) was born in Dendermonde, Belgium, in 1975. He received the Engineering and Ph.D. degrees in computer science from Ghent University, Ghent, Belgium, in 1998 and 2002, respectively. He worked in the Medical Image and Signal Processing Group (MEDISIP) and the MultiMedia Lab, both part of the Department of Electronics and Information Systems (ELIS), Ghent University. His main research interests are in signal and image processing, in particular interpolation- and resampling-related topics. Currently, he is working as a Senior Researcher at the Swiss Federal Institute of Technology Lausanne (EPFL) in the Biomedical Imaging Group (BIG), Lausanne, Switzerland.
Etienne E. Kerre was born in Zele, Belgium, in 1945. He received the M.Sc. and Ph.D. degrees in mathematics from Ghent University, Ghent, Belgium, in 1967 and 1970, respectively. Since 1984, he has been a Lector and, since 1991, a Full Professor at Ghent University. In 1976, he founded the Fuzziness and Uncertainty Research Modeling Unit (FUM) and, since then, his research has been focused on the modeling of fuzziness and uncertainty, and has resulted in a great number of contributions in fuzzy set theory and its various generalizations, and in evidence theory. The theories of fuzzy relational calculus and fuzzy mathematical structures owe a very great deal to him. Over the years, he has also been a promoter of 16 Ph.D. degrees on fuzzy set theory. His current research interests include fuzzy and intuitionistic fuzzy relations, fuzzy topology, and fuzzy image processing. He has authored or coauthored eleven books, and more than 100 of his papers have appeared in international refereed journals. Dr. Kerre is a referee for more than 30 international scientific journals, and is also a Member of the Editorial Board of international journals and conferences on fuzzy set theory. He was an Honorary Chairman at various international conferences.
Mike Nachtegael was born in Sint-Niklaas, Belgium, in 1976. He received the M.Sc. degree in mathematics from Ghent University, Ghent, Belgium, in 1998. In the same year, he joined the Fuzziness and Uncertainty Modeling Research Unit of Prof. E. Kerre, where he received the Ph.D. degree on fuzzy techniques in image processing in 2002. Currently, he is active as a Postdoctoral Researcher in the Department of Applied Mathematics and Computer Science, Ghent University. After secondary school, he published two reference books on mathematics (1995) and on chemistry and physics (1996). He has authored or coauthored more than 20 papers, he has coedited two books on fuzzy techniques in image processing, he has coorganized three sessions at international conferences and he was comanager of the International FLINS 2002 Conference.
Wilfried Philips (S’90–M’93) was born in Aalst, Belgium, in 1966. He received the Diploma degree in electrical engineering and the Ph.D. degree in applied sciences from Ghent University, Ghent, Belgium, in 1989 and 1993, respectively. From October 1989 to October 1998, he was with the Department of Electronics and Information Systems, the University of Ghent, for the Flemish Fund for Scientific Research (FWO-Vlaanderen), first as a Research Assistant and later as a Postdoctoral Research Fellow. Since November 1997, he has been a Lecturer with the Department of Telecommunications and Information Processing, Ghent University. His main research interests are image and video restoration, image analysis, lossless and lossy data compression of images and video, and processing of multimedia data.
Dietrich Van der Weken was born in Beveren, Belgium, in 1978. He received the M.Sc. degree in mathematics from Ghent University, Ghent, Belgium, in 2000. In September 2000, he joined the Department of Applied Mathematics and Computer Science, Ghent University, where he is a member of the Fuzziness and Uncertainty Modeling Research Unit working toward the Ph.D. degree with a thesis on fuzzy techniques in image processing under the promotorship of Prof. E. Kerre. One of his main research topics is the measurement of similarity between images. He has authored or coauthored 14 papers, he has co-edited one book on fuzzy techniques in image processing, and he has organized one session at an international conference.
Ignace Lemahieu (M’92–SM’00) was born in Belgium in 1961. He graduated in physics and received the Ph.D. degree in physics from Ghent University, Ghent, Belgium, in 1983 and 1988, respectively. He joined the Department of Electronics and Information Systems (ELIS), Ghent University in 1989 as a Research Associate with the Fund for Scientific Research (F.W.O.-Flanders), Belgium. He is now a Professor of Medical Image and Signal Processing and Head of the MEDISIP Research Group. His research interests comprise all aspects of image processing and biomedical signal processing, including image reconstruction from projections, pattern recognition, image fusion, and compression. He is the coauthor of more than 200 papers. Dr. Lemahieu is a Member of SPIE, the European Society for Engineering and Medicine (ESEM), and the European Association of Nuclear Medicine (EANM).
References
- [1] E. Kerre and M. Nachtegael, Eds., Fuzzy Techniques in Image Processing, ser. Studies in Fuzziness and Soft Computing, vol. 52. New York: Springer-Verlag, 2000.
- [2] D. Van De Ville, W. Philips, and I. Lemahieu, "Fuzzy-based motion detection and its application to de-interlacing," in Fuzzy Techniques in Image Processing, ser. Studies in Fuzziness and Soft Computing, vol. 52. New York: Springer-Verlag, 2000, pp. 337–369.
- [3] M. Nachtegael and E. E. Kerre, "Connections between binary, gray-scale and fuzzy mathematical morphologies," Fuzzy Sets Syst., to be published.
- [4] M. Nachtegael and E. E. Kerre, "Decomposing and constructing fuzzy morphological operations over α-cuts: Continuous and discrete case," IEEE Trans. Fuzzy Syst., vol. 8, pp. 615–626, Oct. 2000.
- [5] B. Reusch, M. Fathi, and L. Hildebrand, "Fuzzy color processing for quality improvement," in Soft Computing, Multimedia and Image Processing—Proceedings of the World Automation Congress. Albuquerque, NM: TSI Press, 1998, pp. 841–848.
- [6] S. Bothorel, B. Bouchon, and S. Muller, "A fuzzy logic-based approach for semiological analysis of microcalcification in mammographic images," Int. J. Intell. Syst., vol. 12, pp. 819–843, 1997.
- [7] F. Russo and G. Ramponi, "A fuzzy operator for the enhancement of blurred and noisy images," IEEE Trans. Image Processing, vol. 4, pp. 1169–1174, Aug. 1995.
- [8] F. Russo and G. Ramponi, "A fuzzy filter for images corrupted by impulse noise," IEEE Signal Processing Lett., vol. 3, pp. 168–170, June 1996.
- [9] F. Russo, "FIRE operators for image processing," Fuzzy Sets Syst., vol. 103, no. 2, pp. 265–275, 1999.
- [10] C.-S. Lee, Y.-H. Kuo, and P.-T. Yu, "Weighted fuzzy mean filters for image processing," Fuzzy Sets Syst., vol. 89, pp. 157–180, 1997.
- [11] C.-S. Lee and Y.-H. Kuo, "Adaptive fuzzy filter and its application to image enhancement," in Fuzzy Techniques in Image Processing, ser. Studies in Fuzziness and Soft Computing, vol. 52. New York: Springer-Verlag, 2000, pp. 172–193.
- [12] F. Farbiz and M. B. Menhaj, "A fuzzy logic control based approach for image filtering," in Fuzzy Techniques in Image Processing, ser. Studies in Fuzziness and Soft Computing, vol. 52. New York: Springer-Verlag, 2000, pp. 194–221.
- [13] H. Haussecker and H. Tizhoosh, "Fuzzy image processing," in Handbook of Computer Vision and Applications, vol. 2. New York: Academic, 1999, pp. 708–753.
- [14] J. S. Lim, Two-Dimensional Signal and Image Processing. Upper Saddle River, NJ: Prentice-Hall, 1990, ch. Image Restoration, pp. 524–588.
- [15] K. Arakawa, "Median filter based on fuzzy rules and its application to image restoration," Fuzzy Sets Syst., pp. 3–13, 1996.