MRF-based multispectral image fusion using an adaptive approach based on edge-guided interpolation
Mohammad Reza Khosravi*, Suleiman Mansouri, Ahmad Keshavarz, and Habib Rostami
School of Engineering, Persian Gulf University, Bushehr 75169, Iran
*Corresponding author: [email protected]

Abstract: In the interpretation of remote sensing images, it is possible that images supplied by different sensors are not understandable, or that vital information cannot be extracted from them. For better visual perception, it is essential to apply a series of pre-processing and elementary corrections first, and then a series of main processing steps for more precise analysis of the image. Several processing approaches exist, depending on the type of remote sensing image. The approach discussed in this article, image fusion, uses the natural colors of an optical image to add color to a grayscale satellite image, which lets us better observe the high-resolution (HR) image of the OLI sensor of Landsat. This process has previously been performed with emphasis on the details of the fusion technique; here we instead highlight the interpolation step, which has not received suitable attention in the past. In fact, widely used software tools such as ENVI and ERDAS, the best-known remote sensing image processing packages, offer only classical interpolation techniques (such as bilinear (BL) and cubic convolution (CC)). ENVI-based and ERDAS-based research on image fusion, and even other fusion research, therefore rarely uses newer and better interpolations and concentrates only on fusion details to achieve better quality. We thus focus only on the impact of interpolation on fusion quality in a specific application, Landsat multispectral images. The important feature of this approach is the use of a statistical, adaptive, edge-guided and MRF-based interpolation method to improve color quality in MRF-modeled images while maintaining high resolution in practice.
Numerical simulations show that selecting a suitable interpolation technique for MRF-based images yields better quality than classical interpolations.

Keywords: Satellite Image Fusion, Statistical and Adaptive Interpolation, Multispectral Image Processing, Markov Random Field (MRF), IHS Image Fusion Technique, Natural Scene Statistics (NSS).

1. Introduction
Remote sensing image processing, studied to extract information for many different uses, is a very important research interest. These images are used in fields such as physical geography and satellite photogrammetry, climate change studies, earth physics and earthquake engineering, hydrology and soil erosion, forest studies and various branches of agriculture. Remote sensing images come in several types, such as ground photos, airborne photos and satellite images. Satellite images can be divided into images of the visible region (optical images), thermal images, radar images and laser images. For better visual perception of images, it is essential to apply a series of pre-processing and elementary corrections and then a series of main processing steps for more precise analysis of the image [7]. Each of these image types is provided by its own special means and used for special purposes. For example, visible-region images are appropriate because they are easy to understand. The problem with visible images is that not all surface information is reachable in them, which is the reason for using images from other sensors. In the visible region, image quality is also poor in bad weather and at night, because most optical sensors are passive systems. Thermal images are acquired in the IR spectrum and are usually used in studies of warming and temperature change; their major problem is that the spatial accuracy of IR sensors is usually low. RADAR images, obtained by active RADARs, likewise have wide uses, such as extracting objects, buildings and roads, exploring natural resources, and identifying facilities and features of the earth's surface [1]. Several remote sensing satellites have been launched for different purposes, each carrying one or more sensors. Depending on the application, each sensor images in specific frequency bands and delivers the corresponding information. The Landsat series consists of a group of 8 satellites designed in cooperation between NASA and USGS and launched since 1972. Currently just Landsat 7 and Landsat 8 are active, launched in 1999 and 2013, respectively. The most popular satellite of this group is Landsat 7, whose multispectral sensor, named ETM+, is one of the most popular remote sensing sensors. Landsat 8 is intended for a mission of at least 10 years and carries two multispectral sensors, named OLI and TIRS, which provide images in 9 and 2 frequency bands, respectively. The OLI sensor provides multispectral images which contain almost all of the ETM+ bands.
They are, however, improved in SNR and spatial resolution (positional accuracy). The OLI sensor images in the visible and IR regions, while the TIRS sensor, a thermal sensor, images only in IR bands. Some of the OLI bands are presented in Table 1. Landsat 8 collects spatial data with medium resolution (30 m). The TIRS thermal sensor has 100 m resolution in both of its frequency bands, which is considered weak. Landsat 8 images are free and distributed in GeoTIFF format, so they do not require geometric corrections. The combination of bands 2, 3 and 4 makes a visible image with 30 m resolution, while the 8th band, which covers a wide part of the visible spectrum (about 2/3 of it), has 15 m resolution (each pixel covers 225 m²), a good resolution as seen in Table 1. This band, named panchromatic, is therefore the highest-resolution band of the OLI sensor [2]. In the following we briefly overview fusion schemes; the overview is short because our concentration is on another process, interpolation. Optical/passive remote sensing satellites provide multispectral (MS) and panchromatic (PAN) images that have different spatial/spectral resolutions and also different temporal resolutions. The multispectral images have high spectral information and low spatial information, whereas the PAN image has the reverse characteristics. Fusion of the low-spatial-resolution multispectral (LRMS) images and the high-spatial-resolution PAN (HRP) images has become a key and hot problem. The synthetic high-spatial- and high-spectral-resolution MS (HRMS) image can provide increased interpretation capabilities and more reliable results, since images with different characteristics are combined [18-20]. During the last two decades, various fusion approaches have been developed. These methods are mainly divided into four categories: projection-substitution-based methods, relative spectral contribution methods, ARSIS-based fusion methods and model-based methods [17, 20]; the differences among these schemes are explained explicitly in [17]. The rest of this paper is organized as follows: Section 2 expresses the foundations of the proposed method, Section 3 overviews the specific interpolation algorithm entitled LMMSE, Section 4 evaluates the proposed idea for better image fusion in Landsat sensors, and Section 5 concludes the paper.

Table 1. Spectral bands of OLI.
Band Number | Spectral Band Name | Wavelength Range (micrometer) | Spatial Resolution
2 | Blue | 0.450 - 0.515 | 30 m
3 | Green | 0.525 - 0.600 | 30 m
4 | Red | 0.630 - 0.680 | 30 m
8 | Panchromatic | 0.500 - 0.680 | 15 m

2. Fundamentals and an insight into the proposed method
There are two principal groups of methods for colorizing grayscale images. The first group uses virtual (false) colors to produce color images; although these techniques are old, they are still the only way of colorization in some specific applications. The second group, which is our purpose here, uses another color image as the source of color. Generally this color image may be produced by other tools, or by the same tool under the restriction that being color costs it a weaker resolution. This article studies the second way: we want to colorize the black-and-white (grayscale) image with a color sample provided by the same sensor. This process is named image fusion; it was previously performed with emphasis on the details of the fusion technique, but here we highlight the interpolation step, which has not received suitable attention in the past. Widely used software tools such as ENVI and ERDAS, the best-known remote sensing image processing packages, offer only classical interpolation techniques (such as BL and CC); ENVI-based and ERDAS-based research on image fusion, and even other fusion research, therefore rarely uses newer and better interpolations and concentrates only on fusion details to achieve better quality. So we focus only on the impact of interpolation on fusion quality in a specific application, Landsat multispectral images. Obviously the images do not need correction, because they were provided by the same sensor. The important point for these images is fusing the high- and medium-resolution images. We use an elementary method called IHS [4] for the color production process in colorless images. Fig. 1 shows the general processing steps in image fusion as well as the IHS fusion technique used here; the figure shows interpolation as an important middle process.
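As a minimal sketch of the fusion pipeline of Fig. 1, the widely used additive form of IHS fusion can be written in a few lines of Python. This toy example uses the simple average-intensity IHS transform rather than the Kruse and Raines transform [5] adopted later in this article, and the array shapes and values are illustrative assumptions, not data from the paper:

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Additive IHS fusion: inject the panchromatic detail into an
    RGB image that was already interpolated up to the pan size.
    rgb: (H, W, 3) floats in [0, 1]; pan: (H, W) floats in [0, 1]."""
    intensity = rgb.mean(axis=2)       # I = (R + G + B) / 3
    detail = pan - intensity           # high-resolution detail component
    fused = rgb + detail[..., None]    # same detail added to each band
    return np.clip(fused, 0.0, 1.0)

# toy data: a flat gray color image and a graded pan band
rgb = np.full((4, 4, 3), 0.5)
pan = np.linspace(0.0, 1.0, 16).reshape(4, 4)
out = ihs_fuse(rgb, pan)   # the intensity of out matches pan here
```

The additive form is algebraically equivalent to substituting the intensity component and transforming back, which is why IHS fusion preserves the panchromatic resolution.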
According to the IHS technique, we transform the colored image from the RGB to the HSI color space. The color information, carried by the hue and saturation attributes, is then taken and, after resizing to the dimensions of the panchromatic band, fused with that image; the result is converted back to an RGB image. Color transformations can be defined in various ways in different spaces; in this article the Kruse and Raines method [5] is used, which is optimal for USGS images.

Fig. 1. IHS technique steps. Part (a) shows the general process in all fusion techniques and part (b) represents the processing steps in IHS.

Besides the IHS method itself, another question must therefore be answered: the resizing [3] of the color components of the medium-resolution color image to the dimensions of the high-resolution image. This is done with a specific interpolation algorithm based on statistical linear estimation. The property of this method, compared with a traditional interpolation technique such as cubic convolution (CC), is that it accounts for the MRF-like behavior of Landsat images in most geographical regions; interpolation based on far pixels therefore does not give good results and blurs the image. In general, most interpolation methods work by statistical averaging, which acts like a low-pass filter (LPF); better methods for estimating the missing values use better information, i.e. they weight all the correlated pixel values with appropriate impact factors and do not use the uncorrelated pixels (Fig. 2-a).

3. LMMSE interpolation: a statistical, adaptive, edge-guided and MRF-based algorithm
In this part we discuss the LMMSE interpolation method presented in [15]. This scheme is a statistical edge-guided scheme that uses 4 to 6 nearest neighbors to estimate pixels and is therefore suitable for MRF-based images. The most important point about LMMSE is its adaptive approach, in contrast to classical schemes, i.e.
cubic convolution (CC) interpolation (CC uses the 16 nearest neighbors for pixel estimation) and bilinear (BL) interpolation (BL, using the 4 nearest neighbors, is suitable for MRF-based images but is not adaptive; experimental results on grayscale and color images show that BL is weaker than CC in most cases). The outputs of [15] prove that LMMSE is better than CC on grayscale images, and we will show that in IHS fusion, as a color-space process, LMMSE beats CC again. In this study we must build images 4 times larger from the two color attributes, which have 30 m resolution (each of their dimensions is two times smaller than that of the equivalent black-and-white image). To achieve this we have to estimate 75% of the pixels from the 25% whose values are known. Several interpolation methods exist for this estimation, but in this particular application, considering the affine linear transform used for resizing, we can use a regular interpolation based on statistical estimation derived from Linear Minimum Mean Square Error estimation (LMMSE), also called DFDF in some texts [8, 15, 16]; the interpolation error is thereby decreased and an appropriate visual quality is obtained. Fig. 2-b shows the transfer of the image pixels from the smaller size into a 4-times-bigger space. We discuss the LMMSE statistical estimation in the continuation of this part. As seen in Fig. 3-b, to calculate the value of a missing pixel, first two diagonal averages of the sample pixels around position xh(2i,2j), along the 45 and 135 degree directions, are calculated; they are named x45 and x135 and are derived from Relation 1. We can then introduce two error values for these two directions and use them in the following computations; Relation 2 shows the desired error values. In Relation 2, xh(2i,2j), or briefly xh, does not exist, but it can be obtained in such a way that the estimation error is minimized.
If we name the estimate of xh as x'h, this estimate should be obtained so that the error is minimized. Based on the LMMSE method, according to Relation 3, x'h is written as a normalized linear combination of x45 and x135 with weights w45 and w135, and the estimation error reaches its minimum with the appropriate choice of these weights.

x45 = [x(i, j+1) + x(i+1, j)] / 2
x135 = [x(i, j) + x(i+1, j+1)] / 2    (1)

e45(2i,2j) = x45(2i,2j) - xh(2i,2j)
e135(2i,2j) = x135(2i,2j) - xh(2i,2j)    (2)

Fig. 2. Part (a) shows the 8 pixels with the most impact on the target pixel in an MRF-based image. Part (b) shows the magnification used for resizing a color component to 4 times its first size; this operation generates 75% new pixels in the magnified image, which must be estimated from the 25% original pixels.

After the necessary calculations the weights are given by Relation 4. To compute w45 and w135, which bring the estimation error to its minimum, the directional error variances are calculated according to Relation 5, where u and the sets S are obtained from Relations 6 and 7, respectively.

x'h = w45 x45 + w135 x135,  with  w45 + w135 = 1
{w45, w135} = arg min over (w45 + w135 = 1) of E{(x'h - xh)²}    (3)

w45 = σ²(e135) / (σ²(e45) + σ²(e135))
w135 = σ²(e45) / (σ²(e45) + σ²(e135)) = 1 - w45    (4)

For the pixels at the other positions, the missing values are estimated in the same way from the two orthogonal directions (0 and 90 degrees), using pixel values that were either available from before or already obtained as results of the statistical estimation of Fig. 3-b. The estimation of missing pixels continues until the complete interpolation of 75% of all new pixels, as shown in Figures 3-c and 3-d.

Fig. 3. This figure shows the steps of the estimation of the missing pixels [1].

σ²(e45) = (1/3) Σ_{k=1..3} (S45(k) - u)²
σ²(e135) = (1/3) Σ_{k=1..3} (S135(k) - u)²    (5)

u = (x45 + x135) / 2    (6)

S45 = {x(i, j+1), x'45, x(i+1, j)}
S135 = {x(i, j), x'135, x(i+1, j+1)}    (7)
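Relations 1-7 can be collected into a short routine for a single missing pixel. This is only a sketch, under the assumption that x'45 and x'135 in Relation 7 are taken as the directional averages x45 and x135 themselves; it is not the authors' exact implementation from [15]:

```python
import numpy as np

def lmmse_diagonal(x00, x01, x10, x11):
    """Estimate the missing pixel xh(2i,2j) from its four diagonal
    neighbours x(i,j), x(i,j+1), x(i+1,j), x(i+1,j+1) via Relations 1-7.
    Sketch assumption: x'45 and x'135 in Relation 7 are the directional
    averages themselves."""
    x45 = (x01 + x10) / 2.0            # Relation 1, 45-degree average
    x135 = (x00 + x11) / 2.0           # Relation 1, 135-degree average
    u = (x45 + x135) / 2.0             # Relation 6
    s45 = np.array([x01, x45, x10])    # Relation 7
    s135 = np.array([x00, x135, x11])
    var45 = np.mean((s45 - u) ** 2)    # Relation 5, directional variances
    var135 = np.mean((s135 - u) ** 2)
    if var45 + var135 == 0.0:          # flat region: both directions agree
        return u
    w45 = var135 / (var45 + var135)    # Relation 4: trust the direction
    w135 = 1.0 - w45                   # with the smaller error variance
    return w45 * x45 + w135 * x135     # Relation 3

# across a sharp 135-degree edge the 135 direction is consistent,
# so the estimate stays close to x135 instead of blurring the edge
est = lmmse_diagonal(x00=10.0, x01=200.0, x10=0.0, x11=10.0)
```

This adaptivity is exactly what distinguishes LMMSE from BL: BL would always return the plain average of the neighbors, smearing the edge, while here the weight shifts toward the low-variance direction.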
4. Simulation and Discussion
To test the proposed method, the two panchromatic images shown in Figures 4-a and 4-b (15 m resolution) are colorized with the color images composed of bands 2, 3 and 4 (30 m resolution) shown in Figures 4-c and 4-d; the colorized outputs are shown in Figures 4-e and 4-f. The noticeable quality of the proposed method follows from the appropriate interpolation. Although in fusion of images by the IHS technique a correction of the color weights is usually applied, because of the real effect of the IR band in the panchromatic band, it is not necessary here. The colorization quality of this method is obvious from a comparison of the output images. For a precise assessment of image quality, however, we should use appropriate metrics: we cannot rely on visual assessment, because only specialists have enough skill for it, and it is necessary to work with metrical (computational) quality assessment using appropriate metrics instead of visual assessment. Metrical quality assessment (its two types being objective QA and subjective QA) is an independent research field, and many researchers have already proposed different metrics for various applications. The quality assessment metric used here is SSIM, presented by Wang [11]. This metric is completely based on the human visual system (HVS) and natural scene statistics (NSS), as described by Sheikh [13]. The reality is that the SSIM presented by Wang, seen in Relation 8, is a full-reference metric (the common PSNR quality metric is a full-reference metric, too), which means the original image has to be accessible, and in addition it does not fully match the HVS. A full reference is not available in this application, because the reference for the color image is not completely accessible (the color image with 30 m resolution can be regarded as a reduced reference for assessment). This problem was solved by Yang and his colleagues [12], so it is possible to use SSIM in such applications in its proven reduced-reference form. Fig. 5 shows the collection of output image pixels used for comparison with the main color image; this operation, named down-sampling, is a common way of assessment in digital image processing. In total, metrical quality assessment has three types: full reference, reduced reference and no reference [11, 14]; the second type is used in this article. The reference, when it exists, can be an image or ground control points collected by a specialist. In Relation 8, ux and uy are the means of the reference image and the examined image respectively, σx and σy are their standard deviations, and σxy is the covariance of the two images. The Ci coefficients are small values which should preferably be non-zero [10, 11]. The SSIM output lies between -1 and +1, where the maximum shows 100% similarity. Table 2 shows the SSIM values for the two fused images of Figures 4-e and 4-f. The quality of the added colors is obvious and is confirmed by SSIM; this score measures the match between the colors of the original image and those of the fused one. The color quality is reported in Table 2, while the high resolution of the fused image is just like that of the panchromatic image; the resolution is fully preserved thanks to the reversibility of the IHS algorithm (in addition, the evaluation is general: the non-color component, i.e. the intensity of the image, affects the results too, which makes it better suited to a general evaluation of the outputs). Fig. 6 compares the percentages obtained by the presented method with those of the classic cubic convolution method [6].

SSIM = [(2 ux uy + C1)(2 σx σy + C2)(σxy + C3)] / [(ux² + uy² + C1)(σx² + σy² + C2)(σx σy + C3)]    (8)

Fig. 4. Lake (a, c and e) and Mountain (b, d and f).
Fig. 5. Down-sampling for examination.
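Relation 8 can be sketched as follows, computed here with global image statistics for brevity; standard SSIM [11] averages the index over local windows, and the constants Ci below are illustrative placeholders, kept non-zero only to avoid division by zero:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4, c3=4.5e-4):
    """Relation 8 evaluated with global image statistics
    (luminance * contrast * structure), for images scaled to [0, 1]."""
    ux, uy = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - ux) * (y - uy)).mean()         # covariance of the images
    luminance = (2 * ux * uy + c1) / (ux ** 2 + uy ** 2 + c1)
    contrast = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)
    structure = (sxy + c3) / (sx * sy + c3)
    return luminance * contrast * structure    # lies in [-1, +1]

x = np.linspace(0.0, 1.0, 64).reshape(8, 8)
score_same = ssim_global(x, x)        # identical images: close to +1
score_diff = ssim_global(x, 1.0 - x)  # inverted image: negative structure
```

Because the structure term carries the sign of the covariance, an anticorrelated image drives the index below zero, which is why the full SSIM range is [-1, +1] as stated above.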
Table 2. Quality assessment of the fused images using SSIM.
Color Band | Red | Green | Blue | Average | Similarity Percentage
Lake | 0.7010 | 0.7165 | 0.6669 | 0.6948 | 84.74
Mountain | 0.5713 | 0.5862 | 0.5691 | 0.5755 | 78.77
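The last column of Table 2 is consistent with linearly mapping the average SSIM, whose range is [-1, +1], onto a 0-100% scale; this mapping is our reading of the table rather than an explicit statement in the text:

```python
def similarity_percentage(avg_ssim):
    """Map an SSIM value from [-1, +1] linearly onto a 0-100% scale."""
    return 100.0 * (avg_ssim + 1.0) / 2.0

lake = similarity_percentage(0.6948)      # Table 2 average for Lake
mountain = similarity_percentage(0.5755)  # Table 2 average for Mountain
```

This reproduces 84.74 for Lake; for Mountain it gives 78.775, which the table truncates to 78.77.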
[Fig. 6 bar chart data: Lake: LMMSE 84.74, cubic convolution 82.44; Mountain: LMMSE 78.77, cubic convolution 78.25]
Fig. 6. Comparison of the performance of the two interpolation methods (similarity percentage).

5. Conclusion
One of the best ways of handling complex issues is innovative heuristic solutions. Providing innovative solutions is almost impossible except through simulation; in fact, problems are solved by using simulation-based solutions [9]. We saw that IHS fusion, as a color-space process, guarantees the better quality of LMMSE over CC. Given the difficult accessibility of high-resolution images, it is necessary to use the easily obtained images in the best way. The subject discussed in this article is the fusion of black-and-white images with color ones using statistical methods and features. Since a relation between the panchromatic image dimensions and the optical image dimensions also exists for other sensors, the presented method is extensible to those sensors by changing the interpolation method in proportion to their spatial resolution conditions.

References
[1] M. R. Khosravi, A. Keshavarz, H. Rostami, S. Mansouri, Statistical Image Fusion for HR Band Colorization in Landsat Sensors, 20th Annual Conference of Computer Society of Iran (CSICC2015), CSI Conference Publications, pp. 245-250, FUM, Mashhad, Iran, 2015.
[2] http://landsat.usgs.gov.
[3] R. C. Gonzalez, R. E. Woods, Digital Image Processing, 3rd edition, Prentice Hall, 2008.
[4] M. J. Choi, H. C. Kim, N. I. Cho, H. O. Kim, An Improved Intensity-Hue-Saturation Method for IKONOS Image Fusion, International Journal of Remote Sensing, pp. 1-10, 2007.
[5] Kruse and Raines, A technique for enhancing digital color images by contrast stretching in Munsell color space, in Proceedings of the ERIM Third Thematic Conference, Environmental Research Institute of Michigan, Ann Arbor, MI, pp. 755-760, 1994.
[6] R. Keys, Cubic convolution interpolation for digital image processing, IEEE Transactions on Acoustics, Speech, and Signal Processing, 29 (6), pp. 1153-1160, 1981.
[7] S. R. Salari, A. Mohseni, M. R. Khosravi, H. Rostami, Evaluation of Statistical and Nonlinear Adaptive Filters Performance for Noise Reduction of SAR Images, 2nd International Congress of Electrical Engineering, Computer Science and Information Technology (IT2015), IRANDOC Conference Publications, No. 3, pp. 352-362, SBU, Tehran, Iran, 2015.
[8] M. R. Khosravi, S. R. Salari, H. Rostami, Analysis of Modeling Error in Digital Image Spatial Estimation, 2nd International Congress of Electrical Engineering, Computer Science and Information Technology (IT2015), IRANDOC Conference Publications, No. 3, pp. 363-371, SBU, Tehran, Iran, 2015.
[9] M. R. Khosravi, H. Basri, A. Khosravi, H. Rostami, Energy Efficient Spherical Divisions for VBF-based Routing in Dense UWSNs, in Proceedings of IEEE2015, The Second International Conference on Knowledge-Based Engineering and Innovation (KBEI), pp. 950-954, IUST, Tehran, Iran, 2015.
[10] M. R. Khosravi, A. Khosravi, M. Shadloo-Jahromi, A. Keshavarz, A Novel Fake Color Scheme Based on Depth Protection for MR Passive/Optical Sensors, in Proceedings of IEEE2015, The Second International Conference on Knowledge-Based Engineering and Innovation (KBEI), pp. 342-348, IUST, Tehran, Iran, 2015.
[11] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image Quality Assessment: From Error Visibility to Structural Similarity, IEEE Transactions on Image Processing, Vol. 13, No. 4, April 2004.
[12] C. Yang, J.-Q. Zhang, X.-R. Wang, X. Liu, A novel similarity based quality metric for image fusion, Information Fusion, 9, pp. 156-160, 2008.
[13] H. R. Sheikh, A. C. Bovik, Image Information and Visual Quality, IEEE Transactions on Image Processing, Vol. 15, No. 2, February 2006.
[14] X. Y. Luo, J. Zhang, Q. H. Dai, Saliency-Based Geometry Measurement for Image Fusion Performance, IEEE Transactions on Instrumentation and Measurement, Vol. 61, No. 4, April 2012.
[15] L. Zhang, X. Wu, An edge-guided image interpolation algorithm via directional filtering and data fusion, IEEE Transactions on Image Processing, Vol. 15, No. 8, pp. 2226-2238, August 2006.
[16] L. Luo, Z. Chen, M. Chen, X. Zeng, Z. Xiong, Reversible Image Watermarking Using Interpolation Technique, IEEE Transactions on Information Forensics and Security, Vol. 5, No. 1, March 2010.
[17] W. Wang, L. Jiao, S. Yang, Fusion of multispectral and panchromatic images via sparse representation and local autoregressive model, Information Fusion, 20, pp. 73-87, 2014.
[18] L. Wald, Some terms of reference in data fusion, IEEE Transactions on Geoscience and Remote Sensing, 37 (3), pp. 1190-1193, 1999.
[19] C. Pohl, J. L. van Genderen, Multisensor image fusion in remote sensing: concepts, methods, and applications, International Journal of Remote Sensing, 19 (5), pp. 823-854, 1998.
[20] C. Thomas, T. Ranchin, L. Wald, J. Chanussot, Synthesis of multispectral images to high spatial resolution: a critical review of fusion methods based on remote sensing physics, IEEE Transactions on Geoscience and Remote Sensing, 46 (5), pp. 1301-1312, 2008.