RULE-BASED CLASSIFICATION OF AGRICULTURAL RESOURCES THROUGH
OBJECT-BASED IMAGE ANALYSIS USING LIDAR DERIVATIVES AND
ORTHOPHOTO: THE CASE OF TABONTABON, LEYTE, PHILIPPINES
Engr. Reyfel Niño G. Maglines1, Mc Laurence S. Compendio1, Mitch Allan A. Haboc1, Michael B. Omela1, Engr. Omar P. Jayag1, Dr. Pastor P. Garcia2, Engr. Jannet C. Bencure3
1 Phil-LiDAR Office, Visayas State University, Baybay City, Leyte, Philippines
2 Eco-FARMI, Visayas State University, Baybay City, Leyte, Philippines
3 Department of Geodetic Engineering, Visayas State University, Baybay City, Leyte, Philippines
KEY WORDS: LiDAR, OBIA, resource mapping, SEaTH, eCognition
ABSTRACT: Typhoon Haiyan, popularly known as typhoon “Yolanda”, was considered the strongest typhoon ever recorded and brought tremendous damage to agricultural crops such as coconut and other plantation crops. Damage assessment has to be done so that appropriate recovery and rehabilitation plans can be laid out. Advances in remote sensing technology provide an easier, faster and more cost-effective method of damage assessment. In this study, LiDAR technology was used to assess the current status of agricultural resources such as rice, coconut, abaca and other high-value crops in the municipality of Tabontabon, Leyte, one of the municipalities hardest hit by typhoon Yolanda. The study aims to assess the present status of agricultural resources and to produce a detailed agricultural resource map. LiDAR derivatives such as Intensity, Number of Returns, Digital Terrain Model (DTM), Digital Surface Model (DSM) and Normalized Digital Surface Model (nDSM) were used as layers for the rule-based classification and analysis. In addition, the digital orthophoto’s RGB bands and the Green-Red Vegetation Index (GRVI) were also used as layers for the analysis. eCognition software version 9.0 was used to run the object-based image analysis. The application offers a large collection of object-based image analysis tools and algorithms, including a rule-based algorithm (the assign class algorithm). Another tool, SEaTH (SEparability and THreshold), was used to identify and quantify characteristic features with a statistical approach based on training objects. Field validation of the result shows that the rule-based classification achieved more than 90 percent accuracy, which is well within acceptable limits. However, the approach was observed to be most applicable in areas where the cropping pattern is not complicated, such as monocropped areas (rice alone, coconut alone, etc.).
1. INTRODUCTION
The development of object-based image analysis (OBIA) started primarily from the desire to use the important semantic information necessary to interpret an image, which is represented not in single pixels but in meaningful objects. With OBIA in particular, homogeneous image objects at a chosen resolution are first extracted and
subsequently classified. In addition to spectral information, this allows a multitude of additional information, such as
shape, texture, area, context, topological relationship with other objects, and information from other object layers, to
be derived from objects and used in image classification (Shackelford and Davis 2003).
This paper demonstrates a possible OBIA workflow in eCognition, using segmentation techniques and applying the SEaTH (SEparability and THreshold) algorithm to perform a rule-based image classification, particularly for extracting agricultural resources. The software used in the process included LASTools, ENVI 5.0, ArcMap 10.2.2, eCognition and the SEaTH tool.
With images and point clouds available for the area of interest, Light Detection and Ranging (LiDAR) derivatives and orthophotograph layers were used for the image classification. These layers were particularly useful for a rule-based classification, since the LiDAR layers help separate tall features from short features, while the RGB values from the orthophoto offer excellent visual and spectral information.
2. STUDY AREA, DATA AND METHODOLOGY
2.1 Study area and data
Tabontabon is a Philippine municipality in the province of Leyte, Region VIII (Eastern Visayas), which belongs to the Visayas group of islands. The municipality is situated about 24 km south-southwest of the provincial capital, Tacloban City, and about 583 km southeast of the national capital, Manila. Tabontabon is a 5th class municipality and, in terms of urbanization, is classified as partly urban. It occupies an area of 24.18 km².
Figure 1. Study area and the extent of available LiDAR and RGB imagery coverage within the area.
The study area is almost 100% covered by LiDAR and orthophoto. The LiDAR and RGB imagery were acquired simultaneously around June-July 2014, about seven (7) months after super typhoon Haiyan damaged significant resources within the region. The study area’s identified major crops are only rice and coconut, with a total rice area of 1,418 ha (1,035 ha irrigated and 383 ha rainfed) according to the municipal agricultural profile.
2.2 Extraction of agricultural resources through OBIA
A summary of the processing workflow is presented in Figure 3. The software packages were used in sequence according to their main functions: LASTools for generating layer derivatives from LiDAR, all interpolated to raster format at 0.5 m resolution; ArcMap 10.2.2 for creating a surface of above-ground structures by subtracting the terrain model from the surface model; and ENVI 5.0 for generating derivatives from the RGB imagery.
2.2.1 Pre-processing of LiDAR and imagery for OBIA. Pre-processing of point and raster data was conducted in LASTools prior to segmentation in eCognition. The derivatives extracted from LiDAR using LASTools were Intensity (INT), Number of Returns (Num_Ret), Digital Surface Model (DSM) and Digital Terrain Model (DTM). From these layers, ArcMap 10.2.2 was used to create another raster from the difference between the DSM and DTM; the new layer was named the Normalized Digital Surface Model (nDSM). From the available RGB imagery (orthophoto), ENVI 5.0 was used to extract a Green-Red Vegetation Index (GRVI) from the image’s RGB values.
nDSM = DSM − DTM

GRVI = (Band2_Green − Band1_Red) / (Band2_Green + Band1_Red)
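The two derivative layers above can be sketched in a few lines of array arithmetic. This is a minimal illustration only, assuming the DSM, DTM and RGB bands have already been loaded as co-registered numpy arrays; the function and variable names are illustrative, not part of the paper's LASTools/ENVI workflow.

```python
import numpy as np

def compute_ndsm(dsm, dtm):
    """Normalized Digital Surface Model: height above ground (DSM - DTM)."""
    return dsm - dtm

def compute_grvi(green, red):
    """Green-Red Vegetation Index, ranging in [-1, 1]; positive = vegetated."""
    green = green.astype(float)
    red = red.astype(float)
    denom = green + red
    # Guard against division by zero where both bands are 0
    return np.where(denom > 0, (green - red) / denom, 0.0)

# Tiny worked example on 2x2 rasters
dsm = np.array([[12.0, 3.5], [2.0, 1.0]])
dtm = np.array([[2.0, 1.5], [1.8, 0.9]])
ndsm = compute_ndsm(dsm, dtm)                      # heights above ground

green = np.array([[120, 90], [60, 200]], dtype=np.uint8)
red = np.array([[80, 110], [60, 100]], dtype=np.uint8)
grvi = compute_grvi(green, red)
```

In a real workflow the same arithmetic would be applied to the full 0.5 m rasters rather than toy arrays.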
Figure 2. Image Layers from LiDAR and Orthophoto used for classification.
From Top Left to Bottom Right: RGB image, GRVI, DSM, DTM, INT, Num_Ret and nDSM.
Figure 3. Processing Workflow
2.2.2 Segmentation. Image segmentation was conducted in eCognition Developer 9.0. Multi-threshold segmentation was first performed to separate tall and short features using the nDSM layer: all objects with a mean nDSM value greater than or equal to 2 and less than or equal to 255 were classified as Tall, and the rest were classified as Short. Multiresolution segmentation was then performed on the Short features, with parameters scale=50, shape=0.2 and compactness=0.5, with equal weights for each layer.
Figure 4. RGB image (Left) and the segmented RGB image (Right).
2.2.3 Image Classification. eCognition contains an algorithm called the assign class algorithm, which assigns classes using rules by setting conditions and unique values for each class, effectively building a decision tree or rule-based feature separation. Tall features were further classified into built-ups, trees and coconut. For extracting Builtups, the conditions mean GRVI < 0 and mean RED >= 180 were applied to the tall features. Coconuts are taller than most other trees, while their canopies are much smaller than those of trees like mango or acacia, so coconuts were separated from the tall features using the conditions nDSM >= 4.5 and Area <= 150 px. Tall features that did not pass the rules for coconut and built-ups were then classified as Trees. Short features were likewise classified further into water, fallow, road and vegetation by filtering with the following conditions: for Water, Standard Deviation of BLUE <= 4.99; for Vegetation, Mean GRVI > 0; for Road, Asymmetry >= 0.84; and for Fallow, all that remained of the unclassified short features.
2.2.4 SEaTH application. The Vegetation class still comprises the main agricultural resource of the study area, which is rice. The class therefore had to be classified further, and rice extracted from it using a threshold value that gives maximum separability for the chosen feature. The feature analysis tool SEaTH (SEparability and THresholds) was used to acquire that threshold value, since this tool identifies characteristic features with a statistical approach based on training objects. These training objects represent a small subset of the total amount of image objects and should be representative of each object class (Nussbaum, Niemeyer, and Canty, 2005). Training objects were created for Rice and Grassland within the Vegetation class, converted to samples in eCognition, and exported in comma-separated value (.csv) format. The exported file was imported into SEaTH, which provided the best layers based on each layer’s Jeffries-Matusita (J) separability value, together with the threshold value that separates Rice features from Grassland features. Objects with a standard deviation of DSM less than 0.134906 were classified as Rice.
Figure 5. Results for the separation of Grassland (g) versus Rice (r) using SEaTH tool.
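The statistic SEaTH ranks features by can be sketched directly. Assuming the feature values of each class follow a Gaussian distribution (the assumption underlying SEaTH), the Jeffries-Matusita distance J is computed from the Bhattacharyya distance B of the two class distributions; J ranges from 0 (inseparable) to 2 (fully separable). The numbers in the example are illustrative, not the study's training statistics.

```python
import math

def jeffries_matusita(m1, s1, m2, s2):
    """Jeffries-Matusita separability of two classes with Gaussian feature
    distributions (means m1, m2; standard deviations s1, s2).
    Returns J in [0, 2]; J = 2 means complete separability."""
    v1, v2 = s1 ** 2, s2 ** 2
    # Bhattacharyya distance for two univariate Gaussians
    b = (0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2)
         + 0.5 * math.log((v1 + v2) / (2 * s1 * s2)))
    return 2 * (1 - math.exp(-b))

# e.g. std(DSM) of Rice vs Grassland training objects (made-up statistics):
j_rice_grass = jeffries_matusita(0.08, 0.03, 0.30, 0.10)
```

In the SEaTH workflow this is evaluated for every candidate feature, and the feature with the highest J (here, the standard deviation of DSM) supplies the classification threshold.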
2.3 Accuracy Assessment
Validation points were selected through image interpretation of the RGB image, and field-collected points were added to conduct the accuracy assessment in eCognition. Initially, with only about 70 randomly selected image-interpreted points, the acceptable accuracy of not less than 90% was already achieved. Field validation was conducted in May 2015 using a handheld GPS. The validation points collected in the field were then merged with the image-interpreted validation set and used for the final accuracy assessment. The classified image had an average accuracy of 94.90% and a KIA of 0.847, which is sufficient to be acceptable.
3. RESULTS
The final output is an image classified into eight (8) classes, namely Builtups, Trees, Coconut, Road, Water, Fallow, Rice and Grassland. These classes were extracted through rule-based classification, all using the assign class algorithm of eCognition 9.0. The classification polygons were exported as vector layers in shapefile (.shp) format. The shapefile was imported into ArcMap 10.2.2, where the polygons were finalized and appropriate class colors were assigned to the objects. Figure 6 presents the final layout of the resource map, the agricultural land cover map of Tabontabon, Leyte. Other details such as the data source, coordinate system and political boundaries are also shown in the map template.
Figure 6. Agricultural land cover map for Tabontabon, Leyte.
4. CONCLUSION
This paper presents an OBIA classification approach that, instead of classifiers such as SVM, KNN and other common algorithms, uses only rules to separate one class from another. Conditions and threshold values were implemented through the assign class algorithm of eCognition 9.0. Because the agricultural resources in this study are critical for the classification, the SEaTH tool was used to acquire more precise threshold values and conditions.
The agricultural profile of the municipality was provided by the municipal agricultural officer. The profile contains statistics such as hectarage, production, crop details, cropping practices, and other agricultural data. The approximate rice area of Tabontabon, Leyte, whether irrigated or non-irrigated, is 1,416 ha as of the latest collection in 2014 (before Yolanda). The area computed from the Rice class polygons, including rice Fallow, was about 1,446 ha; the classification and the actual statistics from the municipal agricultural profile are therefore a close match, and the classification could be considered accurate to reality.
5. REFERENCES
Canty, M. J., Nielsen, A. A., and Schmidt, M., 2004. Automatic radiometric normalization of multispectral imagery.
Remote Sensing of Environment 91, 441–451.
Niemeyer, I., and Nussbaum, S., 2006. Change Detection: The Potential for Nuclear Safeguards. In: Verifying Treaty Compliance - Limiting Weapons of Mass Destruction and Monitoring Kyoto Protocol Provisions, R. Avenhaus, N. Kyriakopoulos, M. Richard and G. Stein (Eds.), Springer.
Nussbaum, S., Niemeyer, I., and Canty, M. J., 2005. Feature Recognition in the Context of Automated Object-Oriented Analysis of Remote Sensing Data Monitoring the Iranian Nuclear Sites. In: Proc. SPIE Europe Symposium Optics/Photonics in Security & Defence, Bruges, 26-29 September 2005, SPIE Vol. ED103 (CD-ROM).
Nussbaum, S., Niemeyer, I., and Canty, M. J., 2006. SEaTH - A new tool for automated feature extraction in the context of object-based image analysis. In: Proc. 1st International Conference on Object-based Image Analysis (OBIA 2006), Salzburg, 4-5 July 2006, ISPRS Vol. XXXVI - 4/C42.
Shackelford, A. K., and Davis, C. H., 2003. A combined fuzzy pixel-based and object-based approach for classification of high-resolution multispectral data over urban areas. IEEE Transactions on Geoscience and Remote Sensing, 41, pp. 2354-2364.