
Using 250-m MODIS Data for Enhancing Spatiotemporal Fusion by Sparse Representation
Liguo Wang, Xiaoyi Wang, and Qunming Wang
Abstract
Spatiotemporal fusion is an important technique to solve the problem of incompatibility between the temporal and spatial resolution of remote sensing data. In this article, we studied the fusion of Landsat data with fine spatial resolution but coarse temporal resolution and Moderate Resolution Imaging Spectroradiometer (MODIS) data with coarse spatial resolution but fine temporal resolution. The goal of fusion is to produce time-series data with the fine spatial resolution of Landsat and the fine temporal resolution of MODIS. In recent years, learning-based spatiotemporal fusion methods, in particular the sparse representation-based spatiotemporal reflectance fusion model (SPSTFM), have gained increasing attention because of their great restoration ability for heterogeneous landscapes. However, remote sensing data from different sensors differ greatly in spatial resolution, which limits the performance of spatiotemporal fusion methods (including SPSTFM) to some extent. In order to increase the accuracy of spatiotemporal fusion, in this article we used the existing 250-m MODIS bands (i.e., the red and near-infrared bands) to downscale the observed 500-m MODIS bands to 250 m before SPSTFM-based fusion of MODIS and Landsat data. The experimental results show that the fusion accuracy of SPSTFM is increased when using 250-m MODIS data, and the accuracy of SPSTFM coupled with 250-m MODIS data is greater than that of the compared benchmark methods.
Introduction
Due to the power limitations of satellite sensors, it is difficult to acquire remote sensing data with both fine spatial and fine temporal resolutions. The Landsat series of satellites has been acquiring multispectral data with 30-m fine spatial resolution, and based on this characteristic, Landsat data have been widely applied to exploration of earth resources; agricultural, forestry, and livestock management; and monitoring of natural disasters and environmental pollution at the local scale (Goetz 2007; Anderson et al. 2012; van der Meer et al. 2012). However, the 16-day Landsat revisit cycle and cloud contamination limit its potential in monitoring dynamic changes on the Earth's surface. On the other hand, the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra/Aqua platforms can revisit the same scene once or twice per day, which can be applied to dynamic monitoring of vegetation phenology (Zhang et al. 2003; Ganguly et al. 2010) and land cover and land use (Hansen et al. 2000). However, the spatial resolution of MODIS data is 250–1000 m, and its ability to characterize the details of ground objects is very limited, especially for heterogeneous landscapes. In order to obtain remote sensing images with both fine spatial resolution and fine temporal resolution for precise and timely monitoring, spatiotemporal fusion methods have been developed. Spatiotemporal fusion takes advantage of the spatial features of fine-spatial-resolution remote sensing data (e.g., Landsat data) and the temporal features of fine-temporal-resolution remote sensing data (e.g., MODIS data). The current spatiotemporal fusion methods can be divided into three main groups: weighting function-based, unmixing-based, and learning-based.
Among weighting function-based methods, Gao et al. (2006) first proposed a spatial and temporal adaptive reflectance fusion model (STARFM), which considers the differences in spectral, temporal, and spatial features between similar neighboring pixels. Hilker et al. (2009) proposed a spatial temporal adaptive algorithm for mapping reflectance change (STAARCH). That method introduces a tasseled cap transform to increase the prediction accuracy. On the basis of STARFM, Zhu et al. (2010) proposed enhanced STARFM (ESTARFM), which introduced a transfer coefficient to more reliably characterize the change rate of different land cover classes and has greater fusion accuracy for complex and heterogeneous regions. Wang and Atkinson (2018) proposed a three-step spatiotemporal fusion model called Fit-FC, which consists of regression model fitting, spatial filtering, and residual compensation. This method is especially advantageous for phenological changes.
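The weighting idea shared by these methods can be illustrated with a simplified sketch: each fine-resolution pixel at the prediction date is estimated from spectrally similar neighbors at the base date plus the temporal change observed in the coarse images. This is an illustration of the general principle only, not the published STARFM algorithm (the spatial-distance term and multi-pair search of the original are omitted, and all names are ours):

```python
import numpy as np

def weighted_fusion_predict(fine_t0, coarse_t0, coarse_tp, win=7, n_similar=20):
    """Simplified weighting-function fusion sketch.

    fine_t0   : fine-resolution image at base date t0 (2-D array)
    coarse_t0 : coarse image resampled to the fine grid at t0
    coarse_tp : coarse image resampled to the fine grid at prediction date tp
    Returns a predicted fine-resolution image at tp.
    """
    h, w = fine_t0.shape
    pred = np.empty((h, w), dtype=float)
    r = win // 2
    for i in range(h):
        for j in range(w):
            # moving window around the target pixel (clipped at edges)
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            f = fine_t0[i0:i1, j0:j1].ravel()
            c0 = coarse_t0[i0:i1, j0:j1].ravel()
            cp = coarse_tp[i0:i1, j0:j1].ravel()
            # spectrally similar neighbors: smallest |F - F_center|
            order = np.argsort(np.abs(f - fine_t0[i, j]))[:n_similar]
            # combined spectral + temporal distance; small constant avoids /0
            d = np.abs(f[order] - c0[order]) + np.abs(c0[order] - cp[order]) + 1e-6
            wgt = (1.0 / d) / np.sum(1.0 / d)
            # temporal change of coarse pixels added to base-date fine pixels
            pred[i, j] = np.sum(wgt * (f[order] + cp[order] - c0[order]))
    return pred
```

For a spatially uniform scene whose coarse reflectance changes from one date to the next, the prediction simply reproduces that temporal change at the fine scale, which is the sanity check any weighting-based fusion should pass.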
With respect to unmixing-based methods, Zhukov et al. (1999) first proposed an unmixing-based multisensor multiresolution fusion model, which produces fine-spatial-resolution predictions directly according to the unmixing result from observed coarse-spatial-resolution images. Cherchali, Amram, and Flouzat (2000) and Fortin et al. (2000) proposed an approach to calculate fine-spatial-resolution pixel reflectance from coarse-spatial-resolution pixel reflectance based on a linear mixture model. These methods fail to consider the variability of land cover. Roy et al. (2008) proposed a semi-physical fusion model based on the assumption that the time variations of Landsat Enhanced Thematic Mapper and MODIS reflectance images are consistent. Wu et al. (2012) proposed a spatial and temporal data fusion approach to fuse MODIS and Landsat data based on the assumption that the temporal-change characteristics of the same land cover class within a coarse pixel are consistent. Wu et al. (2015) considered the spatial and temporal variations of pixel reflectivity jointly on the basis of this approach and proposed an improved method. This method is suitable for cases involving missing remote sensing data. Zhu et al. (2016) proposed a flexible spatiotemporal data fusion (FSDAF) method, which integrates spatial unmixing and weighting schemes into one framework.
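The unmixing idea common to these methods can be sketched as a small least-squares problem: given the class fractions inside each coarse pixel (derived from a fine-resolution land cover map), the per-class reflectances within a window are recovered by inverting the linear mixture model. This is a minimal illustration under those assumptions, not any particular published algorithm, and the function name is ours:

```python
import numpy as np

def unmix_window(coarse_vals, fractions):
    """Recover per-class reflectance from coarse pixels by least squares.

    coarse_vals : (n_coarse,) reflectances of the coarse pixels in a window
    fractions   : (n_coarse, n_classes) class fractions within each coarse
                  pixel, derived from a fine-resolution land cover map
    Solves coarse_vals = fractions @ class_reflectance in the least-squares
    sense; the solution is then assigned back to fine pixels by class label.
    """
    sol, *_ = np.linalg.lstsq(fractions, coarse_vals, rcond=None)
    return sol
```

For example, with two classes and three coarse pixels containing fractions (1, 0), (0, 1), and (0.5, 0.5), coarse reflectances of 0.1, 0.3, and 0.2 unmix exactly to class reflectances 0.1 and 0.3.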
In recent years, sparse-representation theory has been
applied widely in the field of image processing. For natural
image processing, Yang et al. (2010) proposed an approach for
image superresolution via sparse representation, which is one
Liguo Wang and Xiaoyi Wang are with the College of Information and Communication Engineering, Harbin Engineering University, Harbin, China.
Qunming Wang (corresponding author) is with the College of Surveying and Geo-Informatics, Tongji University, Shanghai, China.
Photogrammetric Engineering & Remote Sensing
Vol. 86, No. 6, June 2020, pp. 383–392.
0099-1112/20/383–392
© 2020 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.86.6.383