Full Text

Abstract

Extracting water information from remote-sensing images is of great research significance for applications such as water resource protection and flood monitoring. Current water extraction methods aggregate richer multi-level features to enhance the output results. However, the water body and the water boundary have different feature requirements, and indiscriminate multi-feature fusion can lead to perturbation and competition between these two types of features during optimization. Consequently, models cannot accurately locate internal vacancies within the water body or the external boundary. Therefore, this paper proposes a water extraction network based on spatial partitioning and feature decoupling. To ensure that the water features carry deep semantic information and stable spatial information before decoupling, we first design a chunked multi-scale feature aggregation module (CMFAM) to construct a context path that captures deep semantic information. Then, an information interaction module (IIM) is designed to exchange information between the two spatial paths, which are kept at two fixed resolutions. During decoding, a feature decoupling module (FDM) is developed that uses internal flow prediction to acquire the main-body features and an erasing technique to obtain the boundary features. In this way, the deep features of the water body and the detailed boundary information are supplemented, strengthening the decoupled body and boundary features. Furthermore, an integrated expansion recoupling module (IERM) is designed for the recoupling stage. The IERM expands the water body and boundary features and adaptively compensates the transition region between the body and the boundary through information guidance. Finally, multi-level constraints are combined to supervise the decoupled features. Thus, the water body and its boundaries can be extracted more accurately. A comparative validation analysis is conducted on two public datasets, the Gaofen image dataset (GID) and the Gaofen 2020 challenge dataset (GF2020). Compared with seven state-of-the-art methods, the proposed method achieves the best results, with IoUs of 91.22 and 78.93, particularly in the localization of water bodies and boundaries. When applied to different scenarios, the proposed method shows a stable capability for extracting water bodies of various shapes and areas.
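The decoupling step described in the abstract (body features via internal flow prediction, boundary features via erasing) can be pictured with a minimal sketch. The module name, channel count, offset scaling, and the subtraction-as-erasing step below are illustrative assumptions for a PyTorch-style toy, not the authors' released FDM implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDecouplingSketch(nn.Module):
    """Toy body/boundary decoupling: a learned offset field warps features
    toward the water-body interior ("flow prediction"), and boundary features
    are the residual left after erasing (subtracting) the body features."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # hypothetical flow head: predicts a per-pixel 2-D offset field
        self.flow_head = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor):
        n, _, h, w = feat.shape
        # bounded offsets, treated directly as shifts in normalized grid units
        offset = torch.tanh(self.flow_head(feat)).permute(0, 2, 3, 1) * 0.1
        # base sampling grid in [-1, 1] (channel 0 = x/width, channel 1 = y/height)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat.device),
            torch.linspace(-1, 1, w, device=feat.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        body = F.grid_sample(feat, grid + offset, align_corners=True)  # body features
        boundary = feat - body  # "erased" residual serves as the boundary cue
        return body, boundary


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    body, boundary = FeatureDecouplingSketch(64)(x)
    print(body.shape, boundary.shape)  # torch.Size([1, 64, 32, 32]) each
```

In this reading, the warped features emphasize the water-body interior, while the residual highlights edges; the paper's IERM and multi-level constraints would then recouple and supervise these two streams.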

Details

Title
SPFDNet: Water Extraction Method Based on Spatial Partition and Feature Decoupling
Author
Cheng, Xuejun 1; Han, Kuikui 2; Xu, Jian 1; Li, Guozhong 1; Xiao, Xiao 1; Zhao, Wengang 3; Gao, Xianjun 2

1 Changjiang River Scientific Research Institute, Changjiang Water Resources Committee, Wuhan 430010, China; chengxj@mail.crsri.cn (X.C.); xujian@mail.crsri.cn (J.X.); liguozhong@mail.crsri.cn (G.L.); xiaoxiao@mail.crsri.cn (X.X.); Key Laboratory of River Regulation and Flood Control in Middle and Lower Reaches of Yangtze River of Ministry of Water Resources, Wuhan 430010, China
2 School of Geosciences, Yangtze University, Wuhan 430100, China; 2022720541@yangtzeu.edu.cn
3 Hunan Institute of Water Resources and Hydropower Research, Changsha 410007, China; ssky@slt.hunan.gov.cn
First page
3959
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3126019692
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.