Baoming Zhang

Zhengzhou University, China

Title: Deep feature learning for unsupervised change detection in high-resolution multi-temporal and multi-source images

Abstract

Multi-temporal imagery change detection is growing in popularity in many applications, such as geographic information updating, disaster monitoring, and agricultural monitoring. With improving spatial resolution, ever subtler change information is expected to be detected. However, high-resolution imaging systems usually have low temporal resolution, so multi-source images must be considered to satisfy different kinds of applications, which makes change detection more challenging. Recently, deep learning has been developing rapidly, making unsupervised abstract feature extraction from remote sensing images possible. For this reason, this paper proposes an unsupervised change detection approach based on deep feature learning for high-resolution multi-temporal images acquired by different sensors. First, to obtain initial and reliable change information from the bi-temporal images, multiple features are extracted, including spectral, texture, and edge features. By using these features jointly, specific rules are designed to select robust changed and unchanged samples automatically. Then, the bi-temporal multi-source images are stacked as the original change feature, and a Stacked De-noising Auto-Encoder (SDAE) is introduced to transform this feature into a new feature space in which the change information is represented more deeply. Finally, the change detection model is constructed by adding a supervised classifier on top of the deeply learned features, and the change information is obtained by feeding the samples into the model with fine-tuning. Experiments with multi-temporal images from different sources demonstrate the effectiveness and robustness of the proposed approach.
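
To make the pipeline concrete, the sketch below outlines one possible reading of the described workflow: per-pixel feature vectors from both dates are stacked, an SDAE is pre-trained layer by layer without labels, and the encoders plus a small classifier are then fine-tuned on automatically selected changed/unchanged samples. This is a minimal illustration in PyTorch, not the authors' implementation; the layer sizes, noise level, toy data, and the random pseudo-label placeholder are all assumptions.

import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """One SDAE layer: corrupt the input with noise, then reconstruct it."""
    def __init__(self, in_dim, hidden_dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # de-noising corruption
        code = self.encoder(noisy)
        return self.decoder(code), code

def pretrain_layer(dae, feats, epochs=20, lr=1e-3):
    """Unsupervised layer-wise pre-training on the stacked change features."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = dae(feats)
        loss = loss_fn(recon, feats)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, code = dae(feats)
    return code

# Toy data: each row stacks the multi-feature vector of a pixel (spectral,
# texture, edge) from both acquisition dates. Sizes are illustrative.
n_pixels, feat_dim = 1000, 12
feats = torch.rand(n_pixels, feat_dim)

# Stack two DAE layers (the unsupervised "deep feature learning" stage).
dae1 = DenoisingAutoEncoder(feat_dim, 8)
code1 = pretrain_layer(dae1, feats)
dae2 = DenoisingAutoEncoder(8, 4)
code2 = pretrain_layer(dae2, code1)

# Pseudo-labels from the rule-based sample selection; a random placeholder here.
pseudo_labels = torch.randint(0, 2, (n_pixels,))

# Fine-tune: stacked encoders plus a classifier on changed/unchanged samples.
model = nn.Sequential(dae1.encoder, dae2.encoder, nn.Linear(4, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(20):
    logits = model(feats)
    loss = ce(logits, pseudo_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

change_map = model(feats).argmax(dim=1)  # 1 = changed, 0 = unchanged

In practice the pseudo-labels would come from the joint spectral/texture/edge rules mentioned in the abstract rather than at random, and the per-pixel predictions would be reshaped back into the image grid to form the change map.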