A review of deep learning based agricultural remote sensing image segmentation
Agricultural remote sensing image segmentation, which involves classifying each pixel of an image into a specific category, has recently been driven by deep learning methods owing to their powerful feature extraction capabilities. This paper presents a systematic review of deep learning-based image segmentation techniques for agricultural remote sensing, along with an overview of current challenges and emerging research trends. First, it outlines the characteristics of agricultural remote sensing tasks and the requirements for remote sensing image acquisition and processing, providing an in-depth analysis of the nature of agricultural remote sensing data. Next, it systematically reviews the evolution of deep learning-based methods, focusing on segmentation network architectures, including convolution-based models, transformer-based models, hybrid architectures, lightweight models, and vision-language models. It then discusses several deep learning paradigms designed for annotation-efficient scenarios, including semi-supervised, weakly supervised, self-supervised, and transfer learning. Subsequently, it examines key challenges in detail, such as data annotation, computational cost, and model generalization. Finally, it summarizes the latest advances in deep learning for agricultural remote sensing image segmentation and outlines potential future research directions, aiming to provide technical references that promote the practical application and successful deployment of deep learning in this critical domain.
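To make the pixel-wise classification formulation concrete, the following is a minimal sketch (not taken from the paper) of a small fully convolutional encoder-decoder applied to a multispectral remote sensing patch in PyTorch. The band count, class count, patch size, and the TinySegNet model itself are illustrative assumptions, not the architectures surveyed in the review.

```python
# Minimal sketch: per-pixel classification of a multispectral patch with a
# tiny fully convolutional encoder-decoder. All sizes are illustrative.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_bands=4, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample to H/2 x W/2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),         # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))       # (B, num_classes, H, W)

model = TinySegNet()
patch = torch.randn(1, 4, 128, 128)                # e.g. RGB + NIR bands
labels = torch.randint(0, 5, (1, 128, 128))        # per-pixel ground-truth classes
logits = model(patch)
loss = nn.CrossEntropyLoss()(logits, labels)       # pixel-wise classification loss
pred = logits.argmax(dim=1)                        # predicted class map, shape (1, 128, 128)
```

Every output pixel receives its own class score vector, so the loss and the final prediction are computed per pixel rather than per image, which is the defining property of the segmentation task reviewed here.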