Background
Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden.

Methods
We develop a contour-based AID algorithm that uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustment of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading.

Results
For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on voxel label representation. Compared with these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set.

Conclusions
Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape learning ability and flexible boundary adjustment function make it suitable for fast annotation of organ structures with regular shapes.

Nowadays, deep learning (DL) has demonstrated promising performance for medical image analysis, and deep neural networks have become the mainstream method for organ segmentation from medical images. So far, training an effective deep segmentation network still requires the annotation of large datasets. It is well known that manual annotation of volumetric medical images is extremely tedious and prone to subjective variability. Although some recent efforts have been made on few-shot or unsupervised learning, these methods are still not generalizable enough across different imaging modalities or target organs. Human supervision therefore remains indispensable for most cases of deep segmentation network training, and efficient network training and human annotation methodologies are needed to relieve the burden of data annotation. To tackle this problem, the strategy of annotation by iterative deep learning (AID) has become popular in the research community. In a typical AID workflow, a segmentation network is preliminarily trained on a small number of manually annotated samples. The preliminarily trained network is then used to automatically annotate more training data. Since this network is not yet accurate enough, human supervisors proofread the automatic annotation results, and the human-corrected annotations are added to the training set to retrain a more accurate network. As this procedure is iteratively repeated, the network performance gradually improves, so less and less user proofreading is needed. The efficiency of an AID workflow therefore mainly depends on two factors: the learning speed of the network model and the convenience of the proofreading method.
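The iterative train–annotate–proofread loop of a typical AID workflow can be sketched as follows. This is a toy illustration only, not the paper's implementation: the segmentation network is replaced by a trivial averaging model, and the names `train`, `proofread` and `aid_loop` are hypothetical placeholders.

```python
# Toy sketch of an annotation-by-iterative-deep-learning (AID) loop.
# The "network" is a stand-in that just averages its training labels;
# a real AID system would train a segmentation network instead.

def train(train_set):
    # Stand-in for network training: remember the mean label value.
    mean = sum(label for _, label in train_set) / len(train_set)
    return lambda x: mean  # "prediction" ignores the input in this toy model

def proofread(prediction, true_label, tolerance=0.1):
    # Stand-in for human proofreading: accept the automatic annotation
    # if it is close enough, otherwise return the human-corrected label.
    return prediction if abs(prediction - true_label) < tolerance else true_label

def aid_loop(seed_set, unlabeled_pool, n_rounds=3, batch_size=2):
    train_set = list(seed_set)            # small manually annotated seed set
    model = train(train_set)              # preliminary network training
    for _ in range(n_rounds):
        # Fetch a batch of "unannotated" samples (each carries its true
        # label here only so the toy proofreader can correct predictions).
        batch = [unlabeled_pool.pop() for _ in range(batch_size) if unlabeled_pool]
        for x, true_label in batch:
            pred = model(x)                          # automatic annotation
            corrected = proofread(pred, true_label)  # human correction
            train_set.append((x, corrected))         # grow the training set
        model = train(train_set)          # retrain a more accurate network
    return model
```

With each round, the proofread annotations enlarge the training set, so the retrained model drifts toward the true labels and requires fewer corrections, which is the efficiency argument behind AID.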