Skin lesion segmentation from dermoscopy images is a fundamental yet challenging task in computer-aided skin diagnosis systems due to the large variations in the views and scales of lesion areas. We propose a novel and effective generative adversarial network (GAN) to meet these challenges. Specifically, the network architecture integrates two modules: a skip connection and dense convolution U-Net (UNet-SCDC) based segmentation module and a dual discrimination (DD) module. The UNet-SCDC module uses dense dilated convolution blocks to generate a deep representation that preserves fine-grained information, while the DD module uses two discriminators to jointly decide whether its input is real or fake. One discriminator, with a traditional adversarial loss, focuses on differences at the boundaries between the generated segmentation masks and the ground truths, while the other examines the contextual environment of the target object in the original image using a conditional discriminative loss. We integrate these two modules and train the proposed GAN in an end-to-end manner. The proposed GAN is evaluated on the public International Skin Imaging Collaboration (ISIC) Skin Lesion Challenge Datasets of 2017 and 2018. Extensive experimental results demonstrate that the proposed network achieves superior segmentation performance to state-of-the-art methods.
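To make the dual-discriminator training concrete, the following is a minimal PyTorch sketch of one joint update step, assuming a heavily simplified generator and PatchGAN-style discriminators. The names (UNetSCDC, mask_D, cond_D), the loss weighting, and all hyper-parameters are illustrative placeholders and not the authors' implementation; the intent is only to show one discriminator judging the mask alone and the other judging the mask conditioned on the input image.

```python
# Hypothetical sketch of the dual-discriminator GAN described above (not the paper's code).
import torch
import torch.nn as nn

class UNetSCDC(nn.Module):
    """Simplified stand-in for the skip-connection / dense dilated-convolution generator."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            # dilated convolutions enlarge the receptive field without losing resolution
            nn.Conv2d(base, base, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.Conv2d(base + in_ch, base, 3, padding=1), nn.ReLU(inplace=True),  # skip connection
            nn.Conv2d(base, 1, 1), nn.Sigmoid(),  # per-pixel lesion probability
        )

    def forward(self, x):
        return self.dec(torch.cat([self.enc(x), x], dim=1))

def patch_disc(in_ch):
    """PatchGAN-style discriminator body shared by both discriminators."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 1, 4, padding=1),  # real/fake logits over local patches
    )

G = UNetSCDC()
mask_D = patch_disc(in_ch=1)   # sees only the mask: boundary-level realism
cond_D = patch_disc(in_ch=4)   # sees image + mask: contextual, conditional realism
bce = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(list(mask_D.parameters()) + list(cond_D.parameters()), lr=2e-4)

def train_step(image, gt_mask):
    """One end-to-end update: both discriminators judge real vs. generated masks."""
    fake = G(image)

    # --- update both discriminators jointly ---
    opt_D.zero_grad()
    rm = mask_D(gt_mask)
    fm = mask_D(fake.detach())
    rc = cond_D(torch.cat([image, gt_mask], 1))
    fc = cond_D(torch.cat([image, fake.detach()], 1))
    d_loss = (bce(rm, torch.ones_like(rm)) + bce(fm, torch.zeros_like(fm)) +
              bce(rc, torch.ones_like(rc)) + bce(fc, torch.zeros_like(fc)))
    d_loss.backward()
    opt_D.step()

    # --- update generator: fool both discriminators + pixel-wise segmentation loss ---
    opt_G.zero_grad()
    gm = mask_D(fake)
    gc = cond_D(torch.cat([image, fake], 1))
    g_adv = bce(gm, torch.ones_like(gm)) + bce(gc, torch.ones_like(gc))
    g_seg = nn.functional.binary_cross_entropy(fake, gt_mask)
    (g_seg + 0.1 * g_adv).backward()  # 0.1 is an arbitrary illustrative weight
    opt_G.step()
    return d_loss.item(), g_seg.item()
```

In this sketch, mask_D plays the role of the boundary-focused discriminator with a traditional adversarial loss, and cond_D plays the role of the conditional discriminator that inspects the lesion's context in the original image; both gradients flow back into the single generator so the whole model trains end to end.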