Breast Tumor Cell Nuclei Segmentation in Histopathology Images using EfficientUnet++ and Multi-organ Transfer Learning

  • Dinh, Tuan Le (Dept. of Artificial Intelligence Convergence, Pukyong National University) ;
  • Kwon, Seong-Geun (Dept. of Electronics Engineering, Kyungil University) ;
  • Lee, Suk-Hwan (Dept. of Computer Engineering, Dong A University) ;
  • Kwon, Ki-Ryong (Dept. of Artificial Intelligence Convergence, Pukyong National University)
  • Received : 2021.08.04
  • Accepted : 2021.08.12
  • Published : 2021.08.30

Abstract

In recent years, the application of Deep Learning methods to medical and biomedical image analysis has advanced rapidly. In clinical practice, Deep Learning-based approaches to cancer image analysis are among the key applications for cancer detection and treatment. However, the scarcity of labeled images makes it difficult for cancer detection and analysis tasks to reach high accuracy. In 2015, the Unet model was introduced and gained much attention from researchers in the field; its success lies in its ability to produce high accuracy with very few training images. Since the development of Unet, many variants and modifications of the Unet architecture have appeared. This paper proposes a new approach that uses Unet++ with a pretrained EfficientNet backbone for breast tumor cell nuclei segmentation, combined with multi-organ transfer learning. We evaluate the performance of the network on the MonuSeg training dataset and the Triple Negative Breast Cancer (TNBC) testing dataset, both of which consist of Hematoxylin and Eosin (H&E)-stained images. The results show that the EfficientUnet++ architecture with multi-organ transfer learning outperforms other techniques and produces notable accuracy for breast tumor cell nuclei segmentation.

Keywords

1. INTRODUCTION

Breast cancer is the most common type of cancer in women and the second leading cause of cancer death in women each year [1]. Early diagnosis is vital for breast cancer patients: with early diagnosis, the 5-year survival rate can reach 90% [1]. Breast cancer detection modalities have evolved into many types, most of which use image-based approaches to screen for and detect the tumor mass, such as electrical impedance-based imaging, breast ultrasound, scintimammography, computed tomography (CT), positron emission tomography (PET), thermography, optical imaging, mammography, and magnetic resonance imaging (MRI) [2]. Breast cancer diagnosis often starts with mammography image analysis, using Deep Learning to detect the presence of cancer tumors [3]. However, this method sometimes reaches an undetermined conclusion; in such situations, the diagnosis requires a biopsy and H&E-stained image analysis to reach a more accurate conclusion [4].

Hematoxylin and Eosin (H&E) staining is one of the primary techniques used for histological staining because of its simplicity and its capability to illustrate many different tissue structures [5]. Biopsy specimens are processed on slides; the Hematoxylin stains the nuclei a dark color, revealing clear intra-nuclear detail, while the Eosin stains the cell cytoplasm and tissue fibers pink, red, or orange at many intensity levels. The slide images are captured by whole-slide digital scanners and stored for cancer tumor cell detection or for the breast cancer staging process. Conventionally, digitized histopathology images are analyzed with laborious manual techniques, which are time-consuming and limited by the experience of pathologists. An accurate and quantitative analysis approach for histopathology images is therefore needed. Computer-aided diagnosis (CAD) fills this gap, and many state-of-the-art (SOTA) CAD methods for automated analysis of histopathology imagery have followed [6].

Recently, machine learning approaches applied to histopathology image analysis have taken a leap forward, especially Deep Learning-based methods, which increase the efficiency and accuracy of histopathological diagnosis [7]. S. Bhattacharjee et al. [8] predicted the histological grade of prostate cancer biopsies using a Convolutional Neural Network. P. Naylor et al. [9] proposed a Deep Learning-based method for the segmentation of nuclei in H&E-stained images; its novelty is to cast the segmentation task as a regression of the distance map, so that the network can learn to tackle the problem of touching nuclei. Janowczyk et al. [10] thoroughly presented Deep Learning techniques for seven Digital Pathology tasks and compared the results of Deep Learning-based methods with other state-of-the-art hand-crafted feature-based approaches. Zhao et al. [11] proposed PFA-ScanNet (Pyramidal Feature Aggregation ScanNet), a neural network designed specifically for breast cancer metastasis detection. Graham et al. [12] proposed the Minimal Information Loss Dilated Network (MILD-Net), a Deep Neural Network for gland and lumen segmentation tasks. Ho et al. [13] introduced Deep Multi-Magnification, a Deep Neural Network with multiple encoders, decoders, and concatenations designed for multi-class tissue segmentation of digital whole-slide images.

2. RELATED WORKS

Most Deep Learning-based models for image segmentation use some form of encoder-decoder architecture. Encoder-decoder approaches to image segmentation fall into two categories: general segmentation and medical or biomedical segmentation [14]. The early work of Noh et al. [15] proposed a network that learns a deconvolution network containing deconvolution and unpooling layers; it achieved outstanding results on the PASCAL VOC 2012 dataset and produced the best accuracy on the Microsoft COCO dataset. Another paper focused on the encoder-decoder approach is SegNet, by Badrinarayanan et al. [16]. Its novelty is the upsampling layer in the decoder path: reusing pooling indices allows the network to reduce the need to learn how to upsample.

The scarcity of labeled training datasets and the demand for highly accurate segmentation raise the need for a neural network dedicated to medical and biomedical images. In 2015, Olaf Ronneberger et al. [17] published the Unet paper, proposing a U-shaped encoder-decoder architecture with two symmetric paths: a contracting path and an expanding path. The network can be trained end-to-end with a small number of training images, and it surpassed the accuracy of the previous best methods on the ISBI dataset with high-quality segmentation results.

Following Unet's outstanding performance, Unet-related papers on medical and biomedical image segmentation have grown rapidly. Unet variants such as 3D Unet, Attention Unet, Inception Unet, Residual U-Net, Dense Unet, Unet++, and Adversarial Unet have achieved significant improvements on specific types of medical and biomedical images [18]. B. Baheti et al. [19] combined a pretrained EfficientNet encoder with a Unet decoder in an architecture that ranked first for image segmentation in the IDD Lite challenge.

To enhance the accuracy of segmentation for medical and biomedical images, Zongwei Zhou et al. [20] introduced the Unet++ model, which redesigns the skip pathway of the original Unet. They argue that reducing the semantic gap between encoder and decoder gives the optimizer an easier learning task and hence boosts performance. In 2019, the EfficientNet paper was introduced by Mingxing Tan and Quoc V. Le [21] and became popular in the Machine Learning community. Its basic idea is to scale the network in all dimensions: the depth and width of the neural network as well as the input image resolution. This new compound scaling method yields the EfficientNet family of models, which achieve outstanding results compared with other ConvNets.

Using a pretrained ConvNet as the backbone and a Unet-variant architecture as the decoder has become a popular approach and has achieved many improvements on segmentation tasks in recent years. An early evaluation of Unet++ with an EfficientNet backbone for medical image segmentation can be found in the work of Le Duy Huỳnh et al. [22]. The authors ran experiments on datasets from the EndoCV2020 challenge and compared the results with other SOTA methods; Unet++ with a pretrained EfficientNet backbone outperformed the other methods with high accuracy.

A key problem when conducting experiments on breast tumor cell nuclei segmentation is the availability of labeled training datasets. To overcome this scarcity of training data, Lagree et al. [23] proposed multi-organ transfer learning: the neural network is trained on multi-organ H&E-stained cell datasets (liver, prostate, kidney, lung, colon, brain, bladder, and stomach) and then evaluated on a breast tumor cell dataset to assess its transfer learning capability. This study follows the idea of that paper. We run experiments on Unet++ with pretrained EfficientNet models from B0 to B5 and compare the results with other SOTA methods. More details about the training datasets and the configurations used for the multi-organ transfer learning experiments are given in the Methodology section.

3. METHODOLOGY

We use Unet++ with EfficientNet as the backbone of the model. We conduct our experiments with pretrained EfficientNet models from B0 to B5, trained on the ImageNet dataset with highly effective compound coefficients. These models are more efficient and accurate than previous ConvNets while using fewer parameters.
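Since all models in this study were built with the Segmentation Models PyTorch library (see Section 3.5), a minimal sketch of constructing an EfficientUnet++ model is shown below; the choice of the B1 backbone here is illustrative, not a statement of our final configuration.

```python
import segmentation_models_pytorch as smp

# Minimal sketch: Unet++ decoder on an ImageNet-pretrained EfficientNet encoder.
# Any of "efficientnet-b0" .. "efficientnet-b5" can be substituted as the backbone.
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b1",  # pretrained EfficientNet encoder
    encoder_weights="imagenet",      # ImageNet weights for transfer learning
    in_channels=3,                   # RGB H&E-stained input images
    classes=1,                       # single-channel binary nuclei mask
    activation="sigmoid",            # Sigmoid output activation (Section 3.5)
)
```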

3.1 Dataset

MonuSeg stands for Multi-organ Nucleus Segmentation; the dataset was published at the official satellite event of MICCAI 2018. The challenge aimed to find the best methods for segmenting nuclei in H&E-stained cell images [24]. The original MonuSeg training set includes thirty 1000×1000 images from seven organs, containing a total of 21,623 annotated nuclei boundaries. The test set has 14 images of the same size with a total of 7,223 annotated nuclei. In our experiment, we moved 6 breast tumor cell images from the original MonuSeg training set to the test set; the remaining 24 training images were split into 80% for training and 20% for validation.

The test set in our experiment includes 58 images: 8 from the MonuSeg dataset and 50 from the Triple Negative Breast Cancer (TNBC) dataset. The TNBC dataset is the work of P. Naylor et al. [9] and was generated at the Curie Institute; all slides were taken from Triple Negative Breast Cancer patients, and the histopathology images were exported with a Philips Ultra Fast Scanner 1.6 RA. TNBC consists of 50 images with a total of 4,022 annotated cells, an average of 80 cells per sample image; the largest number of cells per image is 293 and the smallest is 5. One expert pathologist and two trained research fellows carried out the annotation for the training and testing datasets. The results were peer-checked, and in case of disagreement, the team discussed and came to a final decision.


Fig. 1. Some sample images from the MonuSeg dataset.


Fig. 2. Some sample images from the TNBC dataset.

3.2 EfficientNet Encoder

The EfficientNet family ranges from EfficientNet-B0 at 5.3 million parameters to the largest model, EfficientNet-B7, at 66 million. The elementary unit of EfficientNet is the mobile inverted bottleneck (MBConv) with squeeze-and-excitation components for optimization [25]. Each EfficientNet model differs in parameter count and number of modules, but the overall architecture remains the same from B0 to B7. The key contribution of EfficientNet is its new compound scaling method, which uniformly scales the network in depth, width, and input image resolution. The scaling method follows the principle below:

\(\left\{\begin{array}{l} \text { depth: } d=\alpha^{\phi} \\ \text { width: } w=\beta^{\phi} \\ \text { resolution: } r=\gamma^{\phi} \end{array}\right.\)       (1)

\(\text { s.t. } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha \geq 1, \beta \geq 1, \gamma \geq 1\)

Here, α, β, and γ are constant coefficients that can be determined by a small grid search on the original small model, and φ is a user-defined coefficient that controls the resources available for model scaling. Scaling the network depth by α^φ, width by β^φ, and resolution by γ^φ requires approximately 2^φ times more computational resources.
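As a concrete illustration of compound scaling, the short Python sketch below computes the depth, width, and resolution multipliers for several values of φ, using the coefficients α = 1.2, β = 1.1, γ = 1.15 reported in the original EfficientNet paper [21]:

```python
# Compound scaling worked example; alpha, beta, gamma come from the grid
# search reported in the EfficientNet paper (alpha * beta^2 * gamma^2 ~= 2).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: float) -> tuple[float, float, float]:
    """Return the (depth, width, resolution) multipliers for a given phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):
    d, w, r = compound_scale(phi)
    # FLOPs grow roughly as d * w^2 * r^2, i.e., about 2^phi.
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, "
          f"resolution x{r:.2f}, FLOPs x{d * w**2 * r**2:.2f}")
```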


Fig. 3. Original EfficientNet-B4 and original Unet++ architectures.

3.3 Unet++ decoder

The Unet++ decoder connects to the EfficientNet encoder through a sequence of nested, dense convolutional blocks. The skip pathway of Unet++ is formulated as follows:

\(x^{i, j}=\left\{\begin{array}{ll} H\left(x^{i-1, j}\right), & j=0 \\ H\left(\left[\left[x^{i, k}\right]_{k=0}^{j-1}, U\left(x^{i+1, j-1}\right)\right]\right), & j>0 \end{array}\right.\)       (2)

Here, x^{i,j} is the output of node X^{i,j}, where i indexes the downsampling layers along the encoder and j indexes the convolutional blocks along the skip pathway. H(·) is a convolution operation followed by an activation function, U(·) is an upsampling layer, and [·] denotes concatenation. Overall, the difference between Unet and Unet++ is the redesigned skip pathway: Unet has plain skip connections, so the decoder receives features directly from the encoder, whereas Unet++ inserts dense convolution blocks that reduce the semantic gap between the contracting path and the expansive path.
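To make Eq. (2) concrete, the PyTorch sketch below implements a single nested node x^{i,j} for j > 0; the 3×3 convolution used for H(·) and the channel sizes are simplified assumptions, not the exact blocks of the Unet++ paper.

```python
import torch
import torch.nn as nn

class SkipNode(nn.Module):
    """One nested Unet++ node x^{i,j} (j > 0): concatenate all earlier nodes
    at depth i with the upsampled output of node x^{i+1,j-1}, then apply H(.)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.h = nn.Sequential(  # H(.): convolution followed by an activation
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, same_depth: list[torch.Tensor], below: torch.Tensor):
        # same_depth holds [x^{i,0}, ..., x^{i,j-1}]; below is x^{i+1,j-1}.
        return self.h(torch.cat(same_depth + [self.up(below)], dim=1))
```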

3.4 Data augmentation

We use Albumentations as the library for our training data augmentation pipeline; it is a fast, robust, and easy-to-use tool for augmenting training data [26]. Medical and biomedical image analysis confronts the problem of data scarcity due to the cost of labeling and the limited availability of data sources. Data augmentation helps to alleviate this problem and also improves the network's resistance to overfitting during training. We use many augmentation operations to enrich our data from 19 images to 5,320 for training and from 5 images to 1,400 for validation. All images were resized to 256×256, and augmentation techniques such as HorizontalFlip, ShiftScaleRotate, RandomCrop, RandomBrightness, and RandomGamma were then applied to scale up the number of training images, as sketched below.
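The sketch below shows such a pipeline with the Albumentations API; the probabilities and parameter ranges are illustrative assumptions, not the exact values used in our experiments.

```python
import albumentations as A

# Illustrative augmentation pipeline for 256 x 256 H&E-stained patches.
train_transform = A.Compose([
    A.Resize(256, 256),
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=45, p=0.5),
    A.RandomBrightnessContrast(p=0.3),  # covers the brightness perturbation
    A.RandomGamma(p=0.3),
])

# Albumentations applies the same spatial transform to the image and its mask:
# augmented = train_transform(image=image, mask=mask)
```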

3.5 Training

We conduct experiments on a machine with an Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz, 16GB of RAM, and an NVIDIA GeForce RTX 2070 with 8GB of memory.


Fig. 4. Sample augmented images after applying the Albumentations library.

We train our model with Sigmoid as the output activation function, the Adam optimizer with a learning rate of 0.0001, and the Dice coefficient as the loss function. The Dice coefficient is defined as follows:

\(D=\frac{2 \sum_{i=1}^{N} p_{i} g_{i}}{\sum_{i=1}^{N} p_{i}^{2}+\sum_{i=1}^{N} g_{i}^{2}}\)       (3)

Here, p_i denotes a predicted pixel and g_i denotes the corresponding ground-truth pixel. In our nuclei segmentation task, the p_i are pixels that our trained models predict as nuclei, and the g_i are nuclei pixels labeled by experts.
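A PyTorch sketch of the corresponding loss, 1 − D, is given below; the small epsilon term is a common numerical-stability convention and an assumption on our part.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Soft Dice loss 1 - D, with D as in Eq. (3).
    pred holds sigmoid probabilities; target holds {0, 1} ground truth."""
    pred = pred.reshape(pred.size(0), -1)        # flatten per sample
    target = target.reshape(target.size(0), -1)
    numerator = 2.0 * (pred * target).sum(dim=1)
    denominator = (pred ** 2).sum(dim=1) + (target ** 2).sum(dim=1)
    return 1.0 - ((numerator + eps) / (denominator + eps)).mean()
```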

We execute image augmentation on the fly, simultaneously with the training process, using the Albumentations library. We use StainTools (https://github.com/Peter554/StainTools) as the stain normalization library for H&E-stained images. All models and pretrained backbones were implemented with the Segmentation Models repository, available on GitHub (https://github.com/qubvel/segmentation_models.pytorch).
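Following the conventions of the StainTools README, stain normalization of a source image toward a reference target can be sketched as below; the choice of the "vahadane" method and the placeholder file names are assumptions for illustration.

```python
import staintools

# Placeholder file names; substitute real H&E-stained images.
target = staintools.read_image("reference_tile.png")
source = staintools.read_image("tile_to_normalize.png")

# Standardize brightness before fitting, as the StainTools docs recommend.
target = staintools.LuminosityStandardizer.standardize(target)
source = staintools.LuminosityStandardizer.standardize(source)

normalizer = staintools.StainNormalizer(method="vahadane")  # or "macenko"
normalizer.fit(target)                       # learn the reference stain profile
normalized = normalizer.transform(source)    # map source stains onto the target's
```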

4. EXPERIMENTAL RESULT AND DISCUSSION

We evaluate our models with three metrics, Recall, F1 score, and Precision, at a threshold value of 0.5. Our proposed EfficientUnet++ outperformed other SOTA methods on the nuclei segmentation task in terms of Recall and F1; on Precision, the GB U-net proposed by Lagree et al. [23] produced a higher result than the EfficientNet family.
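For reference, pixel-level versions of these metrics at the 0.5 threshold can be computed as in the sketch below; this follows the standard definitions and is not code taken from the compared works.

```python
import numpy as np

def pixel_metrics(prob: np.ndarray, gt: np.ndarray, threshold: float = 0.5):
    """Pixel-level Precision, Recall, and F1 for a binary nuclei mask.
    prob holds predicted probabilities; gt holds {0, 1} labels."""
    pred = prob >= threshold
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + 1e-7)
    recall = tp / (tp + fn + 1e-7)
    f1 = 2 * precision * recall / (precision + recall + 1e-7)
    return precision, recall, f1
```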

Table 1. Quantitative results comparison between EfficientUnet++ and other segmentation methods on the MonuSeg dataset.


On the MonuSeg dataset, Unet++ with EfficientNet-B5 and with EfficientNet-B1 produced the best results on the Recall metric (0.9272) and the F1 metric (0.8008), respectively. EfficientUnet++ is superior by a wide margin on Recall compared with traditional segmentation methods such as Otsu, Watershed, and Fiji. Likewise, on the F1 metric, EfficientUnet++ exceeds traditional segmentation methods by a large margin and produced higher accuracy than other Deep Learning-based methods. Its Precision, however, remains on par with other Deep Learning-based methods. Fig. 5 shows qualitative outputs of Unet++ with EfficientNet-B1 as the backbone on the MonuSeg dataset.


Fig. 5. Qualitative output results of Unet++ with EfficientNet-B1 as the backbone on the MonuSeg dataset.

On the TNBC dataset, Unet++ with EfficientNet-B5 and with B1 exceeds the other methods in terms of Recall (0.9343) and F1 (0.6785), but EfficientUnet++ does not surpass GB U-net (0.8102) on the Precision metric. As on the MonuSeg dataset, EfficientUnet++ outperformed other Deep Learning-based segmentation methods on Recall and F1; on Precision, however, its results are not as good as those of other Deep Learning-based methods. Fig. 6 shows qualitative outputs of Unet++ with EfficientNet-B1 as the backbone on the TNBC dataset.


Fig. 6. Qualitative output results of Unet++ with EfficientNet-B1 as the backbone on the TNBC dataset.

5. CONCLUSION

In this work, we show the effectiveness of multi-organ transfer learning for breast cancer tumor cells. Unet++ with pretrained EfficientNet backbones from B0 to B5 was selected for experiments on cell images from seven organs. Our experiments achieved the best results in terms of Recall and F1 score compared with previous works trained on the MonuSeg dataset and evaluated on the TNBC dataset. The results demonstrate that Deep Learning models trained on images of cells from different organs can perform well on the nuclei segmentation task for breast cancer tumor cells. Compared with traditional methods, EfficientUnet++ outperforms by a large accuracy margin. Compared with other Deep Learning methods, our proposed models produced better results than Mask RCNN, a Unet ensemble, Unet with Resnet-50, Resnet-101, VGG-16, VGG-19, DenseNet-121, DenseNet-201, or Inception-v3 as the backbone, and the GB U-net proposed by Lagree et al. [23].

Table 2. Quantitative results comparison between EfficientUnet++ and other segmentation methods on the TNBC dataset.


To further enhance network accuracy, we need to feed more training data to the network: more training data improves generalization and prevents overfitting. Many techniques for enriching labeled training datasets have recently been investigated; two approaches popular in the medical and biomedical communities are scalable crowd-sourced annotation [27] and GANs for generating new medical and biomedical images [28]. The scalable crowd-sourcing approach focuses on utilizing a crowd workforce to produce more labeled data under the supervision of domain experts, while the GAN approach leverages the Generative Adversarial Network family to augment the original data. In future work, we will exploit these two methods to enlarge the dataset for our models and perform more numerical evaluation of their benefits.

REFERENCES

  1. R.L. Siegel, K.D. Miller, H.E. Fuchs, and A. Jemal, "Cancer Statistics, 2021," CA: A Cancer Journal for Clinicians, Vol. 71, pp. 7-33, 2021. https://doi.org/10.3322/caac.21654
  2. S.V. Sree, E.Y. Ng, R.U. Acharya, and O. Faust, "Breast imaging: A survey," World Journal of Clinical Oncology, Vol. 2, pp. 171-178, 2011. https://doi.org/10.5306/wjco.v2.i4.171
  3. S.Y. Kwon, Y.J. Kim, and G.G. Kim, "A deep learning-based automatic segmentation of breast mass in breast imaging," Journal of the Korean Society for Multimedia, Vol. 21, No. 12, pp. 1363-1369, Dec. 2018.
  4. M. Lekka, "Atomic force microscopy: A tip for diagnosing cancer," Nature Nanotechnology, Vol. 7, pp. 691-692, 2012. https://doi.org/10.1038/nnano.2012.196
  5. J.D. Bancroft and C. Layton, "The hematoxylins and eosin," in Bancroft's Theory and Practice of Histological Techniques (Eighth Edition), Elsevier, 2019, pp. 126-138.
  6. M.N. Gurcan, L.E. Boucheron, A. Can, A. Madabhushi, N.M. Rajpoot, and B. Yener, "Histopathological image analysis: A Review," IEEE Reviews in Biomedical Engineering, Vol. 2, pp. 147-171, 2009. https://doi.org/10.1109/RBME.2009.2034865
  7. G. Litjens, C. Sanchez, N. Timofeeva et al., "Deep Learning as a Tool for Increased Accuracy and Efficiency of Histopathological Diagnosis," Scientific Reports, Vol. 6, No. 26286, 2016.
  8. S. Bhattacharjee, D. Prakash, C.H. Kim, and H.K. Choi, "Multichannel Convolution Neural Network Classification for the Detection of Histological Pattern in Prostate Biopsy Images," Journal of the Korean Society for Multimedia, Vol. 23, No. 12, pp. 1486-1495, Dec. 2020.
  9. P. Naylor, M. Lae, F. Reyal, and T. Walter, "Segmentation of Nuclei in Histopathology Images by Deep Regression of the Distance Map," IEEE Transactions on Medical Imaging, Vol. 38, No. 2, pp. 448-459, Feb. 2018. https://doi.org/10.1109/tmi.2018.2865709
  10. A. Janowczyk and A. Madabhushi, "Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases," Journal of Pathology Informatics, Vol. 7, pp. 29, Jul. 2016. https://doi.org/10.4103/2153-3539.186902
  11. Z. Zhao, H. Lin, H. Chen, and P.A. Heng, "PFA-ScanNet: Pyramidal Feature Aggregation with Synergistic Learning for Breast Cancer Metastasis Analysis," Medical Image Computing and Computer Assisted Intervention - MICCAI 2019, pp. 586-594, 2019.
  12. S. Graham et al., "MILD-Net: Minimal Information Loss Dilated Network for Gland Instance Segmentation in Colon Histology Images," Medical Image Analysis, Vol. 52, pp. 199-211, 2019. https://doi.org/10.1016/j.media.2018.12.001
  13. D.J. Ho et al., "Deep Multi-Magnification Networks for Multi-Class Breast Cancer Image Segmentation," Computerized Medical Imaging and Graphics, Vol. 88, No. 101866, 2021.
  14. S. Minaee, Y.Y. Boykov, F. Porikli, A.J. Plaza, N. Kehtarnavaz, and D. Terzopoulos, "Image Segmentation Using Deep Learning: A Survey," arXiv preprint, arXiv:2001.05566, 2020.
  15. H. Noh, S. Hong, and B. Han, "Learning Deconvolution Network for Semantic Segmentation," 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1520-1528, 2015.
  16. V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39, No. 12, pp. 2481-2495, 2017. https://doi.org/10.1109/TPAMI.2016.2644615
  17. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, Vol. 9351, 2015.
  18. N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, "U-Net and its Variants for Medical Image Segmentation: A Review of Theory and Applications," IEEE Access, Vol. 9, pp. 82031-82057, 2021. https://doi.org/10.1109/ACCESS.2021.3086020
  19. B. Baheti, S. Innani, S. Gajre, and S. Talbar, "Eff-UNet: A Novel Architecture for Semantic Segmentation in Unstructured Environment," in IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020.
  20. Z. Zhou, M.M.R. Siddiquee, N. Tajbakhsh, and J. Liang, "UNet++: A Nested U-Net Architecture for Medical Image Segmentation," Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2018, ML-CDS 2018), Lecture Notes in Computer Science, Vol. 11045, pp. 3-11, 2018.
  21. M. Tan and Q.V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," in Proceedings of International Conference on Machine Learning, 2019.
  22. L.D. Huynh and N. Boutry, "A U-Net++ with Pre-trained EfficientNet Backbone for Segmentation of Diseases and Artifacts in Endoscopy Images and Videos," in EndoCV@ISBI, 2020.
  23. A. Lagree, M. Mohebpour, N. Meti et al., "A Review and Comparison of Breast Tumor Cell Nuclei Segmentation Performances Using Deep Convolutional Neural Networks," Scientific Reports, Vol. 11, No. 8025, 2021.
  24. N. Kumar et al., "A Multi-Organ Nucleus Segmentation Challenge," IEEE Transactions on Medical Imaging, Vol. 39, No. 5, pp. 1380- 1391, May 2020. https://doi.org/10.1109/tmi.2019.2947628
  25. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
  26. A. Buslaev, V.I. Iglovikov, E. Khvedchenya, A. Parinov, M. Druzhinin, and A.A. Kalinin, "Albumentations: Fast and Flexible Image Augmentations," Information, Vol. 11, No. 2, pp. 125, 2020. https://doi.org/10.3390/info11020125
  27. M. Amgad, L.A. Atteya, H. Hussein, K.H. Mohammed, E. Hafiz, M.A. Elsebaie, A.M. Alhusseiny, M.A. AlMoslemany, A.M. Elmatboly, P.A. Pappalardo et al., "NuCLS: A Scalable Crowdsourcing, Deep Learning Approach and Dataset for Nucleus Classification, Localization and Segmentation," arXiv preprint, arXiv:2102.09099, 2021.
  28. X. Yi, E. Walia and P. Babyn, "Generative Adversarial Network in Medical Imaging: A Review," Medical Image Analysis, Vol. 58, No. 101552, 2019.