CELL NUCLEI CLASSIFICATION IN HISTOPATHOLOGICAL IMAGES USING HYBRID OLCONVNET

Suvidha Tripathi
Department of Information Technology
Indian Institute of Information Technology Allahabad
Jhalwa, Deoghat, Prayagraj, Uttar Pradesh 211015
suvitri24@gmail.com

Satish Kumar Singh
Department of Information Technology
Indian Institute of Information Technology Allahabad
Jhalwa, Deoghat, Prayagraj, Uttar Pradesh 211015
sk.singh@iiita.ac.in

February 22, 2022

ABSTRACT

Computer-aided histopathological image analysis for cancer detection is a major research challenge in the medical domain. Automatic detection and classification of nuclei for cancer diagnosis impose many challenges on the development of state of the art algorithms, owing to the heterogeneity of cell nuclei and data set variability. Recently, a multitude of classification algorithms has used complex deep learning models for their datasets. However, most of these methods are rigid, and their architectural arrangement suffers from inflexibility and non-interpretability. In this research article, we propose a hybrid and flexible deep learning architecture, OLConvNet, that integrates the interpretability of traditional object-level features and the generalization of deep learning features by using a shallower Convolutional Neural Network (CNN) named CNN3L. CNN3L reduces the training time by training fewer parameters and hence eliminates the space constraints imposed by deeper algorithms. We used F1-score and multiclass Area Under the Curve (AUC) performance parameters to compare the results. To further strengthen the viability of our architectural approach, we tested our proposed methodology with the state of the art deep learning architectures AlexNet, VGG16, VGG19, ResNet50, InceptionV3, and DenseNet121 as backbone networks. After a comprehensive analysis of the classification results from all the architectures, we observed that our proposed model works well and performs better than contemporary complex algorithms.

Keywords Deep Learning, Hybrid networks, Object level features, Transfer Learning, Histopathological Images, Cell Nuclei Classification, Class balancing, Convolutional Neural Networks, Multi Layer Perceptron

1 Introduction

Early cancer detection is a major challenge in the medical domain. Even today, the medical community is largely dependent upon expert pathologists for detecting and classifying the cell anomalies that cause cancer in whole slide histopathological images. The job of the pathologist becomes very cumbersome and may take several days for annotating the whole slide images of biopsy samples. Moreover, the reliability of predictions also depends upon the experience of the pathologist, and sometimes the consensus of more than one pathologist is required for confirming such anomalies. These factors provide adequate motivation for the research and development of computer-assisted diagnostic (CAD) systems that classify cell nuclei and improve the understanding of some of the underlying biological phenomena, e.g., monitoring cancer cell cycle progress [1]; the type, shape, size, and arrangement of the cells in the affected organ sites; and knowledge about metastasis, if the cells are present at some unlikely locations. All these observations can be comprehended if we know the type of cell present in the diseased tissue sample. Early diagnosis of cell anomalies can largely affect the disease prognosis [2].
For example, in the case of colon or colorectal carcinoma, the epithelial cells lining the colon or rectum of the gastrointestinal tract are affected, and timely detection of these cells can help in quick diagnosis, which would eventually increase the prognostic value of the disease. Similarly, the lymphocytes can also be analyzed for sentinel lymph node disease [2]. Other examples are Myeloma or multiple Myeloma, detected through the plasma cells, which are types of white blood cells that cause the cancer [3]. Therefore, a sample biopsy from a specific location can be quickly analyzed using the information about the cell environment provided by an appropriate CAD system.

Figure 1: Example sub-images of different classes of nuclei, from the first row to the fourth: Epithelial, Fibroblast, Inflammatory, and Miscellaneous (in sets of two: Adipocyte, Endothelial, Mitotic Figure, and Necrotic Nucleus, from left to right).

In particular, medical image analysis for all diagnoses is attributed to the knowledge and skills possessed by trained and experienced pathologists. Although pathologists have the ability and means to single out affected cancerous lesions in tissue biopsy samples, most such detections are still done manually and are hence time-consuming. Numerous challenges are involved in diagnosing cancer due to data set variability and heterogeneity in cell structures, which makes the process extremely tedious even for experts. Software intervention for early detection is therefore important for the effective control and treatment of the diseased organs [4]. To develop such automated cell detection and classification algorithms, knowledge of histology is vital, and an annotated or labelled data set must be prepared by expert histopathologists. Once the labelled data is acquired, the routine intervention of pathologists can be eliminated while analyzing the whole slide samples under test by using the developed automated CAD algorithms.

Cell nuclei in a Hematoxylin and Eosin (H&E) stained histopathological slide sample have a specific shade of blue caused by hematoxylin's reaction with the cellular protein present in the nuclei of the cells [5]. The shape of a cell varies with cell type, cell-cycle stage, and also with the presence or absence of cancer. Fig. 1 shows four different classes of nuclei, namely Inflammatory, Fibroblast, Epithelial, and Miscellaneous, where the Miscellaneous class includes adipocytes, endothelial nuclei, mitotic figures, and necrotic nuclei [6]. The nuclei structures shown in Fig. 1 have different shape, texture, and intensity features, which vary with factors such as nuclei type (epithelial, fibroblast, lymphocyte, etc.), the malignancy of the disease (or grade of cancer), and the nuclei life cycle (interphase or mitotic phase) [7]. For example, the inflammatory nuclei, a type of white blood cell also called lymphocyte nuclei (LN), are smaller in size and more regularly spherical in shape than epithelial nuclei (EN) [8]. Fibroblasts have a long spindle-like shape and appearance and very little cytoplasm [9]. Activated fibroblasts and infiltrating inflammatory cells are signs of potential tumor growth [9]. All these histological and biological differences between cell structures and their sites of origin highlight the clinical relevance of classifying different types of nuclei. In this paper, we have undertaken a feature-based approach for automated nuclei classification.
Feature-based approaches can be classified into two general categories: hand-crafted features and deep learning based features. In histopathology images, morphological and architectural features, whose accuracy depends on the amount of magnification and the type of class and which exhibit a unique mixture of visual patterns, qualify as hand-crafted features, whereas unsupervised deep learning features are intuitive and are a by-product of the filter responses obtained from a large number of training samples and fine-tuning of the network. In the proposed work, we clearly demonstrate the benefits of using a combined feature set, consisting of both object-level features and learned deep learning features, over a feature set acquired from a single domain on complex medical data. Moreover, for detailed analysis, the accuracy-generalization tradeoff and the space-time complexity issues exhibited by traditional and DL methods, respectively, have been considered in the proposed architectural arrangement. In summary, the key contributions of this work include:

1. The strength of our method lies in the flexible architecture that supports a backbone deep learning model to extract deep features and a simple object-level extraction framework for extracting cell-level features.
2. We achieved a high level of nuclei classification performance through simple concatenation of the features derived from the two domains.
3. Through a series of experiments, we emphasize that, in the case of nuclei structures, even a very small number of basic and locally focused object-level features can enhance performance when combined with three or more layers of a deep learning architecture.
4. To the best of our knowledge, this is the first study on this hypothesis to have developed a custom architecture for the problem, highlighting the need for designing lighter architectures for specific problems rather than using deeper pre-trained architectures.
5. To the best of our knowledge, this is the first study on this hypothesis to have experimentally proved the performance of end-to-end learning over stage-wise learning.

The rest of the paper is organized as follows. Section-II describes the reviewed literature for handcrafted and deep features. Section-III describes the complete methodology of the proposed work. The experimental setup, including the database and workflow, is elaborated in Section-IV. Section-V contains the various results and the necessary discussion; the discussion also justifies the appropriateness of the proposed method with respect to flexibility and robustness. Section-VI concludes the work presented in the paper, followed by the acknowledgments and references.

2 Reviewed Literature

Owing to the above-mentioned properties exhibited by cell nuclei, many traditional handcrafted cell nuclei classification algorithms have been reported in [10-16]. Authors in [10] first segmented the nuclei objects using morphological region growing and wavelet decomposition, and then computed shape and texture features for classifying cancer cells vs. normal cells using an SVM classifier. Another handcrafted feature-based method for cell nuclei classification in histopathological images, using shape, statistical, and texture (Gabor and Markov Random Field) features from localized cells, has been reported in [11]. Other methods based on object-level (OL) feature extraction from localized cell objects have been reported in [12].
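To make concrete what such object-level descriptors typically look like, the following is a minimal sketch, not taken from any of the cited papers, of shape and texture features computed from a single segmented nucleus. It assumes a binary nucleus mask and a grayscale patch are already available; the chosen region properties and GLCM settings are illustrative only.

```python
import numpy as np
from skimage.measure import regionprops, label
from skimage.feature import graycomatrix, graycoprops

def nucleus_object_features(gray_patch, mask):
    """Shape + texture descriptors for one segmented nucleus.

    gray_patch : 2-D uint8 array (grayscale nucleus patch)
    mask       : 2-D bool array, True inside the nucleus
    """
    # Shape features from the first labelled region of the mask.
    props = regionprops(label(mask.astype(int)))[0]
    shape = [props.area, props.perimeter, props.eccentricity,
             props.solidity, props.extent]

    # Grey-level co-occurrence texture features inside the nucleus.
    patch = np.where(mask, gray_patch, 0).astype(np.uint8)
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]

    return np.array(shape + texture, dtype=np.float32)
```

Descriptors of this kind are compact (here, a nine-dimensional vector per nucleus) but depend heavily on the quality of the underlying segmentation, which is one of the limitations discussed next.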
All the methods [10-12, 15, 16] using OL features have been critically analyzed with respect to the utility of those features for individual problems related to histological and/or cytological analysis of cancer cells in [13, 14]. The quality of the features extracted by the various handcrafted methods [10-16] is then assessed after passing them through appropriate classifiers. The success of their findings motivated the use of targeted OL features in our methodology. Designing effective handcrafted feature-based models requires complex algorithms and a decent level of domain-specific knowledge to achieve high performance [15, 16]. Moreover, it becomes extremely difficult to resolve the issues caused by dataset variability and heterogeneity within each cell type. These issues lead to the inability of the reported novel but complex models to generalize well across varying datasets. It is worth mentioning that most of these methods are reported on a very small sample size in general, causing robustness issues. To overcome the generalization problem, it is required to model features that are common within a particular class of cell nuclei but highly discriminating among different classes.

Recently, deep learning architectures have been known to produce generalized feature sets and hence have proved their niche in classification algorithms [17-24]. To put it more clearly, the key advantage of using deep learning architectures can be explained by highlighting the problems of linear or other shallow classifiers. Traditional classifiers do not use raw pixel data and possibly cannot distinguish two similar objects on different backgrounds, which is a case of the selectivity-invariance dilemma [17]. That is why such classifiers need good feature extractors to solve the selectivity-invariance dilemma. Deep Learning (DL) based architectures automatically learn good feature sets from large histopathological image data sets. In 2016, the CAMELYON challenge also reported the use of extensive deep learning architectures for solving various problems of localization and classification; detailed methodologies of these methods have been reported in [21]. More recently, authors in [25] used a pre-trained VGG19 to classify extensively augmented multi-grade brain tumour samples, whereas authors in [26] did the same to identify alcoholism in subjects using their brain scans. So, DL methods find applicability in a wide range of applications due to their robust and better performing architectures.

However, there are some issues with deep learning based methods as well. DL features lack interpretability and cannot be confirmed as global or local features. Moreover, there is always a lack of large datasets in the medical domain, which hampers or restricts DL algorithms from scaling well on test data sets not used for training. Another major issue with deep architectures is the huge number of parameters at greater depths, which makes the optimization problem very time-consuming. At the same time, the complexity of a model increases as the depth increases, and eventually the intermediate processes become less and less interpretable. One of the approaches for minimizing the training time on a medical dataset is to use the concept of transfer learning and fine-tune pre-trained models such as AlexNet [18], VGG16, VGG19 [20], ResNet50 [19], DenseNet [27], and InceptionV3 [28].
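As an illustration of this transfer learning idea, the sketch below loads one such backbone (VGG16 via tf.keras) with ImageNet weights, freezes it, and adds a small trainable classification head. The head size, pooling choice, and decision of which layers to eventually unfreeze are illustrative assumptions, not the configuration used in this paper.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_pretrained_extractor(input_shape=(224, 224, 3), n_classes=4):
    """VGG16 pre-trained on ImageNet, reused as a frozen feature extractor
    with a small trainable classification head on top."""
    backbone = VGG16(weights="imagenet", include_top=False,
                     input_shape=input_shape, pooling="avg")
    backbone.trainable = False  # freeze ImageNet weights; unfreeze later for fine-tuning if desired

    inputs = layers.Input(shape=input_shape)
    features = backbone(inputs, training=False)  # global-average-pooled deep features
    x = layers.Dense(256, activation="relu")(features)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

Only the two dense layers are trained at first; the frozen backbone supplies generic features, which is precisely the setting whose limitations on medical data are discussed next.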
Originally, these models were trained on natural image datasets, which come from an entirely different domain, but they can be fine-tuned to extract features from medical images. However, medical data has very little to no correspondence with natural images. Hence, relying solely on a transfer learning based fine-tuning approach should not be preferred. Rather, the training should be done on networks that have either not been pre-trained on natural images or have been pre-trained on a similar medical dataset. But training DL networks on a medical dataset has its own set of challenges, including the lack of a huge amount of annotated medical data for training. Moreover, the diverse nature of medical images prevents the generalization and standardization of datasets on which DL networks could be trained for transfer learning. An exhaustive survey of deep learning methods reported in [29] thoroughly highlights the merits of applying DL methods in the fields of medical imaging, medical informatics, translational bioinformatics, and public health. The amalgamated use of both OL and DL features for the purpose of nuclei detection, segmentation, and classification has also been suggested in [8, 29]. Therefore, OL features in combination with DL features could help to bridge the gap between the issues that the two domains bring individually.

Some recent articles have worked on a similar hypothesis of inter-domain feature combination and developed methods that combine the two feature sets, as reported in [22, 23]. However, the drawback of these methods is their complexity and huge training times due to very deep network models. Authors in [22] combined different deep learning features extracted from Caffe-ref [30], VGG-f [31], and VGG19 models with Bag of Features (BoF) and Local Binary Pattern (LBP) features. They then used ensemble classifiers to produce better classification accuracy than that of the softmax classification used by the deep learning models. However, the dataset used in the experiments in [22] was imbalanced; hence, the reported accuracy trend may not hold for other imbalanced datasets, which are highly probable in the case of medical image datasets. F1-score and AUC are better parameters for assessing the performance of classification algorithms on imbalanced datasets. Also, the authors of [22, 23] reported complex models based on pre-trained deep architectures with 7 or more layers and did not analyze the performance trend on other customized architectures that could have minimized the space and time constraints. It is difficult to design and test such relatively inflexible algorithms on a new dataset and deploy them in real-time applications. For example, it is difficult to change the design if one wishes to add a new functionality and re-train the algorithm. Furthermore, the reported handcrafted features in these studies lack direct relevance to the nuclei structural properties.

3 METHODOLOGY

A hybrid feature based flexible classification framework trained on the dataset from [32] is used to determine the suitability of combining different feature sets. A few pre-processing steps are performed to segment the cell nuclei from the background stroma. This step is necessary to extract the OL features. This feature set comprises relevant visual, shape, and texture features from each nucleus. DL features are extracted from the original input images. Both sets of features are then fused to produce a final feature set.
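A minimal sketch of this fusion step is given below, assuming the DL features and OL features have already been computed for every nucleus. The per-block standardization is an added assumption to keep the two feature ranges comparable, not necessarily part of the original pipeline.

```python
import numpy as np

def fuse_features(dl_features, ol_features):
    """Concatenate deep-learning and object-level features per nucleus.

    dl_features : (n_samples, d_dl) array from the CNN branch
    ol_features : (n_samples, d_ol) array of handcrafted descriptors
    Returns a (n_samples, d_dl + d_ol) fused feature matrix.
    """
    def zscore(x):
        # Standardize each block so neither dominates the classifier input.
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

    return np.concatenate([zscore(dl_features), zscore(ol_features)], axis=1)
```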
The final fused feature set is used as input by a Multi-Layer Perceptron (MLP) for classifying the cell nuclei into one of the four categories. The block diagram of the proposed architectural setup is shown in Fig. 2, and the entire flow is modeled in Algorithm 1. The various steps involved in the proposed methodology are elaborated in the following sub-sections (3.1)-(3.5).

Figure 2: Block diagram of the proposed OLConvNet. Raw training images of cell nuclei are passed through Branch 1 of the network for DL feature extraction and further classification using the fully connected (FC1) and softmax layers of the DL network (OUTPUT-1). OL features are extracted from segmented nuclei images after the segmentation pipeline and are classified in Branch 2. A switch between Branch 1 and Branch 2 decides which kind of output we want for our dataset (OUTPUT-1, OUTPUT-2, or both).

3.1 Segmentation

Cytologic and histologic images prevent the generalization of segmentation algorithms because of the inherent variability of the nuclei structures present in them. For this reason, determining which state of the art nuclei segmentation algorithm would work for our dataset was a lengthy problem. Therefore, we sought to develop an application-specific segmentation algorithm for OL feature extraction. Our dataset contains H&E (Hematoxylin and Eosin) stained RGB image blocks in which the nuclei regions are stained bright blue and the cell regions pink. The staining helped us to roughly extract the nucleus contour. Segmentation of an object then allowed the calculation of OL features such as the homogeneous color, texture, size, and shape of the segmented region. Firstly, we enhanced the blue intensity of the nuclei through contrast adjustment. For this purpose, blue channel intensities were mapped from their initial values towards 255. Similarly, Red and Green channel pixel values below a certain range were also tweaked towards a higher range. This technique of adjusting the intensity values in each channel to new values in an output image helped in highlighting poorly contrasted nuclei regions against cell cytoplasm and background noise. We assigned a higher value to blue intensity pixels relative to the red and green components because the blue-ratio is proven to be capable of highlighting nuclei regions in H&E stained histopathological images [33]. This step is followed by color normalization so that the intensity values follow a normal distribution, which also removes any noise/artefact that may have been introduced due to contrast enhancement. In the next step, we computed the binary image and calculated the convex hull of the labelled region having the highest number of pixels. The convex hull of the binary image ensured that the largest area containing most of the blue pixels is retained and that a defined boundary of the nucleus can be obtained for calculating OL features. In other words, perturbations due to the staining process may distort original nuclei structures, so obtaining the convex hull defines a smooth boundary around the nucleus. This further helps in the subsequent procedural steps of extracting OL features. The convex hull step is then followed by edge extraction of the convex hull. Lastly, we performed a scalar multiplication of the resultant image with the original image to obtain the final output of a segmented RGB nuclear image. The segmentation results helped in delineating the nuclei regions from the surrounding tissues.
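The sketch below follows the described sequence (blue-channel enhancement, binarization, convex hull of the largest blue region, masking the original image) using OpenCV and scikit-image. The exact intensity mappings, normalization, and threshold values used by the authors are not reproduced here; the values shown are illustrative placeholders, and the color normalization and hull edge extraction steps are omitted for brevity.

```python
import cv2
import numpy as np
from skimage.morphology import convex_hull_image

def segment_nucleus(rgb_patch, blue_thresh=120):
    """Rough nucleus segmentation following the described pipeline.
    The threshold and contrast values are illustrative assumptions."""
    img = rgb_patch.astype(np.float32)

    # 1. Emphasize the hematoxylin (blue) channel by stretching it to [0, 255].
    blue = cv2.normalize(img[..., 2], None, 0, 255, cv2.NORM_MINMAX)

    # 2. Binarize the enhanced blue channel.
    binary = (blue > blue_thresh).astype(np.uint8)

    # 3. Keep the largest connected component and take its convex hull.
    n_labels, labels = cv2.connectedComponents(binary)
    if n_labels > 1:
        sizes = [(labels == k).sum() for k in range(1, n_labels)]
        largest = labels == (1 + int(np.argmax(sizes)))
        hull = convex_hull_image(largest)
    else:
        hull = binary.astype(bool)

    # 4. Mask the original RGB patch with the hull to isolate the nucleus.
    return (rgb_patch * hull[..., None]).astype(rgb_patch.dtype)
```

The masked RGB output of this step is what the OL feature extraction stage operates on.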
Figure 3 shows the pipeline of segmentation. Some of the segmented class-wise nuclei examples are shown in Figure 4.

Figure 3: Segmentation pipeline of our network.

Algorithm 1 OLConvNet
Input: Training data set D_tr with m samples. Data f_i(X), where i = 1, ..., m, is an instance in the 3-dimensional image space X ∈