title | abstract | journal | date | authors | doi
---|---|---|---|---|---
A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients. | The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance. | BME frontiers | 2023-01-31T00:00:00 | [
"Lingyi Zhao",
"Muyinatu A. Lediju Bell"
] | 10.34133/2022/9780173 |
Lightweight ResGRU: a deep learning-based prediction of SARS-CoV-2 (COVID-19) and its severity classification using multimodal chest radiography images. | COVID-19 first emerged in Wuhan, China, in December 2019, and since then this deadly virus has infected 324 million people worldwide and caused 5.53 million deaths by January 2022. Because of the rapid spread of this pandemic, different countries faced shortages of resources, such as medical test kits and ventilators, as the number of cases increased uncontrollably. Therefore, developing a readily available, low-priced, and automated approach for COVID-19 identification is the need of the hour. The proposed study uses chest radiography images (CRIs), such as X-rays and computed tomography (CT) scans, to detect chest infections, as these modalities contain important information about chest infections. This research introduces a novel hybrid deep learning model named | Neural computing & applications | 2023-01-31T00:00:00 | [
"Mughees Ahmad",
"Usama Ijaz Bajwa",
"Yasar Mehmood",
"Muhammad Waqas Anwar"
] | 10.1007/s00521-023-08200-0
10.1016/S0140-6736(20)30183-5
10.1038/s41586-020-2008-3
10.1007/978-3-030-60188-1_2
10.1016/j.chaos.2020.110495
10.1016/j.chaos.2021.110713
10.1016/S1473-3099(20)30134-1
10.1016/j.bea.2021.100003
10.2217/fmb-2020-0098
10.1016/S0140-6736(20)30154-9
10.1148/ryct.2020200028
10.1148/radiol.2020201473
10.1148/ryct.2020200213
10.3389/fnins.2021.601109
10.1002/widm.1312
10.1145/3065386
10.1109/CVPR.2016.90
10.1051/matecconf/201927702001
10.1016/j.compbiomed.2020.103795
10.1016/j.imu.2020.100405
10.1016/j.ejrad.2020.109402
10.1007/s00330-021-07715-1
10.1016/j.eswa.2020.114054
10.1007/s11042-021-11388-9
10.1016/j.compbiomed.2020.103792
10.1016/j.imu.2020.100412
10.1016/j.asoc.2021.107160
10.1007/s10489-020-01888-w
10.1007/s42600-021-00151-6
10.1016/j.ijleo.2021.166405
10.1038/s41568-020-00327-9
10.1016/j.compmedimag.2019.05.001
10.1038/nature14539
10.1080/07391102.2020.1767212
10.1016/j.bspc.2021.102490
10.1016/j.patcog.2021.108255 |
COVID-19 lung infection segmentation from chest CT images based on CAPA-ResUNet. | The coronavirus disease 2019 (COVID-19) epidemic has had devastating effects on personal health around the world. Accurate segmentation of pulmonary infection regions, an early indicator of disease, is therefore important. To solve this problem, a deep learning model, namely, the content-aware pre-activated residual UNet (CAPA-ResUNet), was proposed for segmenting COVID-19 lesions from CT slices. In this network, the pre-activated residual block was used for down-sampling to solve the problems of complex foregrounds and large fluctuations of distribution in datasets during training and to avoid gradient disappearance. An area loss function based on the falsely segmented regions was proposed to address the fuzzy boundary of the lesion area. The model was evaluated on a public dataset (COVID-19 Lung CT Lesion Segmentation Challenge 2020) and its performance was compared with that of classical models. Our method gains an advantage over other models on multiple metrics: for the Dice coefficient, specificity (Spe), and intersection over union (IoU), CAPA-ResUNet obtained 0.775, 0.972, and 0.646, respectively. The Dice coefficient of our model was 2.51% higher than that of the content-aware residual UNet (CARes-UNet). The code is available at https://github.com/malu108/LungInfectionSeg. | International journal of imaging systems and technology | 2023-01-31T00:00:00 | [
"Lu Ma",
"Shuni Song",
"Liting Guo",
"Wenjun Tan",
"Lisheng Xu"
] | 10.1002/ima.22819 |
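The CAPA-ResUNet entry above reports Dice, specificity (Spe), and IoU scores. As a hedged illustration of how those segmentation metrics are defined (not the authors' code), they can be computed from a predicted and a ground-truth binary mask as follows:

```python
def confusion_counts(pred, target):
    """Count TP/FP/FN/TN over flattened binary (0/1) masks."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, target))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, target))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, target))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, target))
    return tp, fp, fn, tn

def dice(pred, target):
    """Dice coefficient: 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn, _ = confusion_counts(pred, target)
    return 2 * tp / (2 * tp + fp + fn)

def iou(pred, target):
    """Intersection over union: TP / (TP + FP + FN)."""
    tp, fp, fn, _ = confusion_counts(pred, target)
    return tp / (tp + fp + fn)

def specificity(pred, target):
    """Specificity: TN / (TN + FP)."""
    _, fp, _, tn = confusion_counts(pred, target)
    return tn / (tn + fp)
```

In practice these run over flattened 2-D lesion masks; the one-dimensional lists here are just a minimal stand-in.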
Artificial Intelligence in Paediatric Tuberculosis. | Tuberculosis (TB) continues to be a leading cause of death in children despite global efforts focused on early diagnosis and interventions to limit the spread of the disease. This challenge has been made more complex in the context of the coronavirus pandemic, which has disrupted the "End TB Strategy" and framework set out by the World Health Organization (WHO). Since the inception of artificial intelligence (AI) more than 60 years ago, the interest in AI has risen and more recently we have seen the emergence of multiple real-world applications, many of which relate to medical imaging. Nonetheless, real-world AI applications and clinical studies are limited in the niche area of paediatric imaging. This review article will focus on how AI, or more specifically deep learning, can be applied to TB diagnosis and management in children. We describe how deep learning can be utilised in chest imaging to provide computer-assisted diagnosis to augment workflow and screening efforts. We also review examples of recent AI applications for TB screening in resource constrained environments and we explore some of the challenges and the future directions of AI in paediatric TB. | Pediatric radiology | 2023-01-28T00:00:00 | [
"Jaishree Naidoo",
"Susan Cheng Shelmerdine",
"Carlos F Ugas-Charcape",
"Arhanjit Singh Sodhi"
] | 10.1007/s00247-023-05606-9
10.7754/Clin.Lab.2015.150509
10.21037/jtd-21-1342
10.1093/cid/ciac011
10.1155/2014/291841
10.1097/INF.0000000000000792
10.1136/adc.2004.062315
10.5588/ijtld.15.0201
10.5588/ijtld.18.0122
10.1007/s00247-017-3866-1
10.1007/s00247-020-04625-0
10.1164/rccm.202202-0259OC
10.1093/cid/ciab708
10.1038/s41598-021-03265-0
10.1007/s00330-020-07024-z
10.1155/2021/5359084
10.1016/S2589-7500(20)30221-1
10.1148/radiol.2017162326
10.1007/s00330-020-07219-4
10.1148/radiol.2021210063
10.1038/s41598-021-93967-2
10.3389/fmolb.2022.874475
10.1016/S2589-7500(21)00116-3
10.1038/s41598-019-51503-3
10.1038/s41746-020-0273-z
10.1093/cid/ciab639
10.1007/s00259-021-05432-x
10.3389/frai.2022.827299
10.1007/s00330-021-08365-z
10.1016/j.clinimag.2022.04.009
10.21037/qims-21-676
10.1148/radiol.2018181422
10.1038/s41591-018-0307-0
10.1371/journal.pone.0212094
10.1007/s00247-021-05146-0
10.1093/cid/ciy967
10.1371/journal.pone.0221339
10.5588/ijtld.17.0520
10.1038/s41746-021-00393-9
10.3389/fdata.2022.850383
10.1007/s00247-019-04593-0
10.1038/s41598-020-73831-5
10.1371/journal.pone.0206410
10.1097/INF.0000000000001872
10.1002/ppul.24230
10.1002/ppul.24500
10.1007/s00247-017-3895-9
10.1038/srep12215
10.1038/s41598-020-62148-y |
Towards precision medicine: Omics approach for COVID-19. | The coronavirus disease 2019 (COVID-19) pandemic had a devastating impact on human society. Beginning with genome surveillance of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the development of omics technologies brought a clearer understanding of the complex SARS-CoV-2 and COVID-19. Here, we reviewed how omics approaches, including genomics, proteomics, single-cell multi-omics, and clinical phenomics, play roles in answering biological and clinical questions about COVID-19. Large-scale sequencing and advanced analysis methods facilitate COVID-19 discovery, from virus evolution and severity risk prediction to potential treatment identification. Omics can point toward precise, globally coordinated prevention and medicine for the COVID-19 pandemic by harnessing big-data capabilities and refined phenotypes. Furthermore, decoding the evolutionary rules of SARS-CoV-2 with deep learning models is a promising route to forecasting new variants, obtaining more precise data, and predicting and preventing future pandemics in time. | Biosafety and health | 2023-01-24T00:00:00 | [
"Xiaoping Cen",
"Fengao Wang",
"Xinhe Huang",
"Dragomirka Jovic",
"Fred Dubee",
"Huanming Yang",
"Yixue Li"
] | 10.1016/j.bsheal.2023.01.002
10.1073/pnas.0408290102
10.1038/s41588-022-01033-y
10.1126/scitranslmed.abk3445
10.1126/science.abd7331
10.1126/science.abm1208
10.1016/j.gpb.2022.01.001
10.3389/fimmu.2021.622176
10.1038/s41588-021-00854-7
10.1038/s41421-021-00318-6
10.1056/nejmoa2020283
10.1038/s41586-021-03767-x
10.1038/s41588-021-00996-8
10.1038/s41588-021-00955-3
10.1038/s41588-021-00986-w
10.1038/s41588-022-01042-x
10.1038/s41588-021-01006-7
10.1038/s41586-022-04826-7
10.1038/s43587-021-00067-x
10.1016/j.cell.2021.01.004
10.1016/j.immuni.2020.10.008
10.1038/s41586-021-03493-4
10.1016/j.celrep.2021.110271
10.1038/s41467-021-27716-4
10.1038/s41467-021-24482-1
10.1016/j.cell.2022.01.012
10.1126/scitranslmed.abj7521
10.1038/s41586-020-2588-y
10.7554/eLife.62522
10.1084/jem.20210582
10.1016/j.immuni.2020.11.017
10.1016/j.cell.2020.10.037
10.1038/s41586-021-03570-8
10.1038/s42255-021-00425-4
10.1038/s41591-021-01329-2
10.1038/s41746-021-00399-3
10.1038/s41467-020-17971-2
10.1038/s41467-020-17280-8
10.1016/j.cell.2020.04.045
10.1038/s41591-020-0931-3
10.1038/s41551-020-00633-5
10.1016/j.xcrm.2022.100580
10.1080/10408363.2020.1851167
10.1002/mco2.90
10.1038/s41587-021-01131-y
10.1016/j.xinn.2022.100289
10.1055/s-0040-1712549
10.1093/cid/ciab754
10.1080/14760584.2021.1976153
10.1002/jmv.27524
10.1038/s41392-022-01105-9
10.1038/s41586-021-04188-6
10.1002/mco2.110
10.2807/1560-7917.ES.2021.26.24.2100509
10.46234/ccdcw2021.255
10.1016/j.gpb.2020.09.001
10.1093/nar/gkw1065
10.1093/ve/veab064
10.1126/science.abe3261
10.1126/science.abc0523
10.1126/science.abb9263
10.1038/s41467-022-31511-0
10.1038/s41467-020-18314-x
10.1126/scitranslmed.abn7979
10.1126/science.abp8337
10.1038/s41467-020-19345-0
10.1038/s41591-020-0997-y
10.1038/s41591-020-1000-7
10.1126/science.abq5358
10.1038/s41576-022-00483-8
10.1016/S1473-3099(21)00170-5
10.1016/S2468-2667(21)00055-4
10.1016/j.cell.2020.11.020
10.7554/eLife.65365
10.1038/s41576-021-00408-x
10.1038/s41586-021-04352-y
10.1038/s41586-021-03792-w
10.1089/omi.2021.0182
10.1021/acs.jproteome.1c00475
10.1039/d0cb00163e
10.1038/s41746-021-00431-6
10.1038/s42256-021-00307-0
10.1038/s42256-021-00377-0
10.1038/s41591-022-01843-x
10.1089/jwh.2021.0411
10.1001/jamanetworkopen.2021.47053
10.1016/S0140-6736(22)00941-2
10.1038/s41591-022-01840-0
10.1001/jamapsychiatry.2022.2640
10.1016/j.cell.2022.01.014
10.1016/j.immuni.2022.01.017
10.1126/sciimmunol.abk1741
10.1038/s41591-022-01837-9
10.3389/fimmu.2022.838132
10.1126/sciimmunol.abm7996
10.1136/bmj.o407
10.1038/s41586-020-2355-0
10.1016/j.jval.2021.10.007
10.1038/d41586-022-03181-x |
DMFL_Net: A Federated Learning-Based Framework for the Classification of COVID-19 from Multiple Chest Diseases Using X-rays. | Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept, since it allows clients to train models together without sharing their source data with anybody else. Few studies, however, focus on improving the model's accuracy and stability, whereas most existing FL-based COVID-19 detection techniques aim to maximize secondary objectives such as latency, energy usage, and privacy. In this work, we design a novel model named the decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis to distinguish COVID-19 from four distinct chest disorders, including LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, constructs the model using DenseNet-169, and produces accurate predictions from information that is kept secure and only released to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, i.e., VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. Additionally, the DMFL_Net model was also compared with the default FL configurations.
The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45% and outperforms other approaches in classifying COVID-19 from four chest diseases and successfully protects the privacy of the data among diverse clients. | Sensors (Basel, Switzerland) | 2023-01-22T00:00:00 | [
"Hassaan Malik",
"Ahmad Naeem",
"Rizwan Ali Naqvi",
"Woong-Kee Loh"
] | 10.3390/s23020743
10.3390/electronics11172714
10.1109/OJCS.2022.3206407
10.3390/life12070958
10.1002/int.22777
10.1109/TCBB.2022.3184319
10.1109/ACCESS.2020.3037474
10.1038/s41746-020-00323-1
10.1109/bigdata50022.2020.9377873
10.3389/fpubh.2022.892499
10.1109/JIOT.2021.3056185
10.1109/JIOT.2021.3120998
10.1145/3501296
10.1016/j.asoc.2021.107330
10.1038/s41591-021-01506-3
10.2196/24207
10.1007/978-3-030-11723-8_9
10.1109/JIOT.2019.2956615
10.1109/isbi.2019.8759317
10.1145/3528580.3532845
10.1109/JBHI.2022.3143576
10.1109/icic53490.2021.9693006
10.1007/s00530-021-00878-3
10.1371/journal.pone.0266462
10.18280/ijdne.170106
10.1007/978-981-16-7618-5_13
10.14419/ijet.v9i3.30655
10.1007/s11042-022-13843-7
10.1007/s10796-022-10307-z
10.3233/shti220697
10.1109/JIOT.2022.3144450
10.1016/j.compbiomed.2022.105233
10.1109/TMM.2018.2889934
10.1109/JIOT.2019.2920987
10.1109/ACCESS.2018.2885997
10.1016/j.asoc.2020.106859
10.1016/j.knosys.2021.106775
10.32604/cmc.2022.020344
10.1016/j.dib.2020.106520
10.1111/exsy.13173
10.1109/ACCESS.2021.3102399
10.1145/3431804
10.32604/cmc.2021.013191
10.1007/s11042-022-13499-3
10.3390/diagnostics11091735
10.1001/jamanetworkopen.2019.1095
10.1016/j.cell.2018.02.010
10.2214/ajr.174.1.1740071
10.1016/j.fcij.2017.12.001
10.1109/ACCESS.2020.3031384
10.1038/s41598-020-76282-0
10.26599/TST.2021.9010026
10.1109/ubmk52708.2021.9558913
10.1109/JBHI.2020.3005160
10.1016/j.bspc.2021.102588
10.1016/j.patrec.2020.09.010
10.1111/exsy.12759
10.1146/annurev-bioeng-110220-012203
10.2174/1573405617666210414101941
10.1007/s42600-021-00135-6
10.3390/s22155652
10.1016/j.jpha.2021.12.006
10.1007/s00530-021-00826-1
10.3390/jpm12020275
10.1002/wcms.1597
10.1109/ACCESS.2020.3001507
10.1007/s40747-022-00866-8 |
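The DMFL_Net entry above rests on federated learning: hospitals train locally and only model parameters are aggregated. The core aggregation step in most FL schemes is weighted parameter averaging (FedAvg); the sketch below illustrates only that step, not DMFL_Net's decision-making logic, and treats each client's model as a flat parameter vector for simplicity:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average per-client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    avg = [0.0] * n_params
    for weights, n in zip(client_weights, client_sizes):
        share = n / total  # this client's contribution weight
        for i, value in enumerate(weights):
            avg[i] += value * share
    return avg
```

A server would call this each round on the parameters returned by the participating hospitals, then broadcast the averaged model back to them.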
Novel Comparative Study for the Detection of COVID-19 Using CT Scan and Chest X-ray Images. | The number of coronavirus disease (COVID-19) cases is constantly rising as the pandemic continues, with new variants constantly emerging. Therefore, to prevent the virus from spreading, coronavirus cases must be diagnosed as soon as possible. The COVID-19 pandemic has had a devastating impact on people's health and the economy worldwide. For COVID-19 detection, reverse transcription-polymerase chain reaction testing is the benchmark. However, this test takes a long time and necessitates a lot of laboratory resources. A new trend is emerging to address these limitations: the use of machine learning and deep learning techniques for automatic analysis, as these can attain high diagnostic performance, especially with medical imaging techniques. However, a key question remains: whether a chest computed tomography scan or a chest X-ray is better suited for COVID-19 detection. A total of 17,599 images were examined in this work to develop the models used to classify the occurrence of COVID-19 infection, and four different classifiers were studied: a convolutional neural network (the proposed architecture, named SCovNet, and ResNet18), a support vector machine, and logistic regression. Out of all four models, the proposed SCovNet architecture reached the best performance, with an accuracy of almost 99% and 98% on chest computed tomography scan images and chest X-ray images, respectively. | International journal of environmental research and public health | 2023-01-22T00:00:00 | [
"Ahatsham Hayat",
"Preety Baglat",
"Fábio Mendonça",
"Sheikh Shanawaz Mostafa",
"Fernando Morgado-Dias"
] | 10.3390/ijerph20021268
10.1016/j.jds.2020.02.002
10.1016/j.ajem.2020.03.036
10.1007/s11042-021-10714-5
10.1093/aje/kwab093
10.1038/s41597-021-00900-3
10.1038/s41598-020-76282-0
10.1007/s11547-019-00990-5
10.1007/s11547-020-01135-9
10.1007/s11547-020-01277-w
10.1007/s12559-020-09773-x
10.1109/TNNLS.2018.2790388
10.1016/j.jksuci.2020.03.013
10.1016/j.chaos.2020.110190
10.1016/S2589-7500(21)00039-X
10.1007/s00500-020-05424-3
10.1016/j.bbe.2021.05.013
10.1155/2021/6658058
10.1016/j.chaos.2020.109944
10.3389/frai.2021.694875
10.1016/j.cmpb.2020.105581
10.1101/2020.03.26.20044610
10.3390/app11083414
10.3390/healthcare10020343
10.17632/8h65ywd2jr.3
10.1155/2021/5587188
10.1038/s41598-021-86735-9
10.1109/72.788646
10.1088/1742-6596/1748/4/042054
10.1007/s11263-019-01228-7 |
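The accuracy figures quoted above come from standard binary classification metrics. As a minimal, framework-free sketch (illustrative only; the paper's evaluation pipeline is not shown here), accuracy, precision, recall, and F1 follow directly from the confusion counts of 0/1 predictions against 0/1 labels:

```python
def binary_metrics(pred, target):
    """Return (accuracy, precision, recall, F1) for binary 0/1 labels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, target))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, target))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, target))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, target))
    accuracy = (tp + tn) / len(pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

The guard clauses avoid division by zero when a classifier never predicts (or the data never contains) the positive class.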
Research on the Application of Artificial Intelligence in Public Health Management: Leveraging Artificial Intelligence to Improve COVID-19 CT Image Diagnosis. | Since the start of 2020, the outbreak of coronavirus disease (COVID-19) has been a global public health emergency, and it has caused unprecedented economic and social disaster. In order to improve the diagnostic efficiency for COVID-19 patients, a number of researchers have conducted extensive studies on applying artificial intelligence techniques to the analysis of COVID-19-related medical images. The automatic segmentation of lesions from computed tomography (CT) images using deep learning provides an important basis for the quantification and diagnosis of COVID-19 cases. For a deep learning-based CT diagnostic method, a set of accurate pixel-level labels is essential for the training process of a model. However, the translucent ground-glass area of the lesion usually leads to mislabeling during manual labeling, which weakens the accuracy of the model. In this work, we propose a method for correcting rough labels; that is, hierarchizing these rough labels into precise ones by analyzing the pixel distributions of the infected and normal areas in the lung. The proposed method corrects the incorrectly labeled pixels and enables the deep learning model to learn the degree of infection of each infected pixel, and on this basis an aiding system (named DLShelper) for COVID-19 CT image diagnosis using the hierarchical labels is also proposed. The DLShelper targets lesion segmentation from CT images, as well as severity grading, and assists medical staff in efficient diagnosis by providing rich auxiliary diagnostic information (including the severity grade, the proportions of the lesion, and a visualization of the lesion area).
A comprehensive experiment based on a public COVID-19 CT image dataset is also conducted, and the experimental results show that the DLShelper significantly improves the accuracy of segmentation for the lesion areas and also achieves a promising accuracy for the severity grading task. | International journal of environmental research and public health | 2023-01-22T00:00:00 | [
"Tiancheng He",
"Hong Liu",
"Zhihao Zhang",
"Chao Li",
"Youmei Zhou"
] | 10.3390/ijerph20021158
10.3390/diagnostics10110901
10.1007/s11548-021-02466-2
10.1148/radiol.2020200642
10.1016/j.chest.2020.06.025
10.1109/RBME.2020.2987975
10.1016/j.cell.2020.04.045
10.1109/TIP.2021.3058783
10.1609/aaai.v35i6.16617
10.1109/TMI.2020.2996645
10.1016/j.media.2018.11.010
10.1109/TMI.2020.3000314
10.1007/s00521-022-07709-0
10.1109/TMI.2017.2775636
10.1016/j.jvcir.2016.11.019
10.1186/s12938-015-0014-8
10.1109/TMI.2012.2196285
10.3389/fpubh.2022.1015876
10.1002/mp.12273
10.1016/j.cmpb.2019.06.005
10.1007/s10278-019-00254-8
10.1007/978-3-319-24574-4_28
10.1109/TPAMI.2016.2644615
10.1613/jair.953 |
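The DLShelper entry above hierarchizes rough binary lesion labels into graded ones by analyzing pixel intensities. The thresholds and grade names in the sketch below are hypothetical placeholders chosen for illustration; this conveys only the idea of intensity-based label refinement, not the published method:

```python
def hierarchize_labels(intensities, rough_mask, thresholds=(0.3, 0.6)):
    """Refine a rough binary lesion mask into graded labels
    using normalized pixel intensity (thresholds are illustrative)."""
    graded = []
    for value, in_lesion in zip(intensities, rough_mask):
        if not in_lesion:
            graded.append(0)   # background / normal lung
        elif value < thresholds[0]:
            graded.append(1)   # faint ground-glass opacity
        elif value < thresholds[1]:
            graded.append(2)   # moderate opacity
        else:
            graded.append(3)   # dense lesion
    return graded
```

With graded labels like these, a segmentation network can be trained to predict an infection degree per pixel rather than a hard lesion/non-lesion decision.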
COVID-19 Detection Mechanism in Vehicles Using a Deep Extreme Machine Learning Approach. | COVID-19 is a rapidly spreading pandemic, and early detection is important to halting the spread of infection. Recently, the outbreak of this virus has severely affected people around the world, with increasing death rates. The increased death rates are due to the virus's spread among people, mainly through physical interactions. Therefore, it is very important to control the spread of the virus and detect people's symptoms during the initial stages so that proper preventive measures can be taken in good time. In response to COVID-19, revolutionary technologies such as deep learning, machine learning, image processing, and medical imaging, including chest radiography (CXR) and computed tomography (CT), have been developed. Currently, the coronavirus is identified via an RT-PCR test. Alternative solutions are required due to the test's lengthy turnaround time and the large number of false-negative results. To prevent the spread of the virus, we propose a vehicle-based COVID-19 detection system to reveal the related symptoms of a person in a vehicle. Moreover, deep extreme machine learning is applied. The proposed system uses headache, flu, fever, cough, chest pain, shortness of breath, tiredness, nasal congestion, diarrhea, breathing difficulty, and pneumonia as parameters to reveal the presence of COVID-19 in a person. Our proposed approach will make it easier for governments to perform COVID-19 tests in cities in a timely manner. Due to the ambiguous nature of symptoms in humans, we utilize fuzzy modeling for simulation. The suggested COVID-19 detection model achieved an accuracy of more than 90%. | Diagnostics (Basel, Switzerland) | 2023-01-22T00:00:00 | [
"Areej Fatima",
"Tariq Shahzad",
"Sagheer Abbas",
"Abdur Rehman",
"Yousaf Saeed",
"Meshal Alharbi",
"Muhammad Adnan Khan",
"Khmaies Ouahada"
] | 10.3390/diagnostics13020270
10.1056/NEJMoa2001017
10.46234/ccdcw2020.017
10.1016/S0140-6736(20)30154-9
10.1016/S0140-6736(20)30183-5
10.1017/ice.2020.61
10.9781/ijimai.2020.02.002
10.3390/jcm9030674
10.1136/jim-2020-001491
10.3390/diagnostics12040846
10.1007/s10140-021-01937-y
10.3390/diagnostics12071617
10.9781/ijimai.2018.04.003
10.1198/004017005000000058
10.3978/j.issn.2072-1439.2015.04.61
10.1088/1742-6596/892/1/012016
10.15837/ijccc.2010.3.2481
10.1016/j.artmed.2016.12.003
10.1080/00401706.1980.10486139
10.3390/app12094493
10.1007/s00330-021-07715-1
10.1038/s41598-020-76550-z
10.1109/ACCESS.2020.2976452
10.3233/AIS-200554
10.32604/cmc.2020.011155
10.1007/s13042-011-0019-y |
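The vehicle-based system above handles ambiguous symptoms with fuzzy modeling. A triangular membership function is a common building block for that; the sketch below (with hypothetical breakpoints, e.g. for a "fever" set over body temperature in °C) shows how a crisp reading maps to a fuzzy membership degree — it is not the paper's actual rule base:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c],
    rising linearly a -> b, falling linearly b -> c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

For example, with a hypothetical "fever" set (a=37, b=39, c=41), a reading of 38 °C belongs to the set with degree 0.5; a fuzzy inference system would combine such degrees across all symptom inputs before defuzzifying to a COVID-19 risk score.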
Automated Pneumonia Based Lung Diseases Classification with Robust Technique Based on a Customized Deep Learning Approach. | Many people have been affected by infectious lung diseases (ILD). With the outbreak of the COVID-19 disease in the last few years, many people have waited for weeks to recover in the intensive care wards of hospitals. Therefore, early diagnosis of ILD is of great importance to reduce the occupancy rates of health institutions and the treatment time of patients. Many artificial intelligence-based studies have been carried out on detecting and classifying diseases from medical images, with the most important goal being to increase classification performance and model reliability. In this approach, a powerful algorithm based on a new customized deep learning model (the ACL model), which trains attention and LSTM modules synchronously with CNN models, was proposed to classify healthy, COVID-19, and pneumonia cases. The important stains and traces in the chest X-ray (CX-R) images were emphasized with the marker-controlled watershed (MCW) segmentation algorithm. The ACL model was trained for different training-test ratios (90-10%, 80-20%, and 70-30%), for which the accuracy scores were 100%, 96%, and 96%, respectively. The best performance results were obtained in comparison with existing methods. In addition, the contribution of the strategies utilized in the proposed model to classification performance was analyzed in detail. Deep learning-based applications can serve as a useful decision support tool for physicians in the early diagnosis of ILD. However, for the reliability of these applications, it is necessary to undertake verification with many datasets. | Diagnostics (Basel, Switzerland) | 2023-01-22T00:00:00 | [
"Yaman Akbulut"
] | 10.3390/diagnostics13020260
10.1038/s41586-020-2008-3
10.1016/S0140-6736(20)30211-7
10.1148/radiol.2020200463
10.1148/radiol.2020200432
10.1007/s10044-021-00984-y
10.1038/s41598-020-76550-z
10.1016/j.chemolab.2020.104054
10.1016/j.aiia.2020.10.002
10.1001/jama.2018.19323
10.1002/emmm.201100182
10.1016/j.aiia.2020.09.002
10.1016/j.patrec.2020.09.010
10.33889/IJMEMS.2020.5.4.052
10.1007/s13246-020-00865-4
10.1109/ACCESS.2020.3016780
10.1016/j.compbiomed.2020.103792
10.1016/j.chaos.2020.110071
10.3892/etm.2020.8797
10.1016/j.asoc.2021.107160
10.1016/j.eswa.2020.114054
10.1016/j.asoc.2022.108610
10.1007/s00354-021-00152-0
10.14358/PERS.70.3.351
10.1109/JSTARS.2018.2830410
10.14569/IJACSA.2017.080853
10.3390/jpm12010055
10.1016/j.bspc.2022.103625
10.1016/j.bspc.2020.102194
10.1016/j.bbe.2021.07.004
10.3390/jpm11121276
10.1016/j.apacoust.2021.108260
10.1016/j.mehy.2020.109761
10.1016/j.asoc.2020.106580
10.1007/s10489-020-01888-w
10.1016/j.compbiomed.2020.103805 |
Deep Learning for Detecting COVID-19 Using Medical Images. | The global spread of COVID-19 (also known as SARS-CoV-2) is a major international public health crisis [...]. | Bioengineering (Basel, Switzerland) | 2023-01-22T00:00:00 | [
"Jia Liu",
"Jing Qi",
"Wei Chen",
"Yi Wu",
"Yongjian Nian"
] | 10.3390/bioengineering10010019
10.3390/bioengineering8070098
10.3390/bioengineering8040049
10.1016/j.media.2020.101794
10.1007/s00500-020-05424-3
10.1038/s41598-020-76550-z
10.1016/j.ins.2020.09.041
10.1016/j.neucom.2022.01.055
10.1109/TNNLS.2021.3114747
10.1016/j.media.2021.102299
10.1109/TMI.2020.2993291
10.1016/j.compbiomed.2022.105233
10.1109/TMI.2020.3040950
10.1016/j.compbiomed.2022.105732
10.1109/TNNLS.2022.3201198
10.1016/j.irbm.2020.05.003
10.1109/JBHI.2020.3023246
10.1109/JBHI.2020.3030853
10.1109/TCYB.2020.3042837
10.1109/TMI.2020.2995508
10.1109/TMI.2020.2994908
10.1016/j.media.2021.102105
10.1016/j.media.2020.101913
10.1109/TMI.2020.2996256
10.1109/TMI.2020.2995965
10.3390/bioengineering8020026 |
Classification of Pulmonary Damage Stages Caused by COVID-19 Disease from CT Scans via Transfer Learning. | The COVID-19 pandemic has produced social and economic changes that are still affecting our lives. The coronavirus is proinflammatory, replicates quickly, and spreads rapidly. The most affected organ is the lung, and the disease can progress very rapidly from the early (mild) phase to the moderate and even severe stages, where the percentage of recovered patients is very low. Therefore, a fast and automatic method to detect the disease stage for patients who underwent a computed tomography investigation can improve the clinical protocol. Transfer learning is used to tackle this issue, mainly by decreasing the computational time. The dataset is composed of images from public databases from 118 patients and new data from 55 patients collected during the COVID-19 spread in Romania in the spring of 2020. Although disease detection from computed tomography scans has been studied using deep learning algorithms, to our knowledge there are no studies on the multiclass classification of the images into pulmonary damage stages. Such a classification could help physicians automatically establish the disease severity and decide on the proper treatment for patients, and on any special surveillance, if needed. An evaluation study was completed by considering six different pre-trained CNNs. The results are encouraging, assuring an accuracy of around 87%. The clinical impact remains substantial, even if the spread and severity of the disease are currently diminished. | Bioengineering (Basel, Switzerland) | 2023-01-22T00:00:00 | [
"Irina Andra Tache",
"Dimitrios Glotsos",
"Silviu Marcel Stanciu"
] | 10.3390/bioengineering10010006
10.1016/j.jacr.2020.02.008
10.1371/journal.pone.0235844
10.1148/ryct.2020200028
10.1148/radiol.2020200230
10.1007/s10916-020-01562-1
10.1016/j.patcog.2022.108538
10.1016/j.chaos.2020.109944
10.1142/S0218348X20501145
10.1007/s10489-020-01826-w
10.1016/j.bspc.2021.102588
10.1007/s10916-021-01707-w
10.1016/j.bbe.2021.04.006
10.1016/j.chaos.2020.110153
10.3390/e22050517
10.1016/j.bspc.2022.104250
10.3389/fonc.2020.01560
10.1056/NEJMoa2001316
10.1186/s43055-020-00236-9
10.1148/radiol.2020200463
10.2214/AJR.20.22976
10.1016/j.jormas.2019.06.002
10.1146/annurev-bioeng-071516-044442
10.1016/j.bspc.2021.103326
10.1186/s40537-019-0235-y
10.1186/s40537-021-00428-8 |
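Transfer learning, as used in the entry above, typically freezes a pre-trained backbone and trains only a small classification head on the features it extracts. The perceptron head below is a deliberately minimal stand-in for that final stage (illustrative only; the paper fine-tunes pre-trained CNNs, not a perceptron):

```python
def train_head(features, labels, lr=0.1, epochs=100):
    """Train a linear classification head on frozen backbone features
    using the simple perceptron rule; backbone weights are never touched."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +1 or -1 otherwise
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    """Classify a feature vector with the trained linear head."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The key design point carries over to the real setting: only the head's parameters are updated, which is what keeps the computational cost of transfer learning low.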
A Survey on Deep Learning in COVID-19 Diagnosis. | According to World Health Organization statistics, as of 25 October 2022, there have been 625,248,843 confirmed cases of COVID-19, including 65,622,281 deaths worldwide. The spread and severity of COVID-19 are alarming, and the economy and life of countries worldwide have been greatly affected. Rapid and accurate diagnosis of COVID-19 directly affects the spread of the virus and the degree of harm. Currently, the classification of chest X-ray or CT images based on artificial intelligence is an important method for COVID-19 diagnosis; it can assist doctors in making judgments and reduce the misdiagnosis rate. The convolutional neural network (CNN) is very popular in computer vision applications, such as biological image segmentation, traffic sign recognition, and face recognition, and is one of the most widely used machine learning methods. This paper introduces the latest deep learning methods and techniques for diagnosing COVID-19 from chest X-ray or CT images based on the convolutional neural network. It reviews CNN techniques at various stages, such as rectified linear units, batch normalization, data augmentation, and dropout. Several well-performing network architectures are explained in detail, such as AlexNet, ResNet, DenseNet, VGG, and GoogLeNet. We analyzed and discussed the existing CNN-based automatic COVID-19 diagnosis systems in terms of sensitivity, accuracy, precision, specificity, and F1 score; the systems use chest X-ray or CT images as datasets. Overall, CNNs have essential value in COVID-19 diagnosis, and the reviewed systems all perform well in the existing experiments. With expanded datasets, GPU acceleration, data preprocessing techniques, and a broader range of medical image types, the performance of CNNs can be further improved. We hope this paper contributes to future research. | Journal of imaging | 2023-01-21T00:00:00 | [
"Xue Han",
"Zuojin Hu",
"Shuihua Wang",
"Yudong Zhang"
] | 10.3390/jimaging9010001
10.1016/j.procbio.2020.08.016
10.1056/NEJMoa2002032
10.1016/j.asoc.2020.106580
10.1016/j.bios.2020.112349
10.1016/j.bios.2020.112437
10.1021/acsnano.0c02439
10.1016/j.cartre.2020.100011
10.1148/radiol.2020201237
10.1148/radiol.2020200343
10.1016/S0140-6736(20)30154-9
10.1186/s12938-020-00831-x
10.1016/j.neucom.2020.05.078
10.1007/s10462-021-09985-z
10.1016/j.aej.2021.07.007
10.1016/j.eswa.2017.08.006
10.1186/s41747-018-0061-6
10.1142/S0129065718500582
10.1049/cit2.12042
10.3348/kjr.2017.18.4.570
10.1038/s41746-022-00592-y
10.20517/ais.2021.15
10.1049/cit2.12060
10.1038/nature14539
10.1049/cit2.12059
10.1002/ett.4080
10.1016/j.job.2022.03.003
10.1016/j.bspc.2021.103165
10.1109/TIP.2005.852470
10.1016/j.neunet.2012.02.023
10.1007/s00521-021-06762-5
10.1134/S1054661822020110
10.1145/3507902
10.3390/su14031447
10.1155/2022/1830010
10.1016/j.media.2021.102311
10.1016/j.compbiomed.2022.105244
10.1007/s42600-020-00120-5
10.1016/j.displa.2022.102150
10.1259/0007-1285-46-552-1016
10.1007/s11604-020-01010-7
10.1016/j.crad.2020.04.001
10.1148/radiol.2020200463
10.1148/radiol.2020201160
10.1016/j.jacr.2018.09.012
10.21037/atm.2017.07.20
10.1177/0846537120924606
10.1109/TMI.2020.2993291
10.3390/jimaging8010002
10.1109/TIP.2017.2713099
10.1016/j.patcog.2017.10.013
10.1007/s13369-020-04758-2
10.1145/3065386
10.1016/j.neunet.2015.07.007
10.1016/j.cmpb.2019.05.004
10.1016/j.knosys.2020.106396
10.1016/j.swevo.2021.100863
10.3390/app12178643
10.1016/j.powtec.2022.117409
10.1016/j.ymssp.2017.06.022
10.32604/cmc.2022.020140
10.3389/fpubh.2021.726144
10.1016/j.patrec.2021.02.005
10.1016/j.patrec.2020.04.018
10.1016/j.jksuci.2021.05.001
10.1016/j.neucom.2020.06.117
10.1111/nph.16830
10.1016/j.neucom.2018.03.080
10.1007/s12145-019-00383-2
10.1155/2021/6633755
10.1109/JSEN.2020.3025855
10.1016/j.jksuci.2021.07.005
10.1186/s40537-019-0197-0
10.1016/j.compbiomed.2021.104375
10.1016/j.bspc.2021.103326
10.1364/OL.390026
10.1016/j.inffus.2021.07.001
10.1364/BOE.10.006145
10.1097/JU.0000000000000852.020
10.3389/frobt.2019.00144
10.1007/s11042-017-5243-3
10.1109/TCSVT.2019.2935128
10.1167/16.12.326
10.1109/ACCESS.2017.2696121
10.1109/5.726791
10.1109/72.279181
10.1186/s40537-016-0043-6
10.1109/TKDE.2009.191
10.1023/A:1007379606734
10.1613/jair.1872
10.1016/S0378-3758(00)00115-4
10.21037/jtd-21-747
10.3390/sym14071310
10.1016/j.asoc.2020.106912
10.1007/s11390-020-0679-8
10.1016/j.eng.2020.04.010
10.1016/j.ejrad.2020.109041
10.1016/j.bspc.2021.102588
10.1007/s00521-020-05437-x
10.1371/journal.pone.0259179
10.1007/s42979-021-00782-7
10.1109/TCYB.2020.3042837
10.1007/s10140-020-01886-y
10.1007/s13755-021-00140-0
10.1109/JSEN.2021.3062442
10.3844/jcssp.2020.620.625
10.1007/s00138-020-01128-8
10.1038/s41598-020-74164-z
10.1016/j.bspc.2021.102987
10.1109/TLA.2021.9451239
10.1155/2021/8829829
10.1007/s10044-021-00984-y
10.1109/ACCESS.2020.3010287
10.1007/s10489-020-02055-x
10.1088/1361-6501/ac8ca4
10.1016/j.bspc.2021.102814
10.1101/2022.03.13.22272311
10.32628/IJSRST207614
10.1016/j.ijmedinf.2020.104284 |
Pandemic disease detection through wireless communication using infrared image based on deep learning. | Rapid diagnosis of diseases such as COVID-19 is a significant issue. The routine virus test is the reverse transcriptase-polymerase chain reaction (RT-PCR). However, such a test takes longer to complete because it follows a serial testing method, and there is a high chance of a false-negative ratio (FNR). Moreover, RT-PCR test kits are often in short supply. Therefore, alternative procedures for a quick and accurate diagnosis of patients are urgently needed to deal with these pandemics. Infrared imaging is self-sufficient for detecting these diseases by measuring temperature at the initial stage. CT scans and other pathological tests are valuable aspects of evaluating a patient with a suspected pandemic infection. However, a patient's radiological findings may not be identified initially. Therefore, we have included an Artificial Intelligence (AI) algorithm-based Machine Intelligence (MI) system in this proposal to combine CT scan findings with all other tests, symptoms, and history to quickly diagnose a patient with positive symptoms of current and future pandemic diseases. Initially, the system collects information on the patient's facial regions with an infrared camera to measure temperature, keeps it as a record, and completes further actions. We divided the face into eight classes and twelve regions for temperature measurement. A database named patient-info-mask is maintained. While collecting sample data, we incorporate a wireless network using a cloudlet server to make processing more accessible with minimal infrastructure. The system uses deep learning approaches. We propose convolutional neural networks (CNNs) to cross-verify the collected data. For better results, we incorporated tenfold cross-verification into the synthesis method. As a result, our new way of estimating became more accurate and efficient. 
We achieved 3.29% greater accuracy by incorporating the "decision tree level synthesis method" and the "ten-folded-validation method", which proves the robustness of our proposed method. | Mathematical biosciences and engineering : MBE | 2023-01-19T00:00:00 | [
"Mohammed Alhameed",
"Fathe Jeribi",
"Bushra Mohamed Elamin Elnaim",
"Mohammad Alamgir Hossain",
"Mohammed Eltahir Abdelhag"
] | 10.3934/mbe.2023050 |
Automated grading of chest x-ray images for viral pneumonia with convolutional neural networks ensemble and region of interest localization. | Following its initial identification on December 31, 2019, COVID-19 quickly spread around the world as a pandemic, claiming more than six million lives. An early diagnosis with appropriate intervention can help prevent deaths and serious illness, as the distinguishing symptoms that set COVID-19 apart from pneumonia and influenza frequently do not show up until after the patient has already suffered significant damage. A chest X-ray (CXR), one of the most widely used imaging modalities for detection, offers a non-invasive method of detection. CXR image analysis can also reveal additional disorders, such as pneumonia, which show up as anomalies in the lungs. These CXRs can thus be used for automated grading, aiding doctors in making a better diagnosis. In order to classify a CXR image as Negative for Pneumonia, Typical, Indeterminate, or Atypical, we used the publicly available CXR image competition dataset SIIM-FISABIO-RSNA COVID-19 from Kaggle. The suggested architecture employed an ensemble of EfficientNetV2-L for classification, which was trained via transfer learning from the initialised weights of ImageNet21K on various subsets of data (code for the proposed methodology is available at: https://github.com/asadkhan1221/siim-covid19.git). To identify and localise opacities, an ensemble of YOLO models was combined using Weighted Boxes Fusion (WBF). Significant generalisability gains were made possible by the addition of classification auxiliary heads to the CNN backbone. The suggested method was improved further by utilising test-time augmentation for both classifiers and localisers. 
The mean average precision results show that the proposed deep learning model achieves 0.617 and 0.609 on the public and private sets, respectively, which are comparable to other techniques for the Kaggle dataset. | PloS one | 2023-01-18T00:00:00 | [
"Asad Khan",
"Muhammad Usman Akram",
"Sajid Nazir"
] | 10.1371/journal.pone.0280352
10.1007/s13246-020-00865-4
10.1186/s40537-020-00392-9
10.1016/j.compbiomed.2021.104771
10.1038/s41598-020-76550-z
10.1007/s10489-020-01888-w
10.1111/exsy.12759
10.1007/s10462-020-09825-6
10.1007/s13244-018-0639-9
10.1016/j.bspc.2021.102764
10.1016/j.asoc.2020.106691
10.1016/j.compbiomed.2020.103792
10.1007/s13755-021-00146-8
10.1007/s00521-020-05636-6
10.1109/ICCV.2017.74
10.1016/j.compbiomed.2021.104356
10.1016/j.compbiomed.2021.104306
10.1016/j.compbiomed.2021.104304
10.1007/978-3-030-88163-4_33
10.1016/j.neucom.2020.07.144
10.1007/s10044-021-00984-y
10.1080/0952813X.2021.1908431
10.1016/j.bspc.2022.103677
10.1080/07391102.2020.1767212
10.1016/j.compbiomed.2021.104348
10.1016/j.compbiomed.2022.105604
10.1016/j.compbiomed.2021.104375
10.1002/ima.22627
10.3390/app11062884
10.1016/j.ijmedinf.2020.104284
10.1016/j.eswa.2020.114054
10.1049/ipr2.12153
10.1016/j.compbiomed.2021.105002
10.1016/j.media.2021.102299
10.1155/2021/8890226
10.1016/j.media.2021.102046
10.1136/bmjqs-2018-008370 |
Carotid Vessel-Wall-Volume Ultrasound Measurement via a UNet++ Ensemble Algorithm Trained on Small Data Sets. | Vessel wall volume (VWV) is a 3-D ultrasound measurement for the assessment of therapy in patients with carotid atherosclerosis. Deep learning can be used to segment the media-adventitia boundary (MAB) and lumen-intima boundary (LIB) and to quantify VWV automatically; however, it typically requires large training data sets with expert manual segmentation, which are difficult to obtain. In this study, a UNet++ ensemble approach was developed for automated VWV measurement, trained on five small data sets (n = 30 participants) and tested on 100 participants with clinically diagnosed coronary artery disease enrolled in a multicenter CAIN trial. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), Pearson correlation coefficient (r), Bland-Altman plots and coefficient of variation (CoV) were used to evaluate algorithm segmentation accuracy, agreement and reproducibility. The UNet++ ensemble yielded DSCs of 91.07%-91.56% and 87.53%-89.44% and ASSDs of 0.10-0.11 mm and 0.33-0.39 mm for the MAB and LIB, respectively; the algorithm VWV measurements were correlated (r = 0.763-0.795, p < 0.001) with manual segmentations, and the CoV for VWV was 8.89%. In addition, the UNet++ ensemble trained on 30 participants achieved a performance similar to that of U-Net and Voxel-FCN trained on 150 participants. These results suggest that our approach could provide accurate and reproducible carotid VWV measurements using relatively small training data sets, supporting deep learning applications for monitoring atherosclerosis progression in research and clinical trials. | Ultrasound in medicine & biology | 2023-01-16T00:00:00 | [
"Ran Zhou",
"Fumin Guo",
"M Reza Azarpazhooh",
"J David Spence",
"Haitao Gan",
"Mingyue Ding",
"Aaron Fenster"
] | 10.1016/j.ultrasmedbio.2022.12.005 |
ACSN: Attention capsule sampling network for diagnosing COVID-19 based on chest CT scans. | Automated diagnostic techniques based on computed tomography (CT) scans of the chest for the coronavirus disease (COVID-19) help physicians detect suspected cases rapidly and precisely, which is critical in providing timely medical treatment and preventing the spread of epidemic outbreaks. Existing capsule networks have played a significant role in automatic COVID-19 detection systems based on small datasets. However, extracting key slices is difficult because CT scans typically show many scattered lesion sections. In addition, existing max pooling sampling methods cannot effectively fuse the features from multiple regions. Therefore, in this study, we propose an attention capsule sampling network (ACSN) to detect COVID-19 based on chest CT scans. A key-slice enhancement method is used to obtain critical information from a large number of slices by applying attention enhancement to key slices. Then, the lost active and background features are retained by integrating two types of sampling. The results of experiments on an open dataset of 35,000 slices show that the proposed ACSN achieves high performance compared with state-of-the-art models, exhibiting 96.3% accuracy, 98.8% sensitivity, 93.8% specificity, and 98.3% area under the receiver operating characteristic curve. | Computers in biology and medicine | 2023-01-15T00:00:00 | [
"Cuihong Wen",
"Shaowu Liu",
"Shuai Liu",
"Ali Asghar Heidari",
"Mohammad Hijji",
"Carmen Zarco",
"Khan Muhammad"
] | 10.1016/j.compbiomed.2022.106338
10.21203/rs.3.rs-32511/v1
10.1109/ACCESS.2021.3067311
10.3390/s22176709
10.1109/TII.2021.3056386
10.1109/CVPR.2016.90
10.48550/arXiv.2010.16041 |
Utilisation of deep learning for COVID-19 diagnosis. | The COVID-19 pandemic that began in 2019 has resulted in millions of deaths worldwide. Over this period, the economic and healthcare consequences of COVID-19 infection in survivors of acute COVID-19 infection have become apparent. During the course of the pandemic, computer analysis of medical images and data have been widely used by the medical research community. In particular, deep-learning methods, which are artificial intelligence (AI)-based approaches, have been frequently employed. This paper provides a review of deep-learning-based AI techniques for COVID-19 diagnosis using chest radiography and computed tomography. Thirty papers published from February 2020 to March 2022 that used two-dimensional (2D)/three-dimensional (3D) deep convolutional neural networks combined with transfer learning for COVID-19 detection were reviewed. The review describes how deep-learning methods detect COVID-19, and several limitations of the proposed methods are highlighted. | Clinical radiology | 2023-01-14T00:00:00 | [
"S Aslani",
"J Jacob"
] | 10.1016/j.crad.2022.11.006
10.1101/2020.02.11.20021493
10.5555/3305890.3305954
10.1109/ICCVW.2019.00052 |
Coronavirus covid-19 detection by means of explainable deep learning. | The coronavirus disease is caused by infection with the SARS-CoV-2 virus: it represents a complex and new condition, considering that until the end of December 2019 this virus was totally unknown to the international scientific community. The clinical management of patients with the coronavirus disease has evolved over the months, thanks to increasing knowledge of the virus, its symptoms, and the efficacy of the various therapies. Currently, however, there is no specific therapy for the SARS-CoV-2 virus, known also as Coronavirus disease 19, and treatment is based on the symptoms of the patient, taking into account the overall clinical picture. Furthermore, the test to identify whether a patient is affected by the virus is generally performed on sputum, and the result is generally available within a few hours or days. Previous research found that biomedical imaging analysis is able to show signs of pneumonia. For this reason, in this paper, with the aim of providing a fully automatic and faster diagnosis, we design and implement a method adopting deep learning for novel coronavirus disease detection, starting from computed tomography medical images. The proposed approach is aimed at detecting whether a computed tomography medical image is related to a healthy patient, a patient with a pulmonary disease, or a patient affected by Coronavirus disease 19. In case a patient is marked by the proposed method as affected by Coronavirus disease 19, the areas symptomatic of Coronavirus disease 19 infection are automatically highlighted in the computed tomography medical images. We perform an experimental analysis to empirically demonstrate the effectiveness of the proposed approach by considering medical images belonging to different institutions, with an average time for Coronavirus disease 19 detection of approximately 8.9 s and an accuracy equal to 0.95. 
| Scientific reports | 2023-01-11T00:00:00 | [
"Francesco Mercaldo",
"Maria Paola Belfiore",
"Alfonso Reginelli",
"Luca Brunese",
"Antonella Santone"
] | 10.1038/s41598-023-27697-y
10.1016/j.procs.2020.09.258
10.1038/s41577-020-00434-6
10.1016/j.cmpb.2020.105608
10.1038/d41573-020-00073-5
10.1056/NEJMoa2001316
10.1053/j.gastro.2020.02.054
10.3390/biology9050097
10.1002/jmv.25748
10.1056/NEJMp030078
10.1038/nm1024
10.3201/eid2009.140378
10.3390/jcm9020523
10.1001/jama.2020.4344
10.1016/S0140-6736(20)30185-9
10.1016/S0140-6736(20)30183-5
10.1148/radiol.2020200905
10.1016/j.ejrad.2020.108961
10.1111/irv.12734
10.1148/radiol.2020200432
10.1016/S1470-2045(19)30739-9
10.1097/RLI.0b013e318074fd81
10.2967/jnumed.117.189704
10.1007/s00330-007-0613-2
10.1007/978-3-031-01821-3
10.1109/5.726791
10.1038/s41598-020-76282-0
10.1007/s00330-021-07715-1
10.1016/j.eng.2020.04.010
10.1007/s13246-020-00865-4
10.1109/TCBB.2021.3065361
10.1016/j.compbiomed.2020.103792
10.1016/j.compbiomed.2020.103795
10.1016/j.bbe.2015.12.005
10.1016/j.measurement.2020.108116
10.1016/j.ifacol.2021.10.282
10.1016/j.cose.2021.102198
10.1109/ACCESS.2019.2961754 |
LDDNet: A Deep Learning Framework for the Diagnosis of Infectious Lung Diseases. | This paper proposes a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images. This framework is termed optimized DenseNet201 for lung diseases (LDDNet). The proposed LDDNet was developed by adding layers of 2D global average pooling, dense and dropout layers, and batch normalization to the base DenseNet201 model. There are 1024 ReLU-activated dense units and 256 dense units using the sigmoid activation method. The hyper-parameters of the model, including the learning rate, batch size, epochs, and dropout rate, were tuned. Next, three datasets of lung diseases were formed from separate open-access sources. One was a CT scan dataset containing 1043 images. The other two were X-ray datasets comprising images of COVID-19-affected lungs, pneumonia-affected lungs, and healthy lungs: one imbalanced dataset with 5935 images and one balanced dataset with 5002 images. The performance of each model was analyzed using the Adam, Nadam, and SGD optimizers. The best results were obtained for both the CT scan and CXR datasets using the Nadam optimizer. For the CT scan images, LDDNet showed a COVID-19-positive classification accuracy of 99.36%, 100% precision, 98% recall, and an F1 score of 99%. For the X-ray dataset of 5935 images, LDDNet provides 99.55% accuracy, 73% recall, 100% precision, and an 85% F1 score using the Nadam optimizer in detecting COVID-19-affected patients. For the balanced X-ray dataset, LDDNet provides a 97.07% classification accuracy. For a given set of parameters, the performance results of LDDNet are better than those of the existing algorithms ResNet152V2 and XceptionNet. | Sensors (Basel, Switzerland) | 2023-01-09T00:00:00 | [
"Prajoy Podder",
"Sanchita Rani Das",
"M Rubaiyat Hossain Mondal",
"Subrato Bharati",
"Azra Maliha",
"Md Junayed Hasan",
"Farzin Piltan"
] | 10.3390/s23010480
10.1016/j.scitotenv.2020.138762
10.1007/s10489-020-01826-w
10.1101/2020.04.22.056283
10.1007/s10489-021-02393-4
10.1016/j.bea.2021.100003
10.1148/radiol.2020200527
10.1148/radiol.2020200823
10.1016/j.ejrad.2020.108961
10.1155/2021/5527923
10.1016/j.compbiomed.2020.103795
10.3390/s21020369
10.1016/j.cmpb.2019.06.005
10.1109/RBME.2020.2990959
10.1007/s42979-021-00785-4
10.1007/s10489-020-01943-6
10.1016/j.compbiomed.2021.104575
10.3233/HIS-210008
10.1101/2020.04.24.20078584
10.2174/1573405617666210713113439
10.1016/j.bspc.2021.102588
10.1109/ACCESS.2020.3010287
10.20944/preprints202003.0300.v1
10.1007/s10140-020-01886-y
10.1101/2020.03.12.20027185
10.1007/s00330-021-07715-1
10.1371/journal.pone.0259179
10.1016/j.compbiomed.2022.105213
10.3390/s22020669
10.1016/j.compbiomed.2022.105418
10.3390/info11090419
10.1038/s41597-021-00900-3
10.5281/zenodo.3757476
10.3390/app10217639
10.1016/j.patrec.2021.08.035
10.1109/JBHI.2022.3177854 |
A Holistic Approach to Identify and Classify COVID-19 from Chest Radiographs, ECG, and CT-Scan Images Using ShuffleNet Convolutional Neural Network. | Early and precise COVID-19 identification and analysis are pivotal in reducing the spread of COVID-19. Medical imaging techniques, such as chest X-rays or chest radiographs, computed tomography (CT) scans, and electrocardiogram (ECG) trace images, are the most widely known for early discovery and analysis of the coronavirus disease (COVID-19). Deep learning (DL) frameworks for identifying COVID-19-positive patients in the literature are limited to one data format, either ECG or chest radiograph images. Moreover, using several data types to recover abnormal patterns caused by COVID-19 could potentially provide more information and restrict the spread of the virus. This study presents an effective COVID-19 detection and classification approach using the ShuffleNet CNN by employing three types of images, i.e., chest radiograph, CT-scan, and ECG-trace images. For this purpose, we performed extensive classification experiments with the proposed approach using each type of image. With the chest radiograph dataset, we performed three classification experiments at different levels of granularity, i.e., binary, three-class, and four-class classifications. In addition, we performed a binary classification experiment with the proposed approach by classifying CT-scan images into COVID-positive and normal. Finally, utilizing the ECG-trace images, we conducted three experiments at different levels of granularity, i.e., binary, three-class, and five-class classifications. We evaluated the proposed approach with the baseline COVID-19 Radiography Database, the SARS-CoV-2 CT-scan dataset, and the ECG images dataset of cardiac and COVID-19 patients. 
The average accuracy of 99.98% for COVID-19 detection in the three-class classification scheme using chest radiographs, the optimal accuracy of 100% for COVID-19 detection using CT scans, and the average accuracy of 99.37% for the five-class classification scheme using ECG trace images prove the efficacy of our proposed method over contemporary methods. The optimal accuracy of 100% for COVID-19 detection using CT scans and the accuracy gain of 1.54% (in the case of five-class classification using ECG trace images) over the previous approach, which utilized ECG images for the first time, contribute substantially to improving the COVID-19 prediction rate in the early stages. Experimental findings demonstrate that the proposed framework outperforms contemporary models. For example, the proposed approach outperforms state-of-the-art DL approaches, such as SqueezeNet, AlexNet, and DarkNet19, achieving an accuracy of 99.98% (proposed method) compared with 98.29%, 98.50%, and 99.67%, respectively. | Diagnostics (Basel, Switzerland) | 2023-01-09T00:00:00 | [
"Naeem Ullah",
"Javed Ali Khan",
"Shaker El-Sappagh",
"Nora El-Rashidy",
"Mohammad Sohail Khan"
] | 10.3390/diagnostics13010162
10.3390/app12126269
10.1038/s41368-020-0075-9
10.3389/fmed.2022.1005920
10.1016/j.radi.2020.10.013
10.1016/j.patcog.2021.108255
10.1007/s00521-021-06737-6
10.1016/j.compbiomed.2022.105350
10.1016/j.cmpb.2022.106731
10.1101/2020.03.30.20047787
10.1016/j.cma.2022.114570
10.1016/j.cma.2020.113609
10.1016/j.cie.2021.107250
10.1016/j.eswa.2021.116158
10.1109/ACCESS.2022.3147821
10.1007/s13246-020-00888-x
10.1016/j.compbiomed.2020.103792
10.1016/j.cmpb.2020.105581
10.1007/s40846-020-00529-4
10.1016/j.mehy.2020.109761
10.18576/amis/100122
10.12785/amis/080617
10.1016/j.jksuci.2021.12.017
10.1155/2012/205391
10.1038/s41591-020-0931-3
10.1007/s00330-021-07715-1
10.1016/j.patcog.2021.108135
10.3390/s21175702
10.1155/2021/3366057
10.1007/s13755-021-00169-1
10.1016/j.jrras.2022.02.002
10.1109/ACCESS.2020.3010287
10.1016/j.compbiomed.2021.104319
10.1101/2020.04.24.20078584
10.1016/j.dib.2021.106762
10.1145/3065386
10.3390/technologies10020037
10.1016/j.eswa.2021.116377
10.1155/2022/4130674
10.1155/2022/6486570
10.3390/app12115645
10.3390/s22197575
10.3390/electronics11071146
10.1109/ACCESS.2022.3189676
10.1109/ACCESS.2019.2909969
10.1109/ACCESS.2019.2904800
10.3390/s22051747
10.1007/s00521-022-08007-5
10.1007/s00500-022-07420-1
10.1007/s00521-021-06631-1 |
An Efficient Deep Learning Method for Detection of COVID-19 Infection Using Chest X-ray Images. | The research community has recently shown significant interest in designing automated systems to detect coronavirus disease 2019 (COVID-19) using deep learning approaches and chest radiography images. However, state-of-the-art deep learning techniques, especially convolutional neural networks (CNNs), demand more learnable parameters and memory. Therefore, they may not be suitable for real-time diagnosis. Thus, the design of a lightweight CNN model for fast and accurate COVID-19 detection is an urgent need. In this paper, a lightweight CNN model called LW-CORONet is proposed that comprises a sequence of convolution, rectified linear unit (ReLU), and pooling layers followed by two fully connected layers. The proposed model facilitates extracting meaningful features from the chest X-ray (CXR) images with only five learnable layers. The proposed model is evaluated using two larger CXR datasets (Dataset-1: 2250 images and Dataset-2: 15,999 images), and the classification accuracies obtained are 98.67% and 99.00% on Dataset-1 and 95.67% and 96.25% on Dataset-2 for multi-class and binary classification cases, respectively. The results are compared with four contemporary pre-trained CNN models as well as state-of-the-art models. The effects of several hyperparameters (optimization technique, batch size, and learning rate) have also been investigated. The proposed model demands fewer parameters and requires less memory space. Hence, it is effective for COVID-19 detection and can be utilized as a supplementary tool to assist radiologists in their diagnosis. | Diagnostics (Basel, Switzerland) | 2023-01-09T00:00:00 | [
"Soumya Ranjan Nayak",
"Deepak Ranjan Nayak",
"Utkarsh Sinha",
"Vaibhav Arora",
"Ram Bilas Pachori"
] | 10.3390/diagnostics13010131
10.1016/S0140-6736(20)30183-5
10.32604/cmc.2020.010691
10.1148/radiol.2020200432
10.1109/RBME.2020.2990959
10.1148/radiol.2020200527
10.1148/radiol.2020200230
10.1109/JBHI.2022.3196489
10.1148/radiol.2020200343
10.1109/RBME.2020.2987975
10.3390/app10020559
10.1148/radiol.2017162326
10.1371/journal.pmed.1002686
10.1016/j.compbiomed.2020.103792
10.1007/s10044-021-00984-y
10.1016/j.mehy.2020.109761
10.1016/j.imu.2020.100360
10.1038/s41598-020-76550-z
10.1016/j.compbiomed.2020.103805
10.1016/j.chaos.2020.110122
10.1109/TMI.2020.2996256
10.1109/JSEN.2020.3025855
10.1016/j.inffus.2020.11.005
10.1016/j.compbiomed.2021.104454
10.1145/3551647
10.1016/j.bspc.2021.103182
10.1016/j.compbiomed.2022.106331
10.1016/j.bspc.2020.102365
10.1007/s11042-022-12156-z
10.1109/ACCESS.2019.2950228
10.1007/BF03178082
10.1016/j.neucom.2017.12.030
10.1109/TMI.2016.2528162
10.1186/s40537-019-0197-0
10.1016/j.patrec.2020.04.018
10.1007/s12652-020-02612-9
10.1016/j.compmedimag.2019.05.001
10.1016/j.media.2017.07.005 |
Development and validation of a deep learning model to diagnose COVID-19 using time-series heart rate values before the onset of symptoms. | One of the effective ways to minimize the spread of COVID-19 infection is to diagnose it as early as possible, before the onset of symptoms. In addition, if the infection can be simply diagnosed using a smartwatch, the effectiveness of preventing the spread will be greatly increased. In this study, we aimed to develop a deep learning model to diagnose COVID-19 before the onset of symptoms using heart rate (HR) data obtained from a smartwatch. For the diagnosis, we proposed a transformer-based deep learning model that learns presymptomatic HR variability patterns by tracking relationships in sequential HR data. In the cross-validation (CV) results from the COVID-19 unvaccinated patients, our proposed deep learning model exhibited high accuracy metrics: sensitivity of 84.38%, specificity of 85.25%, accuracy of 84.85%, balanced accuracy of 84.81%, and area under the receiver operating characteristic curve (AUROC) of 0.8778. Furthermore, we validated our model using multiple external datasets including healthy subjects, COVID-19 patients, and vaccinated patients. In the external healthy subject group, our model also achieved a high specificity of 77.80%. In the external COVID-19 unvaccinated patient group, our model provided accuracy metrics similar to those from the CV: balanced accuracy of 87.23% and AUROC of 0.8897. In the COVID-19 vaccinated patients, the balanced accuracy and AUROC dropped to 66.67% and 0.8072, respectively. The first finding of this study is that our proposed deep learning model can simply and accurately diagnose COVID-19 patients using HRs obtained from a smartwatch before the onset of symptoms. The second finding is that the model trained on unvaccinated patients may provide less accurate diagnoses for vaccinated patients. 
The last finding is that a model trained in a certain period of time may provide degraded diagnostic performance as the virus continues to mutate. | Journal of medical virology | 2023-01-06T00:00:00 | [
"Heewon Chung",
"Hoon Ko",
"Hooseok Lee",
"Dong Keon Yon",
"Won Hee Lee",
"Tae-Seong Kim",
"Kyung Won Kim",
"Jinseok Lee"
] | 10.1002/jmv.28462 |
Identification of Asymptomatic COVID-19 Patients on Chest CT Images Using Transformer-Based or Convolutional Neural Network-Based Deep Learning Models. | Novel coronavirus disease 2019 (COVID-19) has rapidly spread throughout the world; however, it is difficult for clinicians to make early diagnoses. This study evaluates the feasibility of using deep learning (DL) models to identify asymptomatic COVID-19 patients based on chest CT images. In this retrospective study, six DL models (Xception, NASNet, ResNet, EfficientNet, ViT, and Swin), based on convolutional neural network (CNN) or transformer architectures, were trained to identify asymptomatic patients with COVID-19 on chest CT images. Data from Yangzhou were randomly split into a training set (n = 2140) and an internal validation set (n = 360). Data from Suzhou formed the external test set (n = 200). Model performance was assessed by the metrics accuracy, recall, and specificity and was compared with the assessments of two radiologists. A total of 2700 chest CT images were collected in this study. In the validation dataset, the Swin model achieved the highest accuracy of 0.994, followed by the EfficientNet model (0.954). The recall and precision of the Swin model were 0.989 and 1.000, respectively. In the test dataset, the Swin model was still the best, achieving the highest accuracy (0.980). All the DL models performed remarkably better than the two experts. Last, the time spent on test-set diagnosis by the two experts (42 min 17 s for the junior; 29 min 43 s for the senior) was significantly higher than that of the DL models (all below 2 min). This study evaluated the feasibility of multiple DL models in distinguishing asymptomatic patients with COVID-19 from healthy subjects on chest CT images and found that a transformer-based model, the Swin model, performed best. | Journal of digital imaging | 2023-01-04T00:00:00 | [
"Minyue Yin",
"Xiaolong Liang",
"Zilan Wang",
"Yijia Zhou",
"Yu He",
"Yuhan Xue",
"Jingwen Gao",
"Jiaxi Lin",
"Chenyan Yu",
"Lu Liu",
"Xiaolin Liu",
"Chao Xu",
"Jinzhou Zhu"
] | 10.1007/s10278-022-00754-0
10.1056/NEJMoa2002032
10.1007/s00330-020-06886-7
10.1186/s12911-021-01521-x
10.1016/j.compbiomed.2020.103805
10.1016/s0140-6736(20)30211-7
10.1001/jama.2020.1585
10.1148/radiol.2020200642
10.1148/radiol.2020200230
10.1016/s0140-6736(20)30183-5
10.1148/radiol.2020200463
10.1016/j.chest.2020.04.003
10.1001/jama.2020.12839
10.1038/s41598-020-74164-z
10.1148/radiol.2020200823
10.1016/s1473-3099(14)70846-1
10.1001/jama.2020.2565
10.1016/j.ijid.2020.06.052
10.1038/s41467-020-17971-2
10.1007/s00330-020-07042-x
10.1109/tmi.2020.3040950
10.1016/j.cell.2020.04.045
10.1016/j.compbiomed.2020.103792
10.1038/s41598-020-76550-z
10.3389/fimmu.2021.732756
10.1002/smll.202002169
10.1016/j.talanta.2020.121726
10.1148/radiol.2020200490
10.1148/radiol.2020201365
10.1016/j.annonc.2020.04.003
10.1016/j.ejrad.2020.108961
10.1148/radiol.2020200343
10.1126/science.abb3221
10.1007/s00330-021-07715-1
10.1109/rbme.2020.2987975
10.1155/2021/5185938
10.1148/radiol.2020201491
10.1109/tmi.2016.2528162 |
Covid-19 Diagnosis by WE-SAJ. | With a global COVID-19 pandemic, the number of confirmed patients has increased rapidly, leaving the world with very few medical resources. Therefore, the fast diagnosis and monitoring of COVID-19 are among the world's most critical challenges today. Artificial intelligence-based CT image classification models can quickly and accurately distinguish infected patients from healthy populations. Our research proposes a deep learning model (WE-SAJ) using wavelet entropy for feature extraction, two-layer FNNs for classification, and the adaptive Jaya algorithm as the training algorithm. It achieves superior performance compared to the Jaya-based model. The model has a sensitivity of 85.47±1.84, specificity of 87.23±1.67, precision of 87.03±1.34, accuracy of 86.35±0.70, F1 score of 86.23±0.77, Matthews correlation coefficient of 72.75±1.38, and Fowlkes-Mallows index of 86.24±0.76. Our experiments demonstrate the potential of artificial intelligence techniques for COVID-19 diagnosis and the effectiveness of the self-adaptive Jaya algorithm compared to the Jaya algorithm for medical image classification tasks. | Systems science & control engineering | 2022-12-27T00:00:00 | [
"WeiWang",
"XinZhang",
"Shui-HuaWang",
"Yu-DongZhang"
] | 10.1080/21642583.2022.2045645 |
A Review of COVID-19 Diagnostic Approaches in Computer Vision. | Computer vision has proven that it can solve many problems in the field of health in recent years. Processing the data obtained from the patients provided benefits in both disease detection and follow-up and control mechanisms. Studies on the use of computer vision for COVID-19, which is one of the biggest global health problems of the past years, are increasing daily. This study includes a preliminary review of COVID-19 computer vision research conducted in recent years. This review aims to help researchers who want to work in this field. | Current medical imaging | 2022-12-26T00:00:00 | [
"CemilZalluhoğlu"
] | 10.2174/1573405619666221222161832 |
Detection of COVID-19 in X-ray Images Using Densely Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box Model. | The novel coronavirus (COVID-19), which emerged as a pandemic, has claimed many lives and affected millions of people across the world since December 2019. Although this disease is under control nowadays, it is still affecting people in many countries. The traditional way of diagnosis is time-consuming, less efficient, and has a low detection rate for this disease. Therefore, there is a need for an automatic system that expedites the diagnosis process while retaining its performance and accuracy. Artificial intelligence (AI) technologies such as machine learning (ML) and deep learning (DL) potentially provide powerful solutions to address this problem. In this study, a state-of-the-art CNN model, the densely connected squeeze convolutional neural network (DCSCNN), has been developed for the classification of X-ray images of COVID-19, pneumonia, normal, and lung opacity patients. Data were collected from different sources. We applied different preprocessing techniques to enhance the quality of images so that our model could learn accurately and give optimal performance. Moreover, the attention regions and decisions of the AI model were visualized using the Grad-CAM and LIME methods. The DCSCNN combines the strengths of the Dense and Squeeze networks. In our experiment, seven kinds of classification have been performed, in which six are binary classifications (COVID vs. normal, COVID vs. lung opacity, lung opacity vs. normal, COVID vs. pneumonia, pneumonia vs. lung opacity, pneumonia vs. normal) and one is multiclass classification (COVID vs. pneumonia vs. lung opacity vs. normal). The main contributions of this paper are as follows.
First, the development of the DCSCNN model, which is capable of performing binary classification as well as multiclass classification with excellent classification accuracy. Second, to ensure trust, transparency, and explainability of the model, we applied two popular explainable AI (XAI) techniques, i.e., Grad-CAM and LIME. These techniques helped to address the black-box nature of the model while improving its trust, transparency, and explainability. Our proposed DCSCNN model achieved an accuracy of 98.8% for the classification of COVID-19 vs. normal, followed by COVID-19 vs. lung opacity: 98.2%, lung opacity vs. normal: 97.2%, COVID-19 vs. pneumonia: 96.4%, pneumonia vs. lung opacity: 95.8%, pneumonia vs. normal: 97.4%, and lastly, for multiclass classification of all four classes, i.e., COVID vs. pneumonia vs. lung opacity vs. normal: 94.7%. The DCSCNN model provides excellent classification performance, consequently helping doctors to diagnose diseases quickly and efficiently. | Sensors (Basel, Switzerland) | 2022-12-24T00:00:00 | [
"SikandarAli",
"AliHussain",
"SubrataBhattacharjee",
"AliAthar",
"NoneAbdullah",
"Hee-CheolKim"
] | 10.3390/s22249983
10.1016/S0140-6736(66)92364-6
10.1016/j.celrep.2020.108175
10.1159/000149390
10.1001/jama.2020.1097
10.1016/S0140-6736(20)30251-8
10.1016/S0140-6736(21)02758-6
10.15585/mmwr.mm7050e1
10.2174/0929867328666210521164809
10.1111/cbdd.13761
10.1002/ped4.12178
10.1093/cid/ciaa799
10.1148/radiol.2020200642
10.1001/jama.2020.3786
10.3390/diagnostics10030165
10.1001/jama.2020.1585
10.1148/radiol.2020200432
10.1016/j.measurement.2019.05.076
10.54112/bcsrj.v2020i1.31
10.1007/s12195-020-00629-w
10.1007/s10044-021-00984-y
10.1007/s00330-021-07715-1
10.20944/preprints202003.0300.v1
10.1016/j.chaos.2020.110120
10.1007/s13246-020-00865-4
10.1016/j.compbiomed.2020.104037
10.1016/j.bbe.2020.08.008
10.3390/life11101092
10.3390/diagnostics11050829
10.3390/jpm12060988
10.1109/JBHI.2022.3168604
10.32604/cmc.2020.013249
10.1016/j.inffus.2021.07.016
10.3892/etm.2020.8797
10.1016/j.compbiomed.2022.105244
10.1002/ima.22706
10.1038/s41598-020-76550-z
10.1109/TMI.2020.2993291
10.1016/j.compbiomed.2021.104335
10.1016/j.intimp.2020.106705
10.1007/s10489-020-01829-7
10.1007/s42600-020-00112-5
10.1016/j.mri.2014.03.010
10.1109/42.996338
10.1016/j.chaos.2020.110190
10.1016/j.compbiomed.2020.104041 |
Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust. | Due to the severity and speed of spread of the ongoing Covid-19 pandemic, fast but accurate diagnosis of Covid-19 patients has become a crucial task. Achievements in this respect might enlighten future efforts for the containment of other possible pandemics. Researchers from various fields have been trying to provide novel ideas for models or systems to identify Covid-19 patients from different medical and non-medical data. AI-based researchers have also been trying to contribute to this area, mostly by providing novel automated approaches using convolutional neural networks (CNNs) and deep neural networks (DNNs) for Covid-19 detection and diagnosis. Due to the efficiency of deep learning (DL) and transfer learning (TL) models in classification and segmentation tasks, most recent AI-based studies have proposed various DL and TL models for Covid-19 detection and infected region segmentation from chest medical images like X-rays or CT images. This paper describes a web-based application framework for Covid-19 lung infection detection and segmentation. The proposed framework is characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated, and tested using CT images of Covid-19 patients, which were collected from two different sources. The web application provides a simple, user-friendly interface to process the CT images from various sources using the chosen models, thresholds, and other parameters to generate the decisions on detection and segmentation. The models achieve high performance scores for Dice similarity, Jaccard similarity, accuracy, loss, and precision values. The U-Net model outperformed the other models with more than 98% accuracy. | PloS one | 2022-12-23T00:00:00 | [
"KashfiaSailunaz",
"DenizBestepe",
"TanselÖzyer",
"JonRokne",
"RedaAlhajj"
] | 10.1371/journal.pone.0278487
10.3390/v12040372
10.1080/14737159.2020.1757437
10.1148/radiol.2020203173
10.3389/fpubh.2022.1046296
10.1148/rg.2020200159
10.1155/2021/2560388
10.32604/cmc.2023.032064
10.3390/s21062215
10.3390/ijerph18031117
10.1016/j.compbiomed.2022.105350
10.3389/fcvm.2021.638011
10.1155/2020/9756518
10.1109/ACCESS.2021.3054484
10.1016/j.dsx.2020.05.008
10.1109/RBME.2020.2987975
10.1016/j.jiph.2020.06.028
10.3390/jcm10091961
10.1007/s10489-020-02102-7
10.1016/j.chaos.2020.110059
10.3389/fpubh.2022.948205
10.1016/j.neucom.2021.03.034
10.1016/j.bea.2022.100041
10.1007/s13369-021-05958-0
10.1016/j.asoc.2021.107522
10.1016/j.patcog.2020.107747
10.1038/s41598-022-06854-9
10.1007/s10489-021-02731-6
10.3390/electronics11152296
10.1007/s11063-022-10785-x
10.1007/s13755-021-00146-8
10.1038/s41598-020-76282-0
10.1016/j.cmpbup.2021.100007
10.1002/mp.15231
10.3390/electronics11010130
10.7717/peerj-cs.349
10.1016/j.patcog.2021.107828
10.3390/su13031224
10.1016/j.cell.2020.04.045
10.3389/fmed.2020.608525
10.1016/j.compbiomed.2021.104319
10.1186/s41747-020-00173-2
10.5194/isprs-archives-XLIII-B3-2020-1507-2020
10.1021/acs.jcim.8b00671
10.1145/3352020.3352029
10.3390/brainsci10070427
10.3390/app10051897
10.1016/j.neunet.2015.07.007
10.1016/j.neucom.2022.01.014
10.1016/j.media.2021.102035
10.1186/s12880-015-0068-x |
Disease Recognition in X-ray Images with Doctor Consultation-Inspired Model. | The application of chest X-ray imaging for early disease screening is attracting interest from the computer vision and deep learning community. To date, various deep learning models have been applied in X-ray image analysis. However, models perform inconsistently depending on the dataset. In this paper, we consider each individual model as a medical doctor. We then propose a doctor consultation-inspired method that fuses multiple models. In particular, we consider both early and late fusion mechanisms for consultation. The early fusion mechanism combines the deep learned features from multiple models, whereas the late fusion method combines the confidence scores of all individual models. Experiments on two X-ray imaging datasets demonstrate the superiority of the proposed method relative to baseline. The experimental results also show that early consultation consistently outperforms the late consultation mechanism in both benchmark datasets. In particular, the early doctor consultation-inspired model outperforms all individual models by a large margin, i.e., 3.03 and 1.86 in terms of accuracy in the UIT COVID-19 and chest X-ray datasets, respectively. | Journal of imaging | 2022-12-23T00:00:00 | [
"Kim AnhPhung",
"Thuan TrongNguyen",
"NileshkumarWangad",
"SamahBaraheem",
"Nguyen DVo",
"KhangNguyen"
] | 10.3390/jimaging8120323
10.1148/radiol.2020200642
10.1016/j.cca.2020.03.009
10.1145/3065386
10.7861/futurehosp.6-2-94
10.3390/info13080360
10.1038/s41551-018-0305-z
10.1016/j.jacr.2019.05.047
10.1016/j.imu.2020.100405
10.20944/preprints202003.0300.v1
10.1016/j.bbe.2020.08.005
10.1038/s41591-020-0931-3
10.1097/RTI.0000000000000512
10.1016/j.chemolab.2020.104054
10.1109/TPAMI.2007.1110
10.1016/j.compbiomed.2020.103795
10.1109/TPAMI.2020.2983686
10.1109/ACCESS.2020.2994762
10.1109/TMI.2020.2993291
10.3390/s21217116
10.3390/ijerph17186933
10.1016/j.artmed.2021.102156
10.1016/j.compbiomed.2022.105383
10.1007/s10489-020-01831-z
10.3390/jimaging7020012
10.1109/TPAMI.2017.2723009 |
Improving COVID-19 CT classification of CNNs by learning parameter-efficient representation. | The COVID-19 pandemic continues to spread rapidly over the world and causes a tremendous crisis in global human health and the economy. Its early detection and diagnosis are crucial for controlling the further spread. Many deep learning-based methods have been proposed to assist clinicians in automatic COVID-19 diagnosis based on computed tomography imaging. However, challenges still remain, including low data diversity in existing datasets, and unsatisfied detection resulting from insufficient accuracy and sensitivity of deep learning models. To enhance the data diversity, we design augmentation techniques of incremental levels and apply them to the largest open-access benchmark dataset, COVIDx CT-2A. Meanwhile, similarity regularization (SR) derived from contrastive learning is proposed in this study to enable CNNs to learn more parameter-efficient representations, thus improve the accuracy and sensitivity of CNNs. The results on seven commonly used CNNs demonstrate that CNN performance can be improved stably through applying the designed augmentation and SR techniques. In particular, DenseNet121 with SR achieves an average test accuracy of 99.44% in three trials for three-category classification, including normal, non-COVID-19 pneumonia, and COVID-19 pneumonia. The achieved precision, sensitivity, and specificity for the COVID-19 pneumonia category are 98.40%, 99.59%, and 99.50%, respectively. These statistics suggest that our method has surpassed the existing state-of-the-art methods on the COVIDx CT-2A dataset. Source code is available at https://github.com/YujiaKCL/COVID-CT-Similarity-Regularization. | Computers in biology and medicine | 2022-12-22T00:00:00 | [
"YujiaXu",
"Hak-KeungLam",
"GuangyuJia",
"JianJiang",
"JunkaiLiao",
"XinqiBao"
] | 10.1016/j.compbiomed.2022.106417
10.1136/bmj.n597
10.3389/fmed.2020.608525
10.5281/zenodo.4414861 |
Virus Detection and Identification in Minutes Using Single-Particle Imaging and Deep Learning. | The increasing frequency and magnitude of viral outbreaks in recent decades, epitomized by the COVID-19 pandemic, has resulted in an urgent need for rapid and sensitive diagnostic methods. Here, we present a methodology for virus detection and identification that uses a convolutional neural network to distinguish between microscopy images of fluorescently labeled intact particles of different viruses. Our assay achieves labeling, imaging, and virus identification in less than 5 min and does not require any lysis, purification, or amplification steps. The trained neural network was able to differentiate SARS-CoV-2 from negative clinical samples, as well as from other common respiratory pathogens such as influenza and seasonal human coronaviruses. We were also able to differentiate closely related strains of influenza, as well as SARS-CoV-2 variants. Additional and novel pathogens can easily be incorporated into the test through software updates, offering the potential to rapidly utilize the technology in future infectious disease outbreaks or pandemics. Single-particle imaging combined with deep learning therefore offers a promising alternative to traditional viral diagnostic and genomic sequencing methods and has the potential for significant impact. | ACS nano | 2022-12-22T00:00:00 | [
"NicolasShiaelis",
"AlexanderTometzki",
"LeonPeto",
"AndrewMcMahon",
"ChristofHepp",
"EricaBickerton",
"CyrilFavard",
"DelphineMuriaux",
"MoniqueAndersson",
"SarahOakley",
"AliVaughan",
"Philippa CMatthews",
"NicoleStoesser",
"Derrick WCrook",
"Achillefs NKapanidis",
"Nicole CRobb"
] | 10.1021/acsnano.2c10159
10.1021/acsnano.0c02624
10.1111/1751-7915.13586
10.1007/s12250-020-00218-1
10.1371/journal.pone.0234682
10.1101/2020.02.26.20028373
10.1039/D0AN01835J
10.1021/acscentsci.0c01288
10.1002/14651858.CD013705.pub2
10.1038/s41598-019-52759-5
10.1162/neco_a_00990
10.1038/nature14539
10.7554/eLife.40183
10.1007/s12560-018-9335-7
10.1093/cid/ciaa1382
10.1016/S1473-3099(20)30113-4
10.7150/ijbs.45018
10.1111/j.1365-2672.2010.04663.x
10.1002/elps.202000121
10.1016/0166-0934(91)90012-O
10.1038/s41598-021-91371-4
10.1128/JVI.75.24.12359-12369.2001
10.2807/1560-7917.ES.2021.26.3.2100008
10.1136/bmj.308.6943.1552 |
Deep features to detect pulmonary abnormalities in chest X-rays due to infectious diseaseX: Covid-19, pneumonia, and tuberculosis. | Chest X-ray (CXR) imaging is a low-cost, easy-to-use imaging alternative that can be used to diagnose/screen pulmonary abnormalities due to infectious diseaseX: Covid-19, Pneumonia and Tuberculosis (TB). Not limited to binary decisions (with respect to healthy cases) that are reported in the state-of-the-art literature, we also consider non-healthy CXR screening using a lightweight deep neural network (DNN) with a reduced number of epochs and parameters. On three diverse publicly accessible and fully categorized datasets, for non-healthy versus healthy CXR screening, the proposed DNN produced the following accuracies: 99.87% on Covid-19 versus healthy, 99.55% on Pneumonia versus healthy, and 99.76% on TB versus healthy datasets. On the other hand, when considering non-healthy CXR screening, we received the following accuracies: 98.89% on Covid-19 versus Pneumonia, 98.99% on Covid-19 versus TB, and 100% on Pneumonia versus TB. To further precisely analyze how well the proposed DNN worked, we considered well-known DNNs such as ResNet50, ResNet152V2, MobileNetV2, and InceptionV3. Our results are comparable with the current state-of-the-art, and as the proposed network is lightweight, it could potentially be used for mass screening in resource-constrained regions. | Information sciences | 2022-12-20T00:00:00 | [
"Md KawsherMahbub",
"MilonBiswas",
"LoveleenGaur",
"FayadhAlenezi",
"K CSantosh"
] | 10.1016/j.ins.2022.01.062 |
LWSNet - a novel deep-learning architecture to segregate Covid-19 and pneumonia from x-ray imagery. | Automatic detection of lung diseases using AI-based tools has become necessary to handle the huge number of cases occurring across the globe and to support doctors. This paper proposes a novel deep learning architecture named LWSNet (Light Weight Stacking Network) to separate Covid-19, cold pneumonia, and normal chest x-ray images. This framework is based on single, double, triple, and quadruple stack mechanisms to address the above-mentioned tri-class problem. In this framework, truncated versions of standard deep learning models and a lightweight CNN model were considered for convenient deployment on resource-constrained devices. An evaluation was conducted on three publicly available datasets along with their combination. We obtained the highest classification accuracies of 97.28%, 96.50%, 97.41%, and 98.54% using the quadruple stack. On further investigation, we found that, using LWSNet, the average accuracy improved from the individual model to the quadruple-stack model by 2.31%, 2.55%, 2.88%, and 2.26% on the four respective datasets. | Multimedia tools and applications | 2022-12-20T00:00:00 | [
"AsifuzzamanLasker",
"MridulGhosh",
"Sk MdObaidullah",
"ChandanChakraborty",
"KaushikRoy"
] | 10.1007/s11042-022-14247-3
10.1111/exsy.12749
10.1145/3431804
10.1148/radiol.2020200642
10.1007/s10489-021-02199-4
10.1007/s13246-020-00865-4
10.3390/ijerph19042013
10.7717/peerj-cs.551
10.1007/s11042-021-11103-8
10.1007/s00371-021-02094-6
10.1038/s41598-018-33214-3
10.1016/S0140-6736(20)30183-5
10.1016/j.imu.2020.100412
10.1007/s10489-020-01902-1
10.3390/su14116785
10.1016/j.compbiomed.2020.104181
10.1038/nature14539
10.1016/j.neucom.2016.12.038
10.1016/j.compbiomed.2022.105213
10.31661/jbpe.v0i0.2008-1153
10.1007/s10489-020-01943-6
10.1016/j.bbe.2021.06.011
10.1109/JBHI.2021.3051470
10.1109/TNNLS.2021.3054746
10.1016/j.compbiomed.2021.104319
10.1016/j.imu.2020.100505
10.1007/s11548-020-02286-w
10.1016/j.bspc.2021.102622
10.1142/S0218001421510046
10.1109/TII.2021.3057683
10.1007/s11042-021-11807-x
10.1049/iet-ipr.2020.1127
10.1016/S0893-6080(05)80023-1
10.1002/jmv.26741
10.1371/journal.pone.0236621 |
Multi-objective automatic analysis of lung ultrasound data from COVID-19 patients by means of deep learning and decision trees. | COVID-19 raised the need for automatic medical diagnosis, to increase the physicians' efficiency in managing the pandemic. Among all the techniques for evaluating the status of the lungs of a patient with COVID-19, lung ultrasound (LUS) offers several advantages: portability, cost-effectiveness, safety. Several works approached the automatic detection of LUS imaging patterns related to COVID-19 by using deep neural networks (DNNs). However, the decision processes based on DNNs are not fully explainable, which generally results in a lack of trust from physicians. This, in turn, slows down the adoption of such systems. In this work, we use two previously built DNNs as feature extractors at the frame level, and automatically synthesize, by means of an evolutionary algorithm, a decision tree (DT) that aggregates in an interpretable way the predictions made by the DNNs, returning the severity of the patients' conditions according to a LUS score of prognostic value. Our results show that our approach performs comparably to or better than previously reported aggregation techniques based on an empiric combination of frame-level predictions made by DNNs. Furthermore, when we analyze the evolved DTs, we discover properties about the DNNs used as feature extractors. We make our data publicly available for further development and reproducibility. | Applied soft computing | 2022-12-20T00:00:00 | [
"Leonardo LucioCustode",
"FedericoMento",
"FrancescoTursi",
"AndreaSmargiassi",
"RiccardoInchingolo",
"TizianoPerrone",
"LibertarioDemi",
"GiovanniIacca"
] | 10.1016/j.asoc.2022.109926
10.1002/jum.15284
10.1002/jum.15285
10.1148/radiol.2020200847
10.1016/j.ejro.2020.100231
10.1016/j.jamda.2020.05.050
10.4269/ajtmh.20-0280
10.1186/s13054-020-02876-9
10.1007/s00134-020-05996-6
10.1007/s00134-020-06058-7
10.1121/10.0002183
10.1016/j.ultrasmedbio.2020.07.018
10.1109/TUFFC.2020.3012289
10.1121/10.0001797
10.1007/s40477-017-0244-7
10.1016/j.ultrasmedbio.2017.01.011
10.1186/s12931-020-01338-8
10.1109/TMI.2020.2994459
10.1109/TUFFC.2020.3005512
10.1016/j.media.2021.101975
10.1109/TMI.2021.3117246
10.1002/jum.15548
10.1007/BFb0055930
Feature fusion based VGGFusionNet model to detect COVID-19 patients utilizing computed tomography scan images. | COVID-19 is one of the most life-threatening and dangerous diseases caused by the novel Coronavirus, which has already afflicted a large human community worldwide. Recovery from this pandemic disease is possible if it is detected at an early stage. We proposed an automated deep learning approach from Computed Tomography (CT) scan images to detect COVID-19 positive patients by following a four-phase paradigm for COVID-19 detection: preprocess the CT scan images; remove noise from the test images using anisotropic diffusion techniques; segment the preprocessed images; and train and test COVID-19 detection using Convolutional Neural Network (CNN) models. This study employed well-known pre-trained models, including AlexNet, ResNet50, VGG16, and VGG19, to evaluate the experiments. 80% of the images are used to train the network in the detection process, while the remaining 20% are used to test it. The result of the experiment evaluation confirmed that the VGG19 pre-trained CNN model achieved better accuracy (98.06%). We used 4861 real-life COVID-19 CT images for experiment purposes, including 3068 positive and 1793 negative images. These images were acquired from a hospital in Sao Paulo, Brazil and two other different data sources. Our proposed method revealed very high accuracy and, therefore, can be used as an assistant to help professionals detect COVID-19 patients accurately. | Scientific reports | 2022-12-17T00:00:00 | [
"Khandaker Mohammad MohiUddin",
"Samrat KumarDey",
"Hafiz Md HasanBabu",
"RafidMostafiz",
"ShahadatUddin",
"WatsharaShoombuatong",
"Mohammad AliMoni"
] | 10.1038/s41598-022-25539-x
10.1016/S0140-6736(20)30183-5
10.1056/NEJMoa2001017
10.1002/jmv.25743
10.1016/j.ijsu.2020.02.034
10.1016/S1473-3099(20)30120-1
10.3389/fpubh.2020.00154
10.1148/radiol.2020200432
10.1148/radiol.2020200370
10.1148/radiol.2020200463
10.1016/j.ejrad.2020.108961
10.1016/S1473-3099(20)30134-1
10.2214/AJR.20.22954
10.1038/nature21056
10.1016/j.patrec.2020.03.011
10.1016/j.compmedimag.2019.101673
10.1109/JSEN.2019.2959617
10.1016/j.ejrad.2020.109041
10.1371/journal.pone.0250952
10.1038/s41467-020-18685-1
10.1016/j.compbiomed.2020.103795
10.1038/s41598-021-83424-5
10.1109/TCBB.2021.3065361
10.3390/e22050517
10.1016/j.compbiomed.2020.104037
10.1109/TMI.2020.2995965
10.1148/radiol.2020200905
10.3390/bdcc3020027
10.1007/s12065-020-00550-1
10.1109/ICMA.2013.6618111
10.1148/radiol.2020200230
10.1148/radiol.2020200241
10.1016/j.eng.2020.04.010
10.1183/13993003.00775-2020
10.1016/j.imu.2020.100405
10.1088/1361-6560/abe838 |
AI support for accurate and fast radiological diagnosis of COVID-19: an international multicenter, multivendor CT study. | Differentiation between COVID-19 and community-acquired pneumonia (CAP) in computed tomography (CT) is a task that can be performed by human radiologists and artificial intelligence (AI). The present study aims to (1) develop an AI algorithm for differentiating COVID-19 from CAP, (2) evaluate its performance, and (3) evaluate the benefit of using the AI result as assistance for radiological diagnosis and its impact on relevant parameters such as diagnostic accuracy, diagnostic time, and confidence.
We included n = 1591 multicenter, multivendor chest CT scans and divided them into AI training and validation datasets to develop an AI algorithm (n = 991 CT scans; n = 462 COVID-19, and n = 529 CAP) from three centers in China. An independent Chinese and German test dataset of n = 600 CT scans from six centers (COVID-19 / CAP; n = 300 each) was used to test the performance of eight blinded radiologists and the AI algorithm. A subtest dataset (180 CT scans; n = 90 each) was used to evaluate the radiologists' performance without and with AI assistance to quantify changes in diagnostic accuracy, reporting time, and diagnostic confidence.
The diagnostic accuracy of the AI algorithm in the Chinese-German test dataset was 76.5%. Without AI assistance, the eight radiologists' diagnostic accuracy was 79.1% and increased with AI assistance to 81.5%, going along with significantly shorter decision times and higher confidence scores.
This large multicenter study demonstrates that AI assistance in CT-based differentiation of COVID-19 and CAP increases radiological performance with higher accuracy and specificity, faster diagnostic time, and improved diagnostic confidence.
• AI can help radiologists achieve higher diagnostic accuracy, make faster decisions, and improve diagnostic confidence. • This Chinese-German multicenter study demonstrates the advantages of human-machine interaction using AI in clinical radiology for the diagnostic differentiation between COVID-19 and CAP in CT scans. | European radiology | 2022-12-17T00:00:00 | [
"FanyangMeng",
"JonathanKottlors",
"RahilShahzad",
"HaifengLiu",
"PhilippFervers",
"YinhuaJin",
"MiriamRinneburger",
"DouLe",
"MathildaWeisthoff",
"WenyunLiu",
"MengzheNi",
"YeSun",
"LiyingAn",
"XiaochenHuai",
"DorottyaMóré",
"AthanasiosGiannakis",
"IsabelKaltenborn",
"AndreasBucher",
"DavidMaintz",
"LeiZhang",
"FrankThiele",
"MingyangLi",
"MichaelPerkuhn",
"HuimaoZhang",
"ThorstenPersigehl"
] | 10.1007/s00330-022-09335-9
10.1148/radiol.2020203173
10.1148/radiol.2020200432
10.1148/radiol.2020200642
10.1038/s41568-018-0016-5
10.1148/radiol.2020200905
10.1148/radiol.2020201491
10.7150/thno.46465
10.1007/s00330-020-07044-9
10.1038/s41591-020-0931-3
10.1038/s41467-020-17971-2
10.1183/13993003.00775-2020
10.1183/13993003.01104-2020
10.1038/s41467-020-18685-1
10.1016/j.media.2020.101860
10.1038/s41746-020-00369-1
10.1148/radiol.2015141579
10.1186/s12931-021-01670-7
10.1186/s40779-020-0233-6
10.1148/radiol.2020200370
10.1109/TPAMI.2016.2644615
10.1148/radiol.2020200823
10.1080/22221751.2020.1750307
10.1148/radiol.2020201473
10.1097/RTI.0000000000000524
10.1016/j.ejrad.2021.110002 |
Diagnosis of COVID-19 Disease in Chest CT-Scan Images Based on Combination of Low-Level Texture Analysis and MobileNetV2 Features. | Over the past two years, the COVID-19 virus has spread rapidly around the world, directly killing more than 6 million people and affecting the lives of more than 500 million people. Early diagnosis of the virus can help to break the chain of transmission and reduce the death rate. In most cases, the virus spreads in the infected person's chest. Therefore, the analysis of a chest CT scan is one of the most efficient methods for diagnosing a patient. Until now, various methods have been presented to diagnose COVID-19 disease in chest CT-scan images. Most recent studies have proposed deep learning-based methods, but handcrafted features provide acceptable results in some studies too. In this paper, an innovative approach is proposed based on the combination of low-level and deep features. First of all, local neighborhood difference patterns are applied to extract handcrafted texture features. Next, deep features are extracted using MobileNetV2. Finally, a two-level decision-making algorithm is applied to improve the detection rate, especially when the decisions based on the two different feature sets disagree. The proposed approach is evaluated on a collected dataset of chest CT scan images from June 1, 2021, to December 20, 2021, comprising 238 cases in two groups, patient and healthy, across different COVID-19 variants. The results show that the combination of texture and deep features can provide better performance than using each feature set separately. Results demonstrate that the proposed approach provides higher accuracy in comparison with some state-of-the-art methods in this scope. | Computational intelligence and neuroscience | 2022-12-13T00:00:00 | [
"AzitaYazdani",
"ShervanFekri-Ershad",
"SaeedJelvay"
] | 10.1155/2022/1658615
10.1007/s10278-021-00445-2
10.2196/27468
10.1001/jama.2020.3786
10.1016/j.ejrad.2020.108961
10.1109/jbhi.2021.3074893
10.1148/radiol.2020200642
10.1148/radiol.2020200432
10.1109/rbme.2020.2987975
10.1007/s00330-021-07715-1
10.1148/radiol.2021203957
10.1007/s00330-021-08409-4
10.1155/2022/2564022
10.1101/2020.04.13.20063941
10.3390/s21020455
10.1016/j.cmpb.2020.105581
10.1007/s10140-020-01886-y
10.1007/s10044-021-00984-y
10.3390/app12104825
10.1016/j.cmpb.2020.105532
10.1016/j.ins.2020.09.041
10.3390/ijerph18063056
10.3390/healthcare9050522
10.3390/app11199023
10.3390/math10142472
10.3390/jpm12020309
10.1109/CVPR40276.2018
10.1007/3-540-45054-8_27
10.1007/s13369-013-0725-8
10.1109/tpami.2002.1017623
10.1109/tip.2010.2042645
10.1007/s11042-020-10321-w
10.1007/s11042-017-4834-3
10.11591/ijece.v11i1.pp844-850 |
Automatic diagnosis of COVID-19 with MCA-inspired TQWT-based classification of chest X-ray images. | In this era of Coronavirus disease 2019 (COVID-19), an accurate method of diagnosis with less diagnosis time and cost can effectively help in controlling the disease spread, with new variants emerging from time to time. In order to achieve this, a two-dimensional (2D) tunable Q-wavelet transform (TQWT) based on a memristive crossbar array (MCA) is introduced in this work for the decomposition of chest X-ray images of two different datasets. TQWT has resulted in promising values of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) at the optimum values of its parameters, namely a quality factor (Q) of 4 and an oversampling rate (r) of 3, at a decomposition level (J) of 2. The MCA-based model is used to process decomposed images for further classification with efficient storage. These images have been further used for the classification of COVID-19 and non-COVID-19 images using ResNet50 and AlexNet convolutional neural network (CNN) models. The average accuracy values achieved for the classification of the processed chest X-ray images in the small and large datasets are 98.82% and 94.64%, respectively, which are higher than those of the reported conventional methods based on different deep learning models. The average accuracy of COVID-19 detection via the proposed image classification method has also been achieved with less complexity, energy, power, and area consumption, along with a lower estimated cost compared to CMOS-based technology. | Computers in biology and medicine | 2022-12-13T00:00:00 | [
"KumariJyoti",
"SaiSushma",
"SaurabhYadav",
"PawanKumar",
"Ram BilasPachori",
"ShaibalMukherjee"
] | 10.1016/j.compbiomed.2022.106331
10.1007/s13755-020-00135-3 |
US-Net: A lightweight network for simultaneous speckle suppression and texture enhancement in ultrasound images. | Numerous traditional filtering approaches and deep learning-based methods have been proposed to improve the quality of ultrasound (US) image data. However, their results tend to suffer from over-smoothing and loss of texture and fine details. Moreover, they perform poorly on images with different degradation levels and mainly focus on speckle reduction, even though texture and fine detail enhancement are of crucial importance in clinical diagnosis.
We propose an end-to-end framework termed US-Net for simultaneous speckle suppression and texture enhancement in US images. The architecture of US-Net is inspired by U-Net, whereby a feature refinement attention block (FRAB) is introduced to enable an effective learning of multi-level and multi-contextual representative features. Specifically, FRAB aims to emphasize high-frequency image information, which helps boost the restoration and preservation of fine-grained and textural details. Furthermore, our proposed US-Net is trained essentially with real US image data, whereby real US images embedded with simulated multi-level speckle noise are used as an auxiliary training set.
Extensive quantitative and qualitative experiments indicate that although trained with only one US image data type, our proposed US-Net is capable of restoring images acquired from different body parts and scanning settings with different degradation levels, while exhibiting favorable performance against state-of-the-art image enhancement approaches. Furthermore, utilizing our proposed US-Net as a pre-processing stage for COVID-19 diagnosis results in a gain of 3.6% in diagnostic accuracy.
The proposed framework can help improve the accuracy of ultrasound diagnosis. | Computers in biology and medicine | 2022-12-10T00:00:00 | [
"PatriceMonkam",
"WenkaiLu",
"SongbaiJin",
"WenjunShan",
"JingWu",
"XiangZhou",
"BoTang",
"HuaZhao",
"HongminZhang",
"XinDing",
"HuanChen",
"LongxiangSu"
] | 10.1016/j.compbiomed.2022.106385 |
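US-Net is trained with real ultrasound images embedded with simulated multi-level speckle noise. Speckle is commonly modeled as multiplicative noise; a sketch of that corruption step (the exact noise model used in the paper is not specified here):

```python
# Sketch of multi-level multiplicative (speckle-like) corruption, as used to
# build the auxiliary training set described above. The noise model is the
# common I_noisy = I + I * n, n ~ N(0, sigma^2); the paper's exact simulator
# may differ.
import numpy as np

def add_speckle(image, sigma, rng):
    """Embed multiplicative Gaussian speckle of strength `sigma` into `image`."""
    noise = rng.normal(0.0, sigma, image.shape)
    return np.clip(image + image * noise, 0.0, 1.0)

rng = np.random.default_rng(42)
clean = rng.random((32, 32))                      # stand-in for a real US image
levels = [0.1, 0.3, 0.5]                          # multiple degradation levels
corrupted = [add_speckle(clean, s, rng) for s in levels]
```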
Cov-caldas: A new COVID-19 chest X-Ray dataset from state of Caldas-Colombia. | The emergence of COVID-19 as a global pandemic forced researchers worldwide in various disciplines to investigate and propose efficient strategies and/or technologies to prevent COVID-19 from further spreading. One of the main challenges to be overcome is the fast and efficient detection of COVID-19 using deep learning approaches and medical images such as Chest Computed Tomography (CT) and Chest X-ray images. In order to contribute to this challenge, a new dataset was collected in collaboration with "S.E.S Hospital Universitario de Caldas" ( https://hospitaldecaldas.com/ ) from Colombia and organized following the Medical Imaging Data Structure (MIDS) format. The dataset contains 7,307 chest X-ray images divided into 3,077 and 4,230 COVID-19 positive and negative images. Images were subjected to a selection and anonymization process to allow the scientific community to use them freely. Finally, different convolutional neural networks were used to perform technical validation. This dataset contributes to the scientific community by tackling significant limitations regarding data quality and availability for the detection of COVID-19. | Scientific data | 2022-12-09T00:00:00 | [
"Jesús AlejandroAlzate-Grisales",
"AlejandroMora-Rubio",
"Harold BrayanArteaga-Arteaga",
"Mario AlejandroBravo-Ortiz",
"DanielArias-Garzón",
"Luis HumbertoLópez-Murillo",
"EstebanMercado-Ruiz",
"Juan PabloVilla-Pulgarin",
"OscarCardona-Morales",
"SimonOrozco-Arias",
"FelipeBuitrago-Carmona",
"Maria JosePalancares-Sosa",
"FernandaMartínez-Rodríguez",
"Sonia HContreras-Ortiz",
"Jose ManuelSaborit-Torres",
"Joaquim ÁngelMontell Serrano",
"María MónicaRamirez-Sánchez",
"Mario AlfonsoSierra-Gaber",
"OscarJaramillo-Robledo",
"Mariade la Iglesia-Vayá",
"ReinelTabares-Soto"
] | 10.1038/s41597-022-01576-z
10.1001/jama.2020.1585
10.1109/ACCESS.2020.3028390
10.1109/JAS.2020.1003393
10.1001/archinternmed.2009.427.Radiation
10.1038/s41597-020-00741-6
10.1016/J.MEDIA.2021.102046
10.1016/j.media.2020.101797
10.1016/j.cell.2018.02.010
10.1136/ADC.83.1.82
10.6084/m9.figshare.c.5833484.v1
10.1016/J.MLWA.2021.100138
10.1007/s11263-015-0816-y |
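The Caldas dataset above is imbalanced (3,077 COVID-19 positive vs. 4,230 negative images). A common practice when training CNNs on such data, not taken from the paper itself, is inverse-frequency class weighting:

```python
# Hypothetical illustration (not from the paper): inverse-frequency class
# weights for the 3,077 positive / 4,230 negative split reported above, so the
# minority class contributes proportionally more to the training loss.
counts = {"covid_positive": 3077, "covid_negative": 4230}
total = sum(counts.values())                      # 7,307 images in total
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
```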
Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. | The lung, being one of the most important organs in the human body, is often affected by various SARS diseases, among which COVID-19 has been found to be the most fatal disease in recent times. In fact, SARS-CoV-2 led to a pandemic that spread rapidly through communities, causing respiratory problems. Under such circumstances, radiological imaging-based screening [mostly chest X-ray and computed tomography (CT) modalities] has been performed for rapid screening of the disease, as it is a non-invasive approach. Due to the scarcity of physicians/chest specialists/expert doctors, technology-enabled disease screening techniques have been developed by several researchers with the help of artificial intelligence and machine learning (AI/ML). It can be observed that researchers have introduced several AI/ML/DL (deep learning) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. In this paper, a comprehensive review has been conducted to summarize the works related to applications of AI/ML/DL for diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected out of 1,715 articles published up to the third quarter of 2021. Furthermore, this review summarizes and compares a variety of ML/DL techniques, various datasets, and their results using X-ray and CT imaging. A detailed discussion has been made on the novelty of the published works, along with their advantages and limitations. | SN computer science | 2022-12-06T00:00:00 | [
"AsifuzzamanLasker",
"Sk MdObaidullah",
"ChandanChakraborty",
"KaushikRoy"
] | 10.1007/s42979-022-01464-8
10.1109/TMI.2020.3001810
10.1016/j.asoc.2020.106859
10.1007/s13246-020-00934-8
10.3348/kjr.2020.0536
10.1056/nejmoa2001017
10.3389/frai.2020.00065
10.1016/j.cmpb.2020.105581
10.1109/ICOASE51841.2020.9436542
10.1016/j.compbiomed.2021.104868
10.1016/j.sysarc.2020.101830
10.3390/healthcare8010046
10.1007/s42979-020-00383-w
10.1007/s11042-021-10907-y
10.1038/s41746-020-00372-6
10.3389/fimmu.2020.01441
10.3390/jpm11010028
10.1007/s11042-020-10340-7
10.26355/eurrev_202008_22510
10.1016/j.acra.2020.09.004
10.1186/s12911-020-01316-6
10.1038/s41467-020-18684-2
10.1016/j.ijsu.2010.02.007
10.1016/j.compbiomed.2021.104605
10.1016/j.patrec.2019.11.013
10.1016/j.patcog.2012.10.005
10.1016/j.eswa.2021.115152
10.3390/e23020204
10.1016/j.media.2020.101836
10.1007/s10278-019-00227-x
10.1016/j.compbiomed.2020.104037
10.1002/mp.14676
10.1016/j.cmpbup.2021.100007
10.1109/TIP.2021.3058783
10.1080/01431160600746456
10.1109/ACCESS.2021.3054484
10.1016/j.compbiomed.2021.104319
10.1007/s13246-020-00865-4
10.1016/j.chaos.2020.109944
10.1007/s10489-020-01902-1
10.3233/XST-200720
10.1016/j.ijmedinf.2020.104284
10.31661/jbpe.v0i0.2008-1153
10.1371/journal.pone.0242535
10.1097/RLI.0000000000000748
10.14358/PERS.80.2.000
10.1007/s00330-020-06801-0
10.1148/radiol.2020200230
10.2214/AJR.20.22954
10.1016/j.media.2016.10.004
10.1016/j.chaos.2020.110153
10.1007/s00330-021-07715-1
10.1016/j.compbiomed.2021.104588
10.1038/s41598-021-83237-6
10.1002/mp.14609
10.1007/s00259-020-05075-4
10.1007/s00259-020-04953-1
10.34171/mjiri.34.174
10.1007/s12539-021-00420-z
10.1007/s42399-020-00643-z
10.7759/cureus.10378
10.1097/MD.0000000000023167
10.1148/rg.2020200149
10.1056/nejmp2000929
10.1148/radiol.2462070712
10.1148/radiol.2020200370
10.1016/S2213-2600(20)30076-X
10.1148/radiol.2020202791
10.2214/AJR.20.23513
10.1016/j.ultrasmedbio.2020.07.018
10.1016/j.irbm.2020.07.001
10.1007/s10489-020-01826-w
10.16984/saufenbilder.459659
10.1007/s12539-020-00393-5
10.1109/ACCESS.2020.3016780
10.1007/s42979-021-00605-9
10.1007/s11042-021-10714-5
10.1016/j.compbiomed.2021.104304
10.1371/journal.pone.0235187
10.3390/v12070769
10.1186/s12938-020-00831-x
10.1002/ima.22564
10.1007/s00138-020-01101-5
10.1016/j.cmpb.2020.105532
10.1016/j.imu.2021.100621
10.1038/s41598-021-88807-2
10.1007/s11042-020-09894-3
10.1016/j.bspc.2021.102622
10.1016/j.chemolab.2020.104054
10.1186/s12879-021-05839-9
10.1186/s12938-020-00809-9
10.1016/j.imu.2020.100505
10.1007/s12539-020-00403-6
10.1088/1742-6596/1933/1/012040
10.1007/s00330-021-07957-z
10.1007/s10489-020-01943-6
10.1038/s41598-020-76282-0
10.1038/s41598-020-80261-w
10.32604/cmc.2021.016264
10.1007/s11548-020-02305-w
10.1109/ACCESS.2021.3061058
10.1016/j.bbe.2020.08.008
10.3233/SHTI210223
10.1007/s11548-020-02286-w
10.1049/iet-ipr.2020.1127
10.1007/s11517-020-02299-2
10.1007/s10489-021-02199-4
10.1016/j.compbiomed.2020.103869
10.1016/j.mehy.2020.109761
10.1016/j.compbiomed.2020.103792
10.1016/j.chaos.2020.110495
10.1038/s41598-020-76550-z
10.1016/j.imu.2021.100620
10.7717/peerj-cs.551
10.1109/TII.2021.3057683
10.2196/27468
10.1007/s10140-020-01886-y
10.47611/jsrhs.v9i2.1246
10.1016/j.media.2020.101913
10.1016/j.patrec.2020.09.010
10.1016/j.neucom.2021.03.034
10.1109/TNNLS.2021.3070467
10.1109/EIConCIT50028.2021.9431887
10.1016/j.iot.2021.100377
10.2196/19569
10.1109/ACCESS.2021.3083516
10.9781/ijimai.2020.04.003
10.1007/s10489-020-01867-1
10.29194/njes.23040408
10.7717/PEERJ-CS.345
10.1145/3431804
10.3389/fmed.2021.629134
10.1148/radiol.2020201491
10.32604/cmc.2021.014956
10.1177/2472630320958376
10.1016/j.irbm.2021.01.004
10.1109/ICCC51575.2020.9344870
10.1007/s13246-020-00888-x
10.1007/s00330-020-07225-6
10.1109/TMI.2020.2994908
10.32604/cmc.2021.013228
10.7717/PEERJ-CS.364
10.1016/j.inffus.2020.10.004
10.1016/j.ejrad.2020.109041
10.2196/25535
10.1016/j.bspc.2020.102365
10.3233/XST-200715
10.1109/TNNLS.2021.3054746
10.1016/j.matpr.2021.01.820
10.1038/s41598-020-80936-4
10.1109/TMI.2020.3000314
10.31763/sitech.v1i2.202
10.1016/j.compbiomed.2020.104181
10.1007/s12559-020-09775-9
10.1109/TMI.2020.2996645
10.1038/s41551-021-00704-1
10.1007/s10489-020-02019-1
10.1007/s00146-020-00978-0
10.1136/bmj.m1808
10.1016/j.bspc.2021.102490
10.1016/j.inffus.2020.11.005
10.7717/peerj.10309
10.7150/ijbs.58855
10.1109/ACCESS.2020.3003810
10.1007/s11263-019-01228-7 |
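The review above compares screening studies on accuracy, sensitivity, and specificity. For reference, these metrics follow directly from confusion-matrix counts:

```python
# The three screening metrics the review compares across studies, computed
# from confusion-matrix counts (toy numbers, not from any cited study).
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (positive-class recall), and specificity."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

acc, sen, spe = binary_metrics(tp=90, fp=5, tn=95, fn=10)
```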
PulDi-COVID: Chronic obstructive pulmonary (lung) diseases with COVID-19 classification using ensemble deep convolutional neural network from chest X-ray images to minimize severity and mortality rates. | In the current COVID-19 outbreak, efficient testing of COVID-19 individuals has proven vital to limiting and arresting the disease's accelerated spread globally. It has been observed that COVID-19 affected patients face greater severity and mortality risk because of chronic pulmonary diseases. This study looks at radiographic examinations exploiting chest X-ray images (CXI), which have become one of the most feasible assessment approaches for pulmonary disorders, including COVID-19. Deep Learning (DL) remains an excellent image classification method and framework; research has been conducted to predict pulmonary diseases with COVID-19 instances by developing DL classifiers with nine-class CXI. However, although a few claim strong prediction results, their recommended DL strategies may suffer from significant variance and generalization failures because of noisy and small data.
Therefore, a unique CNN model (PulDi-COVID) for detecting nine diseases (atelectasis, bacterial-pneumonia, cardiomegaly, covid19, effusion, infiltration, no-finding, pneumothorax, viral-pneumonia) using CXI has been proposed using the SSE algorithm. Several transfer-learning models: VGG16, ResNet50, VGG19, DenseNet201, MobileNetV2, NASNetMobile, ResNet152V2, DenseNet169 are trained on CXI of chronic lung diseases and COVID-19 instances. Given that the proposed thirteen SSE ensemble models address DL's constraints by making predictions with different classifiers rather than a single one, we present PulDi-COVID, an ensemble DL model that combines DL with ensemble learning. The PulDi-COVID framework is created by incorporating various snapshots of DL models, combining their CXI predictions for chronic lung diseases and COVID-19 cases through the suggested SSE method, which builds on the idea of combining different DL perceptions of different classes.
PulDi-COVID findings were compared to thirteen existing studies for nine-class classification using COVID-19. Test results reveal that PulDi-COVID offers impressive outcomes for identifying chronic diseases with COVID-19, with 99.70% accuracy, 98.68% precision, 98.67% recall, a 98.67% F1 score, the lowest zero-one loss of 12 CXIs, a 99.24% AUC-ROC score, and the lowest error rate of 1.33%. Overall test results are superior to those of existing Convolutional Neural Networks (CNN). To the best of our knowledge, the observed results for nine-class classification are significantly superior to the state-of-the-art approaches employed for COVID-19 detection. Furthermore, the CXI that we used to assess our algorithm is one of the larger datasets for COVID detection with pulmonary diseases.
The empirical findings of our suggested approach PulDi-COVID show that it outperforms previously developed methods. The suggested SSE method with PulDi-COVID can effectively fulfill the COVID-19 speedy detection needs with different lung diseases for physicians to minimize patient severity and mortality. | Biomedical signal processing and control | 2022-12-06T00:00:00 | [
"Yogesh HBhosale",
"K SridharPatnaik"
] | 10.1016/j.bspc.2022.104445
10.3389/fmed.2021.588013
10.1109/TII.2021.3057683 |
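PulDi-COVID combines snapshots of several DL models through its SSE algorithm. A generic snapshot-ensembling step, averaging per-class probabilities across snapshots, can be sketched as follows (the paper's actual SSE method may weight or select snapshots differently):

```python
# Generic snapshot-ensembling sketch: average per-class probabilities from
# several model snapshots, then take the argmax. Random logits stand in for
# real model outputs; the paper's SSE algorithm is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_snapshots, n_samples, n_classes = 5, 4, 9        # nine disease classes, as above
logits = rng.normal(size=(n_snapshots, n_samples, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax

ensemble_probs = probs.mean(axis=0)                # average over snapshots
predictions = ensemble_probs.argmax(axis=-1)
```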
Rapid diagnosis of Covid-19 infections by a progressively growing GAN and CNN optimisation. | Covid-19 infections have been spreading around the globe since December 2019. Several diagnostic methods were developed based on biological investigations, and the success of each method depends on the accuracy of identifying Covid infections. However, access to diagnostic tools can be limited, depending on geographic region, and the diagnosis duration plays an important role in treating Covid-19. Since the virus causes pneumonia, its presence can also be detected using medical imaging by radiologists. Hospitals with X-ray capabilities are widely distributed all over the world, so a method for diagnosing Covid-19 from chest X-rays suggests itself. Studies have shown promising results in automatically detecting Covid-19 from medical images using supervised artificial neural network (ANN) algorithms. The major drawback of supervised learning algorithms is that they require huge amounts of data to train. Also, the radiology equipment is not computationally efficient for deep neural networks. Therefore, we aim to develop a Generative Adversarial Network (GAN) based image augmentation to optimize the performance of custom, light, convolutional networks used for the classification of chest X-rays (CXR).
A Progressively Growing Generative Adversarial Network (PGGAN) is used to generate synthetic and augmented data to supplement the dataset. We propose two novel CNN architectures to perform the Multi-class classification of Covid-19, healthy and pneumonia affected Chest X-rays. Comparisons have been drawn to the state of the art models and transfer learning methods to evaluate the superiority of the networks. All the models are trained using enhanced and augmented X-ray images and are compared based on classification metrics.
The proposed models achieved extremely high classification metrics, with the proposed architectures having test accuracies of 98.78% and 99.2%, respectively, while having 40% fewer trainable parameters than their state-of-the-art counterparts.
In the present study, a method based on artificial intelligence is proposed, leading to a rapid diagnostic tool for Covid infections based on Generative Adversarial Network (GAN) and Convolutional Neural Networks (CNN). The benefit will be a high accuracy of detection with up to 99% hit rate, a rapid diagnosis, and an accessible Covid identification method by chest X-ray images. | Computer methods and programs in biomedicine | 2022-12-05T00:00:00 | [
"RutwikGulakala",
"BerndMarkert",
"MarcusStoffel"
] | 10.1016/j.cmpb.2022.107262 |
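The PGGAN used above trains progressively: it starts at a low resolution and doubles it as training stabilizes. The resolution schedule itself is simple to enumerate (target resolution here is illustrative; the generator/discriminator growth is a full training loop):

```python
# Progressive-growing resolution schedule, as used by PGGANs: start small and
# double until the target resolution. Only the schedule is sketched here; the
# 256-pixel target is an illustrative assumption.
def resolution_schedule(start=4, target=256):
    res = start
    while res <= target:
        yield res
        res *= 2

schedule = list(resolution_schedule())  # [4, 8, 16, 32, 64, 128, 256]
```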
COVID-DSNet: A novel deep convolutional neural network for detection of coronavirus (SARS-CoV-2) cases from CT and Chest X-Ray images. | COVID-19 (SARS-CoV-2), which causes acute respiratory syndrome, is a contagious and deadly disease that has devastating effects on society and human life. COVID-19 can cause serious complications, especially in patients with pre-existing chronic health problems such as diabetes, hypertension, lung cancer, and weakened immune systems, and in the elderly. The most critical step in the fight against COVID-19 is the rapid diagnosis of infected patients. Computed Tomography (CT), chest X-ray (CXR), and RT-PCR diagnostic kits are frequently used to diagnose the disease. However, due to difficulties such as the inadequacy of RT-PCR test kits and false negative (FN) results in the early stages of the disease, the time-consuming examination of medical images obtained from CT and CXR imaging techniques by specialists/doctors, and the increasing workload on specialists, it is challenging to detect COVID-19. Therefore, researchers have suggested searching for new methods in COVID-19 detection. In analysis studies with CT and CXR radiography images, it was determined that COVID-19-infected patients experienced abnormalities related to COVID-19. The anomalies observed here are the primary motivation for artificial intelligence researchers to develop COVID-19 detection applications with deep convolutional neural networks. Here, convolutional neural network-based deep learning algorithms from artificial intelligence technologies with high discrimination capabilities can be considered an alternative approach in the disease detection process. This study proposes a deep convolutional neural network, COVID-DSNet, to diagnose typical pneumonia (bacterial, viral) and COVID-19 diseases from CT, CXR, and hybrid CT + CXR images.
In the multi-classification study with the CT dataset, 97.60 % accuracy and 97.60 % sensitivity values were obtained from the COVID-DSNet model, and 100 %, 96.30 %, and 96.58 % sensitivity values were obtained in the detection of typical, common pneumonia and COVID-19, respectively. The proposed model is an economical, practical deep learning network that data scientists can benefit from and develop. Although it is not a definitive solution in disease diagnosis, it may help experts as it produces successful results in detecting pneumonia and COVID-19. | Artificial intelligence in medicine | 2022-12-04T00:00:00 | [
"Hatice CatalReis",
"VeyselTurk"
] | 10.1016/j.artmed.2022.102427
10.1016/j.asoc.2020.106859
10.1016/j.virs.2022.09.003
10.1016/j.mtbio.2022.100265
10.1111/1348-0421.12945
10.1002/jmv.27132
10.1038/s41579-021-00573-0
10.1001/jama.2022.14711
10.1038/s41565-022-01177-2
10.1038/s41580-021-00418-x
10.3390/ijms23031716
10.1038/s41580-021-00432-z
10.1016/j.media.2021.102096
10.3390/cancers13020162
10.3389/fimmu.2021.660632
10.1007/s12559-020-09787-5
10.3389/fphar.2021.664349
10.3390/cells10030587
10.1007/s11033-021-06358-1
10.1001/jama.2021.13084
10.1016/j.arbres.2021.06.003
10.1038/s41598-021-96755-0
10.1016/S0140-6736(22)00009-5
10.1016/j.cmpb.2019.105162
10.1148/ryct.2021200564
10.1016/j.ejrad.2020.109147
10.1186/s12890-021-01450-5
10.1007/s12559-020-09779-5
10.1093/bib/bbab412
10.3390/v14020322
10.1007/s00521-020-05410-8
10.1016/j.nupar.2022.01.003
10.1016/j.biochi.2022.01.015
10.1109/ACCESS.2020.3010287
10.1016/j.ijid.2020.10.069
10.3390/biomedicines10020242
10.1016/bs.acr.2020.10.001
10.1084/jem.20202489
10.17305/bjbms.2021.6340
10.1007/s11154-021-09707-4
10.1016/j.jacbts.2021.10.011
10.1001/jamacardio.2020.3557
10.1016/j.chom.2020.05.008
10.1126/science.369.6500.125
10.1093/cid/ciaa644
10.1016/j.asoc.2022.109207
10.1038/s41392-022-00884-5
10.1038/s41579-020-00462-y
10.1136/bmj.n597
10.3390/diagnostics12020467
10.1007/s10311-021-01369-7
10.1056/NEJMc2119236
10.1001/jama.2021.24315
10.1056/NEJMoa2108891
10.1126/science.abn7591
10.1016/j.eclinm.2021.100861
10.1016/j.compbiomed.2021.104742
10.1016/j.bspc.2021.103415
10.3390/diagnostics11111972
10.1007/s00521-020-05636-6
10.1007/s42979-021-00695-5
10.1038/s41598-021-84630-x
10.1038/s41598-021-03889-2
10.3390/s22010372
10.1038/s41598-022-06264-x
10.1016/j.compbiomed.2022.105810
10.1016/j.chaos.2020.110245
10.1016/j.bspc.2022.103977
10.1007/s10489-020-01943-6
10.1007/s00500-021-06579-3
10.1007/s11042-020-10165-4
10.1007/s12559-021-09955-1
10.3390/s22031211
10.1007/s10522-021-09946-7
10.1007/s11042-021-11319-8
10.1038/s41598-020-76550-z
10.48550/arXiv.2003.11597
10.1016/j.patcog.2021.108255
10.1007/s10489-020-01829-7
10.1148/radiol.2020200905
10.1016/j.patrec.2021.08.018
10.1155/2022/6185013
10.1109/CVPR.2016.90
10.1109/CVPR.2017.243
10.1162/neco.1997.9.8.1735
10.3390/s21030832
10.1016/j.chaos.2020.110153
10.1016/j.compbiomed.2021.104319
10.17632/rscbjbr9sj.2
10.1609/aaai.v31i1.11231
10.1109/CVPR.2016.308
10.48550/arXiv.1704.04861
10.1109/CVPR.2018.00907
10.48550/arXiv.1412.6980
10.1016/j.eswa.2020.113909
10.1016/j.compbiomed.2021.105134
10.1016/j.compbiomed.2022.105213
10.1016/j.knosys.2021.106849
10.1038/s41586-022-04569-5
10.1007/s42452-019-1903-4 |
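COVID-DSNet reports per-class sensitivity values (100%, 96.30%, 96.58%). Per-class sensitivity is the diagonal of the multi-class confusion matrix divided by the row sums; a sketch with a toy 3-class matrix (not the paper's actual counts):

```python
# Per-class sensitivity (recall) from a 3-class confusion matrix
# (rows = true class, columns = predicted class). The counts are toy values
# chosen to roughly mirror the per-class figures reported above.
import numpy as np

cm = np.array([[50,  0,  0],
               [ 2, 52,  0],
               [ 1,  1, 57]])
per_class_recall = cm.diagonal() / cm.sum(axis=1)   # [1.0, 52/54, 57/59]
```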
Classification and visual explanation for COVID-19 pneumonia from CT images using triple learning. | This study presents a novel framework for classifying and visualizing pneumonia induced by COVID-19 from CT images. Although many image classification methods using deep learning have been proposed, in the case of medical image fields, standard classification methods cannot be used in some cases because the medical images that belong to the same category vary depending on the progression of the symptoms and the size of the inflamed area. In addition, it is essential that the models used be transparent and explainable, allowing health care providers to trust the models and avoid mistakes. In this study, we propose a classification method using contrastive learning and an attention mechanism. Contrastive learning is able to close the distance for images of the same category and generate a better feature space for classification. An attention mechanism is able to emphasize an important area in the image and visualize the location related to classification. Through experiments conducted on two types of classification using a three-fold cross validation, we confirmed that the classification accuracy was significantly improved; in addition, a more detailed visual explanation was achieved in comparison with conventional methods. | Scientific reports | 2022-12-03T00:00:00 | [
"SotaKato",
"MasahiroOda",
"KensakuMori",
"AkinobuShimizu",
"YoshitoOtake",
"MasahiroHashimoto",
"ToshiakiAkashi",
"KazuhiroHotta"
] | 10.1038/s41598-022-24936-6
10.1148/radiol.2020200905
10.1016/j.ejrad.2020.109041
10.1109/ACCESS.2020.3005510
10.1016/j.asoc.2020.106885
10.1109/TCBB.2021.3065361
10.1016/j.compbiomed.2020.104037
10.3390/diagnostics11050893
10.1016/j.patcog.2021.107826
10.1016/j.patcog.2021.107848
10.1016/j.media.2021.102105
10.1007/s11263-021-01559-4
10.1148/radiol.2020200905
10.1109/TMI.2020.2995965 |
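The triple-learning framework above uses contrastive learning to pull same-category images together in feature space. The classic pairwise contrastive loss illustrates the idea (this is the generic Hadsell-style form, not necessarily the exact loss used in the paper):

```python
# Classic pairwise contrastive loss: pull same-class embeddings together and
# push different-class embeddings at least `margin` apart. A generic
# illustration of the idea, not the paper's specific objective.
import numpy as np

def contrastive_loss(z1, z2, same_class, margin=1.0):
    d = np.linalg.norm(z1 - z2)
    if same_class:
        return 0.5 * d ** 2                      # penalize distant positives
    return 0.5 * max(0.0, margin - d) ** 2       # penalize close negatives

a = np.array([0.0, 0.0])
b = np.array([0.6, 0.8])                         # Euclidean distance 1.0
pos = contrastive_loss(a, b, same_class=True)    # distant positive pair: penalized
neg = contrastive_loss(a, b, same_class=False)   # negative pair at the margin: ~0
```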
A lightweight network for COVID-19 detection in X-ray images. | The Novel Coronavirus 2019 (COVID-19) is a global pandemic which has a devastating impact. Due to its quick transmission, a prominent challenge in confronting this pandemic is the rapid diagnosis. Currently, the commonly-used diagnosis is the specific molecular tests aided with the medical imaging modalities such as chest X-ray (CXR). However, with the large demand, the diagnoses of CXR are time-consuming and laborious. Deep learning is promising for automatically diagnosing COVID-19 to ease the burden on medical systems. At present, the most applied neural networks are large, which hardly satisfy the rapid yet inexpensive requirements of COVID-19 detection. To reduce huge computation and memory demands, in this paper, we focus on implementing lightweight networks for COVID-19 detection in CXR. Concretely, we first augment data based on clinical visual features of CXR from expertise. Then, according to the fact that all the input data are CXR, we design a targeted four-layer network with either 11 × 11 or 3 × 3 kernels to recognize regional features and detail features. A pruning criterion based on the weights importance is also proposed to further prune the network. Experiments on a public COVID-19 dataset validate the effectiveness and efficiency of the proposed method. | Methods (San Diego, Calif.) | 2022-12-03T00:00:00 | [
"YongShi",
"AndaTang",
"YangXiao",
"LingfengNiu"
] | 10.1016/j.ymeth.2022.11.004 |
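The lightweight network above is pruned with a criterion based on weight importance. Plain magnitude pruning is the simplest instance of that idea and can stand in for it here (the paper's actual criterion may differ):

```python
# Magnitude pruning sketch: zero out the smallest-magnitude fraction of a
# weight tensor. A simple stand-in for the importance-based criterion
# described above, not the paper's exact rule.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the `sparsity` fraction of
    smallest-magnitude entries set to zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(7)
w = rng.normal(size=(8, 8))
w_pruned = magnitude_prune(w, sparsity=0.5)       # half the weights zeroed
```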
Radiomorphological signs and clinical severity of SARS-CoV-2 lineage B.1.1.7. | We aimed to assess the differences in the severity and chest-CT radiomorphological signs of SARS-CoV-2 B.1.1.7 and non-B.1.1.7 variants.
We collected clinical data of consecutive patients with laboratory-confirmed COVID-19 and chest-CT imaging who were admitted to the Emergency Department between September 1- November 13, 2020 (non-B.1.1.7 cases) and March 1-March 18, 2021 (B.1.1.7 cases). We also examined the differences in the severity and radiomorphological features associated with COVID-19 pneumonia. Total pneumonia burden (%), mean attenuation of ground-glass opacities and consolidation were quantified using deep-learning research software.
The final population comprised 500 B.1.1.7 and 500 non-B.1.1.7 cases. Patients with B.1.1.7 infection were younger (58.5 ± 15.6 vs 64.8 ± 17.3;
Although B.1.1.7 patients were younger and had fewer comorbidities, they experienced more severe disease than non-B.1.1.7 patients; however, the risk of death was the same in the two groups.
Our study provides data on deep-learning based quantitative lung lesion burden and clinical outcomes of patients infected by B.1.1.7 VOC. Our findings might serve as a model for later investigations, as new variants are emerging across the globe. | BJR open | 2022-12-02T00:00:00 | [
"JuditSimon",
"KajetanGrodecki",
"SebastianCadet",
"AdityaKillekar",
"PiotrSlomka",
"Samuel JamesZara",
"EmeseZsarnóczay",
"ChiaraNardocci",
"NorbertNagy",
"KatalinKristóf",
"BarnaVásárhelyi",
"VeronikaMüller",
"BélaMerkely",
"DaminiDey",
"PálMaurovich-Horvat"
] | 10.1259/bjro.20220016
10.1126/science.abg3055
10.1016/S1473-3099(21)00170-5
10.1136/bmj.n579
10.2807/1560-7917.ES.2021.26.11.2100256
10.1007/s00330-020-06748-2
10.1148/radiol.2020200843
10.2214/AJR.20.22954
10.1148/radiol.2020200370
10.1148/radiol.2020200463
10.1016/S1473-3099(20)30483-7
10.1148/ryct.2020200389
10.1016/j.ejrad.2020.108961
10.1148/radiol.2020200432
10.1136/bmj.m1464
10.1016/S1473-3099(20)30086-4
10.1556/1647.2020.00002
10.1038/s41379-020-0536-x
10.1016/S0140-6736(20)30566-3
10.1038/s41598-020-76550-z
10.1007/s00330-021-07715-1
10.1183/13993003.00775-2020
10.1016/j.compbiomed.2020.103795 |
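The study above quantifies total pneumonia burden (%) with deep-learning software. Conceptually, burden is the lesion volume as a percentage of the total lung volume; a toy sketch with boolean 3D masks (the actual software's computation is not reproduced here):

```python
# Pneumonia burden as the study defines it conceptually: lesion voxels as a
# percentage of lung voxels, computed from toy boolean 3D masks.
import numpy as np

lung_mask = np.zeros((4, 10, 10), dtype=bool)
lung_mask[:, 2:8, 2:8] = True                    # 4 * 36 = 144 lung voxels
lesion_mask = np.zeros_like(lung_mask)
lesion_mask[:, 4:6, 4:6] = True                  # 4 * 4 = 16 lesion voxels

burden_pct = 100.0 * (lesion_mask & lung_mask).sum() / lung_mask.sum()
```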
3D CT-Inclusive Deep-Learning Model to Predict Mortality, ICU Admittance, and Intubation in COVID-19 Patients. | Chest CT is a useful initial exam in patients with coronavirus disease 2019 (COVID-19) for assessing lung damage. AI-powered predictive models could be useful to better allocate resources in the midst of the pandemic. Our aim was to build a deep-learning (DL) model for COVID-19 outcome prediction inclusive of 3D chest CT images acquired at hospital admission. This retrospective multicentric study included 1051 patients (mean age 69, SD = 15) who presented to the emergency department of three different institutions between 20th March 2020 and 20th January 2021 with COVID-19 confirmed by real-time reverse transcriptase polymerase chain reaction (RT-PCR). Chest CT at hospital admission were evaluated by a 3D residual neural network algorithm. Training, internal validation, and external validation groups included 608, 153, and 290 patients, respectively. Images, clinical, and laboratory data were fed into different customizations of a dense neural network to choose the best performing architecture for the prediction of mortality, intubation, and intensive care unit (ICU) admission. The AI model tested on CT and clinical features displayed accuracy, sensitivity, specificity, and ROC-AUC, respectively, of 91.7%, 90.5%, 92.4%, and 95% for the prediction of patient's mortality; 91.3%, 91.5%, 89.8%, and 95% for intubation; and 89.6%, 90.2%, 86.5%, and 94% for ICU admission (internal validation) in the testing cohort. The performance was lower in the validation cohort for mortality (71.7%, 55.6%, 74.8%, 72%), intubation (72.6%, 74.7%, 45.7%, 64%), and ICU admission (74.7%, 77%, 46%, 70%) prediction. The addition of the available laboratory data led to an increase in sensitivity for patient's mortality (66%) and specificity for intubation and ICU admission (50%, 52%, respectively), while the other metrics maintained similar performance results. 
We present a deep-learning model to predict mortality, ICU admittance, and intubation in COVID-19 patients. KEY POINTS: • The 3D CT-based deep learning model predicted mortality, ICU admittance, and intubation in COVID-19 patients in the internal validation set with high accuracy, sensitivity, and specificity (> 90%). • The model slightly increased prediction results when laboratory data were added to the analysis, despite data imbalance. However, the model accuracy dropped when CT images were not considered in the analysis, implying an important role of CT in predicting outcomes. | Journal of digital imaging | 2022-12-01T00:00:00 | [
"AlbertoDi Napoli",
"EmanuelaTagliente",
"LucaPasquini",
"EnricaCipriano",
"FilomenaPietrantonio",
"PiermariaOrtis",
"SimonaCurti",
"AlessandroBoellis",
"TeseoStefanini",
"AntonioBernardini",
"ChiaraAngeletti",
"Sofia ChiatamoneRanieri",
"PaolaFranchi",
"Ioan PaulVoicu",
"CarloCapotondi",
"AntonioNapolitano"
] | 10.1007/s10278-022-00734-4 |
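The model above feeds 3D CT image features together with clinical and laboratory data into a dense network. A common way to do this is late fusion by feature concatenation; a shape-level sketch (the dimensions and variables are illustrative, not the paper's architecture):

```python
# Late-fusion sketch: concatenate a (hypothetical) 128-dim CNN feature vector
# with a few clinical variables before a dense classification head. Shapes and
# variable choices are illustrative assumptions, not the paper's design.
import numpy as np

image_features = np.random.default_rng(3).normal(size=(2, 128))  # from the 3D CNN
clinical = np.array([[69.0, 1.0, 0.0],                           # e.g. age, sex, comorbidity
                     [55.0, 0.0, 1.0]])
fused = np.concatenate([image_features, clinical], axis=1)       # (2, 131) input
```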
Automatic detection of Covid-19 from chest X-ray and lung computed tomography images using deep neural networks and transfer learning. | The world has been undergoing the most unprecedented circumstances caused by the coronavirus pandemic, which is having a devastating global effect on different aspects of life. Since there are no effective antiviral treatments for Covid-19 yet, it is crucial to detect early and monitor the progression of the disease, thereby helping to reduce mortality. While different measures are being used to combat the virus, medical imaging techniques have been examined to support doctors in diagnosing the disease. In this paper, we present a practical solution for the detection of Covid-19 from chest X-ray (CXR) and lung computed tomography (LCT) images, exploiting cutting-edge machine learning techniques. As the main classification engine, we make use of EfficientNet and MixNet, two recently developed families of deep neural networks. Furthermore, to make the training more effective and efficient, we apply three transfer learning algorithms. The ultimate aim is to build a reliable expert system to detect Covid-19 from different sources of images, making it a multi-purpose AI diagnostic system. We validated our proposed approach using four real-world datasets. The first two are CXR datasets consisting of 15,000 and 17,905 images, respectively. The other two are LCT datasets with 2,482 and 411,528 images, respectively. The five-fold cross-validation methodology was used to evaluate the approach, where the dataset is split into five parts and the evaluation is conducted in five rounds. In each evaluation, four parts are combined to form the training data, and the remaining one is used for testing. We obtained an encouraging prediction performance for all the considered datasets. In all the configurations, the obtained accuracy is always larger than 95.0%.
Compared to various existing studies, our approach yields a substantial performance gain. Moreover, such an improvement is statistically significant. | Applied soft computing | 2022-12-01T00:00:00 | [
"Linh TDuong",
"Phuong TNguyen",
"LudovicoIovino",
"MicheleFlammini"
] | 10.1016/j.asoc.2022.109851
10.1109/TITS.2021.3053373
10.1016/j.eswa.2021.115519
10.1016/j.eswa.2017.12.020
10.1038/s41591-020-0931-3
10.1007/s11263-015-0816-y
10.1016/j.compag.2018.02.016
10.1186/s40537-016-0043-6
10.1007/11564096_40
10.3390/rs9090907
10.1016/j.compag.2020.105326
10.1101/2020.04.24.20078584
10.1016/j.cell.2020.04.045
10.1016/j.asoc.2020.106691
10.1016/j.tube.2022.102234
10.1016/j.asoc.2021.107323
10.1016/j.compbiomed.2020.103792
10.1101/2020.06.08.20125963
10.1109/CVPR.2016.90
10.1007/s11042-021-10783-6
10.1038/s41598-021-99015-3
10.1007/978-1-4612-4380-9_16
10.1101/2020.06.08.20121541
10.36227/techrxiv.12328061
10.1016/j.compbiomed.2020.103795
10.1148/radiol.2020201491
10.1148/radiol.2020201491
10.1101/2020.08.13.20173997 |
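The five-fold cross-validation used above (four parts for training, one for testing, five rounds) can be sketched as index splitting:

```python
# Five-fold cross-validation index splitting, as described above: shuffle,
# split into five folds, and in each round hold one fold out for testing.
import numpy as np

def five_fold_indices(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        yield train, test

splits = list(five_fold_indices(100))
```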
Mentoring within the medical radiation sciences - Establishing a national program. | The aim of this study was to compare the accuracy and performance of 12 pre-trained deep learning models for classifying COVID-19 and normal chest X-ray images from Kaggle.
A desktop computer with an Intel CPU i9-10900 2.80GHz and NVIDIA GPU GeForce RTX2070 SUPER was used, running Anaconda3 software with 12 pre-trained models including VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet50V2, ResNet101V2, ResNet152V2, InceptionResNetV2, InceptionV3, XceptionV1 and MobileNetV2, on COVID-19 and normal chest X-ray images from the Kaggle website.
The images were divided into train, test, and validation sets using a ratio of 70:20:10, respectively. The performance was recorded for each pre-trained model with hyperparameters of epochs, batch size, and learning rate set to 16, 16 and 0.0001, respectively. The prediction results of each model were recorded and compared.
Of all 12 pre-trained deep learning models, the five with the highest validation accuracy were DenseNet169, DenseNet201, InceptionV3, DenseNet121 and InceptionResNetV2, respectively.
The top-5 highest-accuracy models for classifying COVID-19 were DenseNet169, DenseNet201, InceptionV3, DenseNet121 and InceptionResNetV2, with accuracies of 95.4%, 95.07%, 94.73%, 94.51% and 93.61%, respectively. | Journal of medical imaging and radiation sciences | 2022-11-29T00:00:00 | [
"AllieTonks",
"FranziskaJerjen"
] | 10.1016/j.jmir.2022.10.190 |
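The 70:20:10 train/test/validation split described above can be sketched as follows (a minimal shuffling sketch; the function name, seed, and use of index lists are illustrative, not from the study):

```python
import random

def train_test_val_split(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle items and split into train/test/validation subsets by ratio
    (70:20:10 as in the study above)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]
    return train, test, val

train, test, val = train_test_val_split(range(1000))
```

In practice the same split indices would be reused for every one of the 12 models so their accuracies are directly comparable.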
Optimal Ensemble learning model for COVID-19 detection using chest X-ray images. | The COVID-19 pandemic is a major global outbreak that has had a severe impact on people's lives in more than 150 countries. The major steps in fighting COVID-19 are identifying affected patients as early as possible and placing them under special care. Images from radiology and radiography are among the most effective tools for determining a patient's ailment. Recent studies have shown detailed abnormalities in the chest radiograms of patients affected with COVID-19. The purpose of this work is to present a COVID-19 detection system with three key steps: "(i) preprocessing, (ii) feature extraction, (iii) classification." First, the input image is given to the preprocessing step, after which deep features and texture features are extracted from the preprocessed image. In particular, the deep features are extracted by InceptionV3. Then, features such as the proposed Local Vector Patterns (LVP) and Local Binary Patterns (LBP) are extracted from the preprocessed image. The extracted features are then subjected to the proposed ensemble-model-based classification phase, which includes Support Vector Machine (SVM), Convolutional Neural Network (CNN), Optimized Neural Network (NN), and Random Forest (RF). A novel Self Adaptive Krill Herd Optimization (SAKHO) approach is used to properly tune the weights of the NN to improve classification accuracy and precision. The performance of the proposed method is then compared to that of conventional approaches using a variety of metrics, including recall, FNR, MCC, FDR, Threat score, FPR, precision, FOR, accuracy, specificity, NPV, FMS, and sensitivity. | Biomedical signal processing and control | 2022-11-29T00:00:00 | [
"SBalasubramaniam",
"KSatheesh Kumar"
] | 10.1016/j.bspc.2022.104392
10.1109/TBDATA.2020.3035935
10.1109/ACCESS.2020.3033762
10.3233/HIS-120161
10.1504/IJCSE.2013.053087
10.1007/s40747-020-00216-6
10.1109/ICIP.2014.7025047 |
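The Local Binary Pattern (LBP) texture descriptor mentioned in the abstract above can be illustrated with a minimal 3x3 sketch. The neighbour ordering below is one common convention chosen for illustration; published implementations vary:

```python
def lbp_code(img, y, x):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against
    the centre pixel and pack the resulting bits into one byte."""
    c = img[y][x]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

patch = [[5, 9, 1],
         [4, 6, 7],
         [2, 6, 3]]
code = lbp_code(patch, 1, 1)  # centre 6: neighbours 9, 7, 6 set bits 1, 3, 5
```

A histogram of these per-pixel codes over the whole image then serves as the texture feature vector fed to the classifiers.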
Deep Learning-Based Computer-Aided Diagnosis (CAD): Applications for Medical Image Datasets. | Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system with the intent to perform diagnosis as accurately as possible. Deep learning methods have been able to produce impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoders are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm. Ant colony optimization helps to search for the optimal features while reducing the amount of data. Lastly, diagnosis prediction (classification) is achieved using learnable classifiers. The novel framework for the extraction and selection of features is based on deep learning, auto-encoders, and ACO. The performance of the proposed approach is evaluated using two medical image datasets: chest X-ray (CXR) and magnetic resonance imaging (MRI), for the prediction of the existence of COVID-19 and brain tumors. Accuracy is used as the main measure to compare the performance of the proposed approach with existing state-of-the-art methods. The proposed system achieves average accuracies of 99.61% and 99.18%, outperforming all other methods in diagnosing the presence of COVID-19 and brain tumors, respectively. Based on the achieved results, it can be claimed that physicians or radiologists can confidently utilize the proposed approach for diagnosing COVID-19 patients and patients with specific brain tumors. | Sensors (Basel, Switzerland) | 2022-11-27T00:00:00 | [
"Yezi AliKadhim",
"Muhammad UmerKhan",
"AlokMishra"
] | 10.3390/s22228999
10.3389/fmed.2020.00027
10.1097/00004424-196601000-00032
10.1118/1.3013555
10.3390/s22218326
10.1148/83.6.1029
10.1353/pbm.1992.0011
10.1016/j.compmedimag.2007.02.002
10.1002/mp.13764
10.1007/s10489-020-02002-w
10.1164/ajrccm.153.1.8542102
10.12928/telkomnika.v15i4.3163
10.1056/NEJM199105303242205
10.1016/j.eswa.2020.113274
10.1016/j.ijmedinf.2019.06.017
10.1016/j.compbiomed.2019.103345
10.1007/s10278-013-9600-0
10.1080/21681163.2016.1138324
10.22146/ijeis.34713
10.1016/j.jocs.2018.12.003
10.3390/e24070869
10.3390/life12111709
10.3390/s22155880
10.1016/j.conbuildmat.2017.09.110
10.1016/j.compeleceng.2018.07.042
10.1186/s40537-019-0276-2
10.1038/s41598-020-76550-z
10.1016/j.compbiomed.2020.103792
10.1016/j.compbiomed.2020.103795
10.1186/s41256-020-00135-6
10.1101/2020.08.31.20175828
10.1371/journal.pone.0157112
10.1109/ACCESS.2019.2912200
10.1109/MGRS.2018.2853555
10.1109/MCI.2006.329691
10.1016/j.tcs.2005.05.020
10.1016/j.engappai.2014.03.007
10.1016/j.eswa.2006.04.010
10.1016/j.eswa.2015.07.007
10.1016/j.eswa.2014.04.019
10.1371/journal.pone.0140381
10.1016/j.bspc.2019.101678
10.3390/ijerph17124204
10.20944/preprints202003.0300.v1
10.1016/j.chaos.2020.110210
10.1016/j.patrec.2020.09.010
10.1016/j.cor.2021.105359
10.1016/j.ins.2017.12.047
10.1023/B:ANOR.0000039523.95673.33
10.1016/j.imu.2020.100330
10.1155/2021/6621540
10.3390/sym11020157
10.1109/ACCESS.2021.3076756
10.1109/ACCESS.2021.3051723 |
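The ACO feature-selection step described in the abstract above can be sketched as a toy pheromone-guided subset search. This is a deliberately simplified illustration: the paper's actual ACO variant, heuristic information, parameters, and fitness function are not specified here, and `aco_select` with its toy score are assumptions for the sketch:

```python
import random

def aco_select(n_features, k, score_fn, n_ants=10, n_iters=20,
               evaporation=0.1, seed=0):
    """Toy ant colony optimisation for feature selection: each ant samples a
    k-feature subset with probability proportional to pheromone; the best
    subset found so far deposits pheromone on its features each iteration."""
    rng = random.Random(seed)
    pher = [1.0] * n_features
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # sample k distinct features, pheromone-weighted
            subset, pool = [], list(range(n_features))
            for _ in range(k):
                f = rng.choices(pool, weights=[pher[i] for i in pool])[0]
                pool.remove(f)
                subset.append(f)
            s = score_fn(subset)
            if s > best_score:
                best_subset, best_score = sorted(subset), s
        # evaporate, then reinforce the best-so-far subset
        pher = [(1 - evaporation) * p for p in pher]
        for f in best_subset:
            pher[f] += 1.0
    return best_subset, best_score

# toy score: pretend low-index features are the informative ones
best, score = aco_select(20, 5, lambda s: -sum(s))
```

In the paper's pipeline the score would come from classifier accuracy on the candidate feature subset rather than this toy objective.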
Dual_Pachi: Attention-based dual path framework with intermediate second order-pooling for Covid-19 detection from chest X-ray images. | Numerous machine learning and image processing algorithms, most recently deep learning, allow the recognition and classification of COVID-19 disease in medical images. However, feature extraction, or the semantic gap between low-level visual information collected by imaging modalities and high-level semantics, is the fundamental shortcoming of these techniques. On the other hand, several techniques focused on first-order feature extraction of the chest X-ray, thus making the employed models less accurate and robust. This study presents Dual_Pachi: an attention-based dual path framework with intermediate second order-pooling for more accurate and robust chest X-ray feature extraction for Covid-19 detection. Dual_Pachi consists of 4 main building blocks: block one converts the received chest X-ray image to CIE LAB coordinates (L and AB channels, which are separated at the first three layers of a modified Inception V3 architecture). Block two further exploits the global features extracted from block one via a global second-order pooling, while block three focuses on the low-level visual information and the high-level semantics of chest X-ray image features using a multi-head self-attention and an MLP layer without sacrificing performance. Finally, the fourth block is the classification block, where classification is done using fully connected layers and SoftMax activation. Dual_Pachi is designed and trained in an end-to-end manner. According to the results, Dual_Pachi outperforms traditional deep learning models and other state-of-the-art approaches described in the literature with an accuracy of 0.96656 (Data_A) and 0.97867 (Data_B) for the Dual_Pachi approach and an accuracy of 0.95987 (Data_A) and 0.968 (Data_B) for the Dual_Pachi model without the attention block. 
A Grad-CAM-based visualization is also built to highlight where the applied attention mechanism is concentrated. | Computers in biology and medicine | 2022-11-25T00:00:00 | [
"Chiagoziem CUkwuoma",
"ZhiguangQin",
"Victor KAgbesi",
"Bernard MCobbinah",
"Sophyani BYussif",
"Hassan SAbubakar",
"Bona DLemessa"
] | 10.1016/j.compbiomed.2022.106324
10.1142/S0218339020500096
10.7150/ijbs.45053
10.1016/j.jare.2020.03.005
10.1016/j.cpcardiol.2020.100618
10.1016/j.diii.2020.03.014
10.1016/S1473-3099(20)30190-0
10.1002/jmv.25721
10.1002/jmv.25786
10.1148/radiol.2020200642
10.1016/j.jcct.2011.07.001
10.1109/prai53619.2021.9551094
10.1007/s10278-017-9983-4
10.3390/s20113243
10.1016/j.compbiomed.2020.103795
10.1007/s10278-019-00227-x
10.1007/s13246-020-00888-x
10.1093/cid/ciaa1383
10.1016/j.compbiomed.2020.103869
10.1016/j.bspc.2021.102696
10.1016/j.eswa.2021.114576
10.3390/diagnostics12051152
10.1016/j.sciaf.2022.e01151
10.1016/j.patcog.2020.107613
10.1016/j.compbiomed.2020.103792
10.1016/j.pdpdt.2021.102473
10.1038/s41598-020-76550-z
10.1016/j.patrec.2020.09.010
10.1007/s00330-021-07715-1
10.1007/s10044-021-00984-y
10.1016/j.media.2020.101794
10.1007/s10489-020-01902-1
10.1007/s13246-020-00865-4
10.1109/SSCI47803.2020.9308571
10.1016/j.chaos.2020.109944
10.33889/IJMEMS.2020.5.4.052
10.1109/ICCC51575.2020.9344870
10.1007/s42600-021-00151-6
10.1016/j.chaos.2020.110495
10.1016/j.compbiomed.2020.104181
10.1007/s10916-021-01745-4
10.1016/j.compbiomed.2021.104816
10.1016/j.knosys.2022.108207
10.3233/idt-210002
10.1007/s12065-021-00679-7
10.1016/j.eswa.2020.114054
10.1007/s00354-021-00152-0
10.34133/2019/9237136
10.1109/ACCESS.2020.3010287
10.3390/covid1010034
10.1016/j.cmpb.2020.105581
10.1109/JBHI.2021.3058293
10.1016/j.asoc.2022.108867
10.1109/JBHI.2021.3074893
10.3390/s22031211
10.3390/ijerph182111086
10.1007/s00521-019-04332-4 |
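The global second-order pooling used in Dual_Pachi's block two can be illustrated as covariance pooling over spatial feature vectors: instead of averaging channels (first-order), it summarizes how channels co-vary. This is a plain-Python sketch of the general technique, not the paper's exact implementation:

```python
def second_order_pool(features):
    """Global second-order pooling: given an N x C set of feature vectors
    (N spatial positions, C channels), return the C x C covariance matrix,
    capturing channel co-occurrence statistics beyond average pooling."""
    n = len(features)
    c = len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(c)]
    cov = [[0.0] * c for _ in range(c)]
    for f in features:
        d = [f[j] - mean[j] for j in range(c)]
        for i in range(c):
            for j in range(c):
                cov[i][j] += d[i] * d[j] / n
    return cov

# 3 spatial positions, 2 channels
feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
cov = second_order_pool(feats)
```

The resulting C x C matrix (often flattened, with only the upper triangle kept since it is symmetric) then feeds the subsequent layers.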
Automated Lung-Related Pneumonia and COVID-19 Detection Based on Novel Feature Extraction Framework and Vision Transformer Approaches Using Chest X-ray Images. | According to research, classifiers and detectors are less accurate when images are blurry, have low contrast, or have other flaws, which raises questions about a machine learning model's ability to recognize items effectively. The chest X-ray image has proven to be the preferred image modality for medical imaging, as it contains more information about a patient. Its interpretation is quite difficult, nevertheless. The goal of this research is to construct a reliable deep-learning model capable of producing high classification accuracy on chest X-ray images for lung diseases. To enable a thorough study of the chest X-ray image, the suggested framework first derives richer features using an ensemble technique, then applies global second-order pooling to further derive higher-level global features of the images. Furthermore, the images are then separated into patches with position embeddings before the patches are analyzed individually via a vision transformer approach. The proposed model yielded 96.01% sensitivity, 96.20% precision, and 98.00% accuracy for the COVID-19 Radiography Dataset while achieving 97.84% accuracy, 96.76% sensitivity and 96.80% precision for the Covid-ChestX-ray-15k dataset. The experimental findings reveal that the presented models outperform traditional deep learning models and other state-of-the-art approaches provided in the literature. | Bioengineering (Basel, Switzerland) | 2022-11-25T00:00:00 | [
"Chiagoziem CUkwuoma",
"ZhiguangQin",
"Md Belal BinHeyat",
"FaijanAkhtar",
"AblaSmahi",
"Jehoiada KJackson",
"SyedFurqan Qadri",
"Abdullah YMuaad",
"Happy NMonday",
"Grace UNneji"
] | 10.3390/bioengineering9110709
10.3390/bioengineering9070305
10.1007/s42399-020-00527-2
10.1007/s00415-020-10067-3
10.1007/s42399-020-00383-0
10.1155/2022/9210947
10.1109/ACCESS.2022.3194152
10.1155/2022/5641727
10.1016/j.jare.2022.08.021
10.1186/s12890-020-01286-5
10.1007/s11042-019-08394-3
10.3390/bioengineering9040172
10.1155/2022/5718501
10.1109/ACCESS.2019.2928020
10.3390/bios12060427
10.1155/2022/3599246
10.2174/1871527319666201110124954
10.3390/app10217410
10.2174/1389450121666201027125828
10.1145/3465055
10.32604/cmc.2021.014134
10.3390/diagnostics10090649
10.1155/2019/4180949
10.1007/s10916-021-01745-4
10.1016/j.compeleceng.2019.08.004
10.1016/j.cmpb.2019.06.023
10.3390/app10020559
10.1016/j.measurement.2020.108046
10.1016/j.bspc.2020.102365
10.1016/j.compbiomed.2020.103792
10.1016/j.chaos.2021.110713
10.1016/j.compbiomed.2021.104375
10.1016/j.chaos.2021.110749
10.1016/j.patcog.2021.108255
10.1007/s10044-021-00984-y
10.1038/s41598-020-76550-z
10.1007/s11063-022-10834-5
10.1109/ACCESS.2020.3010287
10.3390/covid1010034
10.1155/2022/9475162
10.3389/fnins.2021.754058
10.1109/ACCESS.2022.3212120
10.1155/2022/3408501
10.3390/app12031344
10.31083/j.jin2101020
10.1016/b978-0-323-99031-8.00012-0
10.3390/diagnostics12112815
10.1016/j.cmpb.2020.105581
10.1109/JBHI.2021.3058293
10.1016/j.asoc.2022.108867
10.1109/JBHI.2021.3074893
10.3390/s22031211
10.18280/ts.380337
10.18201/ijisae.2020466310
10.3233/XST-211005
10.1016/j.cell.2018.02.010
10.1007/s12559-020-09787-5
10.1016/j.chaos.2020.109944 |
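The patch-splitting step that precedes the vision transformer above can be sketched as follows (a minimal tiling sketch on nested lists; real pipelines work on tensors and add a learned position embedding per patch index, which is omitted here):

```python
def to_patches(img, patch):
    """Split an H x W image (list of rows) into non-overlapping
    patch x patch tiles, each flattened row-major, in the left-to-right,
    top-to-bottom order a vision transformer consumes them."""
    h, w = len(img), len(img[0])
    assert h % patch == 0 and w % patch == 0
    patches = []
    for py in range(0, h, patch):
        for px in range(0, w, patch):
            tile = [img[py + dy][px + dx]
                    for dy in range(patch) for dx in range(patch)]
            patches.append(tile)
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 image
patches = to_patches(img, 2)
```

Each tile's index in `patches` is what the position embedding encodes, so the transformer can recover spatial layout despite processing the patches as a sequence.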
Portable, Automated and Deep-Learning-Enabled Microscopy for Smartphone-Tethered Optical Platform Towards Remote Homecare Diagnostics: A Review. | Globally, new pandemic diseases create urgent demand for portable diagnostic systems to prevent and control infectious diseases. Smartphone-based portable diagnostic devices are highly efficient, user-friendly tools for tracking personalized health conditions and collecting valuable optical information for rapid diagnosis and biomedical research through at-home screening. Deep learning algorithms for portable microscopes also help to enhance diagnostic accuracy by reducing the imaging resolution gap between benchtop and portable microscopes. This review highlights recent progress and continued efforts in smartphone-tethered optical platforms based on portable, automated, and deep-learning-enabled microscopy for personalized diagnostics and remote monitoring. In detail, optical platforms built on smartphone-based microscopes and lens-free holographic microscopy are introduced, and deep-learning-based portable microscopic imaging is explained as a means to improve the image resolution and accuracy of diagnostics. The challenges and prospects of portable optical systems with microfluidic channels and a compact microscope for screening COVID-19 in the current pandemic are also discussed. We believe this review offers a novel guide for rapid diagnosis, biomedical imaging, and digital healthcare with low cost and portability. | Small methods | 2022-11-25T00:00:00 | [
"KisooKim",
"Won GuLee"
] | 10.1002/smtd.202200979 |
A systematic review: Chest radiography images (X-ray images) analysis and COVID-19 categorization diagnosis using artificial intelligence techniques. | The COVID-19 pandemic created turmoil across nations due to Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). The severity of COVID-19 symptoms ranges from a cold and breathing problems to respiratory-system issues that may lead to life-threatening situations. The disease is highly contagious and is transmitted from person to person; contamination spreads when organs such as the eyes, nose, and mouth come into contact with contaminated fluids. The virus can be screened by performing a nasopharyngeal swab test, which is time-consuming, so physicians prefer fast detection methods such as chest radiography images and CT scans. At times, confusion can arise in identifying the exact disorder from chest radiography images. To overcome this issue, this study reviews several deep learning and machine learning procedures to be implemented on X-ray images of the chest. This also helps professionals identify malfunctions occurring in the chest other than COVID-19. This review can act as guidance for doctors and radiologists in identifying COVID-19 and other viruses causing illness in the human anatomy and can help provide aid promptly. | Network (Bristol, England) | 2022-11-25T00:00:00 | [
"SaravananSuba",
"MMuthulakshmi"
] | 10.1080/0954898X.2022.2147231 |
Deep progressive learning achieves whole-body low-dose | To validate a total-body PET-guided deep progressive learning reconstruction method (DPR) for low-dose
List-mode data from the retrospective study (n = 26) were rebinned into short-duration scans and reconstructed with DPR. The standard uptake value (SUV) and tumor-to-liver ratio (TLR) in lesions and coefficient of variation (COV) in the liver in the DPR images were compared to the reference (OSEM images with full-duration data). In the prospective study, another 41 patients were injected with 1/3 of the activity based on the retrospective results. The DPR images (DPR_1/3(p)) were generated and compared with the reference (OSEM images with extended acquisition time). The SUV and COV were evaluated in three selected organs: liver, blood pool and muscle. Quantitative analyses were performed with lesion SUV and TLR, furthermore on small lesions (≤ 10 mm in diameter). Additionally, a 5-point Likert scale visual analysis was performed on the following perspectives: contrast, noise and diagnostic confidence.
In the retrospective study, DPR with one-third scan duration maintained image quality comparable to the reference. In the prospective study, good agreement among the SUVs was observed in all selected organs. The quantitative results showed that there was no significant difference in COV between the DPR_1/3(p) group and the reference, while the visual analysis showed no significant differences in image contrast, noise and diagnostic confidence. The lesion SUVs and TLRs in the DPR_1/3(p) group were significantly enhanced compared with the reference, even for small lesions.
The proposed DPR method can reduce the administered activity of | EJNMMI physics | 2022-11-23T00:00:00 | [
"TaisongWang",
"WenliQiao",
"YingWang",
"JingyiWang",
"YangLv",
"YunDong",
"ZhengQian",
"YanXing",
"JinhuaZhao"
] | 10.1186/s40658-022-00508-5
10.1007/s00259-014-2961-x
10.1016/S0377-1237(09)80099-3
10.2967/jnumed.107.047787
10.1007/s12350-016-0522-3
10.1016/j.nuclcard.2007.04.006
10.1136/jnnp.2003.028175
10.2967/jnumed.117.200790
10.1097/RLU.0000000000003075
10.1007/s00259-020-05167-1
10.1186/s13550-020-00695-1
10.1007/s00259-021-05197-3
10.1007/s00259-021-05478-x
10.1109/TRPMS.2020.3014786
10.1088/1361-6560/abfb17
10.1007/s00247-006-0191-5
10.1007/s00247-009-1404-5
10.1259/bjr/01948454
10.1007/s00259-020-05091-4
10.1007/s00259-021-05304-4
10.1007/s00259-021-05462-5
10.1007/s00259-021-05537-3
10.2967/jnumed.121.262038
10.1007/s00259-021-05592-w
10.1259/bjr.20201356
10.1186/s13550-019-0536-3
10.1007/s00259-017-3893-z
10.1186/s13550-019-0565-y |
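The quantitative measures used in the PET study above, the tumor-to-liver ratio (TLR) and the coefficient of variation (COV) as a liver noise metric, can be computed as follows (a minimal sketch; function names are illustrative):

```python
def tumor_to_liver_ratio(lesion_suv, liver_suv):
    """TLR: lesion uptake normalised by liver background uptake."""
    return lesion_suv / liver_suv

def coefficient_of_variation(values):
    """COV = standard deviation / mean over a region of interest,
    used as a measure of image noise in the liver."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return (var ** 0.5) / mean

tlr = tumor_to_liver_ratio(8.0, 2.0)             # -> 4.0
cov = coefficient_of_variation([2.0, 2.0, 2.0])  # zero spread -> 0.0
```

A lower COV at reduced dose, with TLR preserved or enhanced, is the pattern the study reports for the DPR reconstructions.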
COVID-19 classification using chest X-ray images based on fusion-assisted deep Bayesian optimization and Grad-CAM visualization. | The COVID-19 virus's rapid global spread has caused millions of illnesses and deaths. As a result, it has disastrous consequences for people's lives, public health, and the global economy. Clinical studies have revealed a link between the severity of COVID-19 cases and the amount of virus present in infected people's lungs. Imaging techniques such as computed tomography (CT) and chest X-rays (CXR) can detect COVID-19. Manual inspection of these images is a difficult process, so computerized techniques are widely used. Deep convolutional neural networks (DCNNs) are a type of machine learning that is frequently used in computer vision applications, particularly in medical imaging, to detect and classify infected regions. These techniques can assist medical personnel in the detection of patients with COVID-19. In this article, a Bayesian optimized DCNN and explainable AI-based framework is proposed for the classification of COVID-19 from chest X-ray images. The proposed method starts with a multi-filter contrast enhancement technique that increases the visibility of the infected part. Two pre-trained deep models, namely, EfficientNet-B0 and MobileNet-V2, are fine-tuned according to the target classes and then trained by employing Bayesian optimization (BO). Through BO, hyperparameters are selected instead of statically initialized. Features are extracted from the trained model and fused using a slicing-based serial fusion approach. The fused features are classified using machine learning classifiers for the final classification. Moreover, visualization is performed using Grad-CAM, which highlights the infected part in the image. Three publicly available COVID-19 datasets are used for the experimental process, obtaining improved accuracies of 98.8, 97.9, and 99.4%, respectively. 
| Frontiers in public health | 2022-11-22T00:00:00 | [
"AmeerHamza",
"MuhammadAttique Khan",
"Shui-HuaWang",
"MajedAlhaisoni",
"MeshalAlharbi",
"Hany SHussein",
"HammamAlshazly",
"Ye JinKim",
"JaehyukCha"
] | 10.3389/fpubh.2022.1046296
10.3390/s21020455
10.32604/cmc.2022.020140
10.1111/exsy.12776
10.1016/j.compbiomed.2022.105233
10.1155/2022/7672196
10.3389/fcomp.2020.00005
10.1016/j.bbi.2020.04.081
10.1148/radiol.2020200230
10.7717/peerj-cs.655
10.1109/TMI.2020.3040950
10.1109/TMI.2020.2993291
10.1109/TMI.2016.2528162
10.1155/2022/7377502
10.1016/j.media.2017.07.005
10.1038/nature14539
10.1148/radiol.2017162326
10.1007/s11263-015-0816-y
10.48550/arXiv.1602.07360
10.3389/fpubh.2022.948205
10.1155/2021/2560388
10.1016/j.compbiomed.2022.105213
10.3389/fmed.2020.00427
10.1371/journal.pone.0242535
10.1007/s13755-020-00119-3
10.3390/info11090419
10.1155/2020/8828855
10.1155/2022/4254631
10.1007/s00530-021-00826-1
10.1007/s42600-020-00120-5
10.1155/2022/1307944
10.3390/jpm12020309
10.3390/s21217286
10.1016/j.ecoinf.2020.101182
10.1007/s12530-020-09345-2
10.1016/j.chaos.2020.110511
10.1007/s00521-022-07052-4
10.1016/j.asoc.2020.106580
10.1016/j.compbiomed.2022.105244 |
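The "slicing-based serial fusion" named in the abstract above is not specified in detail here; one plausible reading is interleaving fixed-length slices of the two models' feature vectors into a single serial vector, sketched below. Treat `serial_fuse` and the slice length as assumptions for illustration; the paper's exact scheme may differ:

```python
def serial_fuse(feat_a, feat_b, slice_len=4):
    """Illustrative serial fusion: alternate fixed-length slices of two
    feature vectors (e.g. from EfficientNet-B0 and MobileNet-V2) into one
    fused vector consumed by a downstream classifier."""
    fused = []
    i = 0
    while i < len(feat_a) or i < len(feat_b):
        fused.extend(feat_a[i:i + slice_len])
        fused.extend(feat_b[i:i + slice_len])
        i += slice_len
    return fused

fused = serial_fuse(list(range(8)), list(range(100, 108)))
```

Serial (concatenative) fusion preserves both feature sets at the cost of a longer vector, unlike parallel fusion schemes that combine them element-wise.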
Efficient-ECGNet framework for COVID-19 classification and correlation prediction with the cardio disease through electrocardiogram medical imaging. | In the last 2 years, we have witnessed multiple waves of coronavirus that affected millions of people around the globe. A proper cure for COVID-19 has not been found, as even vaccinated people have become infected with this disease. Precise and timely detection of COVID-19 can save human lives and protect them from complicated treatment procedures. Researchers have employed several medical imaging modalities like CT-Scan and X-ray for COVID-19 detection; however, little attention has been given to ECG imaging analysis. ECGs are a more quickly available imaging modality than CT-Scan and X-ray; therefore, we use them for diagnosing COVID-19. Efficient and effective detection of COVID-19 from the ECG signal is a complex and time-consuming task, as researchers usually convert the signals into numeric values before applying any method, which ultimately increases the computational burden. In this work, we tried to overcome these challenges by directly employing the ECG images in a deep-learning (DL)-based approach. More specifically, we introduce an Efficient-ECGNet method that presents an improved version of the EfficientNetV2-B4 model with additional dense layers and is capable of accurately classifying the ECG images into healthy, COVID-19, myocardial infarction (MI), abnormal heartbeats (AHB), and patients with Previous History of Myocardial Infarction (PMI) classes. Moreover, we introduce a module to measure the similarity of COVID-19-affected ECG images with the rest of the diseases. To the best of our knowledge, this is the first effort to approximate the correlation of COVID-19 patients with those having any previous or current history of cardio or respiratory disease. Further, we generate the heatmaps to demonstrate the accurate key-points computation ability of our method. 
We have performed extensive experimentation on a publicly available dataset to show the robustness of the proposed approach and confirmed that the Efficient-ECGNet framework is reliable to classify the ECG-based COVID-19 samples. | Frontiers in medicine | 2022-11-22T00:00:00 | [
"MarriamNawaz",
"TahiraNazir",
"AliJaved",
"Khalid MahmoodMalik",
"Abdul Khader JilaniSaudagar",
"Muhammad BadruddinKhan",
"Mozaherul HoqueAbul Hasanat",
"AbdullahAlTameem",
"MohammedAlKhathami"
] | 10.3389/fmed.2022.1005920
10.1016/j.ejim.2020.06.015
10.1007/s15010-020-01401-y
10.1016/j.jinf.2020.02.016
10.1007/s13755-021-00169-1
10.7717/peerj-cs.386
10.1016/j.compeleceng.2020.106960
10.1109/ICAI52203.2021.9445258
10.1016/j.asoc.2020.106691
10.1016/j.asoc.2021.107323
10.1016/j.asoc.2021.107160
10.1155/2021/9619079
10.1155/2022/1575303
10.1016/j.compbiomed.2021.104575
10.1016/j.chaos.2020.110190
10.1016/j.bspc.2021.102588
10.1016/j.irbm.2021.01.004
10.1002/jemt.23578
10.1016/j.imu.2020.100360
10.1007/s10489-020-01826-w
10.1007/s00521-021-05910-1
10.1007/s10489-020-01902-1
10.1007/s10489-020-01943-6
10.1007/s11356-020-10133-3
10.21203/rs.3.rs-646890/v1
10.1007/s00521-020-05410-8
10.1186/s12911-021-01521-x
10.1109/JIOT.2021.3051080
10.1016/j.dib.2021.106762
10.1109/CVPR.2015.7298594
10.1016/j.inpa.2020.04.004
10.1109/CVPR.2016.90
10.1109/CVPR.2017.243
10.1109/CVPR.2018.00474
10.1109/CVPR.2018.00745
10.1109/LSP.2016.2573042
10.1109/TPAMI.2005.165
10.1049/cit2.12101
10.1007/s13369-021-06182-6
10.1007/s42979-020-0114-9
10.1109/ICCASIT50869.2020.9368658
10.1145/3341095 |
COVID-19 Data Analytics Using Extended Convolutional Technique. | The healthcare system, lifestyle, industrial growth, economy, and livelihood of human beings worldwide were affected by the global pandemic triggered by the COVID-19 virus, which originated and was first reported in Wuhan, China. COVID cases are difficult to predict and detect in their early stages, and their spread and mortality are uncontrollable. The reverse transcription polymerase chain reaction (RT-PCR) is still the first and foremost diagnostic methodology accepted worldwide; hence, there is scope for new diagnostic tools and detection techniques that can produce effective and faster results than their predecessor. Motivated by current studies that link the presence of the novel coronavirus (COVID-19) to findings in thorax (chest) X-ray imaging, the proposed research method uses existing deep learning (DL) models, integrating frameworks such as GoogleNet, U-Net, and ResNet50, to process those X-ray images and categorize patients as corona positive (COVID +ve) or corona negative (COVID -ve). The proposed technique entails a pretreatment phase with lung segmentation, removing background that does not provide relevant information and could bias the results; then the classification model trained under the transfer learning approach; and finally, results evaluated and interpreted through heat-map visualization. The proposed research method achieved a COVID-19 detection accuracy of around 99%. | Interdisciplinary perspectives on infectious diseases | 2022-11-18T00:00:00 | [
"Anand KumarGupta",
"AsadiSrinivasulu",
"Olutayo OyeyemiOyerinde",
"GiovanniPau",
"C VRavikumar"
] | 10.1155/2022/4578838
10.1016/j.mlwa.2021.100138
10.1155/2021/6621607
10.1371/journal.pone.0262052
10.1148/radiol.2020200642
10.23750/abm.v91i1.9397
10.1109/access.2020.2997311
10.1056/NEJMoa2002032
10.1155/2022/4838009
10.1001/jama.2020.3786
10.21227/4kcm-m312
10.3390/ijerph18063056
10.1007/s13755-021-00152-w
10.1007/s13755-021-00158-4
10.3390/healthcare9050522 |
Calibrated bagging deep learning for image semantic segmentation: A case study on COVID-19 chest X-ray image. | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19). Imaging tests such as chest X-ray (CXR) and computed tomography (CT) can provide useful information to clinical staff for facilitating a diagnosis of COVID-19 in a more efficient and comprehensive manner. As a breakthrough of artificial intelligence (AI), deep learning has been applied to perform COVID-19 infection region segmentation and disease classification by analyzing CXR and CT data. However, prediction uncertainty of deep learning models for these tasks, which is very important to safety-critical applications like medical image processing, has not been comprehensively investigated. In this work, we propose a novel ensemble deep learning model through integrating bagging deep learning and model calibration to not only enhance segmentation performance, but also reduce prediction uncertainty. The proposed method has been validated on a large dataset that is associated with CXR image segmentation. Experimental results demonstrate that the proposed method can improve the segmentation performance, as well as decrease prediction uncertainty. | PloS one | 2022-11-17T00:00:00 | [
"LucyNwosu",
"XiangfangLi",
"LijunQian",
"SeungchanKim",
"XishuangDong"
] | 10.1371/journal.pone.0276250
10.1002/ima.22469
10.1145/3411760
10.33889/IJMEMS.2020.5.4.052
10.1007/s13246-020-00865-4
10.1007/s10044-021-00984-y
10.3390/ijerph18063056
10.1097/RTI.0000000000000532
10.3390/s21217116
10.1016/j.compbiomed.2021.104984
10.1016/j.bspc.2021.103182
10.1175/1520-0450(1967)006<0748:VOPPAB>2.0.CO;2
10.1177/0962280213497434
10.1136/amiajnl-2011-000291
10.1007/BF00058655
10.1186/s12880-020-00529-5
10.1148/ryct.2020200082
10.1038/nature21056 |
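A bagging ensemble for binary segmentation, as described in the abstract above, can combine member predictions by pixel-wise majority vote, sketched below on nested-list masks. This is illustrative only: the paper additionally applies model calibration to the members' probabilities, which is not shown here:

```python
def bagged_vote(masks):
    """Pixel-wise majority vote over binary masks predicted by the members
    of a bagging ensemble; ties go to background (0)."""
    n = len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(m[y][x] for m in masks)
            out[y][x] = 1 if votes * 2 > n else 0
    return out

m1 = [[1, 0], [1, 1]]
m2 = [[1, 0], [0, 1]]
m3 = [[0, 0], [1, 1]]
vote = bagged_vote([m1, m2, m3])  # -> [[1, 0], [1, 1]]
```

Averaging calibrated per-pixel probabilities before thresholding, rather than hard votes, is the variant that also yields the uncertainty estimates the paper emphasizes.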
CNN Features and Optimized Generative Adversarial Network for COVID-19 Detection from Chest X-Ray Images. | Coronavirus is an RNA-type virus that causes various respiratory infections in both humans and animals. In addition, it can cause pneumonia in humans. The number of coronavirus-affected patients has been increasing day by day due to the wide spread of the disease. As the count of affected patients increases, many regions face a shortage of test kits. To resolve this issue, deep learning provides a better solution for automatically detecting the COVID-19 disease. In this research, an optimized deep learning approach, named Henry gas water wave optimization-based deep generative adversarial network (HGWWO-Deep GAN), is developed. Here, the HGWWO algorithm is designed by hybridizing the Henry gas solubility optimization (HGSO) and water wave optimization (WWO) algorithms. Pre-processing is carried out using a region of interest (RoI) and median filtering to remove noise from the images. Lung lobe segmentation is carried out using a U-net architecture, and lung region extraction is done using convolutional neural network (CNN) features. Moreover, COVID-19 detection is done using a Deep GAN trained by the HGWWO algorithm. The experimental results demonstrate that the developed model attained optimal performance, with a testing accuracy of 0.9169, sensitivity of 0.9328, and specificity of 0.9032. | Critical reviews in biomedical engineering | 2022-11-15T00:00:00 | [
"GotlurKalpana",
"A KanakaDurga",
"GKaruna"
] | 10.1615/CritRevBiomedEng.2022042286 |
A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. | Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies which use some of the latest state-of-the-art models from recent years applied to medical images of different injured body areas or organs that have an associated disease (e.g., brain tumor and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths. | Neural computing & applications | 2022-11-15T00:00:00 | [
"PCelard",
"E LIglesias",
"J MSorribes-Fdez",
"RRomero",
"A SearaVieira",
"LBorrajo"
] | 10.1007/s00521-022-07953-4
10.1016/j.artmed.2021.102164
10.1016/j.artmed.2021.102165
10.1145/3464423
10.1145/3465398
10.1007/s00521-022-07099-3
10.1007/s00521-022-06960-9
10.1016/j.artmed.2020.101938
10.1016/j.neunet.2014.09.003
10.1016/j.compmedimag.2019.04.005
10.1245/ASO.2004.04.018
10.3109/02841851.2010.498444
10.7314/APJCP.2012.13.3.927
10.1136/bjo.80.11.940
10.1136/bjo.83.8.902
10.1016/j.compbiomed.2005.01.006
10.1109/10.959322
10.1016/j.compbiomed.2021.104319
10.1038/nature14539
10.1016/0730-725X(93)90417-C
10.1093/clinchem/48.10.1828
10.1245/ASO.2004.03.007
10.1179/016164104773026534
10.1145/3065386
10.1007/s00521-022-06953-8
10.1007/BF00344251
10.1109/42.476112
10.1016/j.media.2017.07.005
10.1007/978-3-319-46448-0_2
10.1016/j.ejca.2019.04.001
10.1371/journal.pmed.1002730
10.1016/j.cmpb.2020.105532
10.1016/j.media.2018.10.006
10.1016/j.media.2019.01.012
10.1016/j.media.2019.101557
10.1016/j.bspc.2021.102901
10.1007/s10278-019-00227-x
10.1016/j.media.2020.101884
10.1016/j.neucom.2019.02.003
10.1109/TPAMI.2021.3059968
10.1007/s10462-020-09854-1
10.1016/j.compeleceng.2021.107036
10.3390/diagnostics11020169
10.1016/j.compbiomed.2021.104699
10.3389/fgene.2021.639930
10.1038/s41592-020-01008-z
10.1155/2021/6625688
10.1109/TMI.2020.2995508
10.1016/j.neunet.2021.03.006
10.1109/JBHI.2020.2986926
10.1109/ACCESS.2019.2899108
10.1016/j.bspc.2019.101678
10.1016/j.bspc.2019.101641
10.1002/mp.13927
10.1002/mp.14006
10.1016/j.remnie.2016.07.002
10.1007/s00259-020-04816-9
10.1186/s13195-021-00797-5
10.1016/j.media.2020.101716
10.1016/j.cmpb.2020.105568
10.1007/s12539-020-00403-6
10.2174/1573405616666200604163954
10.1002/mp.15044
10.1016/j.cmpb.2021.106018
10.1561/2200000056
10.1007/s11548-018-1898-0
10.1016/j.ejmp.2021.02.013
10.1016/j.artmed.2020.102006
10.1007/s10278-020-00413-2
10.1016/j.media.2020.101952
10.1212/WNL.0b013e3181cb3e25
10.1007/s10916-019-1475-2
10.1016/j.compbiomed.2020.103764
10.1371/journal.pone.0140381
10.1016/j.compbiomed.2019.103345
10.1109/TMI.2014.2377694
10.1109/TMI.2018.2867350
10.1016/j.compbiomed.2020.103774
10.1016/j.acra.2011.09.014
10.12913/22998624/137964
10.1016/j.cell.2018.02.010
10.1016/j.cmpb.2020.105581
10.1016/j.media.2020.101794
10.1118/1.3528204
10.1038/s41597-021-00815-z
10.3758/BRM.42.1.351
10.1007/s13246-020-00865-4
10.1016/j.artmed.2020.101880
10.1016/j.media.2021.102327
10.1109/TMI.2022.3147426
10.1016/j.media.2022.102479
10.1109/ACCESS.2022.3172975
10.1016/j.dib.2022.108258 |
EVAE-Net: An Ensemble Variational Autoencoder Deep Learning Network for COVID-19 Classification Based on Chest X-ray Images. | The COVID-19 pandemic has had a significant impact on many lives and the economies of many countries since late December 2019. Early detection with high accuracy is essential to help break the chain of transmission. Several radiological methodologies, such as CT scan and chest X-ray, have been employed in diagnosing and monitoring COVID-19 disease. Still, these methodologies are time-consuming and require trial and error. Machine learning techniques are currently being applied by several studies to deal with COVID-19. This study exploits the latent embeddings of variational autoencoders combined with ensemble techniques to propose three effective EVAE-Net models to detect COVID-19 disease. Two encoders are trained on chest X-ray images to generate two feature maps. The feature maps are concatenated and passed to either a combined or individual reparameterization phase to generate latent embeddings by sampling from a distribution. The latent embeddings are concatenated and passed to a classification head for classification. The COVID-19 Radiography Dataset from Kaggle is the source of chest X-ray images. The performances of the three models are evaluated. The proposed model shows satisfactory performance, with the best model achieving 99.19% and 98.66% accuracy on four classes and three classes, respectively. | Diagnostics (Basel, Switzerland) | 2022-11-12T00:00:00 | [
"DanielAddo",
"ShijieZhou",
"Jehoiada KofiJackson",
"Grace UgochiNneji",
"Happy NkantaMonday",
"KwabenaSarpong",
"Rutherford AgbeshiPatamia",
"FavourEkong",
"Christyn AkosuaOwusu-Agyei"
] | 10.3390/diagnostics12112569
10.1148/radiol.2020200432
10.1109/ACCESS.2020.3033762
10.1128/JCM.01438-20
10.1016/j.knosys.2020.106647
10.3238/arztebl.2014.0181
10.1007/s11548-019-01917-1
10.1016/j.jemermed.2020.04.004
10.1148/radiol.2020201160
10.14245/ns.1938396.198
10.1038/s41746-020-0273-z
10.1109/TMI.2016.2535865
10.1109/ISBI.2018.8363572
10.1016/j.measurement.2019.05.076
10.1016/j.bspc.2022.103848
10.1016/j.bspc.2022.103595
10.1016/j.neucom.2022.01.055
10.1016/j.jksuci.2020.12.010
10.1038/323533a0
10.1561/2200000056
10.21437/Interspeech.2016-1183
10.1016/S0140-6736(20)30304-4
10.1109/TCYB.2020.2990162
10.1093/jtm/taaa080
10.3390/healthcare8010046
10.1038/s41467-020-17280-8
10.1016/j.compbiomed.2020.103869
10.3390/s21175813
10.1148/radiol.2020200905
10.3390/app12126269
10.1016/j.compbiomed.2020.103792
10.1109/CVPR.2017.690
10.1016/j.cmpb.2020.105581
10.1007/s12530-021-09385-2
10.1109/TNNLS.2021.3070467
10.3390/sym14071398
10.1007/s11517-020-02299-2
10.1007/s12539-020-00403-6
10.3390/healthcare10071313
10.1016/j.compbiomed.2022.105233
10.1007/s13246-020-00865-4
10.1109/CVPR.2017.243
10.1007/s13755-021-00140-0
10.1038/s41598-020-76550-z
10.1109/TCBB.2021.3065361
10.21037/atm.2020.03.132
10.1007/s10489-020-01900-3
10.1016/j.asoc.2020.106912
10.1109/IJCNN.2015.7280578
10.1109/ICCCI50826.2021.9402545
10.1155/2021/5527923
10.1007/978-3-030-74575-2_14
10.1111/j.1365-2818.2010.03415.x
10.1109/ICACCI.2014.6968381
10.32604/cmc.2022.020698
10.1109/TBDATA.2017.2717439
10.14569/IJACSA.2021.0120717
10.1155/2018/3078374
10.3390/jimaging7050083
10.1148/ryai.2021200218
10.1111/srt.13145
10.1007/s10489-020-01813-1
10.1016/j.micpro.2020.103280
10.1016/j.neucom.2015.08.104
10.1016/j.cmpb.2022.106883
10.1016/j.bbe.2021.09.004
10.1016/j.compbiomed.2021.105134
10.1155/2022/5329014
10.32604/cmc.2021.018449
10.1007/s10489-020-02002-w
10.1145/3451357
10.1016/j.patrec.2021.08.018
10.1038/s41598-022-05532-0
10.1016/j.bspc.2021.103326
10.1016/j.compmedimag.2021.102008
10.1016/j.bspc.2022.103677
10.1016/j.compbiomed.2022.105340
10.1016/j.jksuci.2021.07.005
10.1016/j.jksuci.2022.04.006
10.1016/j.compbiomed.2021.104319
10.1109/ACCESS.2020.3010287
10.1016/j.compbiomed.2021.104834
10.1016/j.compbiomed.2022.105244
10.1016/j.bspc.2022.103860 |
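The latent embeddings in the EVAE-Net abstract above are obtained by "sampling from a distribution", i.e., the standard VAE reparameterization trick. A minimal NumPy sketch follows; the dimensions, the log-variance parameterization, and all names are assumptions, since the abstract does not give them.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    This is the standard VAE reparameterization trick; the
    log-variance convention is a common choice, assumed here since
    the paper's exact parameterization is not stated in the abstract.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
# Toy stand-in for the two concatenated encoder feature maps.
mu = np.concatenate([np.zeros(4), np.ones(4)])
log_var = np.zeros(8)            # sigma = 1 everywhere
z = reparameterize(mu, log_var, rng)
print(z.shape)  # -> (8,)
```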
Generative adversarial network based data augmentation for CNN based detection of Covid-19. | Covid-19 has been a global concern since 2019, crippling the world economy and health. Biological diagnostic tools have since been developed to identify the virus from bodily fluids, and since the virus causes pneumonia, which results in lung inflammation, its presence can also be detected from medical imaging by expert radiologists. The success of each diagnostic method is measured by the hit rate for identifying Covid infections. However, people's access to each diagnostic tool can be limited depending on the geographic region, and since Covid treatment is a race against time, the diagnosis duration plays an important role. Hospitals with X-ray facilities are widely distributed all over the world, so a method that investigates lung X-ray images for possible Covid-19 infections suggests itself. Promising results have been achieved in the literature in automatically detecting the virus from medical images such as CT scans and X-rays using supervised artificial neural network algorithms. One of the major drawbacks of supervised learning models is that they require enormous amounts of data to train and generalize on new data. In this study, we develop a Swish-activated, Instance- and Batch-normalized Residual U-Net GAN with dense blocks and skip connections to create synthetic and augmented data for training. The proposed GAN architecture, due to the presence of instance normalization and Swish activation, can deal with the randomness of luminosity that arises from different sources of X-ray images better than the classical architecture, and generates realistic-looking synthetic data. Also, radiology equipment is generally not computationally powerful and cannot efficiently run state-of-the-art deep neural networks such as DenseNet and ResNet. 
Hence, we propose a novel CNN architecture that is 40% lighter and more accurate than state-of-the-art CNN networks. Multi-class classification of the three classes of chest X-rays (CXR), i.e., Covid-19, healthy, and pneumonia, is performed using the proposed model, which achieved an extremely high test accuracy of 99.2%, not reached in any previous study in the literature. Based on the mentioned criteria for developing Corona infection diagnosis, the present study proposes an Artificial Intelligence-based method, resulting in a rapid diagnostic tool for Covid infections based on generative adversarial and convolutional neural networks. The benefit is lung infection identification with high (99%) accuracy. This could lead to a support tool that helps in rapid diagnosis, and an accessible Covid identification method using CXR images. | Scientific reports | 2022-11-11T00:00:00 | [
"RutwikGulakala",
"BerndMarkert",
"MarcusStoffel"
] | 10.1038/s41598-022-23692-x
10.1016/S1473-3099(20)30120-1
10.1148/radiol.2020201160
10.1038/s42003-020-01535-7
10.1155/2021/3366057
10.1515/cdbme-2020-3051
10.1016/j.cmpb.2021.106279
10.1016/j.medengphy.2016.10.010
10.1016/j.cma.2020.112989
10.1016/j.mechrescom.2021.103817
10.1016/j.euromechsol.2006.12.002
10.1016/j.mechmat.2005.06.001
10.1038/s41598-021-93543-8
10.1016/j.imu.2020.100412
10.1016/j.imu.2020.100505
10.1016/j.ijmedinf.2020.104284
10.1148/radiol.2017162326
10.1109/ACCESS.2020.2994762
10.1007/s00521-022-06918-x
10.1016/j.compbiomed.2020.103792
10.1007/s10044-020-00950-0
10.3390/diagnostics12020267
10.1109/TMI.2020.2995518
10.3390/app112210528
10.1007/s00500-019-04602-2
10.1016/j.imu.2021.100779
10.1038/s41598-021-87994-2
10.1109/TMI.2013.2290491
10.1109/TMI.2013.2284099
10.1186/s40537-021-00444-8
10.1007/s13246-020-00865-4
10.1371/journal.pone.0262052 |
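The Swish activation used by the GAN above is defined as swish(x) = x · sigmoid(βx). A minimal sketch; β = 1 (the common SiLU form) is an assumption, as the abstract does not state a β.

```python
import math

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).

    Equivalent to x / (1 + exp(-beta * x)); beta = 1 is assumed here.
    """
    return x / (1.0 + math.exp(-beta * x))

print(swish(0.0))            # -> 0.0
print(round(swish(1.0), 4))  # -> 0.7311
```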
CXR-Net: A Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia From Chest X-Ray Images. | Accurate and rapid detection of COVID-19 pneumonia is crucial for optimal patient treatment. Chest X-Ray (CXR) is the first-line imaging technique for COVID-19 pneumonia diagnosis as it is fast, cheap and easily accessible. Currently, many deep learning (DL) models have been proposed to detect COVID-19 pneumonia from CXR images. Unfortunately, these deep classifiers lack the transparency in interpreting findings, which may limit their applications in clinical practice. The existing explanation methods produce either too noisy or imprecise results, and hence are unsuitable for diagnostic purposes. In this work, we propose a novel explainable CXR deep neural Network (CXR-Net) for accurate COVID-19 pneumonia detection with an enhanced pixel-level visual explanation using CXR images. An Encoder-Decoder-Encoder architecture is proposed, in which an extra encoder is added after the encoder-decoder structure to ensure the model can be trained on category samples. The method has been evaluated on real world CXR datasets from both public and private sources, including healthy, bacterial pneumonia, viral pneumonia and COVID-19 pneumonia cases. The results demonstrate that the proposed method can achieve a satisfactory accuracy and provide fine-resolution activation maps for visual explanation in the lung disease detection. Compared to current state-of-the-art visual explanation methods, the proposed method can provide more detailed, high-resolution, visual explanation for the classification results. It can be deployed in various computing environments, including cloud, CPU and GPU environments. It has a great potential to be used in clinical practice for COVID-19 pneumonia diagnosis. | IEEE journal of biomedical and health informatics | 2022-11-10T00:00:00 | [
"XinZhang",
"LiangxiuHan",
"TamSobeih",
"LianghaoHan",
"NinaDempsey",
"SymeonLechareas",
"AscanioTridente",
"HaomingChen",
"StephenWhite",
"DaoqiangZhang"
] | 10.1109/JBHI.2022.3220813 |
Strong semantic segmentation for Covid-19 detection: Evaluating the use of deep learning models as a performant tool in radiography. | With the increasing number of Covid-19 cases as well as care costs, chest diseases have gained increasing interest in several communities, particularly the medical and computer vision communities. Clinical and analytical exams are widely recognized techniques for diagnosing and handling Covid-19 cases; in addition, strong detection tools can help avoid damage to chest tissues. The proposed method provides an important way to enhance the semantic segmentation process by combining powerful deep learning (DL) modules to increase consistency. Based on Covid-19 CT images, this work hypothesized that a novel semantic segmentation model might be able to extract distinctive graphical features of Covid-19 and afford an accurate clinical diagnosis while optimizing the classical test and saving time.
CT images were collected considering different cases (normal chest CT, pneumonia, typical viral causes, and Covid-19 cases). The study presents an advanced DL method to deal with chest semantic segmentation issues. The approach employs a modified version of the U-net to enable and support Covid-19 detection from the studied images.
The validation tests demonstrated competitive results with important performance rates: Precision (90.96% ± 2.5) with an F-score of (91.08% ± 3.2), an accuracy of (93.37% ± 1.2), a sensitivity of (96.88% ± 2.8) and a specificity of (96.91% ± 2.3). In addition, the visual segmentation results are very close to the Ground truth.
The findings of this study reveal the proof-of-principle for using cooperative components to strengthen the semantic segmentation modules for effective and truthful Covid-19 diagnosis.
This paper has highlighted that a DL-based approach with several modules may provide strong support for radiographers and physicians, and that further work on DL is required to design and implement performant automated vision systems to detect chest diseases. | Radiography (London, England : 1995) | 2022-11-07T00:00:00 | [
"HAllioui",
"YMourdi",
"MSadgal"
] | 10.1016/j.radi.2022.10.010
10.1016/j.neucom.2017.08.043
10.3389/fnins.2018.00777
10.1007/s13369-021-05958-0
10.3390/technologies10050105
10.1109/ISBI.2016.7493515
10.5114/pjr.2022.119027
10.1109/CISP-BMEI.2018.8633056
10.1016/j.asoc.2021.107160
10.48550/arXiv.2110.09619
10.7937/K9/TCIA.2017.3r3fvz08
10.5281/zenodo.375747
10.1111/cgf.14521 |
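The validation metrics reported in the segmentation study above (precision, F-score, accuracy, sensitivity, specificity) can all be derived from a pixel-wise confusion matrix of the predicted and ground-truth masks. A minimal NumPy sketch, assuming binary masks with both classes present:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel-wise metrics for a binary segmentation.

    pred, gt: boolean arrays of the same shape. Assumes both classes
    occur, so no denominator is zero.
    """
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / pred.size
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, specificity, accuracy, f_score

pred = np.array([[1, 1, 0, 0]], dtype=bool)
gt   = np.array([[1, 0, 1, 0]], dtype=bool)
print(seg_metrics(pred, gt))  # every metric is 0.5 on this toy case
```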
Contrastive domain adaptation with consistency match for automated pneumonia diagnosis. | Pneumonia can be difficult to diagnose since its symptoms are highly variable, and the radiographic signs are often very similar to those seen in other illnesses such as a cold or influenza. Deep neural networks have shown promising performance in automated pneumonia diagnosis using chest X-ray radiography, allowing mass screening and early intervention to reduce the severe cases and death toll. However, they usually require many well-labelled chest X-ray images for training to achieve high diagnostic accuracy. To reduce the need for training data and annotation resources, we propose a novel method called Contrastive Domain Adaptation with Consistency Match (CDACM). It transfers the knowledge from different but relevant datasets to the unlabelled small-size target dataset and improves the semantic quality of the learnt representations. Specifically, we design a conditional domain adversarial network to exploit discriminative information conveyed in the predictions to mitigate the domain gap between the source and target datasets. Furthermore, due to the small scale of the target dataset, we construct a feature cloud for each target sample and leverage contrastive learning to extract more discriminative features. Lastly, we propose adaptive feature cloud expansion to push the decision boundary to a low-density area. Unlike most existing transfer learning methods that aim only to mitigate the domain gap, our method instead simultaneously considers the domain gap and the data deficiency problem of the target dataset. The conditional domain adaptation and the feature cloud generation of our method are learned jointly to extract discriminative features in an end-to-end manner. Besides, the adaptive feature cloud expansion improves the model's generalisation ability in the target domain. 
Extensive experiments on pneumonia and COVID-19 diagnosis tasks demonstrate that our method outperforms several state-of-the-art unsupervised domain adaptation approaches, which verifies the effectiveness of CDACM for automated pneumonia diagnosis using chest X-ray imaging. | Medical image analysis | 2022-11-05T00:00:00 | [
"YangqinFeng",
"ZizhouWang",
"XinxingXu",
"YanWang",
"HuazhuFu",
"ShaohuaLi",
"LiangliZhen",
"XiaofengLei",
"YingnanCui",
"JordanSim Zheng Ting",
"YonghanTing",
"Joey TianyiZhou",
"YongLiu",
"RickSiow Mong Goh",
"CherHeng Tan"
] | 10.1016/j.media.2022.102664 |
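The contrastive learning component of CDACM is not detailed in the abstract above; a generic InfoNCE-style loss illustrates the principle of pulling a positive pair together relative to negatives. This NumPy sketch, with an assumed temperature, is not the paper's exact loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """A minimal InfoNCE-style contrastive loss on L2-normalized features.

    tau is the temperature; its value here is an assumption. The
    positive similarity sits at index 0 of the logits.
    """
    def norm(v):
        return v / np.linalg.norm(v)
    a = norm(anchor)
    sims = [norm(positive) @ a] + [norm(n) @ a for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                        # cross-entropy on index 0

a = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])
neg = [np.array([0.0, 1.0])]
loss_near = info_nce(a, pos, neg)              # similar positive: low loss
loss_far = info_nce(a, neg[0], [pos])          # roles swapped: high loss
print(loss_near < loss_far)  # -> True
```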
Assessment of COVID-19 lung involvement on computed tomography by deep-learning-, threshold-, and human reader-based approaches-an international, multi-center comparative study. | The extent of lung involvement in coronavirus disease 2019 (COVID-19) pneumonia, quantified on computed tomography (CT), is an established biomarker for prognosis and guides clinical decision-making. The clinical standard is semi-quantitative scoring of lung involvement by an experienced reader. We aim to compare the performance of automated deep-learning- and threshold-based methods to the manual semi-quantitative lung scoring. Further, we aim to investigate an optimal threshold for quantification of involved lung in COVID pneumonia chest CT, using a multi-center dataset.
In total 250 patients were included, 50 consecutive patients with RT-PCR confirmed COVID-19 from our local institutional database, and another 200 patients from four international datasets (n=50 each). Lung involvement was scored semi-quantitatively by three experienced radiologists according to the established chest CT score (CCS) ranging from 0-25. Inter-rater reliability was reported by the intraclass correlation coefficient (ICC). Deep-learning-based segmentation of ground-glass and consolidation was obtained by CT Pulmo Auto Results prototype plugin on IntelliSpace Discovery (Philips Healthcare, The Netherlands). Threshold-based segmentation of involved lung was implemented using an open-source tool for whole-lung segmentation under the presence of severe pathologies (R231CovidWeb, Hofmanninger
Median CCS among 250 evaluated patients was 10 [6-15]. Inter-rater reliability of the CCS was excellent [ICC 0.97 (0.97-0.98)]. Best attenuation threshold for identification of involved lung was -522 HU. While the relationship of deep-learning- and threshold-based quantification was linear and strong (r
The manual semi-quantitative CCS underestimates the extent of COVID pneumonia in higher score ranges, which limits its clinical usefulness in cases of severe disease. Clinical implementation of fully automated methods, such as deep-learning or threshold-based approaches (best threshold in our multi-center dataset: -522 HU), might save time of trained personnel, abolish inter-reader variability, and allow for truly quantitative, linear assessment of COVID lung involvement. | Quantitative imaging in medicine and surgery | 2022-11-05T00:00:00 | [
"PhilippFervers",
"FlorianFervers",
"AsthaJaiswal",
"MiriamRinneburger",
"MathildaWeisthoff",
"PhilipPollmann-Schweckhorst",
"JonathanKottlors",
"HeikeCarolus",
"SimonLennartz",
"DavidMaintz",
"RahilShahzad",
"ThorstenPersigehl"
] | 10.21037/qims-22-175
10.1148/rg.2020200159
10.1038/s41467-020-18786-x
10.3389/fpubh.2021.596938
10.1148/ryct.2020200130
10.1371/journal.pone.0237302
10.1186/s12931-020-01411-2
10.1007/s00330-020-07033-y
10.1186/s43055-021-00525-x
10.1148/ryct.2020200441
10.1148/radiol.2020200370
10.1148/ryai.2020200048
10.1007/s11760-022-02183-6
10.1148/radiol.2020201433
10.1007/s00330-020-07013-2
10.1148/radiol.2021203957
10.1186/s13104-021-05592-x
10.3390/bioengineering8020026
10.1007/s10278-013-9622-7
10.1007/s00330-021-08482-9
10.1117/12.2293528
10.1186/s41747-020-00173-2
10.1038/s41586-020-2649-2
10.1038/s41598-021-01489-8
10.1177/1536867X0800800212
10.3758/s13423-016-1039-0
10.1353/bsp.2020.0011
10.21037/jtd.2017.08.17
10.1117/1.1631315
10.1155/2021/6697677
10.1007/s00330-021-08435-2 |
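The threshold-based quantification evaluated in the study above reduces to counting lung voxels at or above the attenuation cutoff (best threshold in its multi-center dataset: -522 HU). A minimal NumPy sketch; treating voxels >= threshold as "involved" is an assumed sign convention for ground-glass/consolidation.

```python
import numpy as np

def involved_fraction(hu, lung_mask, threshold=-522):
    """Fraction of lung voxels at or above the attenuation threshold.

    hu: array of Hounsfield units; lung_mask: boolean array of the
    same shape selecting lung voxels. The >= convention is assumed.
    """
    lung = hu[lung_mask]
    return np.mean(lung >= threshold)

# Toy volume: 4 lung voxels, 2 of them above -522 HU.
hu = np.array([-800.0, -600.0, -300.0, -100.0, 50.0])
mask = np.array([True, True, True, True, False])  # last voxel not lung
print(involved_fraction(hu, mask))  # -> 0.5
```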
A novel deep learning-based method for COVID-19 pneumonia detection from CT images. | The sensitivity of RT-PCR in diagnosing COVID-19 is only 60-70%, and chest CT plays an indispensable role in the auxiliary diagnosis of COVID-19 pneumonia, but the results of CT imaging are highly dependent on professional radiologists.
This study aimed to develop a deep learning model to assist radiologists in detecting COVID-19 pneumonia.
The total study population was 437. The training dataset contained 26,477, 2468, and 8104 CT images of normal, CAP, and COVID-19, respectively. The validation dataset contained 14,076, 1028, and 3376 CT images of normal, CAP, and COVID-19 patients, respectively. The test set included 51 normal cases, 28 CAP patients, and 51 COVID-19 patients. We designed and trained a deep learning model to recognize normal, CAP, and COVID-19 patients based on U-Net and ResNet-50. Moreover, the diagnoses of the deep learning model were compared with different levels of radiologists.
In the test set, the sensitivity of the deep learning model in diagnosing normal cases, CAP, and COVID-19 patients was 98.03%, 89.28%, and 92.15%, respectively. The diagnostic accuracy of the deep learning model was 93.84%. In the validation set, the accuracy was 92.86%, which was better than that of two novice doctors (86.73% and 87.75%) and almost equal to that of two experts (94.90% and 93.88%). The AI model performed significantly better than all four radiologists in terms of time consumption (35 min vs. 75 min, 93 min, 79 min, and 82 min).
The AI model we obtained had strong decision-making ability, which could potentially assist doctors in detecting COVID-19 pneumonia. | BMC medical informatics and decision making | 2022-11-04T00:00:00 | [
"JuLuo",
"YuhaoSun",
"JingshuChi",
"XinLiao",
"CanxiaXu"
] | 10.1186/s12911-022-02022-1
10.1016/S0140-6736(20)30211-7
10.1001/jama.2020.1585
10.1056/NEJMoa2001316
10.1056/NEJMoa2001191
10.1101/2020.02.11.20021493v2
10.1148/radiol.2020200432
10.1109/TMI.2020.2996645
10.1148/radiol.2020200343
10.1016/S0140-6736(20)30154-9
10.1148/ryct.2020204002
10.1148/radiol.2020200642
10.1038/s41467-020-17971-2
10.1016/j.cell.2020.08.029
10.1016/j.media.2021.102096
10.1007/s00354-022-00172-4
10.18280/ts.370313
10.1186/s41747-020-00173-2
10.1016/j.compbiomed.2020.103792
10.18280/ts.380117
10.1007/s10044-021-00984-y
10.21203/rs.3.rs-104621/v1
10.1148/radiol.2020200905
10.1001/jama.2020.8259
10.7326/M20-1495
10.1001/jama.2020.12839
10.1148/radiol.2020200702 |
Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging. | The infectious disease known as COVID-19 has spread dramatically all over the world since December 2019. The fast diagnosis and isolation of infected patients are key factors in slowing down the spread of this virus and better management of the pandemic. Although the CT and X-ray modalities are commonly used for the diagnosis of COVID-19, identifying COVID-19 patients from medical images is a time-consuming and error-prone task. Artificial intelligence has shown to have great potential to speed up and optimize the prognosis and diagnosis process of COVID-19. Herein, we review publications on the application of deep learning (DL) techniques for diagnostics of patients with COVID-19 using CT and X-ray chest images for a period from January 2020 to October 2021. Our review focuses solely on peer-reviewed, well-documented articles. It provides a comprehensive summary of the technical details of models developed in these articles and discusses the challenges in the smart diagnosis of COVID-19 using DL techniques. Based on these challenges, it seems that the effectiveness of the developed models in clinical use needs to be further investigated. This review provides some recommendations to help researchers develop more accurate prediction models. | IPEM-translation | 2022-11-01T00:00:00 | [
"MarjanJalali Moghaddam",
"MinaGhavipour"
] | 10.1016/j.ipemt.2022.100008
10.1038/s41591-020-0931-3
10.1007/s00330-020-06801-0
10.1016/j.tmaid.2020.101623
10.1148/radiol.2020200823
10.1007/s42600-021-00151-6
10.3390/ai1030027
10.3390/electronics8030292
10.1155/2021/8829829
10.22061/jecei.2022.8200.491
10.1038/s41598-021-99015-3
10.1109/UEMCON47517.2019.8993089
10.1186/s40537-021-00444-8
10.1186/s13634-021-00755-1
10.1016/j.media.2021.102253
10.1148/radiol.2020201491
10.1109/ACCESS.2021.3086020
10.3390/electronics11152296
10.1109/ACCESS.2020.2994762
10.1109/TMI.2020.2994459
10.1007/s40477-020-00458-7
10.1148/radiol.2020200490
10.1007/s10489-020-01900-3
10.1021/ci0342472
10.1109/IACC.2016.25
10.1016/S0031-3203(02)00121-8
10.1109/TKDE.2009.191
10.4018/978-1-60566-766-9.ch011
10.1016/j.cmpb.2020.105608
10.1108/IJPCC-06-2020-0060
10.1007/978-3-540-75171-7_2
10.1007/978-3-540-28650-9_5
10.1093/nsr/nwx106
10.1109/TMI.2020.2994908
10.1109/TMI.2020.3000314
10.1109/TMI.2020.2996645
10.1038/s41746-021-00399-3
10.1109/ACCESS.2020.3003810
10.3390/app10165683
10.1080/07391102.2020.1767212
10.1148/radiol.2020200905
10.1016/j.compbiomed.2020.103869
10.1038/nbt1206-1565
10.1007/s12010-021-03728-0
10.7326/M20-1495
10.1016/j.bea.2021.100003
10.1148/radiol.2020200432
10.1016/j.clinimag.2021.02.003
10.1148/radiol.2020200370
10.1148/radiol.2020200463
10.1148/rg.2020200159
10.1016/j.clinimag.2020.04.001
10.1016/j.chaos.2020.109947
10.1186/s12890-020-01286-5
10.1016/j.chaos.2020.109944
10.1016/j.compbiomed.2020.103805
10.1007/s10489-020-01829-7
10.1016/j.mehy.2020.109761
10.1016/j.compbiomed.2020.103792
10.1016/j.cmpb.2020.105532
10.1016/j.cmpb.2020.105581
10.1007/s40846-020-00529-4
10.3390/sym12040651
10.1007/s13246-020-00865-4
10.1016/j.imu.2020.100360
10.1007/s00264-020-04609-7
10.1007/s13246-020-00888-x
10.1109/TMI.2020.2993291
10.1016/j.imu.2020.100405
10.3389/fmed.2020.00427
10.3892/etm.2020.8797
10.1016/j.asoc.2020.106580
10.1007/s10489-020-01867-1
10.1016/j.radi.2020.10.018
10.1007/s00521-020-05636-6
10.1016/j.bbe.2020.08.008
10.1016/j.chaos.2020.110245
10.1007/s12652-020-02688-3
10.1016/j.compbiomed.2021.104252
10.1007/s11571-021-09712-y
10.1016/j.compbiomed.2021.104927
10.1007/s10489-020-01714-3
10.1080/07391102.2020.1788642
10.1016/j.ejrad.2020.109041
10.1038/s41467-020-17971-2
10.1016/j.compbiomed.2020.103795
10.1109/TMI.2020.2995508
10.21037/atm.2020.03.132
10.1109/TMI.2020.2996256
10.3390/e22050517
10.1007/s00330-020-06956-w
10.1016/j.irbm.2020.05.003
10.1007/s10096-020-03901-z
10.1007/s00259-020-04929-1
10.1109/TMI.2020.2995108
10.1109/ACCESS.2020.3005510
10.1183/13993003.00775-2020
10.2196/19569
10.1007/s00330-021-07715-1
10.1038/s41598-020-76282-0
10.1109/TCBB.2021.3065361
10.1016/j.compbiomed.2021.104306
10.1016/j.compbiomed.2021.104837
10.1016/j.chaos.2021.111310
10.1016/j.patcog.2021.107826
10.1007/s00521-020-05410-8
10.1007/s10916-018-1088-1
10.34133/2021/8786793
10.4018/978-1-7998-8929-8.ch001
10.1007/s00530-021-00794-6
10.1109/ICIRCA48905.2020.9183278
10.1007/s10462-020-09825-6
10.1002/acm2.13121
10.1007/s00521-021-06344-5
10.1002/mp.15419
10.1038/s41598-021-88807-2
10.2196/19673
10.1109/TIP.2021.3058783
10.1097/MCP.0000000000000765
10.3390/diagnostics11071155
10.1016/j.ultrasmedbio.2020.07.003
10.1002/int.22504
10.1016/j.acra.2020.04.032
10.1016/j.mri.2021.03.005
10.1117/1.JBO.19.1.010901
10.1021/acsnano.1c05226
10.3389/fmicb.2020.02014
10.1016/B978-0-08-100040-3.00002-X
10.1016/j.asoc.2021.107150
10.1016/j.compbiomed.2020.104037
10.1101/2020.03.19.20039354
10.1177/0846537120913033
10.1186/s13244-020-00933-z
10.1016/j.ijid.2020.06.026
10.1016/j.diii.2020.03.014
10.1093/cid/ciaa247
10.1007/s10462-021-09975-1
10.1007/978-1-4419-9326-7_1
10.1109/ICC40277.2020.9148817
10.1148/radiol.2020200330 |
A deep transfer learning-based convolution neural network model for COVID-19 detection using computed tomography scan images for medical applications. | The Coronavirus (COVID-19) has become a critical and extreme epidemic because of its international dissemination. COVID-19 is the world's most serious health, economic, and survival danger, affecting not a single country but the entire planet. COVID-19 spreads at a much faster rate than usual influenza. Because of its high transmissibility and the difficulty of early diagnosis, COVID-19 is not easy to manage. The popularly used RT-PCR method for COVID-19 disease diagnosis may produce false negatives. COVID-19 can be detected non-invasively using medical imaging procedures such as chest CT and chest X-ray. Deep learning is the most effective machine learning approach for examining a considerable quantity of chest computed tomography (CT) pictures and can significantly affect COVID-19 screening. The convolutional neural network (CNN) is one of the most popular deep learning techniques right now, and it is gaining traction due to its potential to transform several spheres of human life. This research aims to develop conceptual transfer learning enhanced CNN framework models for detecting COVID-19 from CT scan images. Even with minimal datasets, these techniques were demonstrated to be effective in detecting the presence of COVID-19. This proposed research looks into several deep transfer learning-based CNN approaches for detecting the presence of COVID-19 in chest CT images. VGG16, VGG19, DenseNet121, InceptionV3, Xception, and ResNet50 are the foundation models used in this work. Each model's performance was evaluated using a confusion matrix and various performance measures such as accuracy, recall, precision, F1-score, loss, and ROC. The VGG16 model performed much better than the other models in this study (98.00% accuracy). 
Promising outcomes from experiments have revealed the merits of the proposed model for detecting and monitoring COVID-19 patients. This could help practitioners and academics create a tool that helps minimally trained health professionals decide on the best course of therapy. | Advances in engineering software (Barking, London, England : 1992) | 2022-11-01T00:00:00 | [
"Nirmala DeviKathamuthu",
"ShanthiSubramaniam",
"Quynh HoangLe",
"SureshMuthusamy",
"HiteshPanchal",
"Suma Christal MarySundararajan",
"Ali JawadAlrubaie",
"Musaddak Maher AbdulZahra"
] | 10.1016/j.advengsoft.2022.103317 |
A robust semantic lung segmentation study for CNN-based COVID-19 diagnosis. | This paper aims to diagnose COVID-19 by using Chest X-Ray (CXR) scan images in a deep learning-based system. First of all, COVID-19 Chest X-Ray Dataset is used to segment the lung parts in CXR images semantically. DeepLabV3+ architecture is trained by using the masks of the lung parts in this dataset. The trained architecture is then fed with images in the COVID-19 Radiography Database. In order to improve the output images, some image preprocessing steps are applied. As a result, lung regions are successfully segmented from CXR images. The next step is feature extraction and classification. While features are extracted with modified AlexNet (mAlexNet), Support Vector Machine (SVM) is used for classification. As a result, 3-class data consisting of Normal, Viral Pneumonia and COVID-19 class are classified with 99.8% success. Classification results show that the proposed method is superior to previous state-of-the-art methods. | Chemometrics and intelligent laboratory systems : an international journal sponsored by the Chemometrics Society | 2022-11-01T00:00:00 | [
"Muhammet FatihAslan"
] | 10.1016/j.chemolab.2022.104695
10.1109/RBME.2020.2987975
10.1007/s11760-022-02302-3
10.1016/j.eng.2020.04.010 |
Deep-learning-based hepatic fat assessment (DeHFt) on non-contrast chest CT and its association with disease severity in COVID-19 infections: A multi-site retrospective study. | Hepatic steatosis (HS) identified on CT may provide an integrated cardiometabolic and COVID-19 risk assessment. This study presents a deep-learning-based hepatic fat assessment (DeHFt) pipeline for (a) more standardised measurements and (b) investigating the association between HS (liver-to-spleen attenuation ratio <1 in CT) and COVID-19 infection severity, wherein severity is defined as requiring invasive mechanical ventilation, extracorporeal membrane oxygenation, or death.
DeHFt comprises two steps. First, a deep-learning-based segmentation model (3D residual-UNet) is trained (N = 80) to segment the liver and spleen. Second, CT attenuation is estimated using slice-based and volumetric-based methods. DeHFt-based mean liver and liver-to-spleen attenuation are compared with an expert's ROI-based measurements. We further obtained the liver-to-spleen attenuation ratio in a large multi-site cohort of patients with COVID-19 infections (D1, N = 805; D2, N = 1917; D3, N = 169) using the DeHFt pipeline and investigated the association between HS and COVID-19 infection severity.
The DeHFt pipeline achieved a Dice coefficient of 0.95 (95% CI [0.93–0.96]) on the independent validation cohort (N = 49). The automated slice-based and volumetric-based liver and liver-to-spleen attenuation estimations strongly correlated with the expert's measurements. In the COVID-19 cohorts, severe infections had a higher proportion of patients with HS than non-severe infections (pooled OR = 1.50, 95% CI [1.20–1.88], P < .001).
The DeHFt pipeline enabled accurate segmentation of the liver and spleen on non-contrast CTs and automated estimation of the liver and liver-to-spleen attenuation ratio. In three cohorts of patients with COVID-19 infections (N = 2891), HS was associated with disease severity. Pending validation, DeHFt provides an automated CT-based metabolic risk assessment.
For a full list of funding bodies, please see the Acknowledgements. | EBioMedicine | 2022-10-30T00:00:00 | [
"GouravModanwal",
"SadeerAl-Kindi",
"JonathanWalker",
"RohanDhamdhere",
"LeiYuan",
"MengyaoJi",
"ChengLu",
"PingfuFu",
"SanjayRajagopalan",
"AnantMadabhushi"
] | 10.1016/j.ebiom.2022.104315
10.1016/j.acra.2012.02.022
10.48550/arXiv.1901.04056 |
Artificial Intelligence and Deep Learning Assisted Rapid Diagnosis of COVID-19 from Chest Radiographical Images: A Survey. | Artificial Intelligence (AI) has been applied successfully in many real-life domains for solving complex problems. With the invention of Machine Learning (ML) paradigms, it has become convenient for researchers to predict outcomes based on past data. Nowadays, ML is acting as the biggest weapon against the COVID-19 pandemic by detecting symptomatic cases at an early stage and warning people about its future effects. COVID-19 has spread globally so rapidly in part because of the shortage of testing facilities and delays in test reports. To address this challenge, AI can be effectively applied to produce fast as well as cost-effective solutions. Many researchers have come up with AI-based solutions for preliminary diagnosis using chest CT images, respiratory sound analysis, comparative voice analysis of symptomatic and asymptomatic persons, and so forth. Some AI-based applications claim good accuracy in predicting the chances of being COVID-19-positive. Within a short period, a large body of research has been published on the identification of COVID-19. This paper carefully examines and presents a comprehensive survey of more than 110 papers drawn from various reputed sources, that is, Springer, IEEE, Elsevier, MDPI, arXiv, and medRxiv. Most of the papers selected for this survey present candid work to detect and classify COVID-19 using deep-learning-based models on chest X-ray and CT scan images. We hope that this survey covers most of the work and provides insights to the research community for proposing efficient as well as accurate solutions for fighting the pandemic. | Contrast media & molecular imaging | 2022-10-29T00:00:00 | [
"DeepakSinwar",
"Vijaypal SinghDhaka",
"Biniyam AlemuTesfaye",
"GhanshyamRaghuwanshi",
"AshishKumar",
"Sunil KrMaakar",
"SanjayAgrawal"
] | 10.1155/2022/1306664
10.1016/j.jbi.2008.05.013
10.1038/s41591-020-0931-3
10.1016/j.asoc.2020.106282
10.1016/j.chaos.2020.110055
10.1007/s42979-021-00923-y
10.3390/ai1020009
10.1016/j.chaos.2020.110059
10.1016/S2214-109X(20)30068-1
10.1016/j.clinimag.2020.02.008
10.1148/ryct.2020200082
10.1007/978-3-030-00889-5_1
10.1101/2020.03.12.20027185
10.1109/TCBB.2021.3065361
10.1016/j.cell.2020.04.045
10.1155/2022/7377502
10.1148/radiol.2020200905
10.1016/j.compbiomed.2020.103795
10.1038/s41598-020-76282-0
10.1007/s00330-021-07715-1
10.1016/j.irbm.2020.05.003
10.1007/s00330-020-06713-z
10.1016/j.ejrad.2020.109041
10.1109/RBME.2020.2987975
10.12669/pjms.36.covid19-s4.2778
10.1155/2022/8549707
10.3390/ijerph18063056
10.3390/healthcare9050522
10.1016/j.chaos.2020.109944
10.1155/2019/4180949
10.1038/s41598-020-76550-z
10.37896/jxu14.8/061
10.1007/s10489-020-01867-1
10.1007/s40846-020-00529-4
10.1109/CVPR.2018.00474
10.1016/j.compbiomed.2020.103805
10.1016/j.compbiomed.2020.103792
10.1007/s10044-021-00984-y
10.1109/CVPR.2015.7298594
10.1089/pop.2014.0089
10.33889/ijmems.2020.5.4.052
10.1109/CVPR.2017.243
10.1016/j.cmpb.2020.105581
10.1016/j.imu.2020.100360
10.1109/ACCESS.2020.3010287
10.1371/journal.pmed.1002686
10.1007/s13246-020-00865-4
10.1016/j.chaos.2020.110071
10.1016/j.cmpb.2020.105608
10.1016/j.mehy.2020.109761
10.1016/j.compbiomed.2020.103869
10.1016/j.cmpb.2020.105532
10.1007/s00330-020-07044-9
10.1007/s10489-020-01826-w
10.1007/s10489-020-01831-z
10.1007/s00264-020-04609-7
10.1007/s00500-020-05275-y
10.1155/2022/9697285
10.1007/s42979-021-00823-1
10.1101/2020.04.02.20051136v1
10.1101/2020.04.02.20051136
10.1088/1361-6560/abe838
10.1007/s41870-022-00949-2
10.1101/2020.02.29.20029603
10.1101/2020.03.25.20043331
10.1101/2020.04.04.20052092
10.1016/j.chaos.2020.110086
10.1186/s12859-018-2340-x
10.3390/a13100249
10.1007/s41870-022-00967-0
10.1101/2020.07.02.20145474
10.1186/s12911-020-01266-z
10.1101/2020.05.20.20107847
10.1101/2020.06.25.20140004
10.1101/2020.06.01.20119560
10.1080/09720502.2020.1833443
10.1016/j.patter.2020.100145
10.1101/2020.05.23.20110189
10.1016/j.chaos.2020.110058
10.1016/j.iot.2020.100222
10.1016/j.dsx.2020.04.012
10.1007/s41870-022-00973-2
10.1016/j.dsx.2020.04.032
10.1016/S2589-7500(20)30059-5
10.1007/s42247-020-00102-4
10.1109/access.2020.2992341
10.1088/1757-899x/1099/1/012005
10.1007/s00259-020-04953-1
10.1007/s10489-020-01770-9
10.1007/s42979-020-00410-w
10.1007/s41870-020-00571-0
10.1101/2020.04.24.20078584 |
A CNN-transformer fusion network for COVID-19 CXR image classification. | The global health crisis caused by the fast spread of coronavirus disease (Covid-19) has posed great danger to healthcare, the economy, and many other aspects of society. The highly infectious and insidious nature of the new coronavirus greatly increases the difficulty of outbreak prevention and control. Early and rapid detection of Covid-19 is an effective way to reduce its spread. However, detecting Covid-19 accurately and quickly in large populations remains a major challenge worldwide. In this study, a CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-rays. This framework includes two parts: data processing and image classification. The data processing stage eliminates the differences between data from different medical institutions so that they have the same storage format; in the image classification stage, we use a multi-branch network with a custom convolution module and a transformer module, including feature extraction, feature focus, and feature classification sub-networks. The feature extraction subnetworks extract shallow features of the image and exchange information through the convolution and transformer modules. Both local and global features are extracted by the convolution and transformer modules of the feature-focus subnetworks and are classified by the feature classification subnetworks. The proposed network can decide whether or not a patient has pneumonia, and differentiate between Covid-19 and bacterial pneumonia. This network was implemented on the collected benchmark datasets, and the results show that accuracy, precision, recall, and F1 score are 97.09%, 97.16%, 96.93%, and 97.04%, respectively. 
Our network was compared with other researchers' proposed methods and achieved better results in terms of accuracy, precision, and F1 score, proving that it is superior for Covid-19 detection. With further improvements to this network, we hope that it will provide doctors with an effective tool for diagnosing Covid-19. | PloS one | 2022-10-28T00:00:00 | [
"KaiCao",
"TaoDeng",
"ChuanlinZhang",
"LimengLu",
"LinLi"
] | 10.1371/journal.pone.0276758
10.1155/2021/2560388
10.1001/jama.2020.1585
10.1148/radiol.2020200432
10.1155/2022/1465173
10.1002/mp.13264
10.1148/radiol.2020200463
10.1136/bmj.m1328
10.1016/j.media.2017.07.005
10.1109/TPAMI.2019.2913372
10.1016/j.compbiomed.2020.103792
10.3389/fmed.2020.00427/full
10.1016/j.media.2020.101794
10.1038/s41598-020-76550-z
10.3389/fpubh.2022.948205/full
10.3389/fpubh.2022.948205
10.1155/2022/4254631
10.1016/j.mlwa.2021.100138
10.1007/978-981-33-4673-4_55
10.1148/radiol.2021203957
10.1007/s10278-013-9622-7 |
Classification and Detection of COVID-19 and Other Chest-Related Diseases Using Transfer Learning. | COVID-19 has infected millions of people worldwide over the past few years. The main technique used for COVID-19 detection is reverse transcription, which is expensive, sensitive, and requires medical expertise. X-ray imaging is an alternative and more accessible technique. This study aimed to improve detection accuracy to create a computer-aided diagnostic tool. Combining other artificial intelligence applications techniques with radiological imaging can help detect different diseases. This study proposes a technique for the automatic detection of COVID-19 and other chest-related diseases using digital chest X-ray images of suspected patients by applying transfer learning (TL) algorithms. For this purpose, two balanced datasets, Dataset-1 and Dataset-2, were created by combining four public databases and collecting images from recently published articles. Dataset-1 consisted of 6000 chest X-ray images with 1500 for each class. Dataset-2 consisted of 7200 images with 1200 for each class. To train and test the model, TL with nine pretrained convolutional neural networks (CNNs) was used with augmentation as a preprocessing method. The network was trained to classify using five classifiers: two-class classifier (normal and COVID-19); three-class classifier (normal, COVID-19, and viral pneumonia), four-class classifier (normal, viral pneumonia, COVID-19, and tuberculosis (Tb)), five-class classifier (normal, bacterial pneumonia, COVID-19, Tb, and pneumothorax), and six-class classifier (normal, bacterial pneumonia, COVID-19, viral pneumonia, Tb, and pneumothorax). For two, three, four, five, and six classes, our model achieved a maximum accuracy of 99.83, 98.11, 97.00, 94.66, and 87.29%, respectively. | Sensors (Basel, Switzerland) | 2022-10-28T00:00:00 | [
"Muhammad TahirNaseem",
"TajmalHussain",
"Chan-SuLee",
"Muhammad AdnanKhan"
] | 10.3390/s22207977
10.1016/j.jaut.2020.102433
10.1109/ACCESS.2020.3010287
10.1109/ACCESS.2017.2788044
10.1155/2022/6112815
10.1007/s11042-019-07820-w
10.1109/JBHI.2015.2425041
10.1016/j.irbm.2020.05.003
10.1016/j.compbiomed.2020.103795
10.1007/s10916-021-01745-4
10.1007/s10044-021-00984-y
10.3390/sym12040651
10.1007/s42600-021-00151-6
10.1007/s13246-020-00865-4
10.1016/j.compbiomed.2020.103792
10.1007/s11263-015-0816-y
10.1016/j.chaos.2020.109944
10.1016/j.cmpb.2020.105581
10.1038/s41598-020-76550-z
10.1016/j.eswa.2020.114054
10.1007/s00264-020-04609-7
10.1155/2018/2908517
10.1016/j.cell.2018.02.010
10.1109/TMI.2013.2284099
10.1109/TMI.2013.2290491
10.1016/j.compbiomed.2019.04.024
10.1145/3065386
10.1186/s40537-019-0192-5
10.1016/j.compbiomed.2021.104608 |
Role of Drone Technology Helping in Alleviating the COVID-19 Pandemic. | The COVID-19 pandemic, caused by a new coronavirus, has affected economic and social standards as governments and healthcare regulatory agencies throughout the world expressed worry and explored harsh preventative measures to counteract the disease's spread and intensity. Several academics and experts are primarily concerned with halting the continuous spread of the novel virus. Social distancing, the closing of borders, the avoidance of big gatherings, contactless transit, and quarantine are important methods. Multiple nations employ autonomous, digital, wireless, and other promising technologies to tackle this coronavirus pneumonia. This research examines a number of potential technologies, including unmanned aerial vehicles (UAVs), artificial intelligence (AI), blockchain, deep learning (DL), the Internet of Things (IoT), edge computing, and virtual reality (VR), in an effort to mitigate the danger of COVID-19. Due to their ability to transport food and medical supplies to a specific location, UAVs are currently being utilized as an innovative method to combat this illness. This research intends to examine the possibilities of UAVs in the context of the COVID-19 pandemic from several angles. UAVs offer intriguing options for delivering medical supplies, spraying disinfectants, broadcasting communications, conducting surveillance, inspecting, and screening patients for infection. This article examines the use of drones in healthcare as well as the advantages and disadvantages of strict adoption. Finally, challenges, opportunities, and future work are discussed to assist in adopting drone technology to tackle COVID-19-like diseases. | Micromachines | 2022-10-28T00:00:00 | [
"Syed Agha HassnainMohsan",
"Qurat Ul AinZahra",
"Muhammad AsgharKhan",
"Mohammed HAlsharif",
"Ismail AElhaty",
"AbuJahid"
] | 10.3390/mi13101593
10.1109/IOTM.0011.2100053
10.1017/dmp.2021.9
10.3390/drones6010015
10.1007/s10796-021-10131-x
10.3389/frcmn.2020.566853
10.1080/14649365.2021.1921245
10.1177/2043820620934267
10.1016/j.future.2020.08.046
10.1016/j.retram.2020.01.002
10.2139/ssrn.3565463
10.1109/ACCESS.2018.2875739
10.1109/ACCESS.2019.2905347
10.3390/drones4040068
10.1109/TMM.2021.3075566
10.1109/TITS.2021.3113787
10.1109/TII.2022.3174160
10.3389/fpubh.2022.855994
10.1007/s11655-020-3192-6
10.1038/s41598-020-73510-5
10.1002/er.6007
10.3390/drones6060147
10.3390/rs11070820
10.3390/rs12213539
10.3390/mi13060977
10.1016/j.iot.2020.100218
10.1145/3001836
10.1016/j.adhoc.2020.102324
10.1080/17538947.2021.1952324
10.1049/ntw2.12040
10.3390/drones5030058
10.1002/ett.4255
10.3390/drones5010018
10.3390/drones6050109
10.3390/ijerph18052637
10.53553/JCH.v09i01.009
10.1016/j.trip.2021.100453
10.1016/j.ijhm.2020.102758
10.1016/j.ajic.2022.03.004
10.1016/j.vaccine.2016.06.022
10.3390/ijerph17239117
10.28991/HIJ-2020-01-02-03
10.1007/s13205-020-02581-y
10.1155/2022/9718580
10.1109/MNET.011.2000439
10.1007/s10462-021-10106-z
10.1007/s41666-020-00080-6
10.1016/j.ceh.2020.03.001
10.4108/eai.13-7-2018.163997
10.1016/j.clineuro.2021.106655
10.1109/MCE.2020.2992034
10.1101/2020.04.06.20039909
10.1002/ett.4245
10.1108/IJPCC-05-2020-0046
10.1016/j.scs.2020.102589
10.1136/bmjsem-2020-000943
10.1016/j.jnca.2022.103341
10.1109/TII.2021.3101651
10.1007/s10586-022-03722-z
10.1016/j.ijhcs.2020.102573
10.1007/s43926-021-00005-8
10.1016/j.bspc.2022.103658
10.3390/app12062828
10.3389/fnbot.2022.840594
10.1016/j.techfore.2020.120431
10.32604/cmc.2022.021850
10.1007/s00607-021-01022-9
10.1186/s12909-020-02245-8
10.3389/fneur.2021.646902
10.1007/s12311-020-01139-1
10.1002/adma.202103646
10.1016/j.jnlssr.2020.06.011
10.1109/JBHI.2021.3103404
10.1109/IOTM.1100.2000068
10.1109/MWC.001.2000429
10.1109/JSEN.2022.3188929
10.1109/ACCESS.2021.3133796
10.1007/s00607-022-01064-7
10.3991/ijim.v15i22.22623
10.1007/s11036-018-1193-x
10.3390/drones4040065
10.1016/j.jnca.2016.12.012
10.1109/JIOT.2020.3007518
10.1111/1758-5899.13007 |
Machine Learning and Deep Learning in Cardiothoracic Imaging: A Scoping Review. | Machine-learning (ML) and deep-learning (DL) algorithms are part of a group of modeling algorithms that grasp the hidden patterns in data based on a training process, enabling them to extract complex information from the input data. In the past decade, these algorithms have been increasingly used for image processing, specifically in the medical domain. Cardiothoracic imaging is one of the early adopters of ML/DL research, and the COVID-19 pandemic resulted in more research focus on the feasibility and applications of ML/DL in cardiothoracic imaging. In this scoping review, we systematically searched available peer-reviewed medical literature on cardiothoracic imaging and quantitatively extracted key data elements in order to get a big picture of how ML/DL have been used in the rapidly evolving cardiothoracic imaging field. During this report, we provide insights on different applications of ML/DL and some nuances pertaining to this specific field of research. Finally, we provide general suggestions on how researchers can make their research more than just a proof-of-concept and move toward clinical adoption. | Diagnostics (Basel, Switzerland) | 2022-10-28T00:00:00 | [
"BardiaKhosravi",
"PouriaRouzrokh",
"ShahriarFaghani",
"ManaMoassefi",
"SanazVahdati",
"ElhamMahmoudi",
"HamidChalian",
"Bradley JErickson"
] | 10.3390/diagnostics12102512
10.1007/s12525-021-00475-2
10.4997/jrcpe.2020.309
10.1093/eurheartj/ehw302
10.3389/fdata.2018.00006
10.3348/kjr.2017.18.4.570
10.1001/jama.2018.1150
10.1007/s00256-021-03876-8
10.1038/s42256-021-00305-2
10.1016/j.acra.2021.12.032
10.1007/s00330-021-07781-5
10.11622/smedj.2019141
10.1016/j.jacr.2017.12.021
10.1097/RTI.0000000000000453
10.1038/s41598-021-84698-5
10.1016/j.acra.2018.10.007
10.1259/bjro.20200037
10.1007/s00247-021-05177-7
10.1016/j.media.2017.06.015
10.1007/978-981-16-3783-4_15
10.7326/M18-0850
10.1007/s10462-021-10106-z
10.1145/3065386
10.5555/1593511
10.1186/s12874-021-01404-9
10.1007/s00330-020-07450-z
10.1148/radiol.2020200432
10.1148/radiol.2020204226
10.1016/j.compbiomed.2021.104304
10.1148/ryai.2019180041
10.1118/1.3528204
10.1016/j.cmpb.2021.106373
10.3348/kjr.2021.0148
10.3390/s21217059
10.1007/s00330-019-06628-4
10.1609/aaai.v33i01.3301590
10.1186/s41747-018-0068-z
10.1186/s13244-020-00887-2
10.1148/ryai.220010
10.1016/j.cmpb.2019.05.020
10.1016/j.compbiomed.2022.105466
10.1007/s00431-021-04061-8
10.1155/2021/6050433
10.1016/j.cmpb.2022.106815
10.1155/2022/4185835
10.1016/j.ejmp.2019.11.026
10.1002/mp.15019
10.3390/tomography7040054
10.1055/a-1717-2703
10.1016/j.cmpb.2019.105288
10.1016/j.media.2020.101823
10.1016/j.compbiomed.2021.104689
10.1109/TMI.2018.2833385
10.1016/j.artmed.2020.101975
10.1016/j.morpho.2019.09.002
10.1002/mp.14066
10.1016/j.media.2022.102362
10.1109/TMI.2021.3053008
10.1088/1361-6560/ab1cee
10.1371/journal.pone.0244745
10.1016/j.media.2022.102491
10.1088/1361-6560/ab18db
10.3233/XST-17358
10.1148/ryai.220061
10.1148/radiol.2018182294
10.1148/rg.2020200040
10.1016/j.ejmp.2021.08.011
10.1016/j.lungcan.2021.01.027
10.1016/j.media.2022.102389
10.1097/RLI.0000000000000707
10.1148/ryai.2020190043
10.1148/ryai.2021200267
10.1007/s00530-022-00960-4
10.1148/ryai.210290
10.1016/j.acra.2021.09.007
10.1097/RLI.0000000000000763
10.1145/3531146.3533193 |
Pseudo-Label Guided Image Synthesis for Semi-Supervised COVID-19 Pneumonia Infection Segmentation. | Coronavirus disease 2019 (COVID-19) has become a severe global pandemic. Accurate pneumonia infection segmentation is important for assisting doctors in diagnosing COVID-19. Deep learning-based methods can be developed for automatic segmentation, but the lack of large-scale well-annotated COVID-19 training datasets may hinder their performance. Semi-supervised segmentation is a promising solution which explores large amounts of unlabelled data, while most existing methods focus on pseudo-label refinement. In this paper, we propose a new perspective on semi-supervised learning for COVID-19 pneumonia infection segmentation, namely pseudo-label guided image synthesis. The main idea is to keep the pseudo-labels and synthesize new images to match them. The synthetic image has the same COVID-19 infected regions as indicated in the pseudo-label, and the reference style extracted from the style code pool is added to make it more realistic. We introduce two representative methods by incorporating the synthetic images into model training, including single-stage Synthesis-Assisted Cross Pseudo Supervision (SA-CPS) and multi-stage Synthesis-Assisted Self-Training (SA-ST), which can work individually as well as cooperatively. Synthesis-assisted methods expand the training data with high-quality synthetic data, thus improving the segmentation performance. Extensive experiments on two COVID-19 CT datasets for segmenting the infections demonstrate our method is superior to existing schemes for semi-supervised segmentation, and achieves the state-of-the-art performance on both datasets. Code is available at: https://github.com/FeiLyu/SASSL. | IEEE transactions on medical imaging | 2022-10-27T00:00:00 | [
"FeiLyu",
"MangYe",
"Jonathan FrederikCarlsen",
"KennyErleben",
"SuneDarkner",
"Pong CYuen"
] | 10.1109/TMI.2022.3217501 |
Comprehensive Survey of Machine Learning Systems for COVID-19 Detection. | The last two years are considered the most crucial and critical period of the COVID-19 pandemic affecting most life aspects worldwide. This virus spreads quickly within a short period, increasing the fatality rate associated with the virus. From a clinical perspective, several diagnosis methods are carried out for early detection to avoid virus propagation. However, the capabilities of these methods are limited and have various associated challenges. Consequently, many studies have been performed for COVID-19 automated detection without involving manual intervention and allowing an accurate and fast decision. As is the case with other diseases and medical issues, Artificial Intelligence (AI) provides the medical community with potential technical solutions that help doctors and radiologists diagnose based on chest images. In this paper, a comprehensive review of the mentioned AI-based detection solution proposals is conducted. More than 200 papers are reviewed and analyzed, and 145 articles have been extensively examined to specify the proposed AI mechanisms with chest medical images. A comprehensive examination of the associated advantages and shortcomings is illustrated and summarized. Several findings are concluded as a result of a deep analysis of all the previous works using machine learning for COVID-19 detection, segmentation, and classification. | Journal of imaging | 2022-10-27T00:00:00 | [
"BayanAlsaaidah",
"Moh'd RasoulAl-Hadidi",
"HebaAl-Nsour",
"RajaMasadeh",
"NaelAlZubi"
] | 10.3390/jimaging8100267
10.1177/2347631120983481
10.1016/j.jbusres.2020.05.030
10.1080/16549716.2020.1788263
10.11591/ijece.v10i5.pp4738-4744
10.1148/rg.2017160130
10.1016/j.jmir.2019.09.005
10.1147/rd.33.0210
10.1109/MCE.2016.2640698
10.25046/aj030435
10.1007/s10514-022-10039-8
10.1016/j.crad.2018.05.015
10.1038/nature14539
10.1021/acs.chas.0c00075
10.1561/2000000039
10.1016/j.neunet.2014.09.003
10.1109/TSMC.1971.4308320
10.1109/5.726791
10.1016/j.knosys.2018.10.034
10.1145/2347736.2347755
10.1016/j.jinf.2020.04.004
10.1016/j.imu.2020.100449
10.1038/s41598-021-93719-2
10.3390/s22062224
10.1093/clinchem/hvaa200
10.1016/j.chaos.2020.110120
10.1007/s10462-021-10106-z
10.1016/j.eswa.2022.116540
10.34306/ajri.v3i2.659
10.1111/exsy.12759
10.1007/s11042-020-09894-3
10.1109/TIP.2021.3058783
10.1109/JBHI.2021.3074893
10.1007/s12559-020-09785-7
10.1007/s13246-020-00865-4
10.1109/ACCESS.2020.3010287
10.1016/j.cmpb.2020.105608
10.1109/JBHI.2020.3037127
10.1016/j.patrec.2020.09.010
10.1007/s11042-021-10783-6
10.3389/fmed.2020.00427
10.1007/s10489-020-01826-w
10.1038/s41598-020-74539-2
10.1109/ACCESS.2020.2994762
10.1145/3422622
10.1016/j.compbiomed.2012.12.004
10.1002/mp.14676
10.1109/TMI.2020.2996645
10.1016/j.patcog.2021.108109
10.1155/2021/5544742
10.3390/s21217116
10.1016/j.compbiomed.2020.103869
10.1016/j.compbiomed.2020.104066
10.1016/j.compbiomed.2021.104348
10.1016/j.compbiomed.2021.104781
10.1016/j.chaos.2020.110170
10.1016/j.eswa.2020.114054
10.1016/j.aej.2021.06.024
10.1016/j.compbiomed.2021.104572
10.1016/j.imu.2020.100412
10.20944/preprints202003.0300.v1
10.1007/s11042-021-11299-9
10.1101/2020.04.11.20054643
10.1016/j.asoc.2021.108190
10.1101/2020.05.10.20097063
10.1101/2020.07.02.20136721
10.1016/j.imu.2022.100945
10.1101/2020.11.08.20228080
10.1016/j.compmedimag.2021.102008
10.1007/s40846-020-00529-4
10.1101/2020.03.30.20047787
10.1080/07391102.2020.1788642
10.1080/07391102.2021.1875049
10.1109/TII.2021.3057524
10.1016/j.radi.2020.10.018
10.1016/j.imu.2020.100505
10.1016/j.inffus.2021.04.008
10.1007/s42600-020-00091-7
10.1007/s10489-020-02010-w
10.3390/ijerph18158052
10.1002/ima.22525
10.1002/ima.22527
10.31661/jbpe.v0i0.2008-1153
10.1002/jemt.23713
10.1371/journal.pone.0242899
10.3390/jpm11010028
10.1007/s40747-020-00199-4
10.1007/s11760-020-01820-2
10.1016/j.patcog.2021.107848
10.1016/j.compbiomed.2020.104037
10.1007/s10044-021-00984-y
10.1152/physiolgenomics.00084.2020
10.14299/ijser.2020.03.02
10.20944/preprints202009.0524.v1
10.1148/radiol.2020200905
10.1007/s12539-020-00403-6
10.1007/s00530-021-00826-1
10.1186/s12938-020-00807-x
10.1186/s12938-020-00831-x
10.1038/s41598-020-76141-y
10.1007/s40747-020-00216-6
10.1007/s10096-020-03901-z
10.1007/s10489-020-02149-6
10.1007/s00259-020-04929-1
10.1007/s00521-021-06219-9
10.3390/sym12040651
10.1109/JAS.2020.1003393
10.1016/j.patcog.2020.107747
10.1109/ACCESS.2021.3058854
10.1016/j.bbe.2021.01.002
10.18517/ijaseit.10.2.11446
10.1109/TMI.2020.2995965
10.1007/s10489-020-01829-7
10.1007/s42600-020-00110-7
10.1080/02664763.2020.1849057
10.1016/j.compbiomed.2020.103792
10.3390/bioengineering8020026
10.1016/j.asoc.2022.108966
10.1155/2022/7377502
10.3389/fdgth.2021.662343
10.1049/ipr2.12474 |
Deep Transfer Learning for COVID-19 Detection and Lesion Recognition Using Chest CT Images. | Starting from December 2019, the global pandemic of coronavirus disease 2019 (COVID-19) has continued to expand and has caused several million deaths worldwide. Fast and accurate diagnostic methods for COVID-19 detection play a vital role in containing the plague. Chest computed tomography (CT) is one of the most commonly used diagnosis methods. However, a complete CT scan has hundreds of slices, and it is time-consuming for radiologists to check each slice to diagnose COVID-19. This study introduces a novel method for fast and automated COVID-19 diagnosis using chest CT scans. The proposed models are based on a state-of-the-art deep convolutional neural network (CNN) architecture, and a 2D global max pooling (globalMaxPool2D) layer is used to improve the performance. We compare the proposed models to existing state-of-the-art deep learning models such as CNN-based models and vision transformer (ViT) models. Based on metrics such as the area under the curve (AUC), sensitivity, specificity, accuracy, and false discovery rate (FDR), experimental results show that the proposed models outperform the previous methods, and the best model achieves an AUC of 0.9744 and an accuracy of 94.12% on our test datasets. It is also shown that the accuracy is improved by around 1% by using the 2D global max pooling layer. Moreover, a heatmap method to highlight the lesion area on COVID-19 chest CT images is introduced in the paper. This heatmap method is helpful for a radiologist to identify the abnormal pattern of COVID-19 on chest CT images. In addition, we also developed a freely accessible online simulation software for automated COVID-19 detection using CT images. The proposed deep learning models and software tool can be used by radiologists to diagnose COVID-19 more accurately and efficiently. 
| Computational and mathematical methods in medicine | 2022-10-27T00:00:00 | [
"SaiZhang",
"Guo-ChangYuan"
] | 10.1155/2022/4509394
10.1016/S0140-6736(20)30211-7
10.1001/jama.2020.1585
10.1111/all.14238
10.1016/j.jhin.2020.03.001
10.1093/cid/ciaa344
10.1148/radiol.2020200642
10.1148/radiol.2020200432
10.1007/s00330-020-06975-7
10.1016/S1473-3099(20)30086-4
10.1007/s00330-020-06731-x
10.1021/acssensors.0c02042
10.1146/annurev-bioeng-071516-044442
10.1007/978-1-4471-4929-3
10.1155/2018/5157020
10.1016/S2213-2600(18)30286-8
10.1148/radiol.2020192154
10.1136/gutjnl-2017-314547
10.1093/mind/LIX.236.433
10.1162/089976602760128018
10.1155/2017/8314740
10.1109/FG.2017.42
10.1109/HealthCom.2017.8210843
10.1016/j.compbiomed.2021.104306
10.1016/j.compbiomed.2020.103795
10.1038/s41591-020-0931-3
10.1016/j.ejrad.2020.109402
10.1016/j.compbiomed.2021.104348
10.1016/j.imu.2020.100427
10.7717/peerj.10086
10.1016/j.patrec.2020.10.001
10.3390/healthcare10010085
10.1109/TENSYMP52854.2021.9550819
10.1007/978-1-4842-2766-4
10.1109/ICEngTechnol.2017.8308186
10.1145/3065386
10.1609/aaai.v31i1.11231
10.3390/s22082988
10.1007/s11263-019-01228-7 |
Diagnostic performance of corona virus disease 2019 chest computer tomography image recognition based on deep learning: Systematic review and meta-analysis. | To analyze the diagnostic performance of deep learning models used on corona virus disease 2019 (COVID-19) computed tomography (CT) chest scans. The included sample contains healthy people, confirmed COVID-19 patients, and unconfirmed suspected patients with corresponding symptoms.
PubMed, Web of Science, Wiley, China National Knowledge Infrastructure, WAN FANG DATA, and Cochrane Library were searched for articles. Three researchers independently screened the literature, extracted the data. Any differences will be resolved by consulting the third author to ensure that a highly reliable and useful research paper is produced. Data were extracted from the final articles, including: authors, country of study, study type, sample size, participant demographics, type and name of AI software, results (accuracy, sensitivity, specificity, ROC, and predictive values), other outcome(s) if applicable.
Among the 3891 searched results, 32 articles describing 51,392 confirmed patients and 7686 non-infected individuals met the inclusion criteria. The pooled sensitivity, the pooled specificity, positive likelihood ratio, negative likelihood ratio and the pooled diagnostic odds ratio (OR) is 0.87(95%CI [confidence interval]: 0.85, 0.89), 0.85(95%CI: 0.82, 0.87), 6.7(95%CI: 5.7, 7.8), 0.14(95%CI: 0.12, 0.16), and 49(95%CI: 38, 65). Further, the AUROC (area under the receiver operating characteristic curve) is 0.94(95%CI: 0.91, 0.96). Secondary outcomes are specific sensitivity and specificity within subgroups defined by different models. Resnet has the best diagnostic performance, which has the highest sensitivity (0.91[95%CI: 0.87, 0.94]), specificity (0.90[95%CI: 0.86, 0.93]) and AUROC (0.96[95%CI: 0.94, 0.97]), according to the AUROC, we can get the rank Resnet > Densenet > VGG > Mobilenet > Inception > Effficient > Alexnet.
Our study findings show that deep learning models have immense potential in accurately stratifying COVID-19 patients and in correctly differentiating them from patients with other types of pneumonia and normal patients. Implementation of deep learning-based tools can assist radiologists in correctly and quickly detecting COVID-19 and, consequently, in combating the COVID-19 pandemic. | Medicine | 2022-10-26T00:00:00 | [
"QiaolanWang",
"JingxuanMa",
"LuoningZhang",
"LinshenXie"
] | 10.1097/MD.0000000000031346 |
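As an illustrative aside, the diagnostic metrics pooled in the record above are linked by standard definitions. The sketch below is not the meta-analytic pooling itself (which is done per study, so plugging the pooled sensitivity/specificity into these formulas will not exactly reproduce the abstract's LR and OR figures); it only shows how the quantities relate:

```python
def diagnostic_metrics(sensitivity: float, specificity: float) -> dict:
    """Likelihood ratios and diagnostic odds ratio from sensitivity/specificity."""
    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    lr_neg = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    return {"LR+": lr_pos, "LR-": lr_neg, "DOR": lr_pos / lr_neg}

# Using the pooled point estimates from the abstract (0.87, 0.85):
metrics = diagnostic_metrics(0.87, 0.85)
```

With these point estimates, LR+ comes out near 5.8 rather than the pooled 6.7, which is expected given per-study pooling.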
Automated diagnosis of COVID-19 using radiological modalities and Artificial Intelligence functionalities: A retrospective study based on chest HRCT database. | The rapid spread of coronavirus has challenged the healthcare system's proper management and diagnosis of the infection. Real-time reverse transcription-polymerase chain reaction (RT-PCR), though considered the standard testing measure, has low sensitivity and is time-consuming, which restricts the fast screening of individuals. Therefore, computed tomography (CT) is used to complement the traditional approaches and provide fast and effective screening over other diagnostic methods. This work aims to appraise the importance of chest CT findings of COVID-19 and post-COVID in the diagnosis and prognosis of infected patients and to explore the ways and means to integrate CT findings for the development of advanced Artificial Intelligence (AI) tool-based predictive diagnostic techniques.
The retrospective study includes a 188 patient database with COVID-19 infection confirmed by RT-PCR testing, including post-COVID patients. Patients underwent chest high-resolution computer tomography (HRCT), where the images were evaluated for common COVID-19 findings and involvement of the lung and its lobes based on the coverage region. The radiological modalities analyzed in this study may help the researchers in generating a predictive model based on AI tools for further classification with a high degree of reliability.
Mild to moderate ground glass opacities (GGO) with or without consolidation, crazy paving patterns, and halo signs were common COVID-19 related findings. A CT score is assigned to every patient based on the severity of lung lobe involvement.
Typical multifocal, bilateral, and peripheral distributions of GGO are the main characteristics related to COVID-19 pneumonia. Chest HRCT can be considered a standard method for timely and efficient assessment of disease progression and management severity. With its fusion with AI tools, chest HRCT can be used as a one-stop platform for radiological investigation and automated diagnosis system. | Biomedical signal processing and control | 2022-10-25T00:00:00 | [
"UpasanaBhattacharjya",
"Kandarpa KumarSarma",
"Jyoti PrakashMedhi",
"Binoy KumarChoudhury",
"GeetanjaliBarman"
] | 10.1016/j.bspc.2022.104297
10.1101/2020.02
10.1101/2020.03.12.20027185 |
The Deep Learning-Based Framework for Automated Predicting COVID-19 Severity Score. | With the COVID-19 pandemic sweeping the globe, an increasing number of people are working on pandemic research, but there is less effort on predicting its severity. Diagnostic chest imaging is thought to be a quick and reliable way to identify the severity of COVID-19. We describe a deep learning method to automatically predict the severity score of patients by analyzing chest X-rays, with the goal of collaborating with doctors to create corresponding treatment measures for patients and can also be used to track disease change. Our model consists of a feature extraction phase and an outcome prediction phase. The feature extraction phase uses a DenseNet backbone network to extract 18 features related to lung diseases from CXRs; the outcome prediction phase, which employs the MLP regression model, selects several important features for prediction from the features extracted in the previous phase and demonstrates the effectiveness of our model by comparing it with several commonly used regression models. On a dataset of 2373 CXRs, our model predicts the geographic extent score with 1.02 MAE and the lung opacity score with 0.85 MAE. | Procedia computer science | 2022-10-25T00:00:00 | [
"YongchangZheng",
"HongweiDong"
] | 10.1016/j.procs.2022.09.165 |
Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia. | Our automated deep learning-based approach identifies consolidation/collapse in LUS images to aid in the identification of late stages of COVID-19 induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort. This effort in practice becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. We moreover introduce a novel sampled quaternary method which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. This method outperformed the inaccurately supervised video-based method and more surprisingly, the supervised frame-based approach with respect to metrics such as precision-recall area under curve (PR-AUC) and F1 score, despite being a form of inaccurate learning. We argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using a ten-fold cross validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. 
While the sampled quaternary method significantly lowers the labelling effort and must still be verified on a larger consolidation/collapse dataset, our proposed classifier using this video-based method is clinically comparable with trained experts' performance. | Scientific reports | 2022-10-21T00:00:00 | [
"NabeelDurrani",
"DamjanVukovic",
"Jeroenvan der Burgt",
"MariaAntico",
"Ruud J Gvan Sloun",
"DavidCanty",
"MarianSteffens",
"AndrewWang",
"AlistairRoyse",
"ColinRoyse",
"KaviHaji",
"JasonDowling",
"GirijaChetty",
"DavideFontanarosa"
] | 10.1038/s41598-022-22196-y
10.1016/j.compbiomed.2021.104742
10.1186/s12931-020-01504-y
10.1016/j.hrtlng.2021.02.015
10.1007/s12630-020-01704-6
10.1111/1742-6723.13546
10.1186/s13089-018-0103-6
10.1016/j.crad.2020.05.001
10.1109/TMI.2021.3117246
10.1109/TUFFC.2021.3070696
10.1109/TMI.2020.2994459
10.1109/TUFFC.2022.3161716
10.1016/j.imu.2021.100687
10.1109/TUFFC.2020.3002249
10.1109/TUFFC.2020.3005512
10.1002/jum.16052
10.1038/s41598-021-90153-2
10.1016/j.ejmp.2021.02.023
10.1186/s13063-019-4003-2
10.1002/jum.15548
10.1097/00005382-199621000-00002
10.1093/nsr/nwx106
10.1118/1.3611983
10.2307/2095465
10.1016/j.ipl.2005.11.003
10.1371/journal.pone.0118432
10.1007/978-3-642-40994-3_29
10.11613/BM.2012.031 |
COVID19 Diagnosis Using Chest X-rays and Transfer Learning. | A pandemic of respiratory illnesses from a novel coronavirus known as Sars-CoV-2 has swept across the globe since December of 2019. This calls upon the research community, including medical imaging, to provide effective tools for use in combating this virus. Research in biomedical imaging of viral patients is already very active, with machine learning models being created for diagnosing Sars-CoV-2 infections in patients using CT scans and chest x-rays. We aim to build upon this research. Here we used a transfer-learning approach to develop models capable of diagnosing COVID19 from chest x-rays. For this work we compiled a dataset of 112120 negative images from the Chest X-Ray 14 dataset and 2725 positive images from public repositories. We tested multiple models, including logistic regression, random forest, and XGBoost with and without principal component analysis, using five-fold cross-validation to evaluate recall, precision, and f1-score. These models were compared to a pre-trained deep-learning model for evaluating chest x-rays called COVID-Net. Our best model was XGBoost with principal components, with a recall, precision, and f1-score of 0.692, 0.960, and 0.804, respectively. This model greatly outperformed COVID-Net, which scored 0.987, 0.025, 0.048. This model, with its high precision and reasonable sensitivity, would be most useful as a "rule-in" test for COVID19. Though it outperforms some chemical assays in sensitivity, this model should be studied in patients who would not ordinarily receive a chest x-ray before being used for screening.
| medRxiv : the preprint server for health sciences | 2022-10-21T00:00:00 | [
"JonathanStubblefield",
"JasonCausey",
"DakotaDale",
"JakeQualls",
"EmilyBellis",
"JenniferFowler",
"KarlWalker",
"XiuzhenHuang"
] | 10.1101/2022.10.09.22280877
10.1145/2939672.2939785
10.1109/CVPR.2017.369
10.1101/2020.02.25.20021568
10.1101/2020.04.11.20062091
10.3390/diagnostics10090669
10.7937/91ah-v663
10.1148/radiol.2021203957
10.1007/s10278-013-9622-7
10.1613/jair.953 |
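The recall/precision/F1 triple reported for the XGBoost model above obeys the standard harmonic-mean relation; a minimal, purely illustrative check:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# The abstract's best model: precision 0.960, recall 0.692 -> F1 about 0.804
best_f1 = f1_score(0.960, 0.692)
```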
Automated system for classification of COVID-19 infection from lung CT images based on machine learning and deep learning techniques. | The objectives of our proposed study were as follows: The first objective is to segment the CT images using a k-means clustering algorithm for extracting the region of interest and to extract textural features using the gray level co-occurrence matrix (GLCM). The second objective is to implement machine learning classifiers such as Naïve Bayes, bagging and REPTree to classify the images into two image classes, namely COVID and non-COVID, and to compare the performance of three pre-trained CNN models (AlexNet, ResNet50 and SqueezeNet) with that of the proposed machine learning classifiers. Our dataset consists of 100 COVID and non-COVID images which are pre-processed and segmented with our proposed algorithm. Following the feature extraction process, three machine learning classifiers (Naïve Bayes, Bagging, and REPTree) were used to classify the normal and COVID patients. We implemented the three pre-trained CNN models (AlexNet, ResNet50 and SqueezeNet) to compare their performance with the machine learning classifiers. In machine learning, the Naïve Bayes classifier achieved the highest accuracy of 97%, whereas the ResNet50 CNN model attained the highest accuracy of 99%. Hence, the deep learning networks outperformed the machine learning techniques in the classification of COVID-19 images. | Scientific reports | 2022-10-19T00:00:00 | [
"BhargaveeGuhan",
"LailaAlmutairi",
"SSowmiya",
"USnekhalatha",
"TRajalakshmi",
"Shabnam MohamedAslam"
] | 10.1038/s41598-022-20804-5
10.1016/j.ajem.2020.04.048
10.1126/scitranslmed.abc1931
10.1136/bmj.m1403
10.1002/jmv.25721
10.1080/14737159.2020.1757437
10.1148/radiol.2021204522
10.1136/bmjopen-2020-04294
10.1016/j.ibmed.2020.100013
10.1007/s10489-020-01902-1
10.1007/s13246-020-00865-4
10.1007/s10140-020-01886-y
10.1007/s10096-020-03901-z
10.1007/s10044-021-00984-y
10.1007/s40009-020-01009-8
10.1155/2022/5329014
10.1016/j.jnlest.2022.100161
10.1016/j.compbiomed.2020.104037
10.1038/s41598-021-95537-y
10.1016/j.media.2020.101794
10.1016/j.bspc.2020.102365
10.1016/j.compbiomed.2020.103795
10.1148/radiol.2020191145
10.1148/rg.2017160130
10.18201/ijisae.2019252786
10.30534/ijatcse/2020/221932020
10.1109/TMI.2018.2806309
10.1016/j.jocs.2018.11.008 |
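The GLCM texture features used in the record above can be sketched in plain Python. This toy version (a single pixel offset, unnormalised counts, plus one derived feature, contrast) is illustrative only and is not the authors' implementation; the `tiny` image is made up:

```python
def glcm(img, levels, offset=(0, 1)):
    """Unnormalised gray-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    rows, cols = len(img), len(img[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum of P(i, j) * (i - j)^2 over the normalised matrix."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

tiny = [[0, 0, 1],
        [1, 2, 2],
        [0, 1, 2]]
co = glcm(tiny, levels=3)  # counts horizontal neighbour pairs
```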
Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT. | Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art. | Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society | 2022-10-19T00:00:00 | [
"Mohammad ArafatHussain",
"ZahraMirikharaji",
"MohammadMomeny",
"MahmoudMarhamati",
"Ali AsgharNeshat",
"RafeefGarbi",
"GhassanHamarneh"
] | 10.1016/j.compmedimag.2022.102127
10.7937/tcia.2020.gqry-nc81 |
Attention induction for a CT volume classification of COVID-19. | This study proposes a method to draw attention toward the specific radiological findings of coronavirus disease 2019 (COVID-19) in CT images, such as bilaterality of ground glass opacity (GGO) and/or consolidation, in order to improve the classification accuracy of input CT images.
We propose an induction mask that combines a similarity and a bilateral mask. A similarity mask guides attention to regions with similar appearances, and a bilateral mask induces attention to the opposite side of the lung to capture bilaterally distributed lesions. An induction mask for pleural effusion is also proposed in this study. ResNet18 with nonlocal blocks was trained by minimizing the loss function defined by the induction mask.
The four-class classification accuracy of the CT images of 1504 cases was 0.6443, where class 1 was the typical appearance of COVID-19 pneumonia, class 2 was the indeterminate appearance of COVID-19 pneumonia, class 3 was the atypical appearance of COVID-19 pneumonia, and class 4 was negative for pneumonia. The four classes were divided into two subgroups. The accuracy of COVID-19 and pneumonia classifications was evaluated, which were 0.8205 and 0.8604, respectively. The accuracy of the four-class and COVID-19 classifications improved when attention was paid to pleural effusion.
The proposed attention induction method was effective for the classification of CT images of COVID-19 patients. Improvement of the classification accuracy of class 3 by focusing on features specific to the class remains a topic for future work. | International journal of computer assisted radiology and surgery | 2022-10-18T00:00:00 | [
"YusukeTakateyama",
"TakahitoHaruishi",
"MasahiroHashimoto",
"YoshitoOtake",
"ToshiakiAkashi",
"AkinobuShimizu"
] | 10.1007/s11548-022-02769-y
10.1148/radiol.2020200432
10.1148/radiol.2020200343
10.1148/radiol.2020200905
10.1016/j.compbiomed.2020.103795
10.1007/s11356-020-10133-3
10.1109/ACCESS.2020.3016780
10.1109/TMI.2020.2994908
10.1016/j.patrec.2021.06.021
10.1016/j.media.2012.08.002
10.1007/s11263-015-0816-y
10.1007/BF02295996 |
Multi-texture features and optimized DeepNet for COVID-19 detection using chest x-ray images. | The corona virus disease 2019 (COVID-19) pandemic has a severe influence on population health all over the world. Various methods have been developed for detecting COVID-19, but diagnosing the disease from radiology and radiography images is one of the most effective procedures for identifying affected patients. Therefore, a robust and effective multi-local texture features (MLTF)-based feature extraction approach and an Improved Weed Sea-based DeepNet (IWS-based DeepNet) approach are proposed for detecting COVID-19 at an earlier stage. The IWS-based DeepNet detects COVID-19 by optimizing the structure of the Deep Convolutional Neural Network (Deep CNN). The IWS is devised by incorporating Improved Invasive Weed Optimization (IIWO) and Sea Lion Optimization (SLnO). The noises present in the input chest x-ray (CXR) image are discarded using Region of Interest (RoI) extraction by an adaptive thresholding technique. For feature extraction, the proposed MLTF approach considers various texture features to extract the best features. Finally, COVID-19 detection is performed using the proposed IWS-based DeepNet. Furthermore, the proposed technique achieved effective performance in terms of True Positive Rate (TPR), True Negative Rate (TNR), and accuracy, with maximum values of 0.933, 0.890, and 0.919, respectively. | Concurrency and computation : practice & experience | 2022-10-18T00:00:00 | [
"AnandbabuGopatoti",
"VijayalakshmiP"
] | 10.1002/cpe.7157
10.1007/s12559-021-09848-3
10.1109/LSP.2018.2817176 |
Computer-aided diagnostic for classifying chest X-ray images using deep ensemble learning. | Nowadays, doctors and radiologists are overwhelmed with a huge amount of work. This has led to the effort to design different Computer-Aided Diagnosis systems (CAD systems), with the aim of accomplishing a faster and more accurate diagnosis. The current development of deep learning is a big opportunity for the development of new CADs. In this paper, we propose a novel architecture for a convolutional neural network (CNN) ensemble for classifying chest X-ray (CRX) images into four classes: viral Pneumonia, Tuberculosis, COVID-19, and Healthy. Although computed tomography (CT) is the best way to detect and diagnose pulmonary issues, CT is more expensive than CRX. Furthermore, CRX is commonly the first step in the diagnosis, so it's very important to be accurate in the early stages of diagnosis and treatment.
We applied the transfer learning technique and data augmentation to all CNNs to obtain better performance. We have designed and evaluated two different CNN ensembles: Stacking and Voting. This system is ready to be applied in a CAD system to provide automated diagnosis as a second or preliminary opinion before the doctors or radiologists. Our results show a great improvement: 99% accuracy for the Stacking Ensemble and 98% accuracy for the Voting Ensemble.
To minimize misclassifications, we included six different base CNN models in our architecture (VGG16, VGG19, InceptionV3, ResNet101V2, DenseNet121 and CheXnet); the architecture could be extended to any number of models, and we likewise expect to extend the number of diseases detected. The proposed method has been validated using a large dataset created by mixing several public datasets with different image sizes and quality. As we demonstrate in the evaluation carried out, we reach better results and generalization compared with previous works. In addition, we make a first approach to explainable deep learning with the objective of providing professionals more information that may be valuable when evaluating CRXs. | BMC medical imaging | 2022-10-16T00:00:00 | [
"LaraVisuña",
"DandiYang",
"JavierGarcia-Blas",
"JesusCarretero"
] | 10.1186/s12880-022-00904-4
10.1007/s10489-020-01888-w
10.1016/j.eswa.2020.114054
10.1007/s10489-020-01902-1
10.3389/fmed.2020.00427
10.1016/j.imu.2020.100405
10.1007/s10462-020-09825-6
10.1016/j.media.2021.102121
10.3233/XST-200831
10.1055/s-0039-1677911
10.1186/s43055-021-00524-y
10.1016/j.cmpb.2020.105608
10.1016/j.eswa.2021.115141
10.1109/ACCESS.2020.3031384
10.1016/j.eswa.2021.115401
10.1109/TII.2021.3057683
10.1007/s13246-020-00966-0
10.1016/j.compeleceng.2019.08.004
10.1016/j.eswa.2020.113909
10.1007/s11263-015-0816-y
10.1148/rg.2017160032
10.3390/s21051742
10.1007/s10044-021-00970-4
10.1109/ACCESS.2020.2971257 |
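A voting ensemble like the one described in the record above can be sketched as probability averaging (soft voting). The class ordering and per-model probabilities below are hypothetical, purely to illustrate the mechanism:

```python
def soft_vote(model_probs):
    """Average class-probability vectors from several models and pick the argmax."""
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical outputs from three base CNNs for one CRX
# (classes: 0=viral pneumonia, 1=tuberculosis, 2=COVID-19, 3=healthy):
probs = [
    [0.10, 0.20, 0.60, 0.10],
    [0.20, 0.10, 0.50, 0.20],
    [0.05, 0.15, 0.70, 0.10],
]
label, averaged = soft_vote(probs)  # all three models favour class 2
```

A stacking ensemble differs in that a meta-learner is trained on these per-model probabilities instead of simply averaging them.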
Deep learning of longitudinal chest X-ray and clinical variables predicts duration on ventilator and mortality in COVID-19 patients. | To use deep learning of serial portable chest X-ray (pCXR) and clinical variables to predict mortality and duration on invasive mechanical ventilation (IMV) for Coronavirus disease 2019 (COVID-19) patients.
This is a retrospective study. Serial pCXR and serial clinical variables were analyzed for data from day 1, day 5, day 1-3, day 3-5, or day 1-5 on IMV (110 IMV survivors and 76 IMV non-survivors). The outcome variables were duration on IMV and mortality. With fivefold cross-validation, the performance of the proposed deep learning system was evaluated by receiver operating characteristic (ROC) analysis and correlation analysis.
Predictive models using 5-consecutive-day data outperformed those using 3-consecutive-day and 1-day data. Prediction using data closer to the outcome was generally better (i.e., day 5 data performed better than day 1 data, and day 3-5 data performed better than day 1-3 data). Prediction performance was generally better for the combined pCXR and non-imaging clinical data than either alone. The combined pCXR and non-imaging data of 5 consecutive days predicted mortality with an accuracy of 85 ± 3.5% (95% confidence interval (CI)) and an area under the curve (AUC) of 0.87 ± 0.05 (95% CI) and predicted the duration needed to be on IMV to within 2.56 ± 0.21 (95% CI) days on the validation dataset.
Deep learning of longitudinal pCXR and clinical data have the potential to accurately predict mortality and duration on IMV in COVID-19 patients. Longitudinal pCXR could have prognostic value if these findings can be validated in a large, multi-institutional cohort. | Biomedical engineering online | 2022-10-15T00:00:00 | [
"HongyiDuanmu",
"ThomasRen",
"HaifangLi",
"NeilMehta",
"Adam JSinger",
"Jeffrey MLevsky",
"Michael LLipton",
"Tim QDuong"
] | 10.1186/s12938-022-01045-z
10.1056/NEJMoa2001017
10.1016/S0140-6736(20)30183-5
10.1148/radiol.2020200642
10.1148/radiol.2020200463
10.1148/radiol.2020200432
10.1148/radiol.2020200370
10.1016/j.clinimag.2020.04.001
10.1007/s10140-020-01808-y
10.1097/RTI.0000000000000533
10.1148/radiol.2020201160
10.1001/jama.2020.6775
10.1177/08850666211033836
10.1056/NEJMp2006141
10.1161/CIRCULATIONAHA.115.001593
10.1590/0100-3984.2019.0049
10.1016/S1470-2045(19)30333-X
10.1371/journal.pone.0213653
10.1371/journal.pone.0221339
10.1148/radiol.2020201754
10.1007/s11547-020-01232-9
10.1371/journal.pone.0236621
10.7717/peerj.10309
10.1186/s12938-020-00831-x
10.1371/journal.pone.0236618
10.1093/infdis/jiaa447
10.1161/CIRCRESAHA.120.317134
10.32604/cmc.2020.010691
10.1093/cid/ciaa414
10.1056/NEJMoa2001316
10.1002/emp2.12205
10.3389/fmed.2021.661940
10.7150/ijms.51235
10.7717/peerj.10337
10.7717/peerj.11205
10.1109/TPAMI.2016.2572683 |
A novel abnormality annotation database for COVID-19 affected frontal lung X-rays. | Consistent clinical observations of characteristic findings of COVID-19 pneumonia on chest X-rays have attracted the research community to strive to provide a fast and reliable method for screening suspected patients. Several machine learning algorithms have been proposed to find the abnormalities in the lungs using chest X-rays specific to COVID-19 pneumonia and distinguish them from other etiologies of pneumonia. However, despite the enormous magnitude of the pandemic, there are very few instances of public databases of COVID-19 pneumonia, and to the best of our knowledge, there is no database with annotation of abnormalities on the chest X-rays of COVID-19 affected patients. Annotated databases of X-rays can be of significant value in the design and development of algorithms for disease prediction. Further, explainability analysis for the performance of existing or new deep learning algorithms will be enhanced significantly with access to ground-truth abnormality annotations. The proposed COVID Abnormality Annotation for X-Rays (CAAXR) database is built upon the BIMCV-COVID19+ database which is a large-scale dataset containing COVID-19+ chest X-rays. The primary contribution of this study is the annotation of the abnormalities in over 1700 frontal chest X-rays. Further, we define protocols for semantic segmentation as well as classification for robust evaluation of algorithms. We provide benchmark results on the defined protocols using popular deep learning models such as DenseNet, ResNet, MobileNet, and VGG for classification, and UNet, SegNet, and Mask-RCNN for semantic segmentation. The classwise accuracy, sensitivity, and AUC-ROC scores are reported for the classification models, and the IoU and DICE scores are reported for the segmentation models. | PloS one | 2022-10-15T00:00:00 | [
"SurbhiMittal",
"Vasantha KumarVenugopal",
"Vikash KumarAgarwal",
"ManuMalhotra",
"Jagneet SinghChatha",
"SavinayKapur",
"AnkurGupta",
"VikasBatra",
"PuspitaMajumdar",
"AakarshMalhotra",
"KartikThakral",
"SahebChhabra",
"MayankVatsa",
"RichaSingh",
"SantanuChaudhury"
] | 10.1371/journal.pone.0271931
10.1016/j.tmaid.2020.101623
10.2214/AJR.20.22969
10.1007/s13246-020-00865-4
10.3390/sym12040651
10.1016/j.compbiomed.2020.103869
10.1109/TMI.2020.2993291
10.1016/j.compbiomed.2020.103792
10.1097/RTI.0000000000000533
10.1016/j.mehy.2020.109761
10.1038/s41598-020-76550-z
10.1371/journal.pone.0247176
10.1109/JBHI.2021.3111415
10.1007/s13278-021-00731-5
10.1016/j.media.2021.102046
10.1007/s10489-020-01900-3
10.1109/ACCESS.2020.3010287
10.1016/j.compbiomed.2021.104319
10.1109/TPAMI.2016.2644615
10.1371/journal.pmed.1002683
10.1126/science.aax2342
10.1109/TCYB.2019.2905157 |
An Intelligent Sensor Based Decision Support System for Diagnosing Pulmonary Ailment through Standardized Chest X-ray Scans. | Academics and the health community are paying much attention to developing smart remote patient monitoring, sensors, and healthcare technology. For the analysis of medical scans, various studies integrate sophisticated deep learning strategies. A smart monitoring system is needed as a proactive diagnostic solution that may be employed in an epidemiological scenario such as COVID-19. Consequently, this work offers an intelligent medicare system that is an IoT-empowered, deep learning-based decision support system (DSS) for the automated detection and categorization of infectious diseases (COVID-19 and pneumothorax). The proposed DSS system was evaluated using three independent standard-based chest X-ray scans. The suggested DSS predictor has been used to identify and classify areas on whole X-ray scans with abnormalities thought to be attributable to COVID-19, reaching an identification and classification accuracy rate of 89.58% for normal images and 89.13% for COVID-19 and pneumothorax. With the suggested DSS system, a judgment depending on individual chest X-ray scans may be made in approximately 0.01 s. As a result, the DSS system described in this study can forecast at a pace of 95 frames per second (FPS) for both models, which is near to real-time. | Sensors (Basel, Switzerland) | 2022-10-15T00:00:00 | [
"ShivaniBatra",
"HarshSharma",
"WadiiBoulila",
"VaishaliArya",
"PrakashSrivastava",
"Mohammad ZubairKhan",
"MoezKrichen"
] | 10.3390/s22197474
10.1152/physiolgenomics.00029.2020
10.1038/s41586-020-2008-3
10.1126/science.aba9757
10.1016/j.compbiomed.2020.103670
10.1016/j.ijid.2020.01.050
10.3390/e24040533
10.1016/j.cmpb.2020.105532
10.1016/j.compbiomed.2020.103792
10.1016/j.diii.2020.11.008
10.1148/radiol.2020203465
10.1016/j.diii.2021.05.006
10.1007/s10044-021-00958-0
10.1016/j.cmpb.2018.01.017
10.1016/j.ijmedinf.2018.06.003
10.1155/2021/6657533
10.1016/j.rmcr.2020.101265
10.1378/chest.125.6.2345
10.1007/s10140-020-01806-0
10.1148/radiol.2020202439
10.3390/s22166312
10.1016/j.ijmedinf.2022.104791
10.3390/jpm11100993
10.1109/TMI.2020.2993291
10.1109/TMI.2020.2996645
10.1038/s41598-020-76550-z
10.1007/s13246-020-00865-4
10.1007/s10489-020-01826-w
10.1016/j.cmpb.2020.105581
10.1007/s10044-021-00984-y
10.1007/s10489-020-01770-9
10.3390/s22155738
10.1016/j.cmpb.2020.105584
10.1002/jmv.25891
10.1148/radiol.2020202352
10.1183/13993003.02697-2020
10.1038/s41598-020-74164-z
10.1016/j.media.2021.102216
10.1016/j.cell.2018.02.010
10.1016/j.ecoinf.2016.11.006
10.1016/j.jocs.2017.10.006 |
Two-step machine learning to diagnose and predict involvement of lungs in COVID-19 and pneumonia using CT radiomics. | To develop a two-step machine learning (ML) based model to diagnose and predict involvement of lungs in COVID-19 and non COVID-19 pneumonia patients using CT chest radiomic features.
Three hundred CT scans (3 classes: 100 COVID-19, 100 pneumonia, and 100 healthy subjects) were enrolled in this study. The diagnostic task was a 3-class classification. The severity prediction score for COVID-19 and pneumonia was considered as mild (0-25%), moderate (26-50%), and severe (>50%). Whole lungs were segmented utilizing deep learning-based segmentation. Altogether, 107 features including shape, first-order histogram, and second and higher order texture features were extracted. Pearson correlation coefficient filtering (PCC≥90%) followed by different feature selection algorithms was employed. ML-based supervised algorithms (Naïve Bayes, Support Vector Machine, Bagging, Random Forest, K-nearest neighbors, Decision Tree and Ensemble Meta voting) were utilized. The optimal model was selected based on precision, recall and area-under-curve (AUC) by randomizing the training/validation split, followed by testing using the test set.
Nine pertinent features (2 shape, 1 first-order, and 6 second-order) were obtained after feature selection for both phases. In the diagnostic task, the performance of 3-class classification using Random Forest was 0.909±0.026, 0.907±0.056, 0.902±0.044, 0.939±0.031, and 0.982±0.010 for precision, recall, F1-score, accuracy, and AUC, respectively. The severity prediction task using Random Forest achieved 0.868±0.123 precision, 0.865±0.121 recall, 0.853±0.139 F1-score, 0.934±0.024 accuracy, and 0.969±0.022 AUC.
The two-phase ML-based model accurately classified COVID-19 and pneumonia patients using CT radiomics, and adequately predicted the severity of lung involvement. This two-step model showed great potential in assessing COVID-19 CT images towards improved management of patients. | Computers in biology and medicine | 2022-10-11T00:00:00 | [
"PegahMoradi Khaniabadi",
"YassineBouchareb",
"HumoudAl-Dhuhli",
"IsaacShiri",
"FaizaAl-Kindi",
"BitaMoradi Khaniabadi",
"HabibZaidi",
"ArmanRahmim"
] | 10.1016/j.compbiomed.2022.106165
10.1016/j.pdpdt.2021.102287
10.1016/j.compbiomed.2021.104665
10.1148/radiol.2020201160
10.1148/radiol.2021202553
10.1016/S1120-1797(22)00087-4
10.1016/j.compbiomed.2021.104304
10.1038/s41598-021-88807-2
10.1007/s42979-020-00394-7
10.48550/arXiv.2003.11988
10.1186/s12967-020-02692-3
10.1007/s11432-020-2849-3
10.1016/j.acra.2020.09.004
10.1016/j.compbiomed.2022.105467
10.1038/s41598-022-18994-z
10.1002/ima.22672
10.1148/radiol.2020201473
10.1148/radiol.2020200370
10.1148/radiol.2462070712
10.1148/radiol.11092149
10.1148/ryct.2020200322
10.1148/radiol.2020200463
10.30476/ijms.2021.88036.1858.20
10.1148/radiol.2020191145
10.1186/s40644-020-00311-4
10.1002/mp.13649
10.1016/j.ejro.2020.100271
10.1101/2021.12.07.21267367
10.2967/jnumed.121.262567
10.1007/s00330-020-06829-2
10.1088/1361-6560/abe838
10.1109/ICSPIS51611.2020.9349605
10.21037/atm-20-3026
10.1155/2021/2263469
10.1016/j.media.2020.10182
10.1007/s12539-020-00410-7
10.1186/s12880-021-00564-w
10.1016/j.smhl.2020.100178
10.1016/j.csbj.2021.06.022
10.1038/s41598-021-96755-0
10.1016/j.bspc.2022.103662
10.21037/qims.2020.02.21
10.3389/fdgth.2021.662343
10.1186/s43055-021-00592-0
10.1007/s00259-020-05075-4
10.1038/s41598-021-99015-3
10.1016/j.ijid.2021.03.008
10.1016/j.compbiomed.2021.104531
10.1148/ryct.2020200047 |
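The PCC ≥ 90% redundancy filter described in the record above can be sketched as a greedy pass over candidate features; the feature names and values below are made up for illustration and are not the study's data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_redundant(features, threshold=0.9):
    """Keep a feature only if |r| < threshold against every already-kept feature."""
    kept = []
    for name, values in features:
        if all(abs(pearson(values, kv)) < threshold for _, kv in kept):
            kept.append((name, values))
    return [name for name, _ in kept]

# Hypothetical radiomic feature vectors across four patients:
features = [
    ("shape_volume",  [1.0, 2.0, 3.0, 4.0]),
    ("shape_surface", [2.0, 4.0, 6.0, 8.0]),   # perfectly correlated -> dropped
    ("glcm_entropy",  [4.0, 1.0, 3.0, 2.0]),
]
selected = drop_redundant(features)
```

The surviving features would then be passed to the downstream feature selection algorithms and classifiers described in the abstract.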
Development and validation of chest CT-based imaging biomarkers for early stage COVID-19 screening. | Coronavirus Disease 2019 (COVID-19) is currently a global pandemic, and early screening is one of the key factors for COVID-19 control and treatment. Here, we developed and validated chest CT-based imaging biomarkers for COVID-19 patient screening from two independent hospitals with 419 patients. We identified the vasculature-like signals from CT images and found that, compared to healthy and community acquired pneumonia (CAP) patients, COVID-19 patients display a significantly higher abundance of these signals. Furthermore, unsupervised feature learning led to the discovery of clinical-relevant imaging biomarkers from the vasculature-like signals for accurate and sensitive COVID-19 screening that have been double-blindly validated in an independent hospital (sensitivity: 0.941, specificity: 0.920, AUC: 0.971, accuracy 0.931, F1 score: 0.929). Our findings could open a new avenue to assist screening of COVID-19 patients. | Frontiers in public health | 2022-10-11T00:00:00 | [
"Xiao-Ping Liu",
"Xu Yang",
"Miao Xiong",
"Xuanyu Mao",
"Xiaoqing Jin",
"Zhiqiang Li",
"Shuang Zhou",
"Hang Chang"
] | 10.3389/fpubh.2022.1004117
10.1111/tmi.13383
10.1002/jmv.25722
10.1080/14737159.2020.1757437
10.1148/radiol.2020200343
10.1111/eci.13706
10.1371/journal.pone.0242958
10.1016/j.radi.2020.09.010
10.1016/j.ijid.2020.04.023
10.1007/s11604-020-00948-y
10.2214/AJR.20.22961
10.1148/radiol.2020200642
10.1148/radiol.2020200330
10.1148/radiol.2020200432
10.1177/0846537120913033
10.1371/journal.pone.0263916
10.1016/j.compbiomed.2022.105298
10.2196/27468
10.1155/2021/9868517
10.1200/CCI.19.00155
10.1016/j.patcog.2014.10.005
10.1109/83.902291
10.1007/s11263-014-0790-9
10.1016/j.acra.2020.03.003
10.1097/TP.0000000000003412
10.1056/NEJMoa2015432
10.1038/s41591-020-0822-7
10.1016/j.tjem.2018.08.001
10.1089/ars.2012.5149
10.1093/cvr/cvaa078
10.1016/j.jcv.2020.104362
10.1146/annurev-virology-031413-085548
10.1016/0891-5849(96)00131-1
10.1371/journal.ppat.1008536
10.1038/s41577-020-0311-8
10.1016/S0140-6736(20)30937-5
10.1016/j.bcp.2009.04.029
10.1148/radiol.2020200905
10.1016/j.cell.2020.04.045
10.1038/s41591-020-0931-3
10.1109/TMI.2020.2996256
10.21037/qims.2020.04.02
10.1016/S1473-3099(20)30241-3
10.1148/radiol.2020201845
10.1148/radiol.2020201365
10.1038/d41586-020-01001-8
10.1136/bmj.m1367 |
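The chest CT biomarker study above reports its double-blind validation as sensitivity 0.941, specificity 0.920, accuracy 0.931, and F1 score 0.929. All of these metrics derive from the same four confusion-matrix counts; the sketch below shows the standard formulas, using hypothetical counts for illustration (not the paper's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Derive standard screening metrics from confusion-matrix counts:
    tp/fn are COVID-19 positives classified right/wrong,
    tn/fp are non-COVID cases classified right/wrong."""
    sensitivity = tp / (tp + fn)                 # recall on positives
    specificity = tn / (tn + fp)                 # recall on negatives
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

# Hypothetical counts, chosen only to exercise the formulas:
sens, spec, acc, f1 = screening_metrics(tp=90, fp=20, tn=80, fn=10)
```

Note that F1 combines precision and sensitivity, so it can differ noticeably from accuracy when the classes are imbalanced.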
RadImageNet: An Open Radiologic Deep Learning Research Dataset for Effective Transfer Learning. | To demonstrate the value of pretraining with millions of radiologic images compared with ImageNet photographic images on downstream medical applications when using transfer learning.
This retrospective study included patients who underwent a radiologic study between 2005 and 2020 at an outpatient imaging facility. Key images and associated labels from the studies were retrospectively extracted from the original study interpretation. These images were used for RadImageNet model training with random weight initiation. The RadImageNet models were compared with ImageNet models using the area under the receiver operating characteristic curve (AUC) for eight classification tasks and using Dice scores for two segmentation problems.
The RadImageNet database consists of 1.35 million annotated medical images in 131 872 patients who underwent CT, MRI, and US for musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, abdominal, and pulmonary pathologic conditions. For transfer learning tasks on small datasets-thyroid nodules (US), breast masses (US), anterior cruciate ligament injuries (MRI), and meniscal tears (MRI)-the RadImageNet models demonstrated a significant advantage (
RadImageNet pretrained models demonstrated better interpretability compared with ImageNet models, especially for smaller radiologic datasets. | Radiology. Artificial intelligence | 2022-10-08T00:00:00 | [
"Xueyan Mei",
"Zelong Liu",
"Philip M Robson",
"Brett Marinelli",
"Mingqian Huang",
"Amish Doshi",
"Adam Jacobi",
"Chendi Cao",
"Katherine E Link",
"Thomas Yang",
"Ying Wang",
"Hayit Greenspan",
"Timothy Deyer",
"Zahi A Fayad",
"Yang Yang"
] | 10.1148/ryai.210315 |
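The RadImageNet study above compares pretraining strategies by AUC on the downstream classification tasks. AUC equals the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties count as half); a minimal stdlib sketch with hypothetical scores:

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen positive case receives a higher model score than
    a randomly chosen negative case, with ties counting as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores on five held-out cases; a perfect ranking gives 1.0:
print(auc([0.9, 0.8, 0.7], [0.6, 0.4]))
```

This pairwise definition makes clear why AUC is threshold-free, which is what makes it a fair basis for comparing RadImageNet- and ImageNet-pretrained models.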
Deep learning models for COVID-19 chest x-ray classification: Preventing shortcut learning using feature disentanglement. | In response to the COVID-19 global pandemic, recent research has proposed creating deep learning based models that use chest radiographs (CXRs) in a variety of clinical tasks to help manage the crisis. However, existing datasets of CXRs from COVID-19+ patients are relatively small, and researchers often pool CXR data from multiple sources, for example from different x-ray machines used in various patient populations under different clinical scenarios. Deep learning models trained on such datasets have been shown to overfit to erroneous features instead of learning pulmonary characteristics in a phenomenon known as shortcut learning. We propose adding feature disentanglement to the training process. This technique forces the models to identify pulmonary features from the images and penalizes them for learning features that can discriminate between the original datasets that the images come from. We find that models trained in this way indeed have better generalization performance on unseen data; in the best case we found that it improved AUC by 0.13 on held-out data. We further find that this outperforms masking out non-lung parts of the CXRs and performing histogram equalization, both of which are recently proposed methods for removing biases in CXR datasets. | PloS one | 2022-10-07T00:00:00 | [
"Anusua Trivedi",
"Caleb Robinson",
"Marian Blazes",
"Anthony Ortiz",
"Jocelyn Desbiens",
"Sunil Gupta",
"Rahul Dodhia",
"Pavan K Bhatraju",
"W Conrad Liles",
"Jayashree Kalpathy-Cramer",
"Aaron Y Lee",
"Juan M Lavista Ferres"
] | 10.1371/journal.pone.0274098
10.1016/j.dsx.2020.04.012
10.1016/j.compbiomed.2020.103792
10.3389/fmed.2020.00427
10.1007/s13246-020-00865-4
10.1136/bmj.m1328
10.1016/j.cmpb.2020.105532
10.1101/2020.09.13.20193565
10.1016/j.cell.2020.04.045 |
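The feature-disentanglement entry above penalizes models for learning features that can discriminate between the source datasets the images come from. The paper's exact formulation is not reproduced here; one common realization of that idea is to attach an auxiliary dataset classifier to the encoder and add a confusion penalty that is minimized when that classifier can do no better than chance. A hedged stdlib sketch, where both the penalty form and the weighting `lam` are assumptions:

```python
import math

def confusion_penalty(dataset_probs):
    """Cross-entropy between the auxiliary dataset classifier's predicted
    distribution and the uniform distribution over source datasets.
    Smallest when the classifier cannot tell which dataset an image
    came from, i.e. when the encoder carries no shortcut features."""
    k = len(dataset_probs)
    return -sum(math.log(p) / k for p in dataset_probs)

def disentangled_loss(task_loss, dataset_probs, lam=0.1):
    """Task loss plus a weighted penalty for dataset-discriminative
    features; `lam` trades off the two objectives."""
    return task_loss + lam * confusion_penalty(dataset_probs)
```

With two source datasets, a confident dataset prediction such as `[0.99, 0.01]` incurs a larger penalty than the uninformative `[0.5, 0.5]`, so gradient descent on the combined loss pushes the encoder toward dataset-agnostic (ideally pulmonary) features.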
Automatic Detection of Cases of COVID-19 Pneumonia from Chest X-ray Images and Deep Learning Approaches. | Machine learning has already been used as a complementary tool for disease detection and health care, helping with various daily health challenges. The advancement of deep learning techniques and the availability of large amounts of data have enabled algorithms to outperform medical teams in certain imaging tasks, such as pneumonia detection, skin cancer classification, hemorrhage detection, and arrhythmia detection. Automated diagnostics based on images extracted from patient examinations make such experiments possible. This research differs from the related studies investigated here, which are limited to binary classification: COVID-Net, for example, identifies a positive case of COVID-19 or a healthy person with 93.3% accuracy, and CheXNet detects pneumonia versus a healthy state with 95% accuracy. Experiments revealed that the current study was more effective than these previous studies, detecting a greater number of categories with higher accuracy: in the two experiments conducted, the model achieved an accuracy of nearly 96% when analyzing a chest X-ray with three possible diagnoses. | Computational intelligence and neuroscience | 2022-10-04T00:00:00 | [
"Fahima Hajjej",
"Sarra Ayouni",
"Malek Hasan",
"Tanvir Abir"
] | 10.1155/2022/7451551
10.1186/s41256-020-00135-6
10.1186/s13643-021-01648-y
10.21203/rs.3.rs-198847/v1
10.1080/22221751.2020.1772678
10.1016/j.media.2020.101794
10.1155/2021/8148772
10.21931/RB/2020.05.03.19
10.1109/access.2020.3010226
10.3233/XST-200715
10.2991/ijcis.d.210518.001
10.1016/j.cmpb.2020.105532
10.1155/2022/4569879
10.1504/ijcsm.2022.122146
10.1016/j.matpr.2021.05.553
10.1016/j.ijin.2020.12.002
10.1007/s00521-022-06918-x
10.3390/s22031211
10.1155/2021/5759184
10.32604/csse.2022.022014
10.1016/j.bbe.2020.08.008 |
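The three-class chest X-ray study above reports an accuracy of nearly 96% over three possible diagnoses. For a multiclass problem, accuracy is simply the trace of the confusion matrix divided by its total; a minimal sketch with a hypothetical matrix (not the paper's results) whose accuracy lands near that figure:

```python
def multiclass_accuracy(confusion):
    """Accuracy from a KxK confusion matrix, with rows as true classes
    and columns as predicted classes: correct predictions / all predictions."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 3-class matrix (COVID-19 pneumonia / other pneumonia / healthy),
# 100 test images per class; diagonal entries are correct predictions:
cm = [[96, 2, 2],
      [3, 95, 2],
      [1, 3, 96]]
```

Unlike the binary case, per-class sensitivity and specificity would here be computed one class versus the rest, which is why the abstract's single accuracy figure is the headline number.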