Development and Validation of a Deep Learning System for Segmentation of Abdominal Muscle and Fat on Computed Tomography
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, HJ | - |
dc.contributor.author | Shin, Y | - |
dc.contributor.author | Park, J | - |
dc.contributor.author | Kim, H | - |
dc.contributor.author | Lee, IS | - |
dc.contributor.author | Seo, DW | - |
dc.contributor.author | Huh, J | - |
dc.contributor.author | Lee, TY | - |
dc.contributor.author | Park, T | - |
dc.contributor.author | Lee, J | - |
dc.contributor.author | Kim, KW | - |
dc.date.accessioned | 2022-11-29T01:43:14Z | - |
dc.date.available | 2022-11-29T01:43:14Z | - |
dc.date.issued | 2020 | - |
dc.identifier.issn | 1229-6929 | - |
dc.identifier.uri | http://repository.ajou.ac.kr/handle/201003/22937 | - |
dc.description.abstract | OBJECTIVE: We aimed to develop and validate a deep learning system for fully automated segmentation of abdominal muscle and fat areas on computed tomography (CT) images. MATERIALS AND METHODS: A fully convolutional network-based segmentation system was developed using a training dataset of 883 CT scans from 467 subjects. Axial CT images obtained at the inferior endplate level of the 3rd lumbar vertebra were used for the analysis. Manually drawn segmentation maps of the skeletal muscle, visceral fat, and subcutaneous fat were created to serve as ground truth data. The performance of the fully convolutional network-based segmentation system was evaluated using the Dice similarity coefficient and cross-sectional area error, for both a separate internal validation dataset (426 CT scans from 308 subjects) and an external validation dataset (171 CT scans from 171 subjects from two outside hospitals). RESULTS: The mean Dice similarity coefficients for muscle, subcutaneous fat, and visceral fat were high for both the internal (0.96, 0.97, and 0.97, respectively) and external (0.97, 0.97, and 0.97, respectively) validation datasets, while the mean cross-sectional area errors for muscle, subcutaneous fat, and visceral fat were low for both internal (2.1%, 3.8%, and 1.8%, respectively) and external (2.7%, 4.6%, and 2.3%, respectively) validation datasets. CONCLUSION: The fully convolutional network-based segmentation system exhibited high performance and accuracy in the automatic segmentation of abdominal muscle and fat on CT images. | - |
dc.language.iso | en | - |
dc.subject.MESH | Adolescent | - |
dc.subject.MESH | Adult | - |
dc.subject.MESH | Aged | - |
dc.subject.MESH | Aged, 80 and over | - |
dc.subject.MESH | Deep Learning | - |
dc.subject.MESH | Female | - |
dc.subject.MESH | Humans | - |
dc.subject.MESH | Image Enhancement | - |
dc.subject.MESH | Image Processing, Computer-Assisted | - |
dc.subject.MESH | Intra-Abdominal Fat | - |
dc.subject.MESH | Male | - |
dc.subject.MESH | Middle Aged | - |
dc.subject.MESH | Muscle, Skeletal | - |
dc.subject.MESH | Subcutaneous Fat | - |
dc.subject.MESH | Tomography, X-Ray Computed | - |
dc.subject.MESH | Young Adult | - |
dc.title | Development and Validation of a Deep Learning System for Segmentation of Abdominal Muscle and Fat on Computed Tomography | - |
dc.type | Article | - |
dc.identifier.pmid | 31920032 | - |
dc.identifier.url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6960305 | - |
dc.subject.keyword | Adipose tissue | - |
dc.subject.keyword | Artificial intelligence | - |
dc.subject.keyword | Deep learning | - |
dc.subject.keyword | Muscles | - |
dc.subject.keyword | Sarcopenia | - |
dc.contributor.affiliatedAuthor | Huh, J | - |
dc.type.local | Journal Papers | - |
dc.identifier.doi | 10.3348/kjr.2019.0470 | - |
dc.citation.title | Korean journal of radiology | - |
dc.citation.volume | 21 | - |
dc.citation.number | 1 | - |
dc.citation.date | 2020 | - |
dc.citation.startPage | 88 | - |
dc.citation.endPage | 100 | - |
dc.identifier.bibliographicCitation | Korean journal of radiology, 21(1). : 88-100, 2020 | - |
dc.identifier.eissn | 2005-8330 | - |
dc.relation.journalid | J012296929 | - |
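The abstract reports performance using the Dice similarity coefficient and the cross-sectional area error. The sketch below illustrates how these two metrics can be computed for a single tissue class from binary segmentation masks; it assumes NumPy arrays, and the function names and `pixel_area_mm2` parameter are illustrative assumptions, not taken from the paper or its code.

```python
# Minimal sketch of the two evaluation metrics named in the abstract
# (Dice similarity coefficient and cross-sectional area error), assuming
# binary masks for one tissue class (e.g., skeletal muscle) at the L3 level.
# Illustrative only; not the authors' implementation.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def cross_sectional_area_error(pred: np.ndarray, truth: np.ndarray,
                               pixel_area_mm2: float = 1.0) -> float:
    """Relative cross-sectional area error (%) of the predicted mask."""
    pred_area = pred.astype(bool).sum() * pixel_area_mm2
    truth_area = truth.astype(bool).sum() * pixel_area_mm2
    return abs(pred_area - truth_area) / truth_area * 100.0

# Toy example with 512 x 512 masks (typical axial CT matrix size):
truth = np.zeros((512, 512), dtype=np.uint8)
truth[100:300, 150:350] = 1
pred = np.zeros_like(truth)
pred[105:300, 150:345] = 1
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
print(f"CSA error: {cross_sectional_area_error(pred, truth):.1f}%")
```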