Peering into the black box of artificial intelligence: Evaluation metrics of machine learning methods

Guy S. Handelman*, Hong Kuan Kok, Ronil V. Chandra, Amir H. Razavi, Shiwei Huang, Mark Brooks, Michael J. Lee, Hamed Asadi

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

229 Citations (Scopus)

Abstract

OBJECTIVE. Machine learning (ML) and artificial intelligence (AI) are rapidly becoming the most talked about and controversial topics in radiology and medicine. Over the past few years, the number of ML- or AI-focused studies in the literature has increased almost exponentially, and ML has become a hot topic at academic and industry conferences. However, despite the increased awareness of ML as a tool, many medical professionals have a poor understanding of how ML works and how to critically appraise the studies and tools presented to them. We therefore present a brief overview of ML, explain the metrics used in ML and how to interpret them, and explain some of the technical jargon associated with the field so that readers with a medical background and a basic knowledge of statistics can feel more comfortable when examining ML applications.

CONCLUSION. Attention to sample size, overfitting, underfitting, and cross-validation, as well as a broad knowledge of the metrics of machine learning, can help those with little or no technical knowledge begin to assess machine learning studies. However, transparency in methods and sharing of algorithms are vital to allow clinicians to assess these tools themselves.
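The article itself contains no code. As a minimal illustrative sketch of the kind of evaluation the conclusion refers to, the Python/scikit-learn snippet below runs k-fold cross-validation and reports per-fold sensitivity, specificity, and area under the ROC curve; the dataset and classifier are stand-in assumptions, not the authors' methods.

```python
# Illustrative sketch only: 5-fold cross-validated evaluation of a simple
# classifier, reporting sensitivity, specificity, and AUC per fold.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)          # stand-in binary dataset
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(cv.split(X, y), start=1):
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])                # hard class labels
    prob = model.predict_proba(X[test_idx])[:, 1]    # probability of positive class
    tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
    sensitivity = tp / (tp + fn)                     # true-positive rate (recall)
    specificity = tn / (tn + fp)                     # true-negative rate
    auc = roc_auc_score(y[test_idx], prob)
    print(f"fold {fold}: sensitivity={sensitivity:.2f} "
          f"specificity={specificity:.2f} AUC={auc:.2f}")
```

Reporting the metrics fold by fold, rather than on a single train/test split, is one simple way to expose the overfitting and sample-size issues the abstract highlights.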

Original language: English
Pages (from-to): 38-43
Number of pages: 6
Journal: American Journal of Roentgenology
Volume: 212
Issue number: 1
DOIs
Publication status: Published - Jan 2019
