Who gets held accountable when a facial recognition algorithm fails?

Ellen Broad

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Artificial Intelligence (AI) now informs countless business objectives and policy decisions. Yet algorithms are prone to the same prejudices - the same biases - as the humans who build them. The question remains: is it possible to make machines responsible, or ethical?
    Original language: English
    Pages (from-to): 18-23
    Journal: IQ : THE RIM QUARTERLY
    Volume: 34
    Issue number: 4
    Publication status: Published - 2018
