Abstract
Algorithmic Decision-Making (ADM) systems designed to augment or automate human
decision-making have the potential to produce better decisions while also freeing
up human time and attention for other pursuits. For this potential to be realised,
however, algorithmic decisions must be sufficiently aligned with human goals and
interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment
and trust. In a broad sense, ADM is beneficial if and only if human principals
can trust algorithmic agents to act faithfully on their behalf. This mirrors the
challenge of facilitating P-A relationships among humans, but the peculiar nature
of human-machine interaction also raises unique issues. The problem of asymmetric
information is omnipresent but takes a different form in the context of ADM.
Although the decision-making machinery of an algorithmic agent can in principle
be laid bare for all to see, the sheer complexity of ADM systems based on deep
learning models prevents straightforward monitoring. We draw on literature from
economics and political science to argue that the problem of trust in ADM systems
should be addressed at the level of institutions. Although the dyadic relationship between
human principals and algorithmic agents is our ultimate concern, cooperation
at this level must rest on an institutional environment that allows humans to
effectively evaluate and choose among algorithmic alternatives.
| Original language | English |
| --- | --- |
| Number of pages | 68 |
| Journal | Philosophy and Technology |
| Volume | 37 |
| DOIs | |
| Publication status | Published - 2024 |