Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error
Publication: Contribution to journal › Journal article › Research › peer-reviewed
Standard
Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error. / Markussen, Thomas; Putterman, Louis; Wang, Liangjun.
In: Economica, Vol. 90, No. 357, 01.2023, pp. 315-338.
RIS
TY - JOUR
T1 - Algorithmic Leviathan or Individual Choice
T2 - Choosing Sanctioning Regimes in the Face of Observational Error
AU - Markussen, Thomas
AU - Putterman, Louis
AU - Wang, Liangjun
PY - 2023/1
Y1 - 2023/1
N2 - Laboratory experiments are a promising tool for studying how competing institutional arrangements perform and what determines preferences between them. Reliance on enforcement by peers versus formal authorities is a key example. That people incur costs to punish free riders is a well-documented departure from non-behavioural game-theoretic predictions, but how robust is peer punishment to informational problems? We report experimental evidence that reluctance to personally impose punishment when choices are reported unreliably may tip the scales towards rule-based and algorithmic formal enforcement even when observation by the centre is equally prone to error. We provide new and consonant evidence from treatments in which information quality differs for authority versus peers, and confirmatory patterns in both binary decision and quasi-continuous decision variants. Since the role of formal authority is assumed by a computer in our experiment, our findings are also relevant to the question of willingness to entrust machines to make morally fraught decisions, a choice increasingly confronting humans in the age of artificial intelligence.
KW - Faculty of Social Sciences
U2 - 10.1111/ecca.12443
DO - 10.1111/ecca.12443
M3 - Journal article
VL - 90
SP - 315
EP - 338
JO - Economica
JF - Economica
SN - 0013-0427
IS - 357
ER -
ID: 322121380