On the (non-) reliance on algorithms—A decision-theoretic account

In 2014, the media reported that Deep Knowledge Venture (DKV), a Hong Kong venture capital company, had brought a machine named Vital (‘Validating Investment Tool for Advancing Life Sciences’) into its boardroom. Under current Hong Kong and most countries’ corporate law, it is not possible to officially appoint an algorithm to a company’s board (Möslein, 2018). Yet, Vital’s ability to spot hidden underlying trends in complex historical data is said to have been key in the approval of major investment decisions. Several firms have since followed DKV’s example, some even deploying what is known as ‘algorithmic management’, i.e. granting self-learning algorithms the responsibility for making decisions affecting labour (Duggan et al., 2020, p. 119). According to Accenture, a business consulting firm, 85% of surveyed executives now want to increase their investment in AI-related technologies.

Meanwhile, in laboratory experiments and field studies representative of the many empirical investigations conducted over the past decade, Gogoll and Uhl (2018) observed that most subjects prefer to delegate a calculation task affecting a third party to a human rather than to a machine; Dietvorst et al. (2015) and Prahl and Van Swol (2017) found that trust in algorithmic assistance is more sensitive to receiving erroneous advice than trust in human aid; Yeomans et al. (2019) showed that people refrain from using recommender systems in certain domains (in this case, picking jokes that people find funny), even when these systems outperform human advisors; and Liu et al. (2023) documented that drivers tend not to rely on algorithms if their peers do not or if they themselves have relevant expertise. Similar behaviors have been repeatedly reported in several occupations (e.g., auditing, consulting, forecasting, retail business, stock trading) and across various other groups of people (e.g., consumers and investors, physicians, and managers).

Such is the ongoing relationship between humans and algorithms.1 On the one hand, people increasingly rely on algorithms in their daily activities, from managing their home and organizing their leisure time and holidays to driving their car and doing their shopping. Algorithmic trading (which includes high-frequency trading) now accounts for over 70% of trade on U.S. stock markets (Parkes & Wellman, 2015). In health care, physicians often make use of artificial intelligence to establish their diagnoses and treatments (Cadario et al., 2021), while robots are employed to help children with autism (Huijnen et al., 2017). In management, top executives at certain corporations now use systems capable of analyzing the work of all their business units almost instantly, at the touch of a button. All this demonstrates significant and pervasive ‘algorithm appreciation’ (Logg et al., 2019, Sharan and Romano, 2020, Thurman et al., 2018). Yet, since Meehl’s (1954) seminal investigation of clinical versus statistical predictions in the context of medical decision-making, a wealth of rigorous empirical evidence has shown that humans tend to shy away from using algorithms, or to discount algorithmic advice, even when algorithms could enhance human performance. This behavior is now referred to as ‘algorithm aversion’ (Dietvorst et al., 2015).

The fact that people’s attitude towards incoming technology can be ambivalent is nothing new, of course; it has been observed since at least the industrial revolution. The Luddite rebellion in 19th-century England reminds us that resistance to novelty can be quite costly. What is unprecedented and challenging about the current situation, however, is that people display aversion to algorithms not only when the latter substitute for human workers, so that jobs are at stake, but also in areas where algorithmic features are expected to support or complement human skills (Miller, 2018, Wilson and Daugherty, 2018), so that their use would likely enhance someone’s daily life or task performance. Private companies and public organizations thus worry about their own employees, client organizations and consumers displaying aversion to algorithms, as it can significantly affect their bottom line.2 In response, policy makers, business executives, consultants and academics are putting extra effort into understanding the determinants and causes of algorithm aversion/appreciation.

At this point, researchers have uncovered a number of factors influencing the human-algorithm relationship.3 Following Goodhue’s (1995) initial analysis of user evaluations of information systems, the evidence can be classified as follows.

(i) Some factors have to do with characteristics of the task under consideration (Castelo et al., 2019). For instance, machines are not yet accepted as autonomous moral agents (Bigman and Gray, 2018, Gogoll and Uhl, 2018, Jago, 2019). In their empirical survey, Ray et al. (2008) find that people would like robots to watch over their house, mow the lawn or do the laundry, but they would strongly resist having a machine babysit their children. Productions conceived as ‘subjective’ – like jokes and artworks – or requiring a fair amount of ‘intuition’ tend to be assigned preferably to humans (Burton et al., 2020, Yeomans et al., 2019). In retailing, Kawaguchi (2022) has found that a worker might rely less on algorithmic advice for machines with high sales volume but more for those with high sales volatility. Meanwhile, Agrawal et al. (2018a, 2018b) report that the pleasure of watching a sporting event depends on there being humans competing; the event is not really interesting if a machine that runs faster than a human is allowed to compete. Professionals in certain fields might also fear that the use of decision aids could hurt their reputation (Kaplan, 2000). Finally, exercising leadership is considered to be up to a human being (Parry et al., 2016).

(ii) Another important set of factors relates to individual features. For instance, people unfamiliar with machines would rather seek human aid than algorithmic support (Kramer et al., 2017). Some people might also resist using a device which operates in a manner they see as overly opaque (Cadario et al., 2021). Having experienced an algorithm’s failure also decreases someone’s trust in automation (Dietvorst et al., 2015, Prahl and Van Swol, 2017), although augmenting human involvement in algorithmic outcomes might then restore it (Dietvorst et al., 2018). There is, moreover, converging evidence that humans’ need for control has a biological basis (Leotti et al., 2010); managers might thus be hesitant to use AI because they fear losing control over their work (Leyer & Schneider, 2019). Confidence in one’s own expertise or ability, whether low or high, also matters when deciding whether to discount an algorithm’s help (Allen & Choudhury, 2022). Dietvorst and Bharti (2020) submit that people’s ‘diminishing sensitivity to forecasting errors’ can likewise entail a preference for human over algorithmic advice. Finally, since humans experience different emotions when receiving advice from another human versus a nonhuman, this can affect their use of algorithms (Prahl & Van Swol, 2017).

(iii) Other factors, finally, concern the design or the scope of an algorithm. In their literature review, Jussupow et al. (2020) report that people treat ‘advisory’ algorithms (like recommender systems or prediction machines), which mainly provide information, differently from ‘performative’ algorithms, which are capable of executing complicated tasks independently. People may also exhibit different preferences for rule-based systems over machine-learning systems and performative AI, as they consider the latter to be black boxes (Mahmud et al., 2022). Last, trust in anthropomorphized automation appears to be stronger than trust in traditional machines (Mahmud et al., 2022, Sharan and Romano, 2020).

Thanks to this research, human attitudes towards algorithms are now well documented and articulated, and further empirical evidence keeps accumulating. As Duan et al. (2019, p. 68) point out, though: ‘Overall, to have a systematic understanding of how and why AI based systems are used and affecting individual and organizational performance, an appropriate theoretical framework should be developed.’ A similar appraisal is expressed in other recent literature surveys (e.g., Jannach and Jugovac, 2019, Jussupow et al., 2020, Mahmud et al., 2022). The time thus seems ripe for theoretical work on human–AI relationships, in order to bring together the various empirical findings, elucidate their congruences and fundamental differences, spell out causalities, help distinguish noise in the data, and draw meaning from contradictory facts.

While the observed postures towards algorithms are oftentimes attributed to human irrationality,4 I believe, nevertheless, that fruitful theorizing should first build on rationality as a useful benchmark. This would foster unity and simplicity of treatment, the orderly development of finer explanations, and a more precise identification of human failures, their respective causes, and their possible remedies. A perhaps inspiring analogy might be made here with what expected utility theory – thanks to the empirical challenges it prompted and the substantial amendments these led to – has done for the understanding of decision making under risk and uncertainty. This paper’s aim is accordingly to develop a theoretical account of the empirical evidence regarding algorithm aversion/appreciation, while departing as little as possible (for the time being) from rational decision making.

The upcoming Sections 2 (‘Algorithmic advice and the value of information’) and 3 (‘Algorithmic decision-making and the value of control’) will thus respectively bring into play two basic notions from the field of Decision Analysis: the value of information and the value of control. Both notions are precisely defined and seem relevant for launching a first formal analysis of human-algorithm interaction. The former refers to the value of getting to know (at least partially) in advance what a random variable will be (Hilton, 1981, Wilson, 2015); the latter stands for the value of changing some random variables into decision variables (Howard, 1971, Howard, 1988, Matheson and Matheson, 2005). Howard (1971) applies a suggestive terminology to these notions: getting to know what value a random variable will take is called ‘clairvoyance’, while changing random variables into decision variables is named ‘wizardry’. An example illustrating and comparing the two notions can be found in the Appendix.
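
To make these notions concrete, here is a minimal numerical sketch in Python. The two-state, two-action problem, with its payoffs and probabilities, is entirely hypothetical and chosen only for illustration; the paper’s own worked example is the one in the Appendix.

```python
# Toy two-state, two-action decision problem illustrating the value of
# information ('clairvoyance') and the value of control ('wizardry').
# All payoffs and probabilities are hypothetical.

p_good = 0.6                      # prior P(state = good)
payoff = {                        # payoff[action][state]
    "invest":  {"good": 100.0, "bad": -60.0},
    "abstain": {"good":   0.0, "bad":   0.0},
}

def expected_payoff(action, p=p_good):
    return p * payoff[action]["good"] + (1 - p) * payoff[action]["bad"]

# Baseline: choose the action with the highest prior expected payoff.
baseline = max(expected_payoff(a) for a in payoff)                 # 36.0

# Value of (perfect) information -- 'clairvoyance': learn the state first,
# then pick the best action in each state.
with_info = (p_good * max(payoff[a]["good"] for a in payoff)
             + (1 - p_good) * max(payoff[a]["bad"] for a in payoff))
value_of_information = with_info - baseline                        # 24.0

# Value of control -- 'wizardry': the random state becomes a decision
# variable, set (jointly with the action) to its best value.
with_control = max(payoff[a][s] for a in payoff for s in ("good", "bad"))
value_of_control = with_control - baseline                         # 64.0

print(baseline, value_of_information, value_of_control)
```

As one would expect, the value of control weakly exceeds the value of information in such a setting: whoever can set the state can always replicate what a clairvoyant who merely observes it would achieve.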

The value of information surely matters when considering whether to rely on ‘advisory’ algorithms (to use Jussupow et al., 2020’s terminology), since such devices essentially provide information and predictions. To clarify what might then be happening, Section 2 builds a first model which highlights the relative value of the information supplied by an algorithm whose advice can be mistaken. The impact of a person’s ambiguity aversion, subjective cost of using an algorithm, and asymmetric sensitivity to false negatives and false positives is spelled out in this context. Some remedies to algorithm aversion put forward in the literature – like favoring people’s exposure to algorithms or conveying machine improvements – are then examined, and some testable predictions are submitted.
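
As a foretaste of the trade-offs Section 2 formalizes, the following sketch computes the net expected value of imperfect advice by straightforward Bayesian updating. This is not the paper’s model: the payoffs, error rates and subjective cost are hypothetical, and ambiguity aversion is left out for brevity.

```python
# Sketch: value of imperfect algorithmic advice under asymmetric error
# rates and a subjective usage cost. Hypothetical numbers throughout.

p_good = 0.6
payoff = {"invest":  {"good": 100.0, "bad": -60.0},
          "abstain": {"good":   0.0, "bad":   0.0}}

def best_ev(p):
    """Expected payoff of the best action when P(good) = p."""
    return max(p * v["good"] + (1 - p) * v["bad"] for v in payoff.values())

def value_of_advice(fnr, fpr, cost=0.0):
    """Net EV gain from an advisor signalling 'good'/'bad' with the given
    false-negative and false-positive rates (via Bayes' rule)."""
    p_sig = (1 - fnr) * p_good + fpr * (1 - p_good)   # P(signal = 'good')
    post_good = (1 - fnr) * p_good / p_sig            # P(good | 'good')
    post_bad  = fnr * p_good / (1 - p_sig)            # P(good | 'bad')
    ev_advised = p_sig * best_ev(post_good) + (1 - p_sig) * best_ev(post_bad)
    return ev_advised - best_ev(p_good) - cost

print(value_of_advice(fnr=0.1, fpr=0.1))           # ~ 15.6: advice is valuable
print(value_of_advice(fnr=0.3, fpr=0.1))           # ~  3.6: extra false negatives
print(value_of_advice(fnr=0.1, fpr=0.3))           # ~ 10.8: extra false positives
print(value_of_advice(fnr=0.1, fpr=0.1, cost=20))  # ~ -4.4: cost flips the sign
```

With these payoffs, the same increase in error rate is far more damaging on the false-negative side than on the false-positive side, which is one way an asymmetric sensitivity to the two kinds of mistakes can arise.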

The value of control, on the other hand, should matter when considering whether to delegate some key decisions or the execution of important tasks to a specific algorithm.5 Most people would then seek to exercise some control over the possible range of outcomes. Whether a rational subject will agree to delegate decision making to an algorithm, or rely instead on her own human capabilities, should depend on the expected value these distinct approaches respectively deliver. This assessment, however, builds on human cognition. Section 3 will posit that it might consequently derive from an imperfect understanding of algorithmic output, when the algorithm is actually a ‘black box’. Through a first model of this situation, this section will characterize and illustrate the circumstances in which a rational decision maker will be (un)favorable to using the algorithm. Ways to avert ex post regret or disappointment in this context will then be discussed.
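
Again by way of illustration only, here is a minimal sketch of the delegation choice when the algorithm is a black box. The belief model (an interval of plausible success rates evaluated at its worst case, i.e., extreme ambiguity aversion) and all the numbers are assumptions made for this sketch, not Section 3’s actual model.

```python
# Sketch: delegating to a black-box algorithm. The decision maker knows
# her own success probability precisely, but only holds an ambiguous
# belief about the algorithm's, which she evaluates at its worst case.
# The belief model and all numbers are hypothetical.

def ev(success_prob, win=100.0, loss=-40.0):
    return success_prob * win + (1 - success_prob) * loss

p_self = 0.70                  # own capability, known
p_algo_true = 0.80             # algorithm's true success rate (unobserved)
belief = (0.55, 0.90)          # plausible range inferred from opaque output

# ev() is linear in the success probability, so the worst case over the
# whole interval is attained at one of its endpoints.
perceived_ev_algo = min(ev(p) for p in belief)

print(ev(p_self))              # 58.0 : value of keeping control
print(perceived_ev_algo)       # 37.0 : perceived (worst-case) value of delegating
print(ev(p_algo_true))         # 72.0 : actual value of delegating
print("delegate" if perceived_ev_algo > ev(p_self) else "rely on self")
```

With these numbers, a decision maker who reasons rationally but evaluates the opaque algorithm pessimistically declines to delegate, even though delegation is actually worth more. This is the flavor of the circumstances Section 3 characterizes.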

These developments will point to several research avenues, notably on the empirical side. Some are sketched in the concluding Section 4.
