Previous experience with delays affects delay discounting in an animal model of ADHD

Attention-Deficit/Hyperactivity Disorder (ADHD) affects approximately 5% of children and 2.5% of adults [1,2,3,4,5,6], and exists in three subtypes: inattentive, hyperactive-impulsive, and combined [7]. A defining aspect of the combined and hyperactive-impulsive subtypes is impulsive behaviour, characterized as a tendency to act without foresight or to make choices based on poor reasoning about future consequences [8]. However, the term “impulsive” is a broad label covering a range of traits, such as impatience, restlessness, risk- or sensation-seeking behaviour, spontaneous or quick decisions, and lack of foresight [9, 10]. The terms impulsivity and impulsiveness hold a central place in psychological theories and psychiatric symptom lists, and their various operationalizations and neurobiological bases have been extensively studied. Owing to the terms’ heterogeneity and multifaceted nature, some have even argued that these concepts should be rejected and replaced with better-defined terms, as they fail to meet the requirements of a psychological construct [11].

The exact cause of impulsive behaviour in ADHD is debated and depends both on how the term is operationalized and on which theory one adopts [12, 13]. One theory of ADHD is the delay aversion theory, which proposes that impulsive behaviour results from an unwillingness to endure the temporal aspect of a choice, including the length of time between repeated choices [14]. An expansion of this theory is the dual component model of ADHD, which, in addition to delay aversion, incorporates a concept called impulsive drive for immediate reward (IDIR) [15]. This impulsive drive is the tendency for impulsive behaviour to be affected by the time between response and reinforcer for any given choice; specifically, longer response-reinforcer delays reduce the likelihood of that option being chosen [15, 16]. Thus, the dual component model of ADHD suggests that delay aversion and impulsive drive jointly contribute to impulsive behaviour [15]. The dual component model does not explain the mechanisms or processes behind IDIR but refers to other theories and explanations, such as executive dysfunction and deficits in inhibitory control, or the proposal that people with ADHD have a steeper delay-of-reinforcement gradient, as suggested in the Dynamic Developmental Theory of ADHD (DDT, see [17]). The DDT offers detailed hypotheses regarding the behavioural and neurological mechanisms behind impulsive behaviour. It proposes that the effectiveness of a reinforcer is a decreasing function of the time between response and consequence, termed the delay-of-reinforcement gradient, and that this gradient is steeper in people with ADHD, implying steeper discounting of future reinforcers and a resulting preference for small immediate reinforcers over large delayed ones [12, 17].

Delay discounting

Delay discounting is a commonly used method for studying and measuring impulsive behaviour [18]. It usually involves choosing between a small reinforcer delivered immediately and a larger reinforcer delivered after a delay. All organisms eventually switch to choosing the immediate, small reinforcer as the delay to the larger reinforcer increases, even though the larger, delayed reinforcer is, in theory, the option that maximizes the amount of reinforcement in real time. Choosing the small, immediate reinforcer is adaptive in times of uncertainty (a bird in the hand is worth two in the bush), or, for example, during severe deprivation when immediate replenishment is needed for survival. Thus, as choosing the small, immediate reinforcer is sometimes normal or typical, impulsivity needs to be defined relative to the choices of neurotypical controls [19]. Therefore, in this paper, we operationalize “impulsive” behaviour as choosing the small reinforcer significantly more often than neurotypical controls do when a larger reinforcer is available at the cost of waiting and is the option that maximizes the amount of reward in real time.
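Although the present procedure does not assume a specific quantitative model, the notion of “steeper discounting” used throughout can be made concrete with the hyperbolic model commonly used in the discounting literature, in which the subjective value V of a reinforcer of amount A delivered after delay D is V = A/(1 + kD), with k a free parameter estimated from choice data. A larger k produces a more sharply decreasing value function, so “steeper delay discounting” corresponds to a higher fitted k and a stronger preference for the small, immediate reinforcer as the delay to the large reinforcer grows.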

Children with ADHD appear to display a reduced tendency to wait for a larger reinforcer and typically choose the smaller option more often than neurotypical controls, i.e., they are impulsive [14, 20,21,22,23]. A meta-analysis suggests that people with ADHD are particularly sensitive to long delays and are twice as likely as controls to make an impulsive choice when the task involves hypothetical reinforcers (e.g., points) rather than real reinforcers (e.g., money or food) [24].

The SHR animal model of ADHD

The Spontaneously Hypertensive Rat (SHR) is the most commonly used animal model of ADHD [25, 26] and is largely considered the most validated model [27,28,29,30]. The rats were initially bred for research on high blood pressure [31], but compared with controls they exhibit characteristics similar to those of people with ADHD: they express impulsivity [32,33,34,35], inattention [28], hyperactivity [36], and increased behavioural variability [37, 38]. The SHR model is well researched, but it has been used in only a little more than a dozen delay discounting studies [32, 35, 38,39,40,41,42,43,44,45,46,47,48,49,50,51,52]. Most delay discounting studies using SHRs find that the rats act more impulsively on the task than controls [32, 35, 40, 42, 43, 45,46,47, 49, 50], as indicated by a higher tendency to choose the small reinforcer when long delays precede the large reinforcer, although other studies have failed to find any such strain difference [38, 41, 44, 48].

The discrimination test and learning history in delay discounting

A discrimination test is a pre-experimental procedure in which the animals are exposed to the small and large reinforcers without delays; its purpose is to establish that the animal prefers the large over the small reinforcer option prior to any experimental manipulations. In other words, it is a test to verify that reinforcer size, and not operandum position or other variables, controls choice when no delay is present. This is a fundamental study requirement, as it is pointless to study choice as a function of delay to the larger reinforcer if reinforcer size does not control choice. Sjoberg and Johansen [19] emphasized the importance of including a discrimination test in order to avoid assumptions about the animals’ baseline preferences, but found that only three of fourteen surveyed SHR studies clearly outlined the details of their discrimination test [39, 43, 51]. Three others specified the details but included a delay component during this phase [40, 41, 44], while the remainder either did not include such a test or did not specify the details involved. This reduces the possibility of direct comparison between studies.

In delay discounting, it can be argued that previous experience with the choice paradigm will influence the likelihood of a given choice pattern occurring. An example of this is when animals are reused in a different experiment. For instance, the rats used in Fox et al. [32] were later reused in a different delay discounting experiment by the same researchers [46], and the SHRs appeared to show a steeper discounting curve in the second experiment once a delay component was introduced. This observation alone, however, does not prove that previous experience was the cause, as a number of other factors may have influenced the results (e.g., the rats were also given saline or drug injections). In the SHR model of ADHD, only one previous study has examined the effects of learning history in delay discounting. Fox et al. [32] increased the delay between the response and the large reinforcer in one condition and then reversed the order of the delays. They found that SHRs, relative to controls, exhibited a greater preference for small, immediate reinforcers (smaller sooner, SS) over larger, delayed reinforcers (larger later, LL) when delays were presented in descending order. The data showed that SS preference gradually increased along with increasing delays to the LL reinforcer, but when this order was reversed the rats effectively maintained their SS preference until the delay was almost absent. However, since this was a within-subjects design, all the rats shared the same learning history, meaning that the results reflect a linear learning pattern in which the rats adapt to increasing delays and then need time to readapt when these reinforcement contingencies are reversed. This suggests that once the rats are accustomed to delays, they require multiple repeated trials in order to readjust to short delays.

The current study aims to reproduce the experiment performed by Fox et al. [32], with certain adjustments. First, we will implement a lever preference test and assign the larger later (LL) reinforcer to the least preferred lever. This will be followed by a discrimination test to ensure that the rats discriminate between the smaller sooner (SS) and the LL reinforcer. The rats must show a 66% or higher LL preference in two consecutive sessions before the experiment begins. This is similar to Fox et al. [32], who also used a discrimination test but did not specify any criterion other than all rats preferring the large reinforcer “almost exclusively” (p. 147) by the end of the fourth session. Second, unlike Fox et al. [32], who used a within-subjects design, we will employ a between-subjects design. This means that, following the discrimination test, one group of rats will be exposed to gradually increasing delays (Ascending group) while another group will be exposed to these delays in reverse order (Descending group). Thus, the Descending group will be exposed to an abrupt, long delay to the large reinforcer followed by decreasing delays, as opposed to the slowly and gradually increasing delays in the Ascending group. Third, we will implement a procedure in which the total trial length is constant and fixed at 24 s [19]. Therefore, as the length of the delays changes, inter-trial intervals (ITIs) are adjusted to keep the trial duration constant. As a result, the two variables are always balanced and control for each other: when one is absent, the other is at its maximum (e.g., when the delay is 0 s, the ITI is 24 s). Fox et al. [32] also used a compensating design in which the inter-trial interval shrank as delays increased so that trial lengths always remained constant [19, 53]. However, their inter-trial interval never disappeared completely. Finally, we will change the LL delay length between every daily session. This means that the animals will only be tested for 30 min at each delay condition, and no stable-state behaviour will be achieved. This precludes the identification of pure reinforcer-delay effects on LL choice, but it has greater ecological validity in that it imitates naturally and rapidly changing contingencies, and it more closely resembles clinical testing in ADHD, where a single test session is the norm. Additionally, it has the advantage of showing the relative importance of learning history compared with reinforcer delay for the reinforcer sizes used in the study.
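As a concrete illustration of the fixed trial length and the discrimination criterion described above, the following minimal sketch (written in Python purely for exposition; all variable and function names are hypothetical and do not refer to the software used to run the experiment) computes the ITI that keeps each trial at 24 s and checks whether the 66% LL criterion has been met in two consecutive sessions.

# Minimal sketch of the timing and criterion logic described above.
# Assumptions taken from the text: a 24 s fixed trial length and a 66%
# LL-preference criterion over two consecutive sessions. Names are
# hypothetical and for illustration only.

TOTAL_TRIAL_S = 24.0  # fixed trial duration (LL delay + ITI)

def iti_for_delay(ll_delay_s):
    """Return the ITI that keeps LL delay + ITI constant at 24 s."""
    if not 0.0 <= ll_delay_s <= TOTAL_TRIAL_S:
        raise ValueError("LL delay must lie between 0 and 24 s")
    return TOTAL_TRIAL_S - ll_delay_s

def passes_discrimination(session_ll_proportions):
    """True once two consecutive sessions reach at least 66% LL choices."""
    return any(
        first >= 0.66 and second >= 0.66
        for first, second in zip(session_ll_proportions, session_ll_proportions[1:])
    )

# Example: when the LL delay is 0 s the ITI is 24 s, and vice versa.
for delay in (0.0, 6.0, 12.0, 24.0):
    print(f"LL delay {delay:4.1f} s -> ITI {iti_for_delay(delay):4.1f} s")

# Sessions 2 and 3 both exceed 66% LL choices, so the criterion is met.
print(passes_discrimination([0.55, 0.70, 0.68]))  # True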

Findings from previous studies of both animals and humans show that experience with increasing reinforcer delay can increase LL delay tolerance (e.g., [54,55,56]). In the Ascending condition of our experiment, the LL delay is gradually increased, whereas in the Descending condition it is abruptly increased from 0 to 24 s. Therefore, without the gradual increase in LL reinforcer delay, we hypothesized that rats in the Descending condition would show steeper delay discounting and more SS choices than rats in the Ascending condition. Further, based on the results of Fox et al. [32] and findings suggesting a steeper delay-of-reinforcement gradient in SHR/NCrl compared with WKY/NHsd [57,58,59], we expected to observe a higher percentage of SS choices and steeper delay discounting in SHR/NCrl relative to controls in both the Ascending and the Descending conditions.
