Psychological processes unfold over time, which means that we need to actively consider the temporal nature of these processes to understand them fully. Statistical or computational models are a popular way to study these time-dependent fluctuations, as they can serve as a formal basis for understanding and investigating the key dynamical features of a process of interest. One area of research that has embraced these computational models is the field of affect dynamics – which revolves around how emotional feelings change over time (Waugh & Kuppens, 2021) – as evidenced by the many computational models used within this field, ranging from linear to nonlinear, from discrete- to continuous-time, and from descriptive to theoretical models (to give a few examples, Ariens et al., 2020, Bennett et al., 2022, Deboeck and Bergeman, 2013, Loossens et al., 2020, Rutledge et al., 2014, Steele and Ferrer, 2011, Voelkle and Oud, 2013). Unfortunately, it is often unclear how these models relate to each other, which may unintentionally contribute to difficulties with the adoption of these models in substantive research. This currently limits the extent to which the promise of these models can be fulfilled.
Many dynamical models are related to each other, although the nature of these relationships is not always obvious. It is therefore of great importance to document the relationships between the different computational models that are used in the literature. In this article, we will investigate the relationship between two classes of models that are frequently used in the affect dynamical literature, namely the discounting models and the autoregressive models, both of which we will introduce in the next sections.
Let us first establish the mathematical notation we use in this text. Variables that are observed or measured are denoted with Roman letters, while Greek letters are reserved for the model parameters. Lowercase letters are used for scalars or, when they are bold, for vectors. Uppercase letters are used for matrices.
Despite the long presence of the discounting model within the economic literature (e.g., Cairns and van der Pol, 2000, Koopmans, 1960, Myerson and Green, 1995), Rutledge et al. (2014) only recently introduced it to the affect dynamical literature. In their study, Rutledge et al. (2014) investigated the relationship between participants' happiness and the monetary outcomes that they received in a gambling paradigm. This paradigm required participants to repeatedly choose either a certain outcome or an uncertain gamble between two equally probable outcomes. If the participant selected the certain outcome, the reward was immediately added to the participant's total. If the gamble was chosen, one of the two outcomes was selected at random and added to the participant's total.
To formalize the relationship between happiness and the outcomes of their experiment, Rutledge et al. (2014) created the following computational model:
\[
h_t = \alpha + \beta_1 \sum_{j=0}^{t-1} \gamma^j \, cr_{t-j} + \beta_2 \sum_{j=0}^{t-1} \gamma^j \, ev_{t-j} + \beta_3 \sum_{j=0}^{t-1} \gamma^j \, rpe_{t-j} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma^2), \tag{1}
\]
where h_t is a person's self-reported happiness at trial t, cr is the value of the certain outcome (if chosen), ev is the expected value of the gamble outcomes (if chosen), and rpe is the reward prediction error, that is, the difference between ev and the actually obtained gamble outcome. In this equation, the happiness h at time t is a function of the cumulative sum of the discounted previous outcomes cr, ev, and rpe, meaning that the history of received outcomes matters in determining one's current happiness. To ensure that recent outcomes have a greater influence on happiness, Eq. (1) discounts past outcomes exponentially over time, the strength of which is determined by the parameter γ ∈ [0, 1), which we will refer to as the discounting factor.
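To make the structure of Eq. (1) concrete, the following minimal Python sketch computes its deterministic part for a sequence of trials. The function name, the toy parameter values, and the convention that cr, ev, and rpe are zero on trials where they do not apply are our own illustrative assumptions, not part of Rutledge et al.'s (2014) implementation.

```python
import numpy as np

def happiness_mean(cr, ev, rpe, alpha, beta, gamma):
    """Deterministic part of Eq. (1): for each trial t, sum the past
    outcomes cr, ev, and rpe, each discounted by gamma**j for a lag of j trials.

    cr, ev, rpe : 1-D arrays of length T (assumed 0 on trials where the
                  corresponding event did not occur)
    beta        : (beta_1, beta_2, beta_3)
    gamma       : discounting factor in [0, 1)
    """
    T = len(cr)
    h = np.empty(T)
    for i in range(T):                      # i = t - 1 (0-based trial index)
        w = gamma ** np.arange(i + 1)       # gamma^0, gamma^1, ..., gamma^(t-1)
        discounted = lambda x: np.dot(w, x[i::-1])  # x_t, x_{t-1}, ..., x_1
        h[i] = (alpha
                + beta[0] * discounted(cr)
                + beta[1] * discounted(ev)
                + beta[2] * discounted(rpe))
    return h

# Toy example: three trials with arbitrary outcome values
cr  = np.array([2.0, 0.0, 0.0])
ev  = np.array([0.0, 1.5, 3.0])
rpe = np.array([0.0, -1.5, 1.0])
print(happiness_mean(cr, ev, rpe, alpha=0.5, beta=(0.3, 0.2, 0.4), gamma=0.6))
```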
Eq. (1) closely resembles one of the solutions to the equation of the expected return, a concept that was introduced in the reinforcement learning literature. The expected return is the total reward people expect to receive at the end of a task if they behave in a given way, and it can be formalized as a function of all future rewards (Sutton & Barto, 2018):
\[
e_t = f(r_{t+1}, \dots, r_{t+n}), \tag{2}
\]
where e_t is the expected return looking forward from the current moment t, r_{t+j} is the reward that the agent receives at a given future time t + j, and f is a function of these future rewards. For the model to describe a stable process, the function f should converge to a finite value, even when the task itself continues forever (i.e., when n → ∞). Because of this, f is typically assumed to take on a discounting form, such as (see Sutton & Barto, 2018):
\[
e_t = \sum_{j=0}^{\infty} \gamma^j r_{t+j+1}. \tag{3}
\]
Note that this discounted sum is the same as the sum in Eq. (1), with one important difference, namely the frame of reference. While Eq. (3) deals with the discounting of future rewards, Eq. (1) is concerned with the discounting of outcomes that one has already received. One model thus deals with the future while the other deals with the past. Despite this difference in the frame of reference, the discounting model itself remains the same.
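As a quick illustration of why the restriction γ ∈ [0, 1) matters for Eq. (3), the sketch below computes a truncated version of the discounted sum; the function name and the horizon are arbitrary choices for illustration.

```python
import numpy as np

def expected_return(future_rewards, gamma):
    """Truncated version of Eq. (3): sum over j of gamma**j * r_{t+j+1}."""
    r = np.asarray(future_rewards, dtype=float)
    return np.dot(gamma ** np.arange(len(r)), r)

# With gamma in [0, 1) the sum converges even for very long horizons;
# for a constant reward of 1 it approaches 1 / (1 - gamma):
print(expected_return(np.ones(10_000), gamma=0.9))   # close to 10
```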
At this point, it is also useful to make the connection to research on delay discounting, which is concerned with how people discount rewards that are received at different moments in the future (see e.g., Ballard et al., 2023, Blavatskyy, 2016, Myerson and Green, 1995).1 Within this field, Samuelson (1937) proposed the exponential discounting function, which has a functional form similar to those in Eqs. (1) and (3) and which has since received ample attention. However, the exponential discounting model is not uncontroversial, and some alternatives have been proposed. In the first section of this paper, we will only concern ourselves with relating the discounting model as used in the literature to the autoregressive models. Later, we will point out the need to empirically validate the exponential shape of the discounting function and propose the quasi-hyperbolic model as an alternative (Laibson, 1997).
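To preview the distinction we return to later, the sketch below contrasts the exponential weights γ^j with the quasi-hyperbolic weights of Laibson (1997), in which all delayed outcomes receive an additional one-off discount. The parameter names are our own choice (the quasi-hyperbolic model is usually written with a β-δ parameterization).

```python
import numpy as np

def exponential_weights(gamma, n_delays):
    """Exponential discounting: weight gamma**j for a delay of j steps."""
    return gamma ** np.arange(n_delays)

def quasi_hyperbolic_weights(beta, gamma, n_delays):
    """Quasi-hyperbolic discounting (Laibson, 1997): weight 1 at delay 0
    and beta * gamma**j for every delay j >= 1."""
    w = beta * gamma ** np.arange(n_delays)
    w[0] = 1.0
    return w

print(exponential_weights(0.8, 5))
print(quasi_hyperbolic_weights(0.6, 0.8, 5))
```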
In this article, we will use a generalized version of Eq. (1), which we define as (see Appendix A)2:
\[
\mathbf{y}_t = \boldsymbol{\alpha} + \sum_{j=0}^{t-1} \Gamma^j B \mathbf{x}_{t-j} + \boldsymbol{\epsilon}_t, \qquad \boldsymbol{\epsilon}_t \sim N(\mathbf{0}, \Sigma), \tag{4}
\]
where the d-dimensional vector y_t contains the values of the dependent variables at time t. The d-dimensional vector α represents the intercept, which coincides with the mean of the process y. Within the sum, the d×d matrix Γ contains the discounting factors, defining the extent to which previous values of x influence the current values of y. Importantly, Γ is subject to several restrictions. First, the moduli of the eigenvalues of Γ are restricted to be smaller than 1, which ensures that the sum converges to a stable value (cf. the restriction on the function f, Sutton & Barto, 2018).3 Second, the values in Γ are restricted to be positive (Rutledge et al., 2014, Sutton and Barto, 2018), which means that the discounting of a value over time does not overshoot the baseline state (cf. the sawtooth pattern in Fig. 2). Finally, the k independent variables are contained in the vector x_t and are scaled by the d×k matrix B. The matrix B determines the strength of the association between the discounted sum of the independent variables and each of the variables in y, playing the same role as the β's in Eq. (1). We refer the reader to Fig. 1 for a visual representation of what each parameter does in the model.
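The following sketch simulates data from Eq. (4) for arbitrary parameter values, mainly to show where the eigenvalue restriction on Γ enters. It is an illustration under our own assumptions (dimensions, parameter values, toy inputs), not the estimation procedure used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_discounting(alpha, Gamma, B, x, Sigma, rng):
    """Simulate Eq. (4): y_t = alpha + sum_{j=0}^{t-1} Gamma^j B x_{t-j} + eps_t.

    alpha : (d,) intercept / process mean
    Gamma : (d, d) discounting matrix; eigenvalue moduli must be < 1
    B     : (d, k) weights linking the inputs to the dependent variables
    x     : (T, k) independent variables
    Sigma : (d, d) residual covariance
    """
    if np.max(np.abs(np.linalg.eigvals(Gamma))) >= 1:
        raise ValueError("Eigenvalue moduli of Gamma must be < 1 for a stable process.")
    T, d = x.shape[0], alpha.shape[0]
    y = np.zeros((T, d))
    for t in range(1, T + 1):
        s = np.zeros(d)
        for j in range(t):                                   # j = 0, ..., t-1
            s += np.linalg.matrix_power(Gamma, j) @ B @ x[t - 1 - j]
        y[t - 1] = alpha + s + rng.multivariate_normal(np.zeros(d), Sigma)
    return y

# Toy example with d = 2 dependent and k = 1 independent variable
alpha = np.array([0.0, 0.0])
Gamma = np.array([[0.6, 0.0], [0.1, 0.5]])
B     = np.array([[1.0], [0.5]])
Sigma = 0.1 * np.eye(2)
x     = rng.normal(size=(50, 1))
y     = simulate_discounting(alpha, Gamma, B, x, Sigma, rng)
```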
After its introduction in the field of affect dynamics, the discounting model has been widely used to understand how affect changes in relation to its environment, both by the original authors (e.g., Blain and Rutledge, 2020, Chew et al., 2021, de Berker et al., 2016, Keren et al., 2021, Rutledge et al., 2017, Rutledge et al., 2015) and by independent researchers (e.g., Asutay and Västfjäll, 2022, Vanhasbroeck et al., 2021, Villano et al., 2020, Vinckier et al., 2018). Given its rising influence, it is important to find out how this model relates to other existing models within the field.
The autoregressive model is another model popular in the emotion literature, usually serving as a data-analytic tool to investigate how affect changes over time (e.g., Adolf et al., 2017, Booij et al., 2018, Congard et al., 2011, Hamaker et al., 2005, Sperry et al., 2020). While several variants of the autoregressive model exist, what binds these models together is the shared assumption that current values of a dependent variable y are related to past values of the same variable, implying that values of y are carried over across time. Formally, this assumption is implemented by regressing the current value of a variable on its past values, so that:
\[
\mathbf{y}_t = \boldsymbol{\delta} + \Theta \mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t, \qquad \boldsymbol{\epsilon}_t \sim N(\mathbf{0}, \Sigma). \tag{5}
\]
This model is known as the vector autoregressive model or VAR. The d-dimensional vector δ contains the intercepts. The next term is the autoregressive component, which scales previous values of y by the d×d transition matrix Θ. This component defines an exponential decay of the values of y towards the baseline state μ, which is equal to (I_d − Θ)^{-1}δ, with I_d being the d×d identity matrix. Importantly, the moduli of the eigenvalues of Θ should be smaller than 1 in order for the model to describe a stable process (Hamilton, 1994; also cf. the restriction on Γ).
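A minimal simulation sketch of the VAR(1) model in Eq. (5), showing the stability check on Θ and the baseline μ = (I_d − Θ)^{-1}δ; the parameter values are arbitrary and only serve to illustrate the recursion.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_var1(delta, Theta, Sigma, T, rng):
    """Simulate Eq. (5): y_t = delta + Theta y_{t-1} + eps_t."""
    if np.max(np.abs(np.linalg.eigvals(Theta))) >= 1:
        raise ValueError("Eigenvalue moduli of Theta must be < 1 for a stable process.")
    d = delta.shape[0]
    mu = np.linalg.solve(np.eye(d) - Theta, delta)   # baseline (I_d - Theta)^{-1} delta
    y = np.zeros((T, d))
    y[0] = mu                                        # start the chain at baseline
    for t in range(1, T):
        y[t] = delta + Theta @ y[t - 1] + rng.multivariate_normal(np.zeros(d), Sigma)
    return y, mu

delta = np.array([1.0, 0.5])
Theta = np.array([[0.5, 0.1], [0.0, 0.3]])
y, mu = simulate_var1(delta, Theta, 0.05 * np.eye(2), T=200, rng=rng)
print(mu)   # the baseline around which y fluctuates
```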
As we will show in the next section, the discounting model is a special case of a slightly more complicated autoregressive model, namely the vector autoregressive moving-average model with exogenous variables, or VARMAX (Adolf et al., 2017, Shumway and Stoffer, 2006). As the name suggests, this model consists of an autoregressive component, a moving-average component, and a set of external variables that are thought to influence y. Combined, the model becomes:
\[
\mathbf{y}_t = \boldsymbol{\delta} + \Psi \mathbf{x}_t + \Theta \mathbf{y}_{t-1} + \Phi \boldsymbol{\epsilon}_{t-1} + \boldsymbol{\epsilon}_t, \qquad \text{where again } \boldsymbol{\epsilon}_t \sim N(\mathbf{0}, \Sigma). \tag{6}
\]
In this equation, y changes with the values of the k independent variables x, which are scaled by the d×k matrix Ψ. Additionally, fluctuations in y are captured with a moving-average component, which consists of the d×d matrix Φ and defines the extent to which residuals at the previous time point t−1 are carried over to the current observations of y. Importantly, the restriction on Θ also applies to Φ, which in this case ensures that the model is invertible (Shumway & Stoffer, 2006).
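A corresponding sketch for the VARMAX model in Eq. (6), assuming for simplicity that the process starts from zero and that there is a single independent variable. It only illustrates where each component (exogenous input, autoregression, moving average) enters the recursion; the parameter values are again arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_varmax(delta, Psi, Theta, Phi, Sigma, x, rng):
    """Simulate Eq. (6): y_t = delta + Psi x_t + Theta y_{t-1} + Phi eps_{t-1} + eps_t."""
    T, d = x.shape[0], delta.shape[0]
    y = np.zeros((T, d))
    y_prev = np.zeros(d)        # assumed starting value
    eps_prev = np.zeros(d)      # no residual before the first observation
    for t in range(T):
        eps = rng.multivariate_normal(np.zeros(d), Sigma)
        y[t] = delta + Psi @ x[t] + Theta @ y_prev + Phi @ eps_prev + eps
        y_prev, eps_prev = y[t], eps
    return y

delta = np.array([0.0, 0.0])
Psi   = np.array([[1.0], [0.2]])              # d x k, here k = 1
Theta = np.array([[0.4, 0.0], [0.1, 0.3]])
Phi   = np.array([[0.2, 0.0], [0.0, 0.2]])
x     = rng.normal(size=(100, 1))
y     = simulate_varmax(delta, Psi, Theta, Phi, 0.05 * np.eye(2), x, rng)
```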
Fig. 2 visualizes the effect of the parameters on y. When looking at this figure, it is useful to keep in mind that, while most parameters map directly onto those of the discounting model, there are two differences with the time series shown in Fig. 1. First, the values of the discounting factors in Γ are restricted to be positive, a restriction that does not apply to the transition matrix Θ. This allows the autoregressive models to capture a sawtooth-like dynamical pattern, as shown in Fig. 2. Second, changing the value of Θ independently of δ also changes the value of the baseline μ. To keep all time series equidistant, we counteracted this effect by changing the values of δ simultaneously with the values of Θ. The full effect of only changing Θ is thus not shown in this figure.
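The adjustment of δ described above follows directly from the expression for the baseline: holding μ fixed while varying Θ requires setting δ = (I_d − Θ)μ. A small sketch of this bookkeeping, with arbitrary values:

```python
import numpy as np

def intercept_for_baseline(Theta, mu):
    """Return the delta that keeps the baseline mu fixed for a given Theta,
    using mu = (I - Theta)^{-1} delta, i.e., delta = (I - Theta) mu."""
    d = Theta.shape[0]
    return (np.eye(d) - Theta) @ mu

mu = np.array([2.0, 2.0])
for theta_value in (0.2, 0.5, 0.8):
    Theta = theta_value * np.eye(2)
    print(theta_value, intercept_for_baseline(Theta, mu))
```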
Due to their attractive properties and their intuitiveness, autoregressive models have been exceedingly popular in the affect dynamical literature. Typically, they are used as a data analytic tool (e.g., Dejonckheere et al., 2021, Lucas and Donnellan, 2007, Wendt et al., 2020), but autoregressive models have also been used to gain a deeper theoretical understanding of the dynamic structure of affect (e.g., Bonsall et al., 2012, de Haan-Rietdijk et al., 2017, Loossens et al., 2021).