Immersive virtual reality for learning exoskeleton-like virtual walking: a feasibility study

Participants

Forty healthy participants (13 female, 27 male) without known motor or cognitive disorders, aged 18 to 60 years (\(27.73 \pm 7.91\)), participated in the study. Participants provided written informed consent and did not receive any compensation. The study was approved by the Human Research Ethics Committee of the Delft University of Technology (TU Delft) and conducted in compliance with the Declaration of Helsinki. Participants were recruited within TU Delft via word of mouth and campus advertisement. Table 1 summarizes the participants’ demographics for each training modality, including gender, age, highest level of education, and previous experience with VR and gaming.

Table 1 Participants’ demographics. Values are reported as median, minimum and maximum, and interquartile range (IQR)

Virtual walking task

Experimental setup and virtual environment

The virtual walking task consisted of triggering virtual steps performed by a gender-neutral avatar (downloaded from the Unity Asset Store), visualized in immersive VR using a commercial HMD (VIVE Pro 2 headset, HTC Vive, Taiwan & Valve, USA). In addition to the avatar, participants also saw a virtual walker that mimicked the movements of a real 4-wheeled walker, which only allowed movements in the sagittal plane (Fig. 1a).

Fig. 1

Experimental set-up and virtual walking task (a) The set-up consisted of an HMD, two HTC Vive trackers (placed on the participant’s pelvis and the walker), an IMU (placed on the participant’s pelvis), a balance board, and the walker. Participants’ movements were tracked (left) and imitated by the avatar in the virtual environment (right). (b) The virtual walking task consisted of triggering virtual steps by executing three consecutive movements that resembled those required to trigger steps in a wearable exoskeleton: (1) move the walker forward, (2) weight shift, and (3) hip thrust

The avatar and walker were animated using the position and orientation of the HMD and two HTC Vive trackers, one attached to the participant’s pelvis at iliac crest level and the second to the walker. To connect these components with the Unity software, we used the SteamVR plugin (version 2.7.3, Valve Corporation, USA). The animation was implemented using the Final IK package version 2.2 for Unity (Rootmotion, Estonia), which includes various inverse kinematics (IK) solvers and real-time procedural animation modification solutions. In addition, an inertial measurement unit (IMU) (Trigno Avanti Sensor, Delsys Inc., Boston, MA) was attached to the tracker on the pelvis to obtain a more reliable measurement of the hip acceleration (i.e., hip thrust).

The avatar and virtual walker were scaled to match each participant’s (and walker’s) proportions. The walker scaling was performed by touching the top of the walker and pressing the HTC Vive controller’s button to record this position. The tracked height of the HMD was used to determine the scaling of the avatar. Before recording this position, we asked participants to stand up straight to make sure the height was recorded correctly.

Lastly, participants performed the virtual walking task while standing on a balance board (Bosu balance station, Domyos, Decathlon, France) to challenge their balance, forcing them to rely on the walker. This increased trunk inclination and ultimately caused fatigue in the arms, similar to what people with neurological disorders experience in real-life settings when learning to use a wearable exoskeleton.

The VE was developed using the Unity game engine (Unity Technologies, USA) version 2020.3.21 and ran at a framerate of 90 frames per second. A computer operating on Windows 10 Home 64-bit edition (Microsoft, USA) ran the task within the Unity Editor. The computer had 32 GB of DIMM DDR4 working memory, an NVIDIA GeForce RTX 3080 GPU, and an AMD Ryzen 5900X 3.70 GHz 12-core processor (AMD, USA).

Step triggering

To trigger a (virtual) step, three consecutive movements needed to be successfully performed in sequential order (Fig. 1b):

Movement 1: Move walker forward First, the participant needed to move the walker forward to create space so that the (virtual) leg did not collide with the walker. The distance the walker is moved forward determines the maximum possible stride length. If a step is successfully triggered (Movement 3) but would result in a collision with the virtual walker, the step does not take place.

Movement 2: Weight shifting Before the step could be triggered, the participant had to move the center of their pelvis laterally to match the center of the avatar’s leading leg, i.e., the foot currently positioned in front of the coronal plane, within a tolerance of 0.15 m. This condition was required to trigger the step and had to be maintained until Movement 3: Hip thrust was achieved.

Movement 3: Hip thrust Once the participant moves the walker forward and accomplishes the weight shift, the participant can trigger the step by generating a hip thrust, i.e., accelerating the hip in the anteroposterior direction. If the sequence of movements is performed correctly, participants see the avatar moving the trailing leg (i.e., the leg whose foot is positioned behind the coronal plane) forward, performing a step. This stepping motion simulates the movement that would be generated by a wearable exoskeleton. Note that the real leg remains in place; therefore, participants have to check the avatar’s leg position (if needed) to understand the current body configuration, as they cannot rely on their proprioception for this, thus emulating people with sensory loss who cannot rely on lower-limb proprioception.

To define these movements, we drew inspiration from the movements that people with neurological disorders usually need to learn to safely trigger steps when using a wearable exoskeleton for overground walking, e.g., weight shifting is commonly used as a control input to trigger steps [53, 54], and the hip thrust simulates the step intention, which can also be used as a control input [71]. In fact, given that the robotic gait of people with neurological disorders requires substantial postural adjustments and balance during the double support phase, each step can be considered a gait initiation. The biomechanical requirements for successful gait initiation are the generation of momentum (in the forward direction and in the direction of the trailing leg) and the maintenance of balance [72]. Therefore, the hip thrust movement provides a natural way to determine the user’s intention to initiate a step, while also actively involving the user in the decision to launch a step.

Stride length control

The triggered virtual stride length is determined by the peak pelvis acceleration \(a_{peak}\) during hip thrust, measured with the IMU attached to the pelvis, according to the following linear relationship:

$$SL = \begin{cases} \dfrac{a_{peak}}{a_{max}} \cdot SL_{max} & \text{if } a_{min} \le a_{peak} < a_{max} \text{ and } Hip_{disp} \ge 2\ \text{cm} \\ SL_{max} & \text{if } a_{peak} \ge a_{max} \text{ and } Hip_{disp} \ge 2\ \text{cm} \\ 0 & \text{otherwise,} \end{cases}$$

(1)

where SL is the triggered stride length in meters (m). The peak acceleration (\(a_{peak}\)) is the highest acceleration reached by the participant during the hip thrust movement, measured by the IMU on the pelvis in the anteroposterior direction. The maximum acceleration (\(a_{max}\)) was fixed to \(0.4\,\text{m} \cdot \text{s}^{-2}\) for all participants. To trigger a step, the hip’s peak acceleration needed to be higher than a predefined minimum acceleration (\(a_{min} = 0.1\,\text{m} \cdot \text{s}^{-2}\)), and the hip displacement in the anteroposterior axis (\(Hip_{disp}\)) higher than 2 cm to prevent accidental triggers. \(SL_{max}\) is the participant’s predefined maximum possible stride length, calculated by multiplying the participant’s optimal stride length (\(SL_{opt}\)) by a factor of 1.5. The value of this factor, as well as \(a_{max}\), \(a_{min}\), and the \(Hip_{disp}\) threshold, were determined by the researchers through iterative trial and error until identifying values that provided optimal comfort and were easily achievable through natural movements.
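For illustration, the mapping of Eq. 1 can be sketched in Python. This is only a sketch: the study's implementation ran in Unity, and the function and parameter names (`stride_length`, `hip_disp`) are ours, not from the original code. The thresholds (0.1 and 0.4 m/s², 2 cm) are taken from the text.

```python
def stride_length(a_peak, hip_disp, sl_max, a_min=0.1, a_max=0.4):
    """Map the peak hip acceleration (m/s^2) to a virtual stride length (m).

    No step is triggered when the hip displacement is below 2 cm or the
    peak acceleration is below a_min; accelerations at or above a_max
    saturate at the maximum stride length sl_max.
    """
    if hip_disp < 0.02 or a_peak < a_min:
        return 0.0                       # accidental movement: no step
    if a_peak >= a_max:
        return sl_max                    # saturate at maximum stride length
    return (a_peak / a_max) * sl_max     # linear mapping in between
```

A harder thrust thus yields a proportionally longer virtual stride, up to the participant-specific ceiling `sl_max`.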

The participant’s optimal stride length depends on their height and is calculated for each participant as:

$$SL_{opt} = \frac{1}{2} \cdot SL_{avg} \cdot BH,$$

(2)

where BH is the participant’s body height and \(SL_{avg} = 0.7774\) is the average stride length (as a fraction of body height, %BH) obtained by Bovi et al. in healthy adults [73]. We defined the optimal stride length as half the average of healthy adults because people with sensorimotor loss tend to take shorter steps when walking with wearable exoskeletons [71, 74,75,76,77]. Furthermore, a shorter stride length might mitigate visually induced motion sickness (VIMS) – a subcategory of motion sickness that specifically relates to the perception of motion while remaining still [78].

To reduce step-by-step variation and maintain a constant stride length, we encouraged participants to keep the optimal stride length for every step. Note that the stride length required to achieve the optimal stride length – defined as the target stride length (\(SL_{tgt}\)) – may vary depending on the previous stride length:

$$SL_{tgt} = \frac{1}{2} \cdot SL_{opt} + \left| Pos_{lead} - Pos_{trail} \right|.$$

(3)

The target stride length (\(SL_{tgt}\)), thus, depends on the distance between the positions of the trailing foot (\(Pos_{trail}\)) and the leading foot (\(Pos_{lead}\)) in the anteroposterior axis and on the optimal stride length (\(SL_{opt}\)) calculated through Eq. 2.
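Eqs. 2 and 3 can be sketched together in Python (an illustrative translation; the function names `optimal_stride` and `target_stride` are ours, and the interpretation of the \(\tfrac{1}{2}\) factor follows the text above):

```python
def optimal_stride(body_height, sl_avg=0.7774):
    """Eq. 2: optimal stride length = half the average healthy stride
    length, where sl_avg is expressed as a fraction of body height."""
    return 0.5 * sl_avg * body_height


def target_stride(pos_lead, pos_trail, sl_opt):
    """Eq. 3: the stride needed this step so that the trailing foot lands
    half a stride (i.e., one step) ahead of the current leading foot."""
    return 0.5 * sl_opt + abs(pos_lead - pos_trail)
```

For a 1.80 m tall participant, `optimal_stride(1.80)` gives roughly 0.70 m; the target stride then grows with the current anteroposterior separation between the feet.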

Training modalities

The experiment included four training modalities (Fig. 2a), each corresponding to a combination of two factors: visualization perspective (1PP or 3PP) and concurrent visual feedback (ON or OFF).

Fig. 2

(a) The four training modalities. Each modality corresponds to a combination of two factors: concurrent visual feedback (ON or OFF) and visualization perspective (1PP or 3PP). (b) The experimental protocol followed a multi-arm pre-post design in which participants were randomly assigned to one of four training modalities

Person perspective

Depending on the training modality, participants experienced the VE from one of two distinct perspectives: 1PP or 3PP (Fig. 2a). In the 1PP training modalities, the camera was positioned at the eye level of the avatar, offering participants a direct and immersive view aligned with the avatar’s visual field. In the 3PP modalities, the camera was situated laterally to the avatar (approximately 4 m away, raised 1 m from the floor, and rotated 90 degrees to face the virtual avatar). This placement was chosen to ensure participants had a comprehensive view of both the avatar and the visual feedback.

Visual feedback

We aimed to design easy-to-understand and highly informative augmented visual feedback to support the learning of the different movements required to trigger a step. We attempted to achieve this by continuously projecting a fusiform object on the virtual floor in front of the avatar (Fig. 3a-b). The feedback provided by the virtual object is detailed in the following sections and summarized in Table 2. For a video of an experienced user demonstrating the virtual walking task and the visual feedback provided, see Additional file 1.

Table 2 Summary of the visual cues from the augmented visual feedback

Concurrent and terminal visual feedback

Concurrent feedback related to maximum stride length possible due to relative walker position

The position of the walker relative to the trailing leg is indicated in the fusiform object as the border that separates the object into lighter and darker areas, where the darker area is located towards the end of the object (Fig. 3a). Note that, due to the scaling factor applied to the object in the longitudinal direction, the position of this border is proportional to the distance between the walker and the trailing leg, but does not necessarily match the actual walker position. The position of this border w.r.t. the participant indicates the maximum stride length that participants can reach without colliding with the walker. We determined the position of this border by normalizing the distance between the walker and the trailing leg with the maximum stride length \(SL_{max}\), i.e., the closer the walker to the trailing leg, the smaller the possible stride length, and the closer the border to the participant. Therefore, if the stride length of the triggered step was longer than the distance between the trailing leg and the walker, it would result in a collision with the virtual walker; when this was the case, the step was not triggered on the avatar.

Concurrent feedback related to trunk inclination

The position of the walker might also affect the trunk inclination, i.e., the further the walker is in front of the participant, the larger might be the trunk inclination. To inform participants on their trunk inclination as a means to reduce it, we employed the length of the fusiform object in the anterior direction (Fig. 3d) – i.e., when the trunk inclination is \(\le\)15 degrees, the length of the object is maximum (length = 2.0 m), and when the trunk inclination is \(\ge\)90 degrees, the length of the object is minimum (length = 0.3 m). Note that trunk inclinations below 15 degrees did not affect the length of the fusiform object to avoid excessive size changes when standing up. Nevertheless, values below this threshold were still recorded for later analysis (see Section Data processing).
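The trunk-inclination cue can be sketched in Python, assuming a linear mapping between the two endpoints stated above (15 degrees → 2.0 m, 90 degrees → 0.3 m); the paper gives only the endpoints and the clamping behavior, so the linearity and the function name `object_length` are our assumptions:

```python
def object_length(trunk_deg, min_len=0.3, max_len=2.0, low=15.0, high=90.0):
    """Length of the fusiform object as a function of trunk inclination.

    The length shrinks as the trunk leans further forward, clamped so that
    inclinations below 15 degrees or above 90 degrees have no extra effect.
    """
    t = (trunk_deg - low) / (high - low)   # 0 at 15 deg, 1 at 90 deg
    t = min(max(t, 0.0), 1.0)              # clamp outside [15, 90] degrees
    return max_len + t * (min_len - max_len)
```

Clamping below 15 degrees implements the note above: small upright sway does not make the object flicker in size.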

Fig. 3

(a) Fusiform object before hip thrust movement. The border that separates the object into lighter and darker areas informs about the position of the walker relative to the trailing leg. (b) Fusiform object after hip thrust movement. The fusiform object, initially translucent, displays a dynamically changing opaque layer, which fills up to reflect the current peak acceleration until the maximum is reached. (c) Hermite curve interpolation with four keyframes K1, K2, K3, and K4 that defines the shape of the fusiform object and the stride length score (\(SL\ score = 75 \cdot H(SL/SL_{max}) + 25\), where \(H(SL/SL_{max})\) corresponds to the value of the Hermite curve at the current stride length normalized over the maximum stride length). The stride length score ranges from 25 (minimum and maximum stride length) to 100 (target stride length). (d) Trunk inclination factor: the factor ranges from 1 (trunk inclination \(\ge\) 90 degrees) to 10 (trunk inclination \(\le\) 15 degrees). The total score, then, ranges from 25 to 1000

Concurrent feedback related to weight shifting

A longitudinal white line is displayed on the floor in front of the leading foot, i.e., the foot positioned in front of the coronal plane (left leg in Fig. 2a). The lateral position of the centerline of the fusiform object w.r.t. the participant’s sagittal plane shows the lateral position of the pelvis, i.e., if the participant moves the pelvis to the right (left) w.r.t the sagittal plane, the object moves to the right (left). When the lateral positions of the centerline of the fusiform object and the leading foot match, the longitudinal line displayed in front of the leading foot turns green (Fig. 3a, b). This means that the weight shift (Movement 2) is accomplished, and the step can be triggered with the hip thrust (Movement 3).

Concurrent feedback related to optimal stride length

The visual information regarding the target stride length was provided to participants by modulating the shape of the fusiform object using a piecewise cubic Hermite interpolation (achieved in Unity using the AnimationCurve class) to interpolate smoothly between key points. An example of the shape of this curve can be seen in Fig. 3c. We defined this curve using four keyframes, namely a start keyframe (\(K1 = (0, 0)\)), a keyframe to indicate the minimum stride length (\(K2 = (SL_{min}/SL_{max}, 0)\)), a keyframe to indicate the target stride length (\(K3 = (SL_{tgt}/SL_{max}, 1)\)), and an end keyframe (\(K4 = (1, 0)\)) representing the maximum stride length (\(SL_{max}\)). Furthermore, we set the tangents (derivatives) of the four keyframes to zero.

The x-position of K3 in the curve indicates the target stride length (\(SL_{tgt}\)), and we calculate it by normalizing the target stride length over the maximum possible stride length (\(SL_{max}\)), which corresponds to the maximum acceleration (\(a_{max}\)). The x-position of K2 indicates the minimum stride length (\(SL_{min}\)) and corresponds to the minimum acceleration (\(a_{min}\)) required to trigger a step (see also Fig. 3b). Once again, we calculate the x-position of this keyframe by normalizing this value w.r.t. the maximum stride length. Finally, \(H(SL/SL_{max})\) in Fig. 3c is the value of the Hermite function at the current x-position, i.e., the current stride length normalized w.r.t. the maximum stride length.
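The keyframe construction above can be sketched in Python. This is a generic piecewise cubic Hermite with zero tangents at every keyframe, mimicking a Unity AnimationCurve whose tangents are flattened; the example keyframe x-positions (0.25 and 0.6) are illustrative, not values from the study:

```python
def hermite(x, keys):
    """Piecewise cubic Hermite interpolation with zero tangents at every
    keyframe. keys: list of (x, y) pairs sorted by x."""
    if x <= keys[0][0]:
        return keys[0][1]
    if x >= keys[-1][0]:
        return keys[-1][1]
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)            # normalized position in segment
            h00 = 2 * t**3 - 3 * t**2 + 1       # Hermite basis with zero tangents
            return y0 * h00 + y1 * (1 - h00)


# Example shape of the fusiform object: flat from K1 to K2, peak of 1 at K3
# (the target stride), back to 0 at K4 (the maximum stride).
keys = [(0.0, 0.0), (0.25, 0.0), (0.6, 1.0), (1.0, 0.0)]
```

With zero tangents, each segment reduces to a smoothstep between its two keyframe values, which is why the object's width rises and falls without kinks.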

As a result, the object is narrowest at the base (spanning from K1 to K2) and at the end keyframe (Fig. 3c). Likewise, the position of the widest part of the object (K3) can vary with each step, as we calculate it using the actual relative distance between both feet (see Eq. 3). Furthermore, the fusiform object is filled with a color gradient, green on the wider part and red at the object’s extremes. The narrow base of the object (ending at K2) is colored white to indicate the area in which no step will be triggered because \(a_{min}\) was not reached.

The fusiform object, initially translucent, displays a dynamically changing opaque layer, which fills up to reflect the current peak acceleration until the maximum is reached (Fig. 3b). When this opaque layer surpasses the white base, which corresponds to the minimum stride length, a step is triggered. The object also features a dashed white line at its widest area, indicating the target stride length (Fig. 3a). Furthermore, the object contains a yellow line, representing the previous stride length normalized over \(SL_{max}\) (terminal feedback; Fig. 3a). This visual aid encourages participants to maintain the optimal stride length in subsequent steps based on their experience from the previous one.

The fusiform object includes a darker area near its end, whose starting point represents the position of the walker w.r.t. the trailing leg (see subsection Feedback related to maximum stride length possible due to relative walker position). If a step were to land within this darker area, a collision with the walker would occur. Therefore, to be successfully triggered, a step must land between the threshold at the base and the border of the darker area.

Terminal feedback: Score

Participants who trained with visual feedback also received terminal feedback on their performance after each step to motivate and encourage them to enhance their performance. A pop-up window appeared in front of the avatar after each step with a score obtained for that step (Fig. 3b). The score is based on the trunk inclination and the deviation from the target stride length of each step following the equation:

$$Score = SL\ score \cdot Trunk\ inclination\ factor,$$

(4)

where \(SL\ score\) is the score related to the stride length (see Eq. 5) and the \(Trunk\ inclination\ factor\) is a value that ranges linearly from 1 – when the trunk inclination is \(\ge\) 90 degrees – to 10 – when the trunk inclination is \(\le\) 15 degrees (Fig. 3d). Note that the trunk inclination is a continuous variable. The \(SL\ score\) depends on the value of the Hermite curve corresponding to the current stride length normalized over the maximum stride length \(SL_{max}\) (see subsection Movement 3: Hip thrust and Fig. 3c) following the equation:

$$SL\ score = 75 \cdot H(SL/SL_{max}) + 25.$$

(5)

Thus, the stride length score ranges from 25 (corresponding to the minimum and maximum stride lengths, i.e., \(SL_{min}\) and \(SL_{max}\)) to 100 points (target stride length, i.e., \(SL_{tgt}\)). The total score, then, ranges from 25 to 1000. A minimum score of 25 was chosen to prevent participants from receiving zero points, which might hamper their motivation, ensuring that they would always receive at least this amount in the worst-case scenario. Note that the score was only shown once the step was triggered.
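Eqs. 4 and 5 combine into the following Python sketch (illustrative only; `h_value` stands for the Hermite curve value \(H(SL/SL_{max})\) described above, and the linear trunk factor follows the endpoint values given in the text):

```python
def trunk_factor(trunk_deg):
    """Linearly map trunk inclination to a factor between 1 (at >= 90
    degrees) and 10 (at <= 15 degrees), clamped outside that range."""
    t = min(max((trunk_deg - 15.0) / 75.0, 0.0), 1.0)
    return 10.0 - 9.0 * t


def step_score(h_value, trunk_deg):
    """Eqs. 4-5: SL score in [25, 100] from the Hermite curve value,
    multiplied by the trunk inclination factor in [1, 10]."""
    sl_score = 75.0 * h_value + 25.0
    return sl_score * trunk_factor(trunk_deg)
```

A perfectly targeted step with an upright trunk scores 1000; the 25-point floor guarantees a non-zero score even for the worst step.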

Experiment protocol

The experiment protocol followed a multi-arm pre-post study design (Fig. 2b) in which we randomly assigned participants to one of the four training modalities, with ten participants per condition, each modality corresponding to a combination of two factors: concurrent visual feedback (ON or OFF) and visualization perspective (1PP or 3PP). The experiment was conducted collaboratively by a technical developer of the project and a support person not involved in the development phase.

Before starting the experiment, participants received theoretical training on the virtual walking task. We gave participants time to read the instructional slides (see Additional file 2) on a computer screen and ask questions if needed until they felt prepared. All participants were informed that their performance would be evaluated based on three sub-tasks: 1) their ability to walk the maximum distance possible (i.e., ability to trigger steps) while 2) maintaining an upright posture and 3) an efficient stride length (i.e., not too short, not too long). Further questions were allowed during the experiment except when performing the baseline and retention tests. Importantly, the research team in charge of the experiment only provided (or reminded) information that was in the instructional slides. After being briefed on the experiment objectives, instructions, and task details, participants answered an initial set of demographic questions (Table 1).

After the theoretical training, participants conducted a 3-minute familiarization phase, in 1PP and without feedback, to allow them to try the system and accustom themselves to the VE. After the familiarization, the experiment began with a baseline test. During baseline (and retention tests), we asked participants to virtually “walk” with the avatar the maximum distance possible, following the aforementioned instructions. During baseline, familiarization, and retention tests, participants observed the VE in 1PP and without concurrent visual feedback since this is the closest to the natural way we walk and experience the real world.

After the baseline test, the training phase started. This phase consisted of five trials of two minutes each, where participants trained to improve their performance under the training modality to which they were assigned. Before starting the training, participants allocated to the conditions with concurrent visual feedback received additional theoretical training on the different elements of the visual feedback (see Additional file 3). This training was presented in the same way as the instructional slides at the start of the experiment. Note that the score was shown only during the training and only for modalities with feedback. Participants were allowed to take brief breaks (\(\le\) 5 min) between trials to ask questions or take a rest.

After the training, we asked participants to answer four questionnaires to evaluate the embodiment they felt over the avatar, the usability of the system, the cybersickness experienced (if any), and the perceived workload (see Section Data analysis). The workload was also assessed after both the baseline and the retention tests. The questionnaires were filled out electronically in English and inside Unity using the VR Questionnaire Toolkit [79].

After answering the questionnaires, all participants carried out a second familiarization period of three minutes. This (re)familiarization aimed to wash out participants’ recent experience with the task environment and reduce any immediate aftereffects of training conditions on the performance. The retention test, which had the same form as the baseline test, was performed right after this (re)familiarization.

Outcome measures

We recorded the participants’ head and hip positions and orientations using the HMD and the HTC Vive trackers located on the hip and walker. The acceleration of the hip was recorded at all times by the IMU. The data processing was performed in MATLAB (MATLAB R2021b, The MathWorks Inc., Natick, MA, USA).

Motor learning

In evaluating the learning process, we discerned two key aspects: the initiation of steps, reflected in the number of steps performed (main outcome), and the quality of the sub-task sequence (secondary outcomes), reflected in trunk inclination and stride length. These aspects required participants to learn and train on three distinct sub-tasks: triggering a step, controlling trunk inclination, and controlling stride length.

Main outcome The number of steps – the result of triggering steps effectively – was chosen as the main metric to assess learning, with a higher number of steps indicating greater proficiency and learning.

Secondary outcomes We used the trunk inclination and the deviation from the target stride length to assess the quality/technique of the triggered steps. The trunk inclination was estimated as the angle between the segment connecting the HMD with the tracker on the hip and the calibrated vertical, recorded when the participant stood completely upright. We averaged the trunk inclination over the entire test. Note that good performance is associated with small trunk inclinations, because an increased trunk inclination indicates that the participant is relying excessively on the walker.
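The trunk inclination estimate can be sketched as the angle between the hip-to-head segment and a vertical reference. This is an illustrative computation, not the study's MATLAB code; the vertical is assumed to be the unit y-axis (as in Unity's coordinate convention), and the function name is ours:

```python
import math

def trunk_inclination(head, hip, vertical=(0.0, 1.0, 0.0)):
    """Angle (degrees) between the hip-to-head segment and the calibrated
    vertical (assumed here to be a unit vector); 0 when fully upright."""
    seg = [h - p for h, p in zip(head, hip)]
    norm = math.sqrt(sum(c * c for c in seg))
    dot = sum(s * v for s, v in zip(seg, vertical))
    # clamp guards against rounding pushing the cosine outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

In practice the vertical reference would come from the upright calibration pose rather than a fixed world axis.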

Stride length, defined as the distance between two consecutive points of initial contact of the same foot with the floor, was recorded for each step directly from Unity. The deviation from the target stride length was then calculated as the absolute percentage difference between the participant’s stride length and the participant’s target stride length, averaged across all the steps performed during the test.

Questionnaires

The impact of the visual feedback and perspective on participants’ experience was assessed using the following outcome metrics:

Embodiment To assess the level of embodiment over the avatar, we selected several statements from the well-established embodiment questionnaire in [80, 81] and adapted them for our application. The questionnaire consisted of six statements to assess all three embodiment components, namely, body ownership – i.e., one’s self-attribution of a body –, (self-)location – i.e., volume in space where one feels to be located –, and agency – i.e., feeling in control of own movements [20, 80]. Since the number of questions related to each component was different, we weighted them to ensure equality. Participants responded on a Likert scale between 1 and 7 points; 1 indicated “Strongly disagree” and 7 indicated “Strongly agree”. The statements, their weight during analysis, and their targeted component of embodiment can be found in Additional file 4.

Usability The System Usability Scale [82] (SUS) was employed to evaluate the usability of the four different training modalities. The SUS has been widely used to assess the usability of software and hardware solutions [83, 84] and measures different aspects such as efficiency, effectiveness, and satisfaction. The questionnaire consists of 10 questions (see Additional file 4) with five response options on a Likert scale; 1 indicated “Strongly disagree”, and 5 indicated “Strongly agree”.

Cybersickness Although the Simulator Sickness Questionnaire (SSQ) was initially intended for simulator sickness assessment [85], it is also currently employed for cybersickness assessment [86]. The questionnaire prompts participants to provide subjective severity ratings of 16 symptoms on a four-point scale (none = 0, slight = 1, moderate = 2, severe = 3) after the exposure to the system [85]. These symptoms can be classified into three categories: Oculomotor, disorientation, and nausea [85]. Each category has its own score and is defined as the sum of its symptom scores multiplied by a constant scaling factor. In addition, there is a total simulator sickness score (TS) to obtain a single score, which is calculated by adding the raw scores (i.e., without the individual scaling factor) of the three categories and multiplying by a constant factor [85, 86]. Additional file 4 contains information on the symptoms and how to compute the scores.
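The SSQ scoring scheme described above can be sketched as follows. The constant scaling factors (9.54, 7.58, 13.92, and 3.74) are the standard ones from Kennedy et al. [85] and are not stated in this section, so treat them as an assumption of this sketch; the function takes the raw symptom sums per category as input:

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """SSQ category scores and total score (TS) from the raw symptom sums,
    using the standard scaling factors from Kennedy et al. [85] (assumed)."""
    return {
        "nausea": nausea_raw * 9.54,
        "oculomotor": oculomotor_raw * 7.58,
        "disorientation": disorientation_raw * 13.92,
        # TS uses the unscaled raw sums, multiplied by its own constant
        "total": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }
```

Note that the total score is not the sum of the three scaled category scores; it is computed from the raw sums, as the text describes.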

Workload To measure the overall workload while using the IVR system, we employed the widely accepted and validated Raw Task Load Index (RTLX) – the most common adaptation from the NASA Task Load Index [87] in which the weighting process is omitted [88]. The workload is calculated by asking participants to graphically indicate their perceived cognitive demand (low/high or good/poor) on a response scale of 21 marks across six dimensions, namely mental, physical, and temporal demands; performance; effort; and frustration. The total score is computed by adding the score of each question and dividing it by six. The questionnaire can be found in Additional file 4.
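The RTLX computation is simply the unweighted mean of the six dimension ratings, as a minimal sketch (function name ours):

```python
def rtlx_score(ratings):
    """Raw TLX: the unweighted mean of the six dimension ratings
    (mental, physical, temporal, performance, effort, frustration)."""
    if len(ratings) != 6:
        raise ValueError("RTLX expects exactly six dimension ratings")
    return sum(ratings) / 6.0
```

Omitting the pairwise weighting step is precisely what distinguishes the RTLX from the original NASA-TLX.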

Statistical analysis

Normality was assessed using the Shapiro-Wilk normality test, and homogeneity of variances was assessed with Levene’s test. To detect outliers, boxplots were examined, and extreme outliers – values exceeding \(Q3 + 3\cdot IQR\) or falling below \(Q1 - 3\cdot IQR\) – were identified and removed from all metrics. In these expressions, Q1 is the first quartile (25th percentile), Q3 is the third quartile (75th percentile), and IQR is the interquartile range, i.e., the difference between Q3 and Q1. Additionally, two participants were excluded from the analysis of the deviation from the target stride length, as neither succeeded in taking a single step during the baseline test. Statistical analyses were carried out using R version 4.2.0, and the significance level was set to \(\alpha\) = 0.05.
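The extreme-outlier rule can be sketched in Python. The paper does not state which quantile method was used, so interpolated (inclusive) quartiles are an assumption of this sketch:

```python
from statistics import quantiles

def remove_extreme_outliers(values):
    """Drop extreme outliers: values above Q3 + 3*IQR or below Q1 - 3*IQR.

    Quartiles are interpolated ('inclusive' method); the quantile
    convention is an assumption, as the paper does not specify it.
    """
    q1, _, q3 = quantiles(sorted(values), n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 3 * iqr, q3 + 3 * iqr
    return [v for v in values if lo <= v <= hi]
```

The 3×IQR fence is deliberately wider than the common 1.5×IQR rule, so only extreme values are removed.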

We used one-way analysis of variance (ANOVA) to verify that potential confounding variables such as age, level of education, experience with VR, and experience with video games were fairly balanced (by randomization) across the groups. When the one-way ANOVA assumptions were violated, the Kruskal-Wallis rank sum test was applied.

To evaluate whether, overall, participants significantly improved their gait performance – i.e., number of steps (main outcome), and trunk inclination and deviation from the target stride length (secondary outcomes) – from baseline to retention, paired t-tests (for normally distributed data) or paired Wilcoxon signed-rank tests (for non-normally distributed data) were employed for each condition.

To evaluate whether participants improved their gait performance differently depending on the training condition they were allocated to, we employed a two-way ANOVA with the change from baseline to retention in the main and secondary outcomes (i.e., the difference between the retention and baseline values) as dependent variables, and the type of visual feedback (ON vs. OFF), the perspective (1PP vs. 3PP), and their interaction as independent variables [89]. When the two-way ANOVA assumptions were violated, a robust two-way ANOVA (using the WRS2 package in R) was employed [90]. In the case of statistically significant interactions in the two-way ANOVA, posthoc pairwise comparisons with Tukey corrections were performed to compare levels of factors.

Regarding the questionnaires, a single value per questionnaire (and per subcomponents of the questionnaire) and per participant was computed following their specific conventions and utilized for the analysis. A two-way ANOVA was used to examine the main effect of the visual feedback condition and the perspective, and their interaction on the embodiment, usability (SUS), and cybersickness (SSQ) questionnaire answers collected after the training period. In the case of statistically significant interactions, posthoc pairwise comparisons with Tukey corrections were performed. Again, robust two-way ANOVA was used if the ANOVA assumptions were violated.

The participants’ cognitive load was subjectively measured using the RTLX questionnaire at three different time points, namely after baseline (B), after training (T), and after the retention test (R). A linear mixed-effects model (LMM) with participants as a random effect (see Eq. 6) was used to investigate the effect of time.

$$dv \sim feedback \cdot perspective \cdot time; \quad random = \ \sim 1|ID,$$

(6)

where dv is the dependent variable; feedback, perspective, and time are the fixed effects; and ID is the participant identifier used as the random effect. The LMM has no random slopes, as indicated by \(\sim 1\).
