Amputation can occur due to injuries but more often results from complications of vascular disease, often associated with diabetes mellitus [1]. In the United States, approximately 1.6 million people were living with limb loss in 2005, and that number is expected to more than double to 3.6 million by 2050 [2]. Among the amputee population, lower-limb amputation is the most common type, accounting for 65% of all amputations.
The loss of body segments greatly influences amputees’ physical and psychological health. Successfully employing lower-limb prostheses (LLPs) has a positive impact on overall quality of life by restoring independence and a sense of self-efficacy [3]. Despite these potential benefits, not every amputee uses an LLP. Reported adoption rates of LLPs range from 49 to 95% [4,5,6], and users who are younger and have a distal amputation tend to use the device for more hours each day [7]. This is perhaps not surprising given that traditional LLPs are usually simple, body-driven mechanical devices. Wearers can only rely on the physical interface between the residual limb and the prosthetic socket to regulate the LLP’s behavior with their body movements, and to gain limited information about the LLP and how it interacts with the environment. When more segments of the lower limb are lost, it becomes more challenging for amputees to walk with their prosthetic legs, as reflected in walking speed and metabolic cost [8], and they are more likely to abandon the device [9]. Indeed, a recent systematic review of the therapeutic benefits of LLPs [10] found that devices with more advanced technology have the potential to provide greater benefits. In particular, quasi-passive and active prostheses outperformed passive devices in enhancing amputees’ quality of life.
Intelligent LLPs refer to computer-controlled lower-limb prostheses equipped with advanced control systems and algorithms designed to minimize gait limitations [11]; such control has been implemented on various prosthetic devices, including quasi-passive [12] and active devices [11]. Intelligent LLPs continuously monitor their own status using embedded sensors and can make decisions based on predefined rules without direct instructions from wearers [13]. For example, a traditional ankle prosthesis’ stance dorsiflexion angle is optimized for level-ground walking [14]. This setup forces wearers to make additional compensatory efforts when they walk on a slope. An intelligent ankle prosthesis can be programmed to change its locomotion mode from level ground to ramp ascent based on measured foot orientation [14]. This mode change permits additional ankle dorsiflexion in the stance phase and makes it easier to walk on the ramp. The decision to change modes is made by the LLP directly, based on its continuous monitoring of foot orientation, and can be triggered by placing the foot on a slope without additional commands. As a result, an intelligent ankle prosthesis allows wearers to walk naturally on both level ground and slopes. Additionally, some technologies in intelligent LLPs offer channels for wearers to provide feedback and directly adjust the control rules [15]. These channels enable wearer input and additional interaction with the LLP. Alternatively, if wearers are unsure how to adjust the control parameters, they can use human-in-the-loop optimization [16]. In this process, wearers walk continuously for a period while the computer searches for the most effective control parameters by analyzing the wearer’s responses to a large number of parameter combinations.
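As a concrete illustration of this kind of rule-based decision making, the following minimal Python sketch shows how a controller might switch between level-ground and ramp-ascent modes based on an estimated foot pitch angle. The threshold, hysteresis margin, class names, and values are hypothetical and chosen for illustration only; they do not reflect any specific commercial or research device.

```python
from enum import Enum, auto


class LocomotionMode(Enum):
    LEVEL_GROUND = auto()
    RAMP_ASCENT = auto()


class ModeSwitcher:
    """Hypothetical rule-based mode switching from foot orientation.

    Thresholds are illustrative and not taken from any real device.
    """

    def __init__(self, ramp_threshold_deg=5.0, hysteresis_deg=2.0):
        self.ramp_threshold_deg = ramp_threshold_deg
        self.hysteresis_deg = hysteresis_deg
        self.mode = LocomotionMode.LEVEL_GROUND

    def update(self, foot_pitch_deg: float) -> LocomotionMode:
        """Update the locomotion mode from the measured foot pitch (degrees).

        A hysteresis band prevents rapid toggling when the measured pitch
        hovers near the switching threshold.
        """
        if self.mode == LocomotionMode.LEVEL_GROUND:
            if foot_pitch_deg > self.ramp_threshold_deg:
                # Switching to ramp ascent permits extra stance dorsiflexion.
                self.mode = LocomotionMode.RAMP_ASCENT
        else:
            if foot_pitch_deg < self.ramp_threshold_deg - self.hysteresis_deg:
                self.mode = LocomotionMode.LEVEL_GROUND
        return self.mode


# Example: the controller reacts to simulated foot-orientation readings.
switcher = ModeSwitcher()
for pitch in [0.5, 1.0, 6.5, 7.0, 4.5, 2.0]:
    print(pitch, switcher.update(pitch).name)
```

The key point of the sketch is that the mode change is triggered by sensed foot orientation alone, without any explicit command from the wearer.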
Two types of wearer-LLP interaction
The developed technologies generally affect the following two types of wearer-LLP interaction: (1) direct wearer-LLP interaction and (2) indirect wearer-LLP interaction through a user-controlled interface (UCI). Direct wearer-LLP interaction is a continuous process involving real-time information exchange between a wearer and an LLP while the LLP is in action. The ultimate goal of direct wearer-LLP interaction is for wearers to eventually become intuitively involved in the interaction during locomotion. Efferent and afferent neural-machine interfaces are examples of technologies designed to enable and enhance direct interaction. The efferent neural interface recognizes user movement intent by decoding neuromuscular signals recorded from the residual limb, so that the behavior of the LLP can be adjusted accordingly (e.g., [17, 18]). The afferent neural interface restores somatosensation, which is lost due to amputation, to the residual limb through artificial electrical stimulation [19]. This afferent feedback is designed to inform wearers of the LLP status and its interactions with the external environment. Other forms of sensory substitution (e.g., via mechanical vibration, audio, or visual cues) have also been used to restore somatosensation [18].
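To make the idea of an efferent interface more concrete, the sketch below shows one common pattern-recognition approach, in which a simple time-domain feature (mean absolute value) computed over windowed EMG signals is fed to a linear classifier that predicts the intended locomotion mode. The feature choice, window length, channel count, and labels are assumptions for illustration; the efferent interfaces described in [17, 18] differ in sensors, features, and classifiers.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)


def mean_absolute_value(emg_window):
    """Mean absolute value per channel, a common time-domain EMG feature."""
    return np.mean(np.abs(emg_window), axis=0)


# Synthetic training data: 200-sample windows from 4 EMG channels,
# labeled with the intended mode (0 = level walking, 1 = stair ascent).
windows = rng.normal(scale=[0.2, 0.2, 0.5, 0.5], size=(100, 200, 4))
labels = rng.integers(0, 2, size=100)
# Make signal amplitude depend on the label so this toy problem is learnable.
windows[labels == 1] *= 2.0

features = np.array([mean_absolute_value(w) for w in windows])
classifier = LinearDiscriminantAnalysis().fit(features, labels)

# At run time, each new EMG window would be classified to infer intent,
# and the LLP controller would adjust its behavior accordingly.
new_window = rng.normal(scale=0.4, size=(200, 4)) * 2.0
predicted_mode = classifier.predict(mean_absolute_value(new_window).reshape(1, -1))
print("Predicted intent:", int(predicted_mode[0]))
```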
Indirect wearer-LLP interaction through a UCI is initiated by the wearer when the LLP is not in action, with the wearer interpreting information from the LLP and deliberately delivering commands through the interface (see Fig. 1). Beyond the traditional approach of changing LLP settings through a third-party prosthetist, the field is striving to give wearers flexibility in customizing LLP control through a UCI (e.g., [15, 20, 21]). The ultimate goal of indirect interaction is to achieve optimal and preferred LLP control through wearers adjusting the LLP settings on their own. Some UCIs have been developed and applied in both experimental devices (e.g., [20, 21]) and commercial devices (e.g., OttoBock). For commercial devices, the UCI allows the wearer to activate and switch between different modes to support various activities, as well as to adjust control parameters of the swing phase [15]. In experimental devices, the UCI aims to give wearers more options for adjusting LLP settings, including adjusting the radial positions of the socket during ambulation [21] and defining the desired prosthetic knee impedance control [20].
Fig. 1 Wearer-LLP interaction is generally divided into two types: (1) direct wearer-LLP interaction, and (2) the wearer’s indirect interaction with the LLP through a UCI
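As a simplified illustration of what such a UCI might expose, the sketch below models a small set of wearer-adjustable swing-phase parameters, with each adjustment clamped to clinician-defined safe bounds before being sent to the prosthesis. The parameter names, ranges, and the send_to_prosthesis placeholder are hypothetical and not drawn from any specific commercial or experimental interface.

```python
from dataclasses import dataclass


@dataclass
class ParameterSpec:
    """A wearer-adjustable setting with clinician-defined safe bounds."""
    name: str
    value: float
    min_value: float
    max_value: float

    def adjust(self, delta: float) -> float:
        """Apply a wearer-requested change, clamped to the allowed range."""
        self.value = min(self.max_value, max(self.min_value, self.value + delta))
        return self.value


# Hypothetical swing-phase settings a UCI might expose to the wearer.
settings = {
    "swing_flexion_deg": ParameterSpec("swing_flexion_deg", 60.0, 50.0, 70.0),
    "swing_resistance": ParameterSpec("swing_resistance", 0.5, 0.1, 1.0),
}


def send_to_prosthesis(params):
    """Placeholder for transmitting validated settings to the device."""
    print({name: spec.value for name, spec in params.items()})


# Example: the wearer nudges swing flexion up; the request is clamped and sent.
settings["swing_flexion_deg"].adjust(+15.0)  # clamped to the 70.0 upper bound
send_to_prosthesis(settings)
```

Clamping wearer input to predefined bounds is one way an interface can support exploration while limiting the consequences of an unsafe adjustment.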
A need for a new framework of wearer-LLP interaction
Effective application and successful adoption of these technologies cannot be achieved without taking human factors into account during the design process. As defined by the Human Factors and Ergonomics Society, “Human factors is concerned with the application of what we know about people, their abilities, characteristics, and limitations to the design of equipment they use, environments in which they function, and jobs they perform” [22]. Human factors is a discipline that studies the interrelations among three aspects: the human, the system or device, and the work environment, with the goal of improving safety, comfort, and productivity while reducing human error.
Recently, some studies have started to bring human factors considerations into the design process of LLPs. For example, Beckerle et al. [23] developed a framework that prioritizes specific requirements in the human and device aspects of powered LLP design, including satisfaction, the feeling of security, body-schema integration, support, socket, mobility, and outer appearance. Based on survey data, Fanciullacci et al. [24] suggested that an ideal LLP should prioritize design requirements such as reliability, comfort, weight, stability, adaptability to different walking speeds, and functionality related to the wearer’s lifestyle. In both studies, the reported requirements were determined in the context of existing LLPs and prioritized by wearers or prosthetists based on their personal experience. Although practical, this method has difficulty identifying all critical requirements for emerging technologies, as both wearers and prosthetists generally lack experience with these technologies. In this case, expertise developed in other domains can provide guidance when intelligent systems are considered. Intelligent LLPs can make sophisticated decisions, such as adapting prosthesis control to user intent and the environment [25]. This aligns closely with other systems that leverage high levels of automation or artificial intelligence, such as autonomous vehicles or robots. For example, transparency, which informs users about the status and planned actions of a device, has been shown to facilitate appropriate trust and improve task performance because users understand the system’s capability and can anticipate its behavior [26,27,28]. For the same reason, we can speculate that transparency informing wearers of a planned action will help to avoid disturbance and dangerous consequences when a decision from the LLP leads to actions that are not aligned with the wearer’s expectations.
New elements also bring novel dynamics to the wearer-LLP interaction, which amplifies the limitations of traditional ways of evaluating the interaction to ensure wearer acceptance and adherence. According to Nielsen’s Model of Attributes of System Acceptability [29], system acceptance is affected by usability and utility, as both are important aspects of usefulness. Usefulness is the perceived quality of a system indicating that users’ goals can be attained through using the system. Utility focuses on whether the functions of the system work properly to support user needs, and usability focuses on whether the functions are pleasant and easy to use. In the context of LLPs, utility is how well an LLP supports a wearer in achieving the goals of improved mobility, independence, balance, and stability (see functional user needs in [30]). Usability, on the other hand, relates more to factors influencing wearer satisfaction and the system’s ease of use. This includes considerations such as how quickly a wearer learns to use an LLP and how much attention body movements require when walking with the LLP. LLP development has been devoted heavily to ensuring functionality (i.e., utility) [31], while little effort has addressed how easily and pleasantly wearers can use those functions (i.e., usability). Even among the scarce efforts, self-reported results are often inconsistent with objective performance and, therefore, cannot provide accurate information for the design process (e.g., [32,33,34]). For example, in a study comparing a powered and an unpowered LLP, participants preferred the powered LLP because they felt it helped them walk longer and faster; however, these self-reported results could not be confirmed by objective measures, as the majority of participants did not increase their daily step count or speed [33]. Therefore, wearers’ subjective preference does not necessarily reflect the overall usability of the LLP. Although the limitation of relying solely on self-reported questionnaires for usability evaluation may be less salient for non-intelligent LLPs, which are designed to achieve basic mobility functions, a lack of holistic usability evaluation may largely compromise wearers’ acceptance of intelligent LLPs that have versatile functions [35].
To enable systematic examination of human factors issues in wearers’ interaction with intelligent LLPs, the current paper proposes a framework (1) to introduce new human factors elements that emerge with the development of intelligent LLPs, and (2) to adopt Nielsen’s five usability requirements [29] to evaluate the wearer-LLP interaction. The paper is organized into three parts. The first part summarizes relevant elements in the wearer, device, and task aspects that affect the two types of interaction (i.e., direct wearer-LLP interaction and indirect interaction through a UCI). Although an LLP supports various kinds of activities, the direct wearer-LLP interaction in this paper primarily focuses on the interaction during locomotion. Also, in indirect interaction, when we use the word “interface”, we refer to the UCI on a separate device used to adjust LLP settings, not the physical socket interface that has been researched extensively [36]. In the second part, we redefine the usability requirements for wearer acceptance by adopting Nielsen’s five usability requirements for the two types of wearer-LLP interaction. In addition, for each requirement, we systematically organize existing evaluation methods and methods from other domains that can be used to inform the usability requirement for each type of interaction. The last part introduces future directions, with discussion of some ongoing research on understanding wearers’ preferences in indirect interaction and on standardizing assessments in direct interaction.
A framework of two interaction types
In this framework, the goal is to achieve system acceptance and trust by improving the usefulness of the system. Acceptance is associated with the successful introduction of, and intention to use, a technology [29]. Trust can be defined as the attitude that a system will help users achieve their goals in situations of uncertainty and vulnerability [37]. Both acceptance and trust are strongly related to users’ actual usage of the system. Usefulness refers to whether users’ goals can be attained through using the system, and it has been proposed as a key contributing factor to system acceptance in models such as the Technology Acceptance Model [38] and Nielsen’s Model of Attributes of System Acceptability [29]. Nielsen’s model further breaks usefulness down into two equally important components: utility and usability. Usability consists of learnability, efficiency, memorability, use error, and satisfaction [29]. Meeting these requirements means making both types of interaction (i.e., direct interaction in locomotion and indirect interaction through UCIs) easier to learn and remember and less error-prone, as well as making LLPs more efficient and pleasant to use. As a result, the wearer will be more likely to have a positive attitude towards LLPs, which will help to ensure acceptance and trust.
To meet the five usability requirements, we need to consider human factors elements during the design process [29]. According to the classic model of the human-machine system [32], the elements influencing human-machine interaction come from three aspects: the human, the machine, and the task environment. Adopting this model, we organize the elements relevant to wearer-LLP interaction into three key aspects: wearer, device, and task (see Fig. 2, left panel).
Wearer elements are the human characteristics that may influence interaction quality, such as cognitive level and prior experience with LLPs. Although elements in the wearer aspect cannot be altered easily, they capture individual differences and determine how broadly a design applies. Therefore, designers are recommended to consider wearer elements through inclusive designs, and researchers should assess and report these characteristics as the context of their findings to allow appropriate interpretation. Device elements are characteristics related to the usability of the device, which can be altered by interface design. In direct interaction, the device we focus on is the LLP; in indirect interaction, we focus on UCIs. Task elements are the contexts describing under what situations and how the device is used, which depend on the wearer’s goals. To achieve high usability, the interface design of an intelligent LLP needs to be tested under as many of the contexts in which the wearer will use the LLP as possible to ensure adoption and long-term adherence.
In this prospective discussion, we only highlight the new elements that emerge as the new technologies in intelligent LLPs develop. These new elements include wearer elements (e.g., cognitive function, prior experience), device elements (e.g., transparency of an intelligent system), and task elements (e.g., walking environment). Other elements, such as the weight of an LLP, which has traditionally been explored in prior research, are not considered in this discussion, although they remain important for LLPs. Although we introduce the individual elements separately for each interaction type, the effects of these elements are not isolated from each other. The three aspects interact to affect how wearers feel and perform when using intelligent LLPs. This means that the effect of a change in a device element may be specific to particular wearer characteristics and task conditions. Therefore, a good design should consider, and studies should report, elements from all three aspects.
Fig. 2 Proposed human factors framework for two types of wearer-LLP interaction. The framework illustrates the new elements affecting the interaction and the requirements for system acceptance in intelligent LLPs. The left panel is adapted from the model of the human-machine system [32] and the right panel is adapted from Nielsen’s Model of Attributes of System Acceptability [29]. Elements affecting utility can also be organized into the wearer, device, and task aspects. Given that usability is less understood than utility, we focus on highlighting the elements influencing usability in this prospective discussion
Elements to be considered for direct interaction in locomotion
To achieve ultimate acceptance of emerging technologies developed to enhance direct interaction during locomotion, we need to consider elements that become more crucial to the direct wearer-LLP interaction as the automation level of the intelligent LLP increases. In this section, we review elements that may contribute to these outcomes from three aspects: wearer, device, and task (see Fig. 3, left panel). Some elements have emerged with the development of intelligent LLP technologies, such as system transparency of the device and prior knowledge of the wearer. Elements such as executive function have been examined before, but they are more important now that there is more information to learn and process at the same time. Task elements have been studied extensively with traditional LLPs, but with the new dynamics in the interaction, previous results may not generalize and the influence of these elements may need to be re-examined.
Fig. 3 Examples of elements affecting direct wearer-LLP interaction in locomotion (left) and indirect interaction through a UCI (right)
Wearer elements
Example wearer elements include wearers’ executive functioning and prior knowledge. Executive function is a domain of cognitive functioning that supports goal-directed behavior and control of attention, and it affects the cognitive resources available to manage task demands. Executive function plays a crucial role in determining how safely the wearer can operate an LLP, the wearer’s mobility performance, as well as rehabilitation outcomes after amputation (for a review, see [39]). For example, O’Neil and Evans [40] examined post-amputation rehabilitation before limb fitting and followed up at six months; they found that poorer executive function was associated with fewer hours of LLP use and worse mobility outcomes. Similarly, Hunter et al. [41] found that lower executive function scores at discharge from rehabilitation programs were associated with smaller gains in gait velocity and functional mobility. Executive function declines with age and can be impaired as a result of chronic conditions. Considering that about 75% of lower extremity amputations occur in patients older than 65 years of age [42], possible age-related cognitive decline could place a high demand on these wearers.
Another wearer element is prior experience. Prior experience shapes the wearer’s understanding of, and expectations towards, a new technology, which in turn influences acceptance of the technology [43, 44]. For example, if a wearer has an unpleasant experience with a new intelligent LLP, such as falling, they may be less likely to trust and fully engage with the technology for a long time. In addition, as traditional non-intelligent LLPs are still often used as the initial prosthetic legs for new amputees, many wearers encounter emerging technologies with knowledge and expectations of LLPs formed through prior experience with non-intelligent devices. A relevant question is whether prior experience with non-intelligent devices transfers positively or negatively to their understanding and expectations of intelligent LLPs. Understanding how wearers’ prior experience affects trust and expectations can guide the design of intelligent devices and improve their usage and safety, for example, by clarifying how trust can be restored after an unpleasant experience and how a wearer’s trust evolves with increasing experience with a device. Similar questions have been asked in the development of other intelligent systems [45,46,47,48], and LLPs will be no exception in the need to address them.
Device elements
Transparency refers to the idea that wearers should be informed of the current status and planned actions of the LLP [49]. With a traditional LLP, it is difficult for a wearer to detect and predict the motion of the LLP because the wearer lacks accurate and reliable somatosensation, such as joint angle or joint angular velocity, from the residual limb to describe the status of the LLP. In addition, wearers are unable to acquire real-time information about the LLP’s interaction with the environment, such as contact points and the magnitude of the ground contact force, to handle unexpected disturbances [50]. The development of new technologies, such as afferent neural-machine interfaces, can provide information to promote system transparency and keep wearers updated on LLP status and interactions with the external environment [18].
In addition to restoring sensory information to inform wearers about the current status, another aspect of transparency is informing wearers about upcoming actions. Newly developed technologies allow LLPs to automatically switch control based on recognized locomotion modes (e.g., [17, 18]), while wearers are often unaware of what the LLP is planning to do. If the locomotion mode changes unexpectedly, or the locomotion mode recognition is faulty and mismatches the wearer’s intention [51, 52], it could disturb the wearer’s task performance and negatively affect trust in and acceptance of the technology. In this case, providing transparency to support the wearer’s understanding and prediction of the LLP’s upcoming motion can be beneficial.
Task elements
Task elements can affect walking by imposing high task demands. One source of demand is the environment. For LLP wearers, walking on different surfaces, such as uneven terrain, stairs, and slopes, or walking in a dynamic environment is considerably more challenging than walking on level ground [53]. Not only do the needs for biomechanical support vary across terrain conditions [54], but, from a cognitive perspective, amputees wearing LLPs are less stable and more likely to fall when crossing or avoiding an unexpected obstacle than non-disabled individuals [55]. Therefore, it is reasonable to speculate that the demand of the walking task depends heavily on the environment in which the task is carried out. Considering the effects of different terrains and of environmental dynamics on cognitive and physical demands would benefit the development of emerging LLP technologies.
Furthermore, in everyday activities, wearers often carry out concurrent tasks, such as looking for a coffee shop while walking. This additional task can heighten task demand and impair walking performance in amputees [56]. In the example of navigating while walking, people perform two concurrent tasks: walking, which requires integrating basic visual and sensory feedback and controlling physical movements, and navigation, which also involves significant mental effort. This situation is common in everyday life, such as crossing roads while watching for traffic, or walking while talking or texting on the phone. Compared to non-disabled individuals, amputees wearing LLPs experience a more adverse impact on gait performance when performing a concurrent task while walking [57]. Therefore, another element to consider when developing effective emerging technologies is the presence of concurrent tasks in daily walking. By reducing the cognitive demands of operating LLPs, wearers can free up cognitive resources to perform concurrent tasks while walking.
Elements to be considered for indirect interaction to define device control
For newly developed technologies used to adjust device control (e.g., UCIs), acceptance may require the technology to provide the desired adjustments while remaining easy to use and learn, even for people without prior knowledge. A wearer would most likely perform adjustments to the settings of a device in a stable environment, such as at home, so the interaction is less susceptible to the surroundings. Therefore, task elements are not considered for this interaction. Nevertheless, wearer and device elements remain important for the interaction between the wearer and the LLP.
Wearer elements
One wearer element that affects the interface interaction is domain-specific knowledge of tuning. Experts have far more elaborate declarative domain-specific knowledge than novices [58]. As a result, experts are more likely to perceive holistic, meaningful patterns from an interface, whereas novices may focus on individual pieces of information from the same interface [59]. An expert such as a prosthetist may understand the relationship between a control setting and the dynamic properties of the intelligent LLP, while a novice wearer may have to figure out this relationship through trial and error. Therefore, the design of an interface for novices needs to consider the minimal knowledge a novice has and support exploration and recovery from errors. For example, some terminology commonly used by clinicians can be translated into lay terms with effective visualizations.
Another user element that affects the interaction is