Emerging biological insights enabled by high-resolution 3D motion data: promises, perspectives and pitfalls

In the vast majority of cases, 3D information is measured by combining two or more 2D perspectives. Although collecting images from more than one view enabled the first 3D quantifications of motion, views were often taken asynchronously owing to technical limitations. For example, work by Jenkins et al. (1988) exploring the function of the pectoral girdle in flight combined a dorsal view and a latero-ventral view of a starling flying in a wind tunnel. Thanks to the cyclical nature of the avian wingbeat, the asynchronous data could be interpreted separately and still provided a detailed description of complex 3D motions, such as the furcula movements during flight. Similarly, the repetitive walking cycle of a quail and the cyclical paddling motions of a ringed teal allowed a frontal view to be reconstructed through the temporal synchronisation of the lateral and dorsoventral views (e.g. Abourachid et al., 2011; Provini et al., 2012b). The stereotypic, cyclical and repetitive nature of locomotor movements fits these reconstruction methods perfectly. However, many natural motions are not predictable, repetitive cycles, for a myriad of reasons: they include bursts of performance, isolated or brief behaviours, as well as variation in species age, abilities and health. Sometimes, what could be seen as a failure to record a clean movement turns out to be useful. For example, when Riede and Suthers (2009) attempted to quantify the volume of the oropharyngeal–esophageal cavity (OEC) in a white-throated sparrow singing spontaneously in front of an X-ray camera, a sudden, unexpected neck rotation during the production of a similar note complemented the pure lateral view and provided indispensable information for estimating the OEC volume.

To obtain synchronous views of the same movement, inclined mirrors were often used to split a single view into two. This technique was used with light-based video cameras as a complement to single-plane X-ray acquisitions, for example to explore the respiration, eating and spitting motions of three-spined sticklebacks (Gasterosteus aculeatus) (Anker, 1977) or the locomotion of the lizard Sceloporus clarkii (Reilly and Delancey, 1997). Early stereophotography, combining two viewpoints, was used to quantify the wake of flying jackdaws (Spedding, 1986) and became a classical method for obtaining 3D data (e.g. Ikeya et al., 2022). The idea of multiplying views to obtain several perspectives of the same object was pushed one step further with the design of advanced tracking devices adapted to motion capture in natural environments (e.g. de Margerie et al., 2015; Décamps et al., 2017).

Extrapolating 3D data from two or more 2D viewpoints is notably different from the direct registration of 3D coordinates. Capturing multiple views synchronously has become easier over time, but combining those views into 3D information still requires significant effort. Dealing with calibration and distortion can be challenging, especially outside laboratory conditions. Yet these steps are indispensable to fully leverage the potential of 3D data, especially to reconstruct the six degrees of freedom of a structure of interest.
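
To make the calibration step concrete, the following is a minimal sketch (in Python with NumPy) of the standard 11-coefficient direct linear transformation calibration: given at least six non-coplanar points whose 3D positions on a calibration object are known, together with their 2D pixel positions in one camera view, the coefficients fall out of a linear least-squares fit. The function name and data layout are illustrative rather than taken from any of the cited studies, and a real workflow would also correct for lens distortion.

```python
import numpy as np

def calibrate_dlt(xyz, uv):
    """Estimate the 11 DLT coefficients for one camera (illustrative sketch).

    xyz : (n, 3) array of known 3D coordinates of calibration points
    uv  : (n, 2) array of their 2D pixel coordinates in this camera view
    Requires n >= 6 non-coplanar points.
    """
    n = xyz.shape[0]
    A = np.zeros((2 * n, 11))
    b = np.zeros(2 * n)
    for i, ((x, y, z), (u, v)) in enumerate(zip(xyz, uv)):
        # u = (L1 x + L2 y + L3 z + L4) / (L9 x + L10 y + L11 z + 1)
        A[2 * i]     = [x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]
        # v = (L5 x + L6 y + L7 z + L8) / (L9 x + L10 y + L11 z + 1)
        A[2 * i + 1] = [0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]
        b[2 * i], b[2 * i + 1] = u, v
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # L1..L11 for this camera
```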

With technological advances, it has become easier to collect and process high-resolution 3D data with small-pixel images (relative to the size of the organism), a high signal-to-noise ratio and enough markers to fully reconstruct a structure's shape and its motion in 3D. Many of these methods rely on tracking markers in each view to reconstruct the subject's 3D motion more rapidly. Depending on the imaging mode and equipment, these may include automatically tracked infrared-reflective (e.g. Pontzer et al., 2009; Warrick and Dial, 1998), radio-opaque (e.g. Brainerd et al., 2010) or active markers. Marker tracking and 3D motion reconstruction are achieved through 3D motion capture (see Moeslund et al., 2006 for a summary of methods applied to human motion) or, more generally, using direct linear transformation (DLT) to track any kind of marker manually or automatically (Hedrick, 2008). Open-source versions of the DLT software (see Hedrick, 2008; Jackson et al., 2016; Theriault et al., 2014) have facilitated a burst of new 3D datasets, and additional techniques are now moving beyond markers to reconstruct 3D motion directly from silhouettes (e.g. Fontaine et al., 2009) or with 3D temporal scanners that capture motion as a sequence of 3D meshes (Ruescas Nicolau et al., 2022).
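
As an illustration of the reconstruction step, the sketch below assumes the same 11-coefficient DLT convention as the calibration sketch above (the function name is ours, not from the cited software): each camera in which a marker was tracked contributes two linear equations, and the overdetermined system is solved by least squares to recover the marker's 3D position.

```python
import numpy as np

def reconstruct_dlt(coeffs_per_cam, uv_per_cam):
    """Triangulate one marker from its 2D positions in several calibrated views.

    coeffs_per_cam : list of (11,) DLT coefficient vectors, one per camera
    uv_per_cam     : list of (u, v) pixel coordinates, one per camera
    Requires the marker to be visible in at least two views.
    """
    rows, rhs = [], []
    for L, (u, v) in zip(coeffs_per_cam, uv_per_cam):
        # Rearranged projection equations, linear in the unknowns (x, y, z):
        # (L1 - u L9) x + (L2 - u L10) y + (L3 - u L11) z = u - L4
        rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        # (L5 - v L9) x + (L6 - v L10) y + (L7 - v L11) z = v - L8
        rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        rhs.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return xyz  # (x, y, z) in the calibration frame
```

With more than two views, the least-squares residual of this system also provides a useful diagnostic for calibration quality and tracking errors.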

Journal of Experimental Biology has been at the forefront of many of these breakthroughs in 3D kinematic analysis. Theriault et al. (2014) reported that in 2012, 70 papers, or 11% of Journal of Experimental Biology's published content that year, relied on videos to measure kinematics. By 2021, that share had increased to 14% (55 papers). Of those, 32 papers (8% of all papers, or 58% of the kinematics papers) reported three-dimensional kinematics (see McHenry and Hedrick, 2023, for more details). This paradigm shift in data collection has either allowed for new insights into old questions, sometimes leading us to update textbooks, or opened questions completely new to science. In the next section, we highlight three case studies illustrating these scientific processes.
