Object weight can be rapidly predicted, with low cognitive load, by exploiting learned associations between the weights and locations of objects

Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing the slower and more demanding processing of visual properties to predict weight. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial, a randomly chosen object translated onto the participant's hand, and the participant had to anticipate the object's weight by generating an equivalent upward force. Across conditions we controlled whether the visual appearance and/or location of the objects was informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights, participants used this information to achieve faster prediction than when prediction was based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering the working memory demands involved in predicting weight from an object's visual properties.
