Number feature distortion modulates cue-based retrieval in reading

In sentence comprehension, what are the cognitive constraints that determine number agreement computation? Two broad classes of theoretical proposals are: (i) representation distortion accounts, which assume that the number feature on the subject noun is probabilistically overwritten by the number feature on a non-subject noun, leading to a non-veridical memory trace of the subject noun; and (ii) the cue-based retrieval account, a general account of dependency completion which assumes that the features on the subject noun remain intact, and that processing difficulty is solely a function of the memory constraints on dependency completion. However, neither class of model accounts for the full spectrum of number agreement patterns observed in published studies. Using 17 benchmark datasets on number agreement from four languages, we implement seven computational models: three variants of representation distortion, two cue-based retrieval models, and two hybrid models that assume both representation distortion and retrieval. Quantitative model comparison shows that the best fit is achieved by a hybrid model that assumes both feature distortion (specifically, feature percolation) and cue-based retrieval; numerically, the second-best fit was achieved by a distortion-based model of number attraction that assumes a grammaticality bias during reading. More broadly, this work furnishes comprehensive evidence that cue-based retrieval theory, which aims to be a general account of dependency completion, needs to incorporate a feature distortion process.
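To make the contrast between the two mechanisms concrete, the following is a minimal toy sketch (not the paper's implementation; all function names, parameter values, and feature encodings are illustrative assumptions). Feature percolation is modeled as a probabilistic overwrite of the subject's number feature by the attractor's; cue-based retrieval is modeled as picking the candidate with the highest count of matching cue features, with ties broken at random.

```python
import random

def percolate(subject_num, attractor_num, p_distort=0.15):
    """Representation distortion (feature percolation), as a toy model:
    with probability p_distort, the attractor's number feature
    overwrites the subject's, yielding a non-veridical trace.
    The rate p_distort is an illustrative assumption."""
    return attractor_num if random.random() < p_distort else subject_num

def retrieve(cues, candidates):
    """Cue-based retrieval, as a toy model: score each candidate by
    how many retrieval cues its features match; return the index of
    the best-matching candidate, breaking ties at random."""
    scores = [sum(feats.get(k) == v for k, v in cues.items())
              for feats in candidates]
    best = max(scores)
    return random.choice([i for i, s in enumerate(scores) if s == best])

# Toy agreement-attraction configuration ("The key to the cabinets ..."):
subject = {"subject": True, "number": "sg"}
attractor = {"subject": False, "number": "pl"}
```

In this sketch, attraction arises in two distinct ways: under distortion, `percolate` occasionally hands the verb a plural subject feature outright; under retrieval, a plural verb's cues `{"subject": True, "number": "pl"}` match the singular subject and the plural attractor equally well (one feature each), so the attractor is sometimes misretrieved. A hybrid model simply runs both: distort the subject's trace first, then retrieve with the verb's cues.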
