Modelling cognitive flexibility with deep neural networks

Current Opinion in Behavioral Sciences, Volume 57, June 2024, 101361

Highlights

We review recent advances in endowing deep RL systems with cognitive flexibility.

Dual-process RL unifies habit-driven System I and deliberative System II policies.

Deep meta-RL harnesses in-context computations to adapt its policy.

Deep RL models of meta-control adapt behaviour based on the agent's current efficacy.

Neural networks trained with deep reinforcement learning can perform many complex tasks at levels comparable to humans. However, unlike people, neural networks converge to a fixed solution during optimisation, limiting their ability to adapt to new challenges. In this opinion, we highlight three recent advances that allow deep neural networks to serve as models of human cognitive flexibility. In the first, networks are trained in ways that allow them to learn complementary ‘habit’-based and ‘goal’-based policies. In the second, flexibility is ‘meta-learned’ during pre-training on large and diverse data, allowing the network to adapt ‘in context’ to novel inputs. Finally, we discuss work in which deep networks are meta-trained to adapt their behaviour to the level of control they have over the environment. We conclude by discussing new insights into cognitive flexibility obtained from training large generative models with reinforcement learning from human feedback.
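To make the first of these ideas concrete, the sketch below shows one minimal, illustrative way to combine a cached ‘habit’ controller with a deliberative ‘goal’ controller on a reversal-learning bandit. It is not the implementation used in any of the work reviewed here: the class names (HabitSystem, GoalSystem), the bandit task, and the uncertainty-based arbitration rule are simplifying assumptions introduced purely for illustration.

```python
# Minimal, illustrative sketch (not the authors' implementation) of dual-process
# arbitration between a cached 'habit' policy and a model-based 'goal' policy
# on a two-armed bandit whose better arm switches halfway through the session.
import numpy as np

rng = np.random.default_rng(0)


class HabitSystem:
    """Model-free Q-learner: a slow, cached 'habit' policy."""

    def __init__(self, n_actions, lr=0.05):
        self.q = np.zeros(n_actions)
        self.lr = lr
        self.uncertainty = 1.0  # running average of squared prediction errors

    def update(self, action, reward):
        pe = reward - self.q[action]
        self.q[action] += self.lr * pe
        self.uncertainty += 0.1 * (pe ** 2 - self.uncertainty)


class GoalSystem:
    """Model-based 'goal' policy: Beta posteriors over Bernoulli reward rates."""

    def __init__(self, n_actions):
        self.alpha = np.ones(n_actions)
        self.beta = np.ones(n_actions)

    def update(self, action, reward):
        self.alpha[action] += reward
        self.beta[action] += 1 - reward

    @property
    def q(self):
        return self.alpha / (self.alpha + self.beta)

    @property
    def uncertainty(self):
        # mean posterior variance of the estimated reward probabilities
        n = self.alpha + self.beta
        return float(np.mean(self.alpha * self.beta / (n ** 2 * (n + 1))))


def softmax(values, temp=0.2):
    z = (values - values.max()) / temp
    p = np.exp(z)
    return p / p.sum()


def bandit_reward(action, t, n_trials):
    """Two-armed bandit whose better arm reverses at the midpoint."""
    good_arm = 0 if t < n_trials // 2 else 1
    p_reward = 0.8 if action == good_arm else 0.2
    return int(rng.random() < p_reward)


n_trials, n_actions = 400, 2
habit, goal = HabitSystem(n_actions), GoalSystem(n_actions)

for t in range(n_trials):
    # Arbitration: each controller is weighted by the other's unreliability,
    # so the less uncertain system has more say over the mixed policy.
    w_goal = habit.uncertainty / (habit.uncertainty + goal.uncertainty + 1e-8)
    q_mix = w_goal * goal.q + (1 - w_goal) * habit.q
    action = rng.choice(n_actions, p=softmax(q_mix))
    r = bandit_reward(action, t, n_trials)
    habit.update(action, r)
    goal.update(action, r)

print("habit Q-values:          ", np.round(habit.q, 2))
print("goal-directed estimates: ", np.round(goal.q, 2))
```

The point of the sketch is the arbitration step: rather than committing to either controller, the mixed policy weights each system by the other's unreliability, which is one common way of operationalising the trade-off between habitual and deliberative control; the specific weighting rule here is an assumption, not a claim about any particular model in the literature.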

© 2024 The Author(s). Published by Elsevier Ltd.
