The aim of this Research Topic, hosted by Frontiers in Psychology (Cognitive Science section) and Frontiers in Neurorobotics, is to present state-of-the-art research, whether theoretical, empirical, or computational, on open-ended development driven by intrinsic motivations. The Topic addresses questions such as: How do motivations drive learning? How are complex skills built up from a foundation of simpler competencies? What are the neural and computational bases of intrinsically motivated learning? What do intrinsic motivations contribute to wider cognition?
Autonomous development and lifelong open-ended learning are hallmarks of intelligence. Higher mammals, and especially humans, engage in activities that do not appear to directly serve the goals of survival, reproduction, or material advantage. Rather, a large part of their activity is intrinsically motivated: behavior driven by curiosity, play, interest in novel stimuli and surprising events, autonomous goal-setting, and the pleasure of acquiring new competencies. Such behavior allows the cumulative acquisition of knowledge and skills that can later be used to accomplish fitness-enhancing goals. Intrinsic motivations persist into adulthood, and in humans artistic creativity, scientific discovery, and subjective well-being owe much to them.
The study of intrinsically motivated behavior has a long history in psychological and ethological research, and it is now being reinvigorated by perspectives from neuroscience, artificial intelligence, and computer science. For example, recent neuroscientific research is discovering how neuromodulators such as dopamine and noradrenaline relate not only to extrinsic rewards but also to novel and surprising events; how brain areas such as the superior colliculus and the hippocampus are involved in the perception and processing of events, novel stimuli, and novel associations of stimuli; and how violations of predictions and expectations influence learning and motivation.
Computational approaches are characterizing the space of possible reinforcement learning algorithms and their augmentation by intrinsic reinforcements of different kinds. Research in robotics and machine learning is yielding systems with increasing autonomy and capacity for self-improvement: artificial systems with motivations similar to those of real organisms, capable of supporting prolonged autonomous learning. Computational research on intrinsic motivation is complemented by, and closely interacts with, research that aims to build hierarchical architectures capable of acquiring, storing, and exploiting the knowledge and skills gained through intrinsically motivated learning.
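To make the computational idea concrete, the sketch below (Python, with hypothetical names and a toy environment stub) shows one common way intrinsic reinforcements are folded into a standard reinforcement learning update: the reward driving a tabular Q-learning agent is the extrinsic reward plus a count-based novelty bonus that decays as states become familiar. It is a minimal, assumption-laden illustration of the general scheme, not the method of any particular contribution to this Topic.

```python
import random
from collections import defaultdict

# Minimal sketch (hypothetical names): tabular Q-learning in which the reward
# used for the update is the extrinsic reward plus an intrinsic novelty bonus
# that shrinks as a state is visited more often (count-based curiosity).

ALPHA, GAMMA, BETA = 0.1, 0.95, 1.0   # learning rate, discount, intrinsic weight
N_STATES, N_ACTIONS = 10, 2

q_table = defaultdict(float)          # (state, action) -> value estimate
visit_counts = defaultdict(int)       # state -> number of visits

def intrinsic_bonus(state):
    """Novelty bonus: large for rarely visited states, decaying with experience."""
    visit_counts[state] += 1
    return BETA / (visit_counts[state] ** 0.5)

def step_env(state, action):
    """Toy environment stub: random transition, sparse extrinsic reward."""
    next_state = random.randrange(N_STATES)
    extrinsic_reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, extrinsic_reward

def q_update(state, action, next_state, extrinsic_reward):
    """Standard Q-learning update driven by extrinsic + intrinsic reward."""
    total_reward = extrinsic_reward + intrinsic_bonus(next_state)
    best_next = max(q_table[(next_state, a)] for a in range(N_ACTIONS))
    td_target = total_reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])

state = 0
for _ in range(1000):
    action = random.randrange(N_ACTIONS)   # exploratory policy, kept simple here
    next_state, r_ext = step_env(state, action)
    q_update(state, action, next_state, r_ext)
    state = next_state
```

In this scheme the intrinsic term rewards visiting unfamiliar states even when extrinsic reward is absent; other variants in the literature derive the bonus from prediction error, surprise, or competence progress rather than visit counts.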
Now is an important moment in the study of intrinsically motivated open-ended development, requiring contributions and integration across many fields within the cognitive sciences. This Research Topic aims to advance this effort by welcoming work conducted with ethological, psychological, neuroscientific, and computational approaches, as well as research that cuts across disciplines and methods.