Current Projects
The Attentional Curve of Forgetting
Early Discrimination of Dementia Types in Synchronised Brain and Eye Activity
Dementia with Lewy bodies (DLB) is the second most common type of degenerative dementia after Alzheimer’s disease, yet it remains poorly understood and is frequently misdiagnosed.
Enhancing Digital Neuropsychological Assessments
Healthy ageing has become a critical challenge with the global rise in life expectancy. While traditional neuropsychological tests are valuable, they are often administered too late and lack temporally sensitive measures, e.g., eye movements, which can reveal subtler markers of dementia at an early stage. This project combines empirical research with software engineering to revolutionise how we monitor and assist healthy ageing. By building on digitalised neuropsychological tests, we are developing a new system that incorporates fine-grained data, such as eye movements and motor responses, tracks individuals over time, and uses machine learning to create personalised, adaptive assessments and training routines. The key objective of this project is to develop a robust, scalable and cost-effective system that capitalises on temporally sensitive eye movement responses, harvested through webcams, to improve the early detection of prodromal dementia and allow earlier interventions to slow or delay the progression of neurodegeneration. This project has been funded by Sapienza, Università di Roma, with grant agreement (RG123188B3F9299C).
Past Projects and Research Lines
Understanding Long-term Memory Mechanisms of Naturalistic Visual Information in Healthy and Neurodegenerative Ageing
The format of memory representations for visual information has long been of central interest in vision science. A key debate concerns the role played by semantic knowledge in scaffolding such representations, and how it interfaces with the perceptual (sensory) information of independent episodic instances stored in memory. Moreover, as we age, the way we form, maintain and access these representations can change significantly, so it is critical to assess which mechanisms are preserved (or impaired) in later life, especially in individuals who may be experiencing early signs of neurodegeneration (e.g., mild cognitive impairment, MCI). By combining experimentation and computational modelling, this project demonstrated that eye movements can reveal the effects of semantic interference during the learning of naturalistic scenes (Mikhailova et al., 2021, Psychonomic Bulletin & Review), and that this mechanism still operates in healthy older individuals as well as in people suffering from MCI (Coco et al., 2021, Neuropsychology). Moreover, we uncovered that semantic interference does not automatically imply a detriment to memory representations, as our ability to discriminate between similar items may benefit from it (Delhaye et al., 2024, Memory & Cognition). Through computational analyses of the representational layers of visual information, using algorithms based on low- and high-level visual features as well as more complex deep neural network models, we distinguished between perceptual and semantic interference in similarity judgements (Mikhailova et al., 2022, Lecture Notes in Computer Science) and recognition memory (Mikhailova et al., 2024, Cognitive Processing), while developing predictive models of MCI with results comparable to the state of the art (Rocha et al., 2025, to appear).
The achievements of this project, funded by Fundação para a Ciência e a Tecnologia (PT) and the EU through the European Regional Development Fund (ERDF) with grant agreement (PTDC/PSI-ESP/30958/2017), collectively advance our knowledge of visual memory, its degradation, and the potential for new diagnostic markers, contributing significantly to the fields of cognitive neuroscience and neuropsychology.
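The distinction between perceptual and semantic similarity drawn above can be illustrated with a minimal sketch: comparing two images via the cosine similarity of their low-level feature vectors versus their deep-network embeddings. The vectors and values below are invented purely for illustration, not data from the project.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1 = identical direction)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical feature vectors for two scene images:
# low-level descriptors (e.g., colour/edge statistics) vs. deep-network embeddings
img1_lowlevel = [0.2, 0.8, 0.1]
img2_lowlevel = [0.25, 0.75, 0.15]
img1_embedding = [0.9, 0.1, 0.4]
img2_embedding = [0.1, 0.9, 0.3]

perceptual_sim = cosine_similarity(img1_lowlevel, img2_lowlevel)
semantic_sim = cosine_similarity(img1_embedding, img2_embedding)
```

In this toy case the two images look alike (high perceptual similarity) but carry different semantic content (lower embedding similarity); dissociations of this kind are what allow the two sources of interference to be teased apart.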
Semantic Influences on Gaze Control and Short-Term Visual Memory in Healthy and Neurodegenerative Ageing
How high-level semantic knowledge versus low-level visual features direct our gaze, and particularly what information can be processed outside the direct line of sight (in extra-foveal vision), has been extensively debated in vision science. Furthermore, how these attentional and memory-binding processes are maintained or decline with age has often been neglected, despite their critical role in identifying early signs of neurodegeneration. By combining eye-tracking, electrophysiology and computational modelling, this project investigated how object and scene semantics shape attention and memory in healthy younger and older adults, as well as in individuals with Mild Cognitive Impairment (MCI). We demonstrated that the meaning of objects is processed in extra-foveal vision and can guide attention from the very first eye movement (Cimminella et al., 2020, Attention, Perception & Psychophysics), both in healthy older adults (Borges et al., 2020, Aging, Neuropsychology and Cognition) and in people suffering from Alzheimer’s disease (Cimminella et al., 2022, Journal of Geriatric Psychiatry and Neurology), challenging models that assume extra-foveal guidance is based purely on low-level information. Further, we were, to our knowledge, the first to show that an object’s consistency with its surrounding scene is detected in brain responses time-locked to the fixation preceding foveation during natural vision (Coco et al., 2020, Journal of Cognitive Neuroscience). When examining these processes across the lifespan, we found that while healthy older adults and individuals with MCI can both use scene context to guide attention and form successful short-term representations (D’Innocenzo et al., 2022, Scientific Reports; Allegretti et al., 2025, Cortex), they employ compensatory strategies, such as longer fixations, to form stable memories (Coco et al., 2023, Neuropsychology).
These findings highlight a close link between overt attention and memory formation in naturalistic settings.
Action-Dynamics and Neural Correlates of Cross-Modal Integration
A fundamental question of human cognition is how it integrates information from different senses, such as vision and language, to build coherent contextual representations of the surrounding world. The project explored what happens when we encounter a “context violation”, for example, when the information depicted in a visual scene does not match what a sentence refers to. Our aim was to understand how the cognitive system processes and resolves unexpected information, such as implausible stimuli and, consequently, incongruent messages. We demonstrated that the interaction of expectancy mechanisms manifests in action-dynamics responses (Coco and Duran, 2017, Psychonomic Bulletin & Review), and that their temporal dynamics are differentially expressed in neural signatures, with messages evaluated first for their sequential congruency and later for their semantic plausibility (Coco et al., 2017, Neuropsychologia). This research, funded by Fundação para a Ciência e a Tecnologia with grant agreement (SFRH/BPD/88374/2012), provided a new approach to studying cross-modal integration, offering a window into how our minds make sense of the world and the context we experience.
Interpersonal Synchrony: Towards a Multidimensional Understanding of Collective Cognitive Dynamics
Humans live in a highly interactive environment, in which people coordinate their behaviours to exchange information with conspecifics through temporally sensitive mechanisms of interpersonal synchrony in linguistic and non-linguistic responses. Alignment mechanisms, such as imitation, entrainment or priming, are fundamental to achieving shared understanding and, consequently, to optimising joint action during cooperative tasks. By investigating the role played by feedback and alignment in achieving successful collaborations, we demonstrated that dyads unable to exchange feedback showed increased gaze alignment, but this actually led to a decrease in performance during a joint visual search task, suggesting that “too much” alignment can be detrimental if it prevents a diversified search strategy (Coco et al., 2018, Topics in Cognitive Science). We also demonstrated that motor coordination between partners engaged in a cooperative block-stacking task differs across body parts (e.g., head vs. wrist) and is influenced by task role (e.g., leader vs. follower; Coco et al., 2016, IEEE Transactions on Cognitive and Developmental Systems). Moreover, we revealed that attentional coordination between a learner and a demonstrator’s actions is a key predictor of learning success, supporting the idea of “hand-eye coordination” as an alternative route to joint attention and challenging the more conventional view that gaze-following is the primary mechanism (Pagnotta et al., 2022, Cognition). Finally, as interpersonal dynamics generate complex responses, our work has contributed to advancing the statistical methodologies available by implementing software packages that provide integrated analyses of multidimensional time series (Coco et al., 2021, The R Journal).
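The kind of synchrony measure such packages compute can be sketched minimally as a cross-recurrence rate: the proportion of time points at which two behavioural series share the same state. This toy sketch (not the project's actual software, which is implemented in R) uses invented gaze sequences coded as fixated regions of interest.

```python
import numpy as np

def cross_recurrence_rate(x, y):
    """Fraction of all (i, j) pairs at which series x and y share a state."""
    x, y = np.asarray(x), np.asarray(y)
    # Cross-recurrence matrix: R[i, j] = True when x at time i matches y at time j
    R = x[:, None] == y[None, :]
    return float(R.mean())

# Hypothetical gaze sequences of two partners, coded as regions of interest
partner_a = ["face", "hand", "block", "block"]
partner_b = ["face", "block", "block", "hand"]

rr = cross_recurrence_rate(partner_a, partner_b)
```

Richer analyses additionally quantify the structure of the recurrence matrix (e.g., diagonal lines indicating lagged coordination), but the recurrence rate above is the basic ingredient.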


