Situated language processing: 
Every day, our cognitive system processes a massive stream of multi-modal information with apparent ease. Cross-modal mechanisms allow this information to be shared, integrated, and efficiently utilized to support our daily tasks.
Within this broad topic, my research focuses on the interaction between vision and language (both production and comprehension) in tasks that demand their synchronous processing.
– In situated language production, we demonstrate, for example, that: (a) descriptions of objects situated in photo-realistic scenes are coupled with the concurrently observed eye-movement responses: language and vision are mutually predictive (Coco and Keller, 2012, CogSci); (b) sentence processing draws on bottom-up features of the objects and on contextual knowledge about the scene to select targets to name (Coco et al., 2014, QJEP); and (c) the density of visual information in a scene and the semantic features of the described objects mediate the syntactic composition of the resulting descriptions and, more crucially, the complexity of the associated attentional responses at the phrasal level (Coco and Keller, 2015a, CognProcess).
– In situated language understanding, we show that: (a) the sentence processor uses visual saliency to anticipate likely arguments and exploits prosodic cues, such as intonational breaks, to resolve ambiguities during language understanding (Coco and Keller, 2015b, QJEP); and (b) contextual information is used to anticipate verbal arguments in naturalistic scenes, even when the object-argument of the verb is not depicted, and even when scene information must be recalled from memory (Coco et al., 2015, in press, CogSci).

Alignment mechanisms during cooperative tasks:
Humans live in a highly interactive context that demands frequent exchange of information with conspecifics, which in turn requires subtle temporal calibration of linguistic and non-linguistic responses. Mechanisms of alignment, such as imitation, entrainment, or priming, are fundamental to achieving shared understanding and, consequently, to optimizing joint action during cooperative tasks.
In this research, we examine the conditions under which alignment is a positive predictor of task success. We have shown, for example, that: (a) interlocutors who could not exchange feedback during a collaborative search task aligned their gaze more, and the highest gaze alignment corresponded to the lowest task performance, contrary to the predictions of most models of dialogue (Coco et al., 2015, JML; see the sketch below for how gaze alignment can be quantified); and (b) the time-course of bodily coordination during a cooperative block-stacking task differs between head and wrist and is influenced by task role (e.g., leader vs. follower) (Coco et al., under review, TAMD).
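One common way to quantify gaze alignment in this literature is categorical cross-recurrence: the proportion of time points at which two interlocutors fixate the same object, computed across a range of temporal lags. The sketch below is a minimal illustration under that assumption (the function name and parameters are hypothetical; it is not the exact analysis pipeline of the JML paper).

```python
import numpy as np

def gaze_cross_recurrence(gaze_a, gaze_b, max_lag=30):
    """Proportion of samples at which two gaze streams fixate the same
    object, for each lag in [-max_lag, max_lag]. Assumes both streams
    are categorical object labels sampled at the same rate, with
    length greater than max_lag."""
    n = min(len(gaze_a), len(gaze_b))
    a = np.asarray(gaze_a[:n])
    b = np.asarray(gaze_b[:n])
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            # compare a at time t with b at time t + lag (a leads)
            matches = a[:n - lag] == b[lag:]
        else:
            # compare b at time t with a at time t - lag (b leads)
            matches = a[-lag:] == b[:n + lag]
        profile[lag] = float(matches.mean())
    return profile

# e.g. two short label streams sampled at the same rate:
a = ["cup", "cup", "pan", "sink", "sink"]
b = ["cup", "pan", "pan", "pan", "sink"]
print(gaze_cross_recurrence(a, b, max_lag=2))
```

A peak in the profile near lag 0 indicates tight moment-to-moment gaze coupling, and the sign of the lag at the peak indicates which interlocutor tends to lead.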

The role of task and memory in visual attention:
Theories of active visual perception emphasize the important role that task goals and memory mechanisms play in the allocation of visual attention during scene understanding.
In this context, our work has demonstrated that: (a) eye-movement statistics can be used to train classification algorithms that predict the associated task with high accuracy (Coco and Keller, 2014, JoV; see the sketch below), and (b) in a visual search task, older participants primed with contextual information perform significantly worse than younger participants, especially when the target object is inconsistent with the primed context, suggesting that reliance on mechanisms of contextual expectation becomes stronger as we age (Borges and Coco, 2015, EAPCogSci proceedings).
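A minimal sketch of this type of task-classification analysis, assuming one row of aggregate eye-movement features per trial (the placeholder data, feature set, and choice of a random-forest classifier are illustrative assumptions, not the actual pipeline of the JoV paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per trial, columns standing in for
# aggregate eye-movement statistics (e.g., mean fixation duration,
# fixation count, mean saccade amplitude, scan-path length).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))        # 300 trials, 4 features
y = rng.integers(0, 3, size=300)     # one of 3 task labels per trial

# Cross-validated accuracy of predicting the task from the features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With three balanced task labels, chance accuracy is about 0.33, so cross-validated accuracy well above that level indicates that the task leaves a recoverable signature in the eye-movement record.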

Expectancy mechanisms during cross-modal semantic integration in verification tasks:
The cognitive architecture routinely relies on expectancy mechanisms to evaluate the plausibility of stimuli and establish their sequential congruency.
Here, we demonstrate: (a) that expectancy mechanisms of stimulus plausibility and congruency interact in shaping action-dynamics responses (Coco and Duran, under review, PBR; see the sketch below), and (b) that congruency and plausibility have distinct neural signatures, with congruency effects emerging earlier than plausibility effects (Coco et al., under review, Neuropsychologia).
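Action-dynamics paradigms typically record continuous response trajectories, for example computer-mouse movements toward a response option; a standard index of response competition is the trajectory's maximum deviation from the direct start-to-end path. The sketch below is a minimal illustration under that assumption (hypothetical function and data; not necessarily the measure used in the PBR paper).

```python
import numpy as np

def max_deviation(traj):
    """Maximum perpendicular distance of a 2-D trajectory from the
    straight line joining its first and last points, a common index
    of competitor attraction in mouse-tracking studies."""
    traj = np.asarray(traj, dtype=float)
    start, end = traj[0], traj[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    # 2-D cross product of (end - start) with (point - start) gives the
    # signed perpendicular distance of each point from the direct path.
    dists = np.abs(dx * (traj[:, 1] - start[1])
                   - dy * (traj[:, 0] - start[0])) / norm
    return float(dists.max())

# e.g. a trajectory pulled toward a competitor before settling:
print(max_deviation([(0, 0), (0.2, 0.6), (0.6, 0.9), (1, 1)]))
```

Larger maximum deviation is typically read as stronger momentary attraction toward the competing response, which is how plausibility and congruency effects can surface in the movement record itself.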

Syntactic priming in adults and children:
Syntactic priming, i.e., the tendency to repeat each other's syntactic structures, is a very robust phenomenon in language, and it has been used extensively to probe the representational structure and compositional mechanisms of linguistic information processing.
In this context, we demonstrate that: (a) priming can boost the production rate of subject relatives, especially in the short term, even in children with specific language impairment (Garraffa et al., 2015, LLD), and (b) syntactic priming adapts to immediate experience: repeated exposure to a structure modulates anticipatory structural expectations in an immediately subsequent sentence (Fernandes et al., under review, JEP:LMC).