Overall, my research aims to understand how we use the ensemble of visual and vocal modalities available to us in order to communicate. How do we orchestrate face, head, body, hands, and speech together into one multimodal whole, and how do others make sense of this? How do we adapt this complex array of behaviors to different contexts? How is neurodiversity reflected in different styles of communication, and how does this impact social interaction?
To this end, I use motion tracking, acoustic analysis, qualitative coding, and virtual agents both to study what people do in naturalistic settings and to design experiments that test the hypotheses generated from studying this more unconstrained behavior.
I currently teach Language, Speech and Dialogue Processing in the Artificial Intelligence bachelor programme, and regularly co-organize (largely methods-oriented) workshops on multimodal communication, such as the Practical Approaches to Human Multimodal Behavior summer school in Nijmegen, and the Behavioral Dynamics in Social Interaction workshop in Krakow.
I currently supervise PhD candidate Anna Palmann, as well as Master's and Bachelor's students on projects relating to cognition, language, and AI.