I am a senior assistant professor of Persuasive Communication & New Media Technologies at the Department of Communication Science. I am a born and raised communication scholar (BA, MA, PhD), with a lifelong fascination for people’s interpretations, perceptions, drives, goals, feelings, and meanings of life. Over the years, my empirical research has investigated how people respond to various forms of (persuasive) communication. I am eager to further refine concepts that are used in communication research, and, although I also conduct quantitative research, qualitative research is closest to my heart. A specific target group of interest in my research has been older adults, because life-span and generational theories strongly suggest age differences in responses to (persuasive) communication.
I serve as president of NeFCA (the Netherlands-Flanders Communication Association) and as secretary for the ICA interest group Human-Machine Communication.
In 2019, I shifted my research focus to human-machine communication, and in particular to users’ perceptions of their interactions with (AI-enabled) chatbots. This started with my intuition that with the increasing use of this type of AI-enabled communication we run the risk of losing something fundamentally human, for instance because communication becomes more superficial. I see chatbots and other AI-enabled communicators as “evocative objects” or “philosophical provocateurs” (following, among others, Turkle’s work; see van der Goot & Etzrodt, 2023) that elicit questions about what being human actually means. For me, a distinctive characteristic of us humans is that we can reflect on ourselves and “seek within” through introspection and, for example, meditation. From this place “within” we can connect with each other, in an embodied way, and be compassionate with each other. Obviously, we are very far from perfect in that. In that sense, the current hype around large language models such as ChatGPT and GPT-4 invites us not only to consider what we are losing when we outsource part of our communication to such language models, but also to consider how we as humans can use our own potential to the fullest, particularly by “seeking within” and truly embodying values such as compassion.
My empirical studies are often based on qualitative interviews and/or experiments. I am particularly interested in users’ social responses to chatbots, and I have investigated concepts such as source orientation (who or what do people think they are communicating with?), anthropomorphism, and social presence (van der Goot, 2022). One specific aspect of chatbots that I have looked into is disclosures, i.e., explicit statements that clarify to the user that they are interacting with a chatbot and not with a human (van der Goot et al., 2022). In terms of types of chatbots, I have looked into customer service chatbots (e.g., van der Goot et al., 2021) and into how users respond when the output of a language model is adapted to their age group (e.g., Jansen et al., 2022). In my current studies I turn my attention to interactions with chatbots in the mental health context (e.g., Henkel et al., 2023) and for social purposes, and will continue to take a critical stance.
I teach in the BA and MA program Communication Science, and in the BA Computational Social Sciences, particularly on qualitative research methods in a communication context.