Associate professor in experimental methods in AI and logic at the Institute for Logic, Language and Computation, University of Amsterdam.
It is fascinating that from the countless individual experiences we have every day, we can derive a consistent and shared picture of the world around us. How does that work?
In my research I try to understand how we derive from our experiences an understanding of the mechanics of the world, how we use these insights to reason about the future, and how we talk about them in communication. An important part of this human "sense-making" is understanding the world as causal. Every time we choose to act in a certain way, we do so based on causal predictions we make about the world: I flip the light switch because I believe that this will turn on the light and help me find the keys I dropped. In my work I try to understand how we come to have this causal picture of the world and how it shapes the way we think and reason. I study these questions from various angles, combining knowledge and methodology from linguistics, philosophy, cognitive science, and artificial intelligence.
In recent years we have seen the rise of AI models that show astonishing capabilities in making sense of the world as well. At first this was achieved only for very narrow slices of reality, as in the case of the computer program AlphaGo. More recently, however, generative AI models like ChatGPT have appeared to display human-like skills in a much broader sense. But what do these models actually learn? And how should we understand these surprising capabilities in relation to human intelligence? These are questions my research has been turning to lately, with a particular focus on the human-like stereotypes and biases that these models seem to pick up.
Current projects:
I am a member of CERTAIN and manage its website.
Recent publications