'I grew up in Rotterdam, practically looking out onto Erasmus University. For me, it seemed like the most logical place to go. Many people in my circle were studying cultural anthropology or sociology. Initially, I wanted to study medicine.
However, when we got a computer at home and computer science was emerging, I found it new and exciting. So, I decided at the last minute to pursue something technical, but at a broad university, as I was also interested in the social aspects. I wanted a place where I could study computer science while also bridging it to other disciplines. This is why I chose the UvA.
Additionally, I preferred the UvA because I saw many of my friends going to the VU, which was already known for its diverse student body. I thought: let me go to a university that, at that time, had less focus on diversity.
I reasoned that if I could thrive at the UvA, in a broad environment where I could build bridges as an Eritrean Dutch person, I could do that elsewhere as well. And yes, that worked out. Later, I also wanted to contribute to a more diverse, inclusive university with all the experiences I gained.'
The key condition is trust. Without it, collaborations don’t work. That trust exists in many places within the UvA network.
'Absolutely. I am indeed a child of the UvA. I feel at home here. I’ve worked hard and long to get to where I am now. It’s incredibly important to me to link research and education to society, especially to Amsterdam. There is a lot happening here, both academically and in the broader world.
It’s a super-diverse society. The connection between the UvA and the city of Amsterdam is still very strong, and I try to build bridges.
I did my master’s in Information Systems in the early 90s. Many of my current colleagues here in the Lab42 building date from that time. So, there is definitely a link to that study. In fact, I am now the programme director of that master’s.
But I have also been involved with other areas, like the AUC, the social faculty, and the medical faculty, and I try to keep those connections alive. Sometimes, unexpectedly, you come together with certain people and something happens - new collaborations emerge.
The prerequisite is trust. Without it, collaborations do not work. That trust is present in many places within the UvA network. It has been there for a long time, is solid, and facilitates collaboration.'
In academia, it happens a little too often that we say, "We are academics, we know best." But the big question is: how can we get ideas into and out of the fabric of society? By connecting with each other and having real conversations with ‘ambassadors’ from different communities.
'What you need for societal trust is structural contact with the public. Not a one-off interaction. And not a top-down approach where science approaches the public with the mindset of: we are bringing you something. No, you need to be very humble and ask: what can we learn, how can we be meaningful?
These are the conditions for trust in institutions such as science, politics, and journalism, across all levels of society.
In fact, if you do not do this, it can actually harm trust. I see this with many projects here in Amsterdam. The refrain is: there is a gap between citizens and the municipality, or between citizens and science, and we want to address it. But this happens without a real understanding of what is going on in those communities. Which groups feel excluded and distrustful? What can we learn from them?
In science, it happens too often that we say ‘we are academics, we know best’. But the big question is: how do we get ideas into and out of the fabric of society? The next step is connecting with each other and having real conversations with ‘ambassadors’ from various communities.
In the Communicity project, where we aim to bring AI technology closer to marginalised groups, the most challenging part is seeking contact with those ambassadors, taking them seriously, and earning their trust. Even once you have identified groups and ambassadors, how do you stay engaged and make them a full partner in such a project? That is the challenge.
As a scientist, I learn a lot from communities. I am often not in the field as a scientist, but as a person, as part of a community that just happens to be scientific. I also have the ‘fortune’ of moving among various layers as an ex-refugee and Eritrean.
The municipality of Amsterdam initially knew me as an engaged Eritrean Dutch person who helped manage the integration and participation of refugees. Only later, when I asked the municipality to address inequality and discrimination through AI and to participate in the Civic AI Lab, did they learn I was affiliated with the UvA.
I did not gain the trust of the municipality because I was a scientist, but because I was initially an involved citizen with connections to hard-to-reach communities. That is how it happened.'
You can use AI to enhance interaction between certain groups, bringing them together. In this way, you can make society more social and foster mutual trust.
'AI can help build trust on multiple levels. Governments and companies are increasingly collecting various types of data. We also release various types of data ourselves, consciously or unconsciously, through devices like our mobile phones: our location, how often we view something, and what we do on social media.
AI makes it possible to discover patterns in that data, thereby creating insights into people, communities, cities, and countries, from which you can then take action. For example, AI can facilitate interaction between certain groups, bringing them together. And this can make society more social and create trust.
At the moment, this might seem like a naive thought because AI seems to contribute to polarisation, discrimination, and hatred. But this is the result of a choice, namely the choice to develop and use AI to increase profit, control, or power. Another choice is possible: using AI for justice, solidarity, etc.
Additionally, discussing AI itself is very important. AI can also be something frightening, just like topics such as migration. Engaging in informed dialogue about AI—what it is, what it does, its drawbacks, and its opportunities—builds trust. There are good initiatives like the National AI Course. But I mainly advocate for introducing AI awareness in schools from a young age. Alongside media literacy, AI awareness in the classroom.
Two hundred years ago, people asked why they needed to learn how to read. Now, nobody questions that. In a hundred years, people might say the same about algorithmic thinking or even coding: it’s strange that around 2024, people thought it wasn’t important.'
My goal is to bring AI’s potential into the fabric of society, so that everyone can use it.
'Society is becoming increasingly complex. There are problems we have not been able to solve for a long time, such as poverty and inequality, which have become constants. Now with AI, we are moving towards a new form of living together, an algorithmic digital world. Many people think AI will only reinforce existing patterns and constants, making inequality persist or even worsen.
I have a very different perspective. I believe that through all that data and insights, new opportunities arise to tackle complex problems like poverty and inequality more effectively. Opportunities that are currently perhaps only seen by a small group of scientists and technologists. But a time will come when everyone can handle data and AI. And everyone can use AI for positive societal changes.
My goal is to bring AI’s potential into the fabric of society, so that everyone can use it. I’m doing this in several ways. For instance, I started teaching AI in primary schools around 2008. I still do this occasionally, but it’s only a small part of the solution. The government should play a significant role through the national curriculum.
Additionally, I engage in science communication. I recently did an interview with a hip-hop podcast. That reaches a completely different, vast group of young people, allowing me to convey the AI message there.'
The earlier you learn to think creatively, in the field of AI and in general, the more diverse and hopefully better science, politics, journalism, and society will become.
'Young people should think even more creatively and out-of-the-box. In my inaugural lecture, I put Paulo Freire centre stage, an out-of-the-box changemaker from Brazil a hundred years ago. He beautifully describes the cycle of socialisation: you are caught in a cycle in life that is very hard to escape, made up of family, school, religion, and social expectations. Freire also describes how to break out of that cycle by thinking and acting differently.
With AI, we see the same thing happening. We socialise it by developing and applying it in environments with entrenched patterns. And by feeding it data with various assumptions, biases, and expectations embedded. If you do what you always did, you get what you always got.
This applies to AI as well. So, you need out-of-the-box thinkers, new development environments, and data to enrich and liberate AI, and thereby society.
The earlier you learn to think creatively, both in AI and in general, the more diverse and, hopefully, better science, politics, journalism, and society will become.'