As artificial intelligence (AI) diffuses through our lives, it reconfigures central aspects of politics and society as we know them. It affects political deliberation as well as how and to whom public services are delivered. It also transforms political issues into technical problems to be solved through private means, thus replacing democratic institutions with private interests. Fuelled by rapidly expanding automated data collection, AI is a potent tool in the hands of the powerful, not least to maintain political order and to vie for and reproduce control, while also opening novel avenues for resistance.
The Artificial Intelligence & Politics RPA will combine research at the Faculty of Social and Behavioural Sciences into the interaction between AI and politics (in broad terms) with research into the social consequences of this interaction. The guiding question for this line of research is: how is AI changing contemporary politics, and how is the development and application of AI affected by political dynamics?
This RPA embraces a broad definition of politics, spanning both formal and informal dynamics through which societal challenges and conflicts are processed in a variety of contexts. The RPA focuses on four dimensions in which AI challenges traditional political dynamics:
Algorithms and the platforms whose content they shape increasingly structure the information environment of citizens, political actors, and groups. These automated interventions can themselves become politicized. Consider for instance how large language models have already become enmeshed in political conflict, as right-wing commentators have accused OpenAI and ChatGPT of liberal bias. The political consequences of such generative AI constitute a largely unexplored field of academic study. More generally, AI tools affect the constructive deliberation that is essential for viable democracies, and people find themselves in increasingly segmented (mainstream or fringe) information bubbles—bubbles that may drive them apart.
AI tools—and the digital platforms that use them—not only create and amplify these bubbles. They can also prod citizens at precisely the points needed to spur them into political action. Political campaigns frequently leverage persuasive communication tools perfected in the private sector, both for targeting potential voters and for campaign design, such as in the selection of promising campaign topics and positions.
At the same time, it is unclear to what degree these dynamics themselves can be mended through political interventions: recent debates have focused on technological and regulatory fixes that take the distorting edge off information-sorting algorithms, in order to expose citizens to a more balanced and varied information environment. Given that algorithms are bound to remain central to our informational infrastructures, this first dimension therefore concentrates not only on how AI changes politics in (un)intended ways, but also, in a subsequent step, on the scope for and efforts to manage these effects.
Many jurisdictions are currently crafting rules about AI, ranging from codes of conduct or ethical guidelines to legislation, most prominently in the so-called “AI Act” currently working its way through the EU legislative procedure. These rules concern the application of AI to social processes in a broad sense, including potentially discriminatory algorithms, limits to data collection and automated decision making, AI use on so-called “sharing economy” platforms, the deployment of facial recognition in public space, or AI use in advertising and on social media.
Not least because of their novelty, the political dynamics of AI rule-making are largely unexplored: who sits at the rule-making table? Which forms of regulation are selected, and why? What motivates decision-makers? Whose voices and expertise are heeded or ignored? And how does global economic competition affect the AI rules that jurisdictions can and do put in place? This second dimension focuses on the contested political governance of AI as a set of transformative technologies. The central ambition of this dimension is to connect the empirical patterns we observe to more established theories about political dynamics and thus to probe and anchor claims about AI governance theoretically.
AI systems increasingly handle decisions that originally had a clearly political character, for example public service provision, reviewing income tax declarations, profiling potential criminals, or law enforcement strategies. They also gain prominence regarding more technical decisions—say, the optimal location of new wind turbines—for which algorithms can offer guidance. At the same time, platforms such as Airbnb have become major forces shaping cities and life in them, de facto contesting public city planners’ roles.
Here too, AI changes “politics as usual” through a selective automation and often privatization of hitherto contested processes. This dynamic cuts deeply into the fabric of how citizens relate to each other and to the body politic. It also redraws the boundaries of transparency and political accountability: when decision making is de facto outsourced to algorithms – as notoriously happened in the Dutch childcare benefit scandal – where can citizens turn when they suspect they have been wronged? (Partial) automation of public tasks commonly relies on private technologies, which themselves remain opaque to public decision-makers – think of border control officials who themselves do not understand why an algorithm flags specific passengers for additional scrutiny. This third dimension explores the consequences of such automated politics and administration, both in terms of their actual outcomes for citizens and their normative legitimacy.
AI can serve to depoliticize decisions by public authorities, for example when the allocation of resources or, say, policing capacity is handed over to an AI system and thus a putatively objective instrument. But which instances of AI use require political regulation is itself open to debate, and it is unclear when and why “the technological is political”. This is both an empirical question and a normative-theoretical one. Empirically, we must ask who does, or does not, see AI applications as requiring public rules and debate, and why. Normatively, we ask when and why (de)politicization may be appropriate—or problematic. After all, while algorithmic decision making may be inherently imperfect, so are human decisions.
At the same time, it is unclear how citizens experience, understand, and respond to such technology-fuelled depoliticization. When does it encounter resistance, and when does it simply pass as a seemingly common-sensical solution to a technical problem? Either way, the boundaries of politics are redrawn as AI is rolled out through society. This fourth dimension therefore focuses on the political charge that citizens see in AI diffusion. It investigates the selective politicization of AI in public debate, and thus a central dimension of how these technologies are incrementally diffused throughout societies.
The RPA AI & Politics offers a seed funding programme to all researchers at the University of Amsterdam’s Faculty of Social and Behavioural Sciences (FMG). This seed funding programme will help to integrate existing research that is currently dispersed over at least four departments – communication science, sociology, political science, and geography and international development. The seed funding programme adds value by bringing these researchers together to understand the interconnections between the different parts of the AI-politics link that they explore.
By sponsoring four postdoc projects that each centrally focus on one of the RPA dimensions, the RPA also deepens the research agenda on AI & Politics. The postdocs will conduct their own empirical research projects and help to build the community by organizing events, research meetings, and other activities, and by disseminating academic knowledge among relevant stakeholders outside academia. The assigned postdocs for these projects are Marieke van Hoof and Lisa Fenner.