An Interview with Corinne Cath-Speth

On March 18, 2020, ICTC spoke with Corinne Cath-Speth as part of ICTC’s Tech & Human Rights series. Corinne is a cultural anthropologist currently completing her PhD at the University of Oxford’s Oxford Internet Institute, where she studies internet governance. Specifically, she examines the cultures of the organizations that enable the technical functioning of the internet, as well as the role of human rights and civil liberties NGOs that are attempting to effect social change by changing computer code instead of legal code. In this interview, Kiera Schuller, Research and Policy Analyst with ICTC, speaks to Corinne about the governance of the internet and data, tech policy and regulation, human rights, and the COVID-19 crisis.

K: Thank you so much for making the time to speak with me today, Corinne! First of all, to give some background, can you tell me briefly about how you, as a cultural anthropologist, came to study the internet?

C: Thanks so much for inviting me to talk about my work. As an undergraduate student of cultural anthropology at the University of Utrecht in the late 2000s, I grew up on the internet. Not only was it an integral part of how we were taught (via webinars) and what we were taught (like methods for online ethnography), but technology was also central to the worlds of the people and places we studied as anthropologists. It was at that time that my interest in studying code-based human rights advocacy efforts was sparked. During my bachelor’s and master’s degrees in anthropology, I researched how human rights defenders used social media in their work to protest, organize, advocate, and negotiate with governments. My personal and political interests in questions of technology, social movements, and accountability led me to work on a number of projects, which refocused my interests from individuals’ technology use to technology governance. For my PhD, I flipped my initial question and asked: do human rights advocates actively engage in the development of the internet’s infrastructure and governance? And if so, what effects does that type of advocacy have? The question seemed urgent given the growing importance of the internet and, by extension, its governance. This was also right after the Snowden revelations, which revealed how the internet’s infrastructure was tied into mass surveillance.

K: Your current research at Oxford focuses on ‘internet governance’ and the management of the internet’s infrastructure. These are complex but important concepts. What exactly is ‘internet governance,’ and who manages the internet’s infrastructure? Why does it matter?

C: ‘Internet governance’ is the collective stewardship of the internet. It requires a large number of people across government, the private sector, civil society, and academia to work together to develop both the technologies and the policies that enable the vast network of networks we refer to as “the internet” to work. Many crucial parts of the internet function because of internet governance organizations, which are largely self-regulating and bring together stakeholders to discuss the internet’s operation. Without going into the details of how it all works (if you are interested, look at this guide about internet governance that I co-wrote some years back), it is very complicated. It requires a lot of trust, travel, and cooperation between people.

For my PhD research, I focus on one organization, the Internet Engineering Task Force (IETF), which develops the standards and protocols that enable different networks to exchange information; this interoperability is crucial. Most of the people in the IETF work for large companies like Cisco or Huawei, but participation is open to anyone. If you have the technical skills and the resources to attend their meetings, you can have a say in standards development. More specifically, I study a group of people who work for civil society organizations, like the American Civil Liberties Union (ACLU), and participate in the IETF to ensure internet standards become more secure and respectful of privacy and human rights. Their work is key to ensuring that public interest concerns are taken on board in the development of the internet.

K: I’m particularly interested in the intersections between technology and human rights. The internet is often portrayed as a disruptive equalizer, giving people access to information and a platform to freely share their opinions. But we’re also seeing that the internet can be an instrument for surveillance, censorship, and information warfare. In 2020, what do you see as the major human rights opportunities and concerns around the internet? Have these changed from previous years?

C: I think the internet can be both of those things, at the same time. It is important to keep in mind that the internet is not magic. It is also not a tool that exists separate from our societies. It is naïve to think that, for example, in unequal societies (or cities or even neighborhoods) the internet will magically upend entrenched socio-economic disparities; quite the opposite: a lot of recent research by scholars like Safiya Noble and Virginia Eubanks suggests that these technologies are likely to exacerbate, rather than alleviate, inequality.

Some of the major human rights concerns I am worried about are recent. I think the current COVID-19 crisis is likely to have long-lasting human rights implications. A number of governments are working together with tech companies to enable tracking of people that would be unacceptable in normal times. Once that genie is out of the bottle, it will be hard to put it back in. We need to make sure that such efforts are proportional and temporary, and that we have commitments on paper from all those involved that these tracking programs will be dismantled when this crisis subsides.

K: Connecting the previous two questions, how does internet governance or internet infrastructure impact human rights?

C: In many ways. The most obvious, for people using the internet on a daily basis, is encryption. Whenever you visit a website, you will see a little lock icon in the address bar next to a URL that begins with HTTPS. The S stands for secure. This means that whatever you search for on that website is protected from prying eyes—whether you are banking online on the Wi-Fi network of your favourite café or you are searching for information about resisting surveillance. In both instances, and many in between, encryption is crucial. The IETF is one of the organizations making decisions about which parts of the network are encrypted. This is only one of the many ways in which internet governance can impact human rights.
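As a concrete illustration: the encryption behind HTTPS is TLS, a protocol standardized at the IETF. The short Python sketch below, which uses only the standard library and example.com as a placeholder host, performs the same certificate-verified handshake a browser completes before displaying that lock icon.

import socket
import ssl

hostname = "example.com"  # placeholder; any HTTPS-enabled site works

# The default context verifies the server's certificate against trusted
# root authorities, which is what the browser's lock icon signals.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. TLSv1.3
        print("Certificate subject:", tls.getpeercert()["subject"])

If the certificate cannot be verified, wrap_socket raises an error rather than silently connecting, which is precisely the protection against prying eyes described above.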

K: Turning specifically to AI: it is increasingly presented as a solution to our problems, from law enforcement to healthcare to shopping. What are some of the main emerging human rights concerns or opportunities with AI?

C: The main current risk with AI is that it offers the promise of efficiency and cost-reduction—words that resonate well with companies and governments alike. Yet, alongside such promises, it is important to consider the social costs of these efficiency-driven systems.

A progressive approach to AI would recognize that government use of new technologies—be it for mass surveillance, policing, or the welfare state—carries particular risks of violating human rights and undermining civil liberties. Last year, the UN Special Rapporteur on extreme poverty produced a devastating account of the “digital welfare state,” arguing that new digital technologies are degrading the relationship between governments and the most vulnerable in society. A recent court judgement in the Netherlands ruled that an automated surveillance system for detecting welfare fraud violated basic human rights. Even in low-risk situations, encouraging the spending of tax money on proprietary systems that are hard to audit, whether for fairness or accuracy, is simply irresponsible. AI presents real risks to human rights when used in certain ways.

K: Do you see any national or supranational developments toward governing or protecting against these risks? You recently wrote that the EU’s white paper on AI falls short. Where does it fall short, and why?

C: The EU’s white paper on AI is a great first attempt at tackling a very complex question. What makes it disappointing is that earlier drafts contained a number of strong proposals, including a moratorium on facial recognition and special rules for the public sector; many of these were removed from the final version. Furthermore, I believe that the ‘high-risk’ approach in the paper—focusing primarily on ‘high-risk’ sectors and ‘high-risk’ applications of AI—is overly optimistic because it assumes that if AI systems in these high-risk categories are well scrutinized and regulated, most of the known negative ramifications of AI use will be mitigated. The problem with that approach is that there are many examples of AI systems currently considered low-risk, like targeted advertisements, that can undermine democratic processes yet remain unaddressed. Strong enforcement of existing rules like the GDPR and other crucial laws like the ePrivacy Regulation must take centre stage in order to effectively protect people.

K: Turning to the private sector, what role does it have in meeting responsibilities toward human rights? For example, in 2017 the UN Special Rapporteur on Freedom of Expression set out standards for the role of the private sector in the provision of internet and telecommunications access, arguing that companies should adopt the UN Guiding Principles on Business and Human Rights, including implementing ‘human rights by design,’ due diligence, transparency, etc. To start, what is ‘human rights by design’?

C: Human rights by design can mean different things. In UN Special Rapporteur Kaye’s report, it requires internet companies to set and meet norms to ensure their technologies are not inherently complicit in, or easily abused toward, human rights violations. In my opinion, this is an important step forward, but in practice it is hard to ensure because the requirements on companies to uphold human rights are limited and mostly voluntary. A number of companies have made good-faith efforts, but I suspect the willingness of many to implement these recommendations will only persist as long as their bottom lines are not too drastically affected. The question of the human rights responsibility of tech companies is one of the biggest challenges facing human rights organizations today, and one in which they often have the least access and leverage.

K: Increasingly, tech solutions are being proposed to solve some of the problems created by other technologies. For instance, a recent version of Firefox makes it harder for Facebook to track people across the web and makes in-browser calls and chats more secure. In your experience, how effective are such tech solutions?

C: They can be effective as part of a larger strategy, but it is important to keep in mind that tech fixes rarely address the underlying causes that make them necessary. For example, why do we need Firefox to make it harder for Facebook to track people across the web? Because Facebook’s business model is ultimately based on surveillance. These data-hungry business models are not addressed by solutions such as the new version of Firefox—they are only mitigated. We need solutions that address the root as well as the symptoms. Tech solutions alone are like fixing the faucet in a burning building.

K: Finally, you also work with various civil society organizations, governments, and businesses to provide policy guidance on the ethical and political issues arising from new technologies. What issues are organizations most concerned about, and how responsive are they to advice?

C: They are worried about an array of issues. It often depends on their mandate, but most of it comes down to: “How do I continue to do my job (of protecting human rights or enabling privacy, or whatever it is) given the new (and old) challenges raised by the internet?” I try to help them think beyond the bits and wires of technology and develop new perspectives on their work. I start from the assumption that they are experts at their work and have the necessary knowledge in-house to work on emerging technologies. I help them draw out this knowledge and support them in developing and managing work at the intersection of digital tech and society.

K: Do you have anything else you’d like to add or discuss?

C: Part of my role as a researcher is to make sure my research and the broader academic conversation are accessible to people working in civil society and government, so please feel free to reach out to me or strike up a conversation on Twitter. You can find me there at @C__CS.

K: Thank you so much for your time, Corinne. It was a pleasure to speak with you!

ICTC’S TECH & HUMAN RIGHTS SERIES:

ICTC’s Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the positive, neutral, or negative implications of new technologies such as AI on a variety of issues, like equality, privacy, and the right to freedom of expression. This series also explores questions of governance, participation, and various uses of technology for social good.

The first set of interviews features experts affiliated with the University of Oxford in England—particularly the Oxford Internet Institute (OII) and the Future of Humanity Institute (FHI). Having recently completed her master’s degree at the University of Oxford, Kiera reached out directly to professors and researchers at both institutes to have conversations about their work on the above topics. However, this series is not affiliated with any particular institution and aims to bring the voices of experts and professionals from various backgrounds, fields, and parts of the world to the discussion.