On March 4th 2020, ICTC spoke with Carl Öhman as part of ICTC’s new Technology and Human Rights Series. Kiera Schuller, Research & Policy Analyst with ICTC, interviewed Carl about his doctoral work at the Oxford Internet Institute, which falls at the intersection between economic sociology and ethics. Specifically, Carl’s research looks at the ethical challenges regarding commercial management of ‘digital human remains,’ the term for data left by deceased users on the internet, which was the topic of his 2016 award-winning thesis at Oxford. In this interview, Kiera and Carl delve into the intersections of emerging technologies and ethics, and discuss personal data, the digital afterlife, and the management/ethics of digital remains.

 

K: Thanks for making the time to speak with us today, Carl. I know that you’ve been immersed in your topic for ages, but for the sake of our readers who may be unfamiliar with this field, could you talk a bit about “digital remains,” the “digital afterlife,” and what these terms refer to?

 

C: ‘Digital remains’ is the term researchers use for the data you leave behind after you pass away. That can be any kind of data: not just what is online or in the cloud, but also what is stored on your private hard drive, cellphone, and other personal devices. It can also be thought of as a digital estate or an informational corpse. ‘Digital afterlife’ is a term that was used primarily when this field first emerged, and it refers to the continued ‘being’ of your person after death. Our informational remains go on having a social life of their own after we’re gone. The ‘digital afterlife’ is that posthumous social life online: what happens to our profiles, and the continuing role they play in the networks we leave behind.

 

It’s important to note that the relationship between information and the individual is not simply one of ownership: your information is not just something that you own but something that you are, like your body. Hence the analogy with remains, or a corpse.

 

K: What are some major ethical, legal, and human rights challenges regarding the commercial management of digital remains?

 

C: Although my research doesn’t take an explicitly human rights perspective, I focus on the concept of human dignity, which is part of the ideology of human rights. A central argument I make is that ideas about human rights and morality should not be applied only to living humans; they are also relevant to the non-living, including the dead and the not-yet-born. Indeed, if we claim to be humanists and care about humanity, we should acknowledge that humanity is not just a spatially dispersed project but a temporally extended one. Humanity also includes those who have lived and those who will live.

 

Of course, I don’t argue that everything that is bad for a living human is bad for someone who is dead. Physical pain, for example, is only relevant for the living, because the dead cannot sense being harmed. But there are plenty of human rights that do not in fact require a living physical body. Dignity is one example: regardless of whether you experience being humiliated or wronged, it can still be considered an ethical harm. I consider this mainly from a societal point of view: with the emergence of the digital afterlife on the internet, the dead have entered our sphere of moral consideration in a way they haven’t before. Scholars and historians of death, and of its cultural role, often speak of modernity as ‘the era of hidden death’: most people don’t see death, people die in hospitals surrounded by professionals, and the dead are hidden from society. Most researchers today, however, argue that this era is coming to an end with digital technologies. With digital remains and the profiles of the dead remaining on social media, the dead are re-entering public space in a way they haven’t throughout modernity: they are constantly accessible from your phone, on the social networks you use, in the pictures and videos you keep on your device or in the cloud. It is the end of the era of hidden death, and hence we must begin to consider the ethical significance of this new presence.

 

K: Are there cultural differences in the ways people perceive how digital remains, or profiles of the dead online, should be treated?

 

C: A common assumption is that questions of death are heavily shaped by culture, because death occupies such a central position in every culture, with its rituals and so on. But when it comes to handling digital remains, there actually isn’t much evidence that different cultures and religions have different perspectives. Rather, it seems that religions, which have given answers to so many parts of life, do not give guidance on what to do with your Facebook profile. There is no cultural standard for how to approach digital remains, so we now need to invent one. Research so far shows that there tend to be conflicts between individuals—family members, friends, and so on—over how to approach the digital remains of a loved one. People assign different statuses to the data: perhaps a mother wants to remove every trace of her child from the internet, while the child’s friends use those same platforms to make sense of their friend’s absence, or perhaps vice versa. But these clashes reflect individual differences and are not bound to cultures or religions. Generally, though, we need more hard data and research on this topic.

 

K: You have argued that digital remains deserve the same treatment as physical/archaeological ones. What exactly does that mean?

 

C: I actually don’t argue that digital remains require the same treatment as physical ones. The treatment shouldn’t and cannot be the same, because we are talking about two very different kinds of object. But I do argue that their ethical status is equivalent. Thus, when we regulate the industry and set down rules for these new issues in the digital sphere, we should draw inspiration from similar areas where regulations have already been developed. Areas like organ trade and donation, and archaeological museums, for example, have gotten fairly far on these issues. There are ethical guidelines on what you can do with human remains in archaeological museums; these conventions stipulate that in an exhibition, you must treat remains with dignity and never solely as a means of attracting visitors or generating profit for the museum, but always as worthy of respect in and of themselves. I believe something similar could be applied to the digital afterlife industry: some kind of convention which could, to start, be a soft form of regulation, with firms coming together and agreeing not to compete on developing the most profitable uses of remains, and so on. That would be a good start in regulating this industry.

 

K: Looking at the concrete state of the world, how do companies currently handle digital remains? What regulations exist around the data of the deceased?  

 

C: It’s very complicated, but generally speaking, the dead don’t have any data privacy rights. Take Europe’s General Data Protection Regulation (GDPR) as an example: it provides very strong protection of informational privacy, but it explicitly states that it applies only to living data subjects. So the dead are not protected at all by EU data protection law. Individual EU member states can provide protection: Denmark is a leading example, and Spain and Italy have followed with similar approaches. But the general rule around the world is that when you die, all your privacy rights cease to exist, and companies can do whatever they want with the data.

 

K: A recent article, citing your 2019 study “Are the Dead Taking Over Facebook,” posited that “social media giants are becoming digital graveyards.” Digitally stored information already grows four times faster than the world economy, and this data now includes millions (and growing numbers) of online ghosts. Your study projects that the digital remains of up to 4.9 billion people may be “buried” on Facebook by the end of the century, outnumbering living users. This presents a new and growing problem for social media companies. What kinds of practical, moral, and legal dilemmas are they facing as the number of dead profiles increases so rapidly?

 

C: At the moment, digital remains are generally not yet a huge financial burden for social media companies, but based on projected mortality rates, they are rapidly becoming more significant. Within only a couple of decades, we are talking about hundreds of millions—potentially billions—of dead profiles on Facebook. The problem from a corporate viewpoint is that most business models depend on selling advertisements, so user data is only worth saving insofar as it can be used to generate attention and clicks. Dead people, however, are not consumers: they do not click on anything, but they still take up server space. From an economic point of view, their data must either be re-commercialized and put back into production or be destroyed.
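For a sense of where those orders of magnitude come from, here is a minimal sketch of the kind of cohort-based accumulation such a projection implies. The cohort sizes and mortality rates below are invented purely for illustration; they are not the figures or the methodology of the 2019 study, which draws on actual audience data and demographic projections.

```python
# Toy, back-of-the-envelope projection (NOT the 2019 study's methodology or figures):
# accumulate expected deaths among a platform's users over two decades,
# using invented cohort sizes and annual mortality rates.

cohorts = {  # age band: [users in millions (hypothetical), assumed annual mortality rate]
    "13-29": [900.0, 0.001],
    "30-49": [1000.0, 0.003],
    "50-69": [500.0, 0.012],
    "70+": [200.0, 0.050],
}

dead_profiles = 0.0
for year in range(20):  # project 20 years ahead
    for band, (users, rate) in cohorts.items():
        deaths = users * rate               # expected deaths in this cohort this year
        dead_profiles += deaths             # dead profiles accumulate; they never leave
        cohorts[band][0] = users - deaths   # survivors carry over (no new sign-ups, no ageing)

print(f"Accumulated dead profiles after 20 years: ~{dead_profiles:.0f} million")
```

Even under these simplified assumptions (no new sign-ups, no ageing between cohorts), the count of dead profiles only ever grows, landing in the hundreds of millions within two decades; that one-way accumulation is what drives the dynamic Carl describes.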

 

Now, this is a huge ethical dilemma, because both options open up rather scary scenarios. First, imagine re-commercializing the data of the dead: the bereaved would start paying to keep the profiles of the dead on the platform, like paying for a tombstone at a cemetery. We would then effectively end up with a filter, whereby the data future generations have access to is not a history of diverse individuals exactly as they were, but a filtered history of whoever had the money to pay for themselves or others to become part of it. From a long-term macro perspective, this is a rather concerning development. When Facebook goes through its data and asks what to keep or destroy, it won’t ask which data is the most historically significant, which has the most value for future generations, or which is the most ethical to keep, but rather what is profitable and what can be kept to make money from. That is a dangerous development, in which we will be filtering history—and what evidence is kept—through a single lens: what is profitable.

 

K: How have tech or industry leaders been responding to these types of questions?

 

C: While I don’t have much experience interacting directly with tech leaders, the experience I have had is generally positive. I get the sense that most people [in the tech industry] are well-meaning; they care about these issues and want to do the right thing. That being said, my research isn’t limited to ethical questions about what is right for an individual to do in a given situation. It also looks at the technological and economic systems within which these individuals operate: what kinds of situations do they generate, and what incentives do they promote? Such questions go beyond the benevolence of individual tech leaders.

 

K: Looking ahead, in light of your work at the Digital Ethics Lab and at Oxford, what are you hoping to examine next? What major issues around data, privacy, identity, digital afterlife, or ethics are you interested in?

 

C: I have one study coming out soon on a topic that emerged out of that original 2019 study, ‘Are the Dead Taking Over Facebook.’ The immediate response we get from everyone when we talk about projecting mortalities onto Facebook is, “How do you know Facebook will exist in two decades? What if Facebook goes down and something else emerges?” My response is always that such a scenario would make this question even more pressing: what happens to all these profiles if Facebook goes bankrupt or is forced to close? In a typical bankruptcy or business closure, an insolvency administrator comes in and sells off the company’s assets to the highest bidder. In this case, the assets are the data, so someone could come in and buy all of it. In some jurisdictions, like the EU, there are some protections—for example, the GDPR says you cannot sell that data to companies not working within the same industry, and users have the right to have their data destroyed if they wish. But these protections certainly do not apply in all jurisdictions, and the GDPR does not apply to dead people. So if Facebook goes down, all of the dead profiles, and potentially living profiles elsewhere, could be sold off to anyone—in China, India, anywhere. That raises huge ethical questions. You can read a pre-print version of this new study, titled “What if Facebook Goes Down? Ethical and Legal Considerations for the Demise of Big Tech Platforms,” here.

 

ICTC’s Tech and Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications, whether positive, neutral, or negative, of new technologies such as AI for issues like equality, privacy, and freedom of expression. The series also looks particularly at questions of governance, participation, and uses of technology for social good.

The first set of interviews features experts affiliated with the University of Oxford in England, particularly the Oxford Internet Institute (OII) and the Future of Humanity Institute (FHI). Having recently completed her Master’s at the University of Oxford, Kiera reached out directly to professors and researchers at both institutes to discuss their work on these topics. However, the series is not affiliated with any particular institution, and it aims to bring in the voices of experts and professionals from various backgrounds and fields around the globe.

Carl Öhman is a doctoral candidate at the Oxford Internet Institute and is affiliated with the Oxford Digital Ethics Lab. His interests fall at the intersection between economic sociology and ethics. Specifically, Carl’s research looks at the ethical challenges regarding commercial management of “digital human remains,” data left by deceased users on the internet, which was the topic of his 2016 award-winning thesis at Oxford.