
Section 3. Ethical and Political Implications of Digital Technologies

Solidarity and Data Access: Challenges and Potentialities

Francesco Tava
p. 118-126

Abstract

This paper provides an account of the challenges and potentialities of a solidarity-based approach to data access and governance. To do that, it offers an infraethical understanding of solidarity that describes it as a structural moral enabler that can sustain collective action and risk taking. The paper ends with a brief discussion of health data access as a possible case study to test this approach.


Editor’s notes

DOI: https://doi.org/10.17454/pam-2010


1. Introduction

The emergence of big-data technologies, and the increasingly networked character of society that came with them, caused a paradigm shift in contemporary ethics. Traditional ethical theories, which assume the relevance of individual responsibility and individual decision making, needed to be conceptually reframed insofar as new moral actors—for example, artificial multiagent systems—entered the moral domain and transformed it deeply. From this reconceptualisation, new, big-data-based ethical models emerged, which soon took a central position in philosophical debates. These include models of distributed morality (Floridi, 2013a; Heersmink, 2017), which allow ethicists to address the fact that, because of complex interactions among multiagent systems, moral actions and responsibilities are no longer centred on individuals, but are also distributed across society.

In this paper, I will first provide a brief analysis of these new models, highlighting both their main characteristics and their potential societal effects (section 2). In doing this, I will focus on the notion of infraethics (or infrastructure ethics), which refers to the “first-order framework of implicit expectations, attitudes, and practices that can facilitate and promote morally good decisions and actions” (Floridi, 2013a, p. 738; see also Floridi, 2017). Among these practices are trust, respect, transparency, and reliability. I will then argue in favour of including solidarity among such infraethical practices (section 3). To do that, I will provide a brief account of solidarity, which I contend is particularly relevant because it is conducive to sustained collective action and risk taking. More precisely, solidarity can be understood as a moral and political desideratum insofar as it stimulates and supports longer-term and risk-laden collective action aimed at addressing perceived injustices (Meacham and Tava, 2021; Prainsack and Buyx, 2017; Scholz, 2008)—all characteristics that clarify how solidarity is not merely a descriptive notion that indicates a certain form of human togetherness, but a fundamental, infrastructural dimension of democratic life.

In section 4, I will then contend that an infraethical model of solidarity might offer a powerful tool for tackling one of the most challenging issues that underpins our digital way of life—namely, how we own and use digital data. Looking at one of the most advanced models of data access and governance (the evidence-based, default-open, risk-managed, user-centred [EDRU] model—see Ritchie, 2014 and Ritchie and Green, 2016), one can easily detect the growing importance that collective and societal aspects have in this domain. Whilst traditional models are defensive in nature as they are essentially anchored in the costs and risks to the data owner, this more advanced model relies on the principle that society is the relevant locus of costs and benefits. I will argue that adding an infraethical narrative to this approach, which would pinpoint solidarity-based practices’ impact on data sharing, would permit data-access researchers and stakeholders to better understand the inner mechanisms of the community of interest that data owners and users constitute. Introducing the concept of solidarity into data-access practices would therefore highlight the growing relevance of collective interests and aims to the determination of how personal information is owned and used.

I will conclude by showing why developing an infraethical, solidarity-based approach to data access is particularly urgent now that humanity is facing the Covid-19 pandemic (section 5). Today more than ever, access or lack of access to data (specifically, health data) might have major consequences for public security and public health. I will show how an approach based on solidarity can help implement models of data access and sharing that place societal needs at their centre.

2. Distributed Morality and Infraethics

Distributed morality (Floridi and Sanders, 2004; Floridi, 2013a; Floridi, 2013b, chap. 13) is a phenomenon with ever-increasing impact on society due to the emergence and dissemination of new information and communication technologies that range from digitisation techniques to algorithmic decision making. The idea that morality is not exclusively centred on the individual and on her capacity to act autonomously and responsibly is not new. For instance, phenomena such as collective moral responsibility and obligation, which highlight the significance of non-individual decision making and accountability, have often been analysed by ethicists, jurists, and political scientists (Isaacs, 2011; Isaacs and Vernon, 2011; Hess et al., 2018). However, this research is gaining momentum in light of the central role that artificial agents and hybrid multiagent systems are playing in the infosphere in which we live. Our everyday life (on both the personal and societal levels) is increasingly influenced by operations undertaken by artificial agents—that is, by “sufficiently informed, ‘smart’, autonomous artefacts, able to perform morally relevant actions, independently of the humans who engineered them, causing ‘artificial good’ and ‘artificial evil’” (Floridi, 2013a, p. 728). Interlinked technical innovations such as, for instance, big-data technologies, deep learning, the semantic web, and the Internet of Things have given rise to a variety of new agents that are progressively less dependent on their creators and therefore play an independent role in the public sphere. This phenomenon has a major impact on certain pivotal moral concepts, such as responsibility and power, whose relational (rather than individual) facet is progressively coming to light (Zwitter, 2014).

This scenario is made even more complex by the formation of multiagent systems, which can be human, artificial, or hybrid. The fundamental assumption of distributed morality is that a series of small actions, which a number of agents (human, artificial, or mixed) perform and which taken individually are morally neutral or morally negligible, can generate morally charged (either good or evil) big actions as soon as they are embedded in a powerful multiagent system. Floridi (2013a) gives several examples of this phenomenon, focusing on the good side of the coin. Take, for instance, a company that reinvests part of the profits deriving from its customers’ purchases to sustain humanitarian projects. Such an operation involves a series of actions (performed by the company itself, its customers, and all the artificial agents that constitute the company’s technological platform) that taken individually can be described as morally neutral. Only the big action of this multiagent system, and not the actions of its components, seems to be morally loaded.
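Floridi’s company example can be made concrete with a short, purely illustrative sketch. The figures, the donation rate, and the platform itself are my own hypothetical inventions, not drawn from Floridi (2013a); the point is simply that individually negligible contributions, once aggregated by a multiagent system, amount to a morally significant “big action”.

```python
# Toy illustration (invented figures): many morally negligible micro-actions,
# aggregated by a company's technological platform, yield a significant outcome.

purchases = [2.50] * 100_000      # 100,000 individually trivial purchases (in EUR)
donation_rate = 0.01              # hypothetical share of each purchase redirected to a humanitarian project

micro_contributions = [p * donation_rate for p in purchases]   # 0.025 EUR each: negligible on its own
big_action = sum(micro_contributions)                          # the system's aggregated "big action"

print(f"each micro-contribution: {micro_contributions[0]:.3f} EUR")
print(f"aggregated donation:     {big_action:.2f} EUR")        # 2500.00 EUR
```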

Assuming that the majority of individual actions that human or artificial agents perform are morally neutral or morally negligible, the central challenge in a distributed-morality framework is how to ensure that the sum of these individual actions will generate a good rather than an evil outcome. This can be ensured by strengthening the resilience and fault tolerance of our ethical environment whilst weakening its inherent inertia (Floridi, 2013a, p. 736). In other words, we have to learn how to identify potentially good actions and facilitate their aggregation into virtuous multiagent systems, while at the same time we have to isolate and neutralise possibly evil actions. To do that, we need to establish what Floridi (2013a) calls “infraethics”—that is, a “first-order framework of implicit expectations, attitudes, and practices that can facilitate and promote morally good decisions and actions” (p. 738). These expectations, attitudes, and practices operate as moral enablers insofar as they help aggregate potentially good actions and disperse potentially negative actions, although they do not need to be morally characterised either positively or negatively. Examples of such enablers are information transparency (Turilli and Floridi, 2009) and privacy (Floridi, 2005; 2006), trust online (Taddeo, 2009; 2010), and openness (Chopra and Dexter, 2008). In what follows, I will contend that solidarity (as I define it) should be understood as an infraethical phenomenon that can facilitate the emergence of positive moral behaviours in an environment characterised by distributed morality.

3. Why Solidarity?

The concept of solidarity has a relatively short history compared with other pivotal moral and political concepts such as democracy and freedom (Metz, 1999; Stjernø, 2004). Various accounts of solidarity have identified several traits that are deemed essential to understanding its meaning and function. Solidarity relations are often described as historically grounded in a principle of equal and freely given support among peers (Metz, 1999). People who establish a bond of solidarity must recognise their similarity in a relevant respect (Prainsack and Buyx, 2017), whether it be similarity of features (for example, belonging to a certain social group or sharing the same language, nationality, or religion) or similarity of motivation and agency (for example, sharing the same goals). Other commonly identified traits include mutual responsibility (Bayertz, 1999), the recognition of individual freedom (Honneth, 1995) and the aim of democratically establishing it (Brunkhorst, 2005), and the entailment of positive duties such as cooperation and reciprocity (Scholz, 2008, p. 58).


I think that at least two features stem from these characterisations, and I contend that they are essential to defining solidarity relations. First, in order for people or institutions to set solidarity in motion, they must share certain goals and ideals of justice. In this sense, although acknowledging their shared status (for example, qua compatriots or fellow workers) might help create or strengthen solidarity bonds, the emergence of solidarity also requires collectively aiming to overcome perceived inequalities or injustices that a certain social condition might involve.1 Take, for instance, Thelma and Louise. They might enjoy knowing that they are both American citizens, and the knowledge of this similarity may be the origin of a number of sentiments such as sympathy, trust, or even camaraderie. Nonetheless, this would not be a sufficient condition for the emergence of a solidarity relation. To be in solidarity, Thelma and Louise must also act in unison in response to (for instance) a threat or act of violence and to re-establish what they believe is just.

Second, the sharing of goals must be accompanied by the willingness of individuals and groups to also assume the costs and burdens that these goals might involve, with at least the expectation that their action will be reciprocated. This element helps differentiate solidarity from other intersubjective relationships such as benevolence and charity. For instance, when a group of workers decide to establish a mutual relationship of solidarity, they know that this decision might have detrimental consequences (for example, job loss, or discomfort due to a prolonged strike) and nonetheless decide to act, sharing the burden of these potential consequences equally.

We can also argue that the first element justifies the second, in that sharing goals or ideals incites the willingness to also share costs and risks with an eye towards longer-term gain and an increased probability of achieving the shared goals. The willingness to shoulder burdens and take risks in pursuit of potentially longer-term political and socioeconomic goals is furthermore supported, in a virtuous circle, by the sharing of risk itself, which mitigates each individual’s exposure. Hence solidarity facilitates collective decision making, collective action, and longer-term political action and planning—all characteristics that can be described as desiderata of a democratic society. Solidarity is thus not just a descriptive notion that indicates a certain form of human togetherness, but a fundamental, infrastructural dimension of democratic life.
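The claim that sharing risk mitigates individual risk can also be shown numerically. The following sketch is my own illustration with arbitrary parameters: each member of a group independently faces a small probability of a fixed loss; pooling the losses and sharing them equally leaves the expected cost per member unchanged, but drastically reduces the variability of what any single member has to bear.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

def simulate(n_members=1000, p=0.05, loss=1.0, trials=5000):
    """Compare one member's exposure with and without equal risk sharing."""
    own_losses, shared_losses = [], []
    for _ in range(trials):
        losses = [loss if random.random() < p else 0.0 for _ in range(n_members)]
        own_losses.append(losses[0])                   # unpooled: bear your own loss
        shared_losses.append(sum(losses) / n_members)  # pooled: equal share of the group's total loss
    print(f"expected cost: unpooled={mean(own_losses):.3f}, pooled={mean(shared_losses):.3f}")
    print(f"variance:      unpooled={variance(own_losses):.4f}, pooled={variance(shared_losses):.6f}")

if __name__ == "__main__":
    simulate()
```

On these assumptions the two expected costs coincide (both approximate p times the loss), while the pooled variance shrinks roughly by a factor of the group size, which is one simple way of cashing out the “virtuous circle” described above.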

This infrastructural dimension is what suggests that solidarity might be employed within an infraethical framework as a powerful moral enabler. Like other enablers (for example, trust or privacy), solidarity as such is morally neutral insofar as it is not inherently good or evil (even a group of criminals can share goals, a perception of justice, and risks and burdens). Despite this neutrality, however, solidarity has the power to foster collective action and collective risk taking, and may or may not, depending on the use that we make of it, be conducive to morally good outcomes and consequently to a general reinforcement and amelioration of the infraethical environment in which it operates. In the next section I will portray a concrete scenario in which solidarity can be employed in this way.

4. Solidarity and Data Access

How we collect, own, and employ data is one of the oldest questions that humankind has had to address. The capacity to retain information from our own experience and to make good use of this information over time without having to relearn notions and practices that (as a species) we have already learned is a fundamental human skill whose evolution and perfection enabled the progress of humanity. In the present age, new problems have arisen concerning how to analyse and organise the increasing amount of information that new technological devices allow us to obtain. The ontological and ethical consequences of big data are tightly connected to the topic of distributed morality insofar as they require us to move away from traditional moral concepts, such as individual responsibility and decision making, causality, and culpability, and to envision forms of relational, systemic morality. The development of network and information ethics (Bynum, 2011) has allowed researchers to address this modified scenario and to tackle new ethical dilemmas stemming from it. The ethics of big data shows how moral responsibility is no longer entirely ascribable to individual agents, but is rather spread throughout a network of data generators, collectors, and users. Consequently, whilst concepts that were once pivotal, such as individual agency, lose their centrality, other phenomena, such as network knock-on effects—that is, the unintended consequences of and collateral damage caused by actions within a network—become of primary importance (Zwitter, 2014).

Among the areas that the current setup has most profoundly altered are data access and data security (Micheli et al., 2018). How do we grant access to data in a responsible and productive way—that is, without compromising (for instance) the property, security, and privacy of data owners and users? What are the ethical guidelines that citizens and institutions should follow to make good use of their data? An example of how new digital, data-driven technologies are altering established notions in this area is group privacy. It is well known that public and private companies make strategic use of data analysis in order to mine data about the habits and customs of citizens and customers. These datasets constitute raw material that, once thoroughly refined, can generate valuable outcomes such as higher revenues or more efficient policies. The collection and analysis of personal data involves the risk of privacy breach, which is traditionally prevented through de-individualisation. In other words, according to traditional privacy protocols, personal data can be collected as long as they are disjointed from the specific person who generated them. Although de-individualisation makes personal anonymisation possible, it cannot guarantee group anonymisation (Dwork, 2006; Zwitter, 2014). This means that de-individualised data still provide information regarding the opinions, habits, and tastes of social groups and population strata, and those data can be employed in a targeted way for specific purposes (from marketing to political campaigning). Big-data technologies have the power to facilitate this process and to provide more and more fine-grained pictures of group characteristics thanks to their ability to enhance hyperconnectivity and identify hidden correlations among data. This aspect raises huge ethical issues and must be considered in the development of innovative and data-informed privacy policies.
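A minimal sketch can illustrate the group-privacy problem just described. The dataset and attribute names below are invented for the purpose of the example: even though no record carries a name or other direct identifier, aggregating de-individualised records by a shared attribute (here, a postcode area) still yields a usable profile of the corresponding group.

```python
from collections import defaultdict

# Hypothetical de-identified records: (postcode_area, purchased_product).
# No individual is identifiable, yet group-level patterns remain visible.
records = [
    ("BS16", "productA"), ("BS16", "productA"), ("BS16", "productB"),
    ("BS1",  "productB"), ("BS1",  "productB"), ("BS1",  "productB"),
]

# Count products per postcode area.
profile = defaultdict(lambda: defaultdict(int))
for area, product in records:
    profile[area][product] += 1

# Report the dominant product for each area: a group-level inference.
for area, counts in profile.items():
    total = sum(counts.values())
    top_product, n = max(counts.items(), key=lambda kv: kv[1])
    print(f"{area}: {n / total:.0%} of de-identified records involve {top_product}")
```

Such group-level inferences are exactly what targeted marketing or political campaigning can exploit, which is why, as noted above, de-individualisation alone cannot guarantee group anonymisation (Dwork, 2006; Zwitter, 2014).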


These and other aspects have substantially informed the design of new models of data access and governance. Traditional models of data access are defensive in nature and anchored in the costs and risks to the data owner. This perspective implies that the primary aim of any data-access strategy is to prevent malicious misuse, which results in the extensive use of worst-case scenarios and protection against hypothetical possibilities (Ritchie and Green, 2016, p. vi). A default-closed strategy underpins these models. Their developers start from the fundamental question “Are we allowed to do this?”, which is defensive insofar as it interprets regulations as a shield. The aforementioned technological advancements in the field of digital, data-driven technologies, and the subsequent emergence of hypernetworked societies widely characterised by distributed moral frameworks, help reveal the inadequacy of default-closed traditional models. Recent analytic reports such as the Data Access Project2 exemplify an alternative approach: “An alternative is to consider the law as one of the tools to be used in designing data strategies; the appropriate question is ‘how do I lawfully achieve what I want?’. This alternative approach, of deciding objectives and studying the legal framework to see how an objective can be achieved, is a key part of the EDRU ethos” (Ritchie and Green, 2016, p. 33). According to its developers, an EDRU approach has the power to substantially modify and improve the way in which data are owned and used (Ritchie, 2014; 2016; Ritchie and Green, 2016).

The aim of this paper is not to provide an exhaustive analysis of the EDRU approach or of any other innovative model of data access. Of all the traits that are central in the EDRU approach, the one that is most relevant here is its focus on the collective and societal aspects and constraints of data access. According to this model, the relevant locus of costs and benefits of data access is no longer the data owner but society, which corresponds to a community of interest between data owners and users. Therefore, the decision making is not grounded on individual agency and responsibility, but (in line with the fundamental assumptions of distributed morality) rather corresponds to a balance of subjective probabilities. On the basis of these premises, this model establishes that data should be made available for research purposes if “the expected benefit to society outweighs the potential loss of privacy for the individual” (Ritchie and Green, 2016, p. v). The most challenging question is how to calculate this benefit. I contend that a solidarity-based approach would offer a decisive contribution to overcoming this challenge. In section 3, I characterised solidarity as a principle that has the power to foster collective action and risk taking and that can therefore be conducive to morally good outcomes. This makes solidarity a moral enabler in an infraethical environment. For the same reason, solidarity can also be seen as a valuable tool to identify what societal benefits justify individual risk taking. In other words, analysing the solidarity relations among the members of a society might help determine their willingness to shoulder burdens and take risks (for example, to partially renounce their privacy by granting broader data access) in order to benefit their community of interest. In the next section, I will briefly point to a potential case study that might highlight this mechanism.
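As a rough illustration of the calculus this principle implies, the sketch below recasts the quoted rule as a comparison of expected values. The function, the numbers, and above all the solidarity weighting are my own hypothetical additions rather than part of Ritchie and Green’s model: the weight simply stands in for a community’s measured willingness to accept individual risk for the sake of a collective benefit.

```python
def grant_access(p_benefit: float, benefit: float,
                 p_breach: float, privacy_loss: float,
                 solidarity: float = 1.0) -> bool:
    """Release data if the expected societal benefit outweighs the expected,
    tolerance-adjusted privacy loss.

    solidarity > 1 models a community more willing to shoulder individual
    risk for collective gain; solidarity < 1 models the opposite.
    (Illustrative sketch only; all parameters are hypothetical.)
    """
    expected_benefit = p_benefit * benefit
    expected_loss = (p_breach * privacy_loss) / solidarity
    return expected_benefit > expected_loss

# Example: a modest but likely societal benefit, a small breach risk,
# and a relatively solidaristic community of interest.
print(grant_access(p_benefit=0.6, benefit=10, p_breach=0.05, privacy_loss=50, solidarity=1.5))  # True
```

On this toy reading, empirical work on solidarity relations within a community of interest would inform the value of the weighting parameter, rather than replace the evidence-based assessment of benefits and risks that the EDRU approach already requires.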

5. Data Access and the Public Good

In the middle of a global pandemic, the major impact of data access on public health has become one of the most discussed topics, not just among experts but in public discourse more generally. This discussion essentially concerns the potentialities and limits of accessing and sharing health data in order to track confirmed cases of Covid-19 and prevent further epidemic outbreaks. This ongoing crisis is a perfect example of how solidarity practices might successfully be implemented in the governance of data access. Appeals to solidarity have emerged in various scientific reports on the pandemic. The European Group on Ethics in Science and New Technologies (2020) issued a statement on 2 April 2020 in which they claimed that “this pandemic should be seized, not as an opportunity but as a call, to foster solidarity at the European and global level. This must manifest itself in concrete actions such as the honest sharing and pooling of information, experiences, innovations and resources”. Barbara Prainsack (one of the authors of the aforementioned statement), discussing the difficulties of raising solidarity in times of pandemic, has recently claimed that “rather than only celebrating solidarity where we see it happen, we need to build institutions and circumstances that can make solidarity stable and lasting” (Prainsack, 2020; see on this also Wagenaar & Prainsack, 2020). These institutions include public infrastructures, solidaristic healthcare, and fair taxation systems. Implementing a solidarity-based healthcare system depends on the willingness of data owners to share data in order to help trace the contagion and allow public health institutions to take countermeasures that might lead to clear public goods and societal benefits. Several analyses of the importance of a solidarity-based approach to biomedicine and public health precede the pandemic and highlight new areas of application for this concept (for example Prainsack and Buyx, 2011; 2017). What is still lacking is a more general account of the potentialities of this solidarity-based approach—whereby solidarity is understood as an infrastructural moral enabler in today’s society—in the broader field of data access. This paper provides a first step in this direction.


Bibliography

Arbuckle, L. & Ritchie, F. (2019). The Five Safes of Risk-Based Anonymization. IEEE Security and Privacy, 17, 5, 84–89. doi:10.1109/MSEC.2019.2929282;

Bayertz, K. (1999). Four Uses of ‘Solidarity’. In K. Bayertz (Ed.), Solidarity. Dordrecht: Kluwer;

Brunkhorst, H. (2005). Solidarity: From Civic Friendship to a Global Legal Community. Cambridge, MA: MIT Press;

Bynum, T. (2011). Computer and Information Ethics. In The Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/archives/spr2011/entries/ethics-computer/;

Chopra, S. & Dexter, S. (2008). Decoding Liberation: The Promise of Free and Open Source Software. New York: Routledge;

Dwork, C. (2006). Differential Privacy. In M. Bugliesi, B. Preneel, V. Sassone & I. Wegener (Eds.), Automata, Languages and Programming. ICALP 2006. Lecture Notes in Computer Science, vol. 4052. Berlin-Heidelberg: Springer;

European Group on Ethics in Science and New Technologies (2020). Statement on European Solidarity and the Protection of Fundamental Rights in the COVID-19 Pandemic, 2 April 2020. Retrieved from https://ec.europa.eu/info/sites/info/files/research_and_innovation/ege/ec_rtd_ege-statement-covid-19.pdf;

Floridi, L. (2005). The Ontological Interpretation of Informational Privacy. Ethics and Information Technology, 7, 4, 185–200. doi:10.1007/s10676-006-0001-7;

Floridi, L. (2006). Four Challenges for a Theory of Informational Privacy. Ethics and Information Technology, 8, 3, 109–119. doi:10.1007/s10676-006-9121-3;

Floridi, L. (2013a). Distributed Morality in an Information Society. Science and Engineering Ethics, 19, 3, 727–742. doi:10.1007/s11948-012-9413-4;

Floridi, L. (2013b). The Ethics of Information. Oxford: Oxford University Press;

Floridi, L. (2017). Infraethics—on the Conditions of Possibility of Morality. Philosophy & Technology, 30, 4, 391–394. doi:10.1007/s13347-017-0291-1;

Floridi, L. & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14, 3, 349–379. doi:10.1023/B:MIND.0000035461.63578.9d;

Heersmink, R. (2017). Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems. Science and Engineering Ethics, 23, 2, 431–448. doi:10.1007/s11948-016-9802-1;

Hess, K., Igneski, V. & Isaacs, T. (2018). Collectivity: Ontology, Ethics, and Social Justice. London: Rowman and Littlefield International;

Honneth, A. (1995). The Struggle for Recognition: The Moral Grammar of Social Conflicts. Cambridge: Polity Press;

Isaacs, T. (2011). Moral Responsibility in Collective Contexts. Oxford: Oxford University Press;

Isaacs, T. & Vernon, R. (2011). Accountability for Collective Wrongdoing. Cambridge: Cambridge University Press;

Meacham, D. & Tava, F. (2021). The Algorithmic Disruption of Workplace Solidarity: Phenomenology and the Future of Work Question. Philosophy Today, 65, 3, 571–598. doi:10.5840/philtoday2021519408;

Metz, K.H. (1999). Solidarity and History: Institutions and Social Concepts of Solidarity in 19th Century Western Europe. In K. Bayertz (Ed.), Solidarity, pp. 191–207. Dordrecht: Kluwer;

Micheli, M., Blakemore, M., Ponti, M. & Craglia, M. (2018). The Governance of Data in a Digitally Transformed European Society. Second Workshop of the DigiTranScope Project, European Commission, JRC114711. Retrieved from https://ec.europa.eu/jrc/communities/sites/jrccties/files/jrc_digitranscope_report_-_oct_2018_data_governance_workshop_1.pdf;

Prainsack, B. (2020). Solidarity in Times of Pandemic. Democratic Theory, 7, 2, 124–133. doi:10.3167/dt.2020.070215;

Prainsack, B. & Buyx, A. (2011). Solidarity as an Emerging Concept in Bioethics. London: Nuffield Council on Bioethics;

Prainsack, B. & Buyx, A. (2017). Solidarity in Biomedicine and Beyond. Cambridge: Cambridge University Press;

Ritchie, F. (2014). Access to Sensitive Data: Satisfying Objectives Rather than Constraints. Journal of Official Statistics, 30, 3, 533–545. doi:10.2478/JOS-2014-0033;

Ritchie, F. (2016). Can a Change in Attitudes Improve Effective Access to Administrative Data for Research? Working Papers in Economics No. 1607, University of the West of England, Bristol. Retrieved from http://eprints.uwe.ac.uk/29646/1/1607.pdf;

Ritchie, F. & Green, E. (2016). Data Access Project Final Report. Australian Department of Social Services. Retrieved from http://eprints.uwe.ac.uk/31874/1/Data%20Access%20Project%20Final%20Report%20v2.00%20Final%20DSS.pdf;

Scholz, S. (2008). Political Solidarity. University Park, PA: Penn State University Press;


Stjernø, S. (2004). Solidarity in Europe: The History of an Idea. Cambridge: Cambridge University Press;

Taddeo, M. (2009). Defining Trust and e-Trust: Old Theories and New Problems. Journal of Technology and Human Interaction, 5, 2, 23–35. doi:10.4018/jthi.2009040102;

Taddeo, M. (2010). Modelling Trust in Artificial Agents, a First Step toward the Analysis of e-Trust. Minds and Machines, 20, 2, 243–257. doi:10.1007/s11023-010-9201-3;

Turilli, M. & Floridi, L. (2009). The Ethics of Information Transparency. Ethics and Information Technology, 11, 2, 105–112. doi:10.1007/s10676-009-9187-9;

Wagenaar, H. & Prainsack, B. (2020). The New Normal: The World after COVID-19. A Blog in Four Parts. Retrieved from https://medium.com/@hendrik.wagenaar/the-new-normal-the-world-after-covid-19-201189e22545.


Notes

1 By justice and injustice, I do not mean here any specific theory of justice or juridical system but both the perception of such phenomena that individuals and groups may have as part of their lifeworld experience and the meanings and values that they form from this perception.

2 This report was prepared for the Australian Department of Social Services by Elizabeth Green and Felix Ritchie of Bristol Economic Analysis at the University of the West of England, Bristol.


References

Bibliographical reference

Francesco Tava, “Solidarity and Data Access: Challenges and Potentialities”, Phenomenology and Mind, 20 | 2021, 118–126.

Electronic reference

Francesco Tava, “Solidarity and Data Access: Challenges and Potentialities”, Phenomenology and Mind [Online], 20 | 2021, online since 01 May 2022. URL: http://journals.openedition.org/phenomenology/450


About the author

Francesco Tava

University of the West of England, Bristol (UK) – francesco.tava@uwe.ac.uk


Copyright

CC-BY-4.0

The text only may be used under licence CC BY 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
