Ethical, responsible artificial intelligence for the common good

Illustration © Romualdo Faura

When artificial intelligence is used in scenarios that impact millions of people, governance challenges and ethical dilemmas arise. How can we ensure that algorithmic decisions remain subject to social control? Many of the proposals regarding ethical principles can be grouped under the acronym FATEN: fairness; autonomy, accountability and augmentation; trust and transparency; bEneficence and equity; and non-maleficence.

We live in a time of prosperity, but we also face huge global challenges that put our very existence as a species in jeopardy. We need artificial intelligence (AI), and specifically AI methods based on learning from data, to help us tackle these challenges. The potential is immense.

However, to fulfil this potential we must overcome multiple barriers: difficulties in accessing and analysing valuable data, which is frequently privately owned; a lack of well-defined ethical principles; legal and regulatory constraints; technical challenges; commercial interests not necessarily aligned with the common good; and the sizable carbon footprint of current AI systems.

When AI methods are used in scenarios that affect the lives of millions of people, such as smart cities, major governance challenges and ethical dilemmas arise. How can we be sure that applying AI widely will not bring about negative repercussions for society? Will algorithmic decisions be beyond our control? Who is responsible for such decisions? Will these systems include the security mechanisms needed to prevent cyberattacks? What about the malicious use of data, or the generation of untruthful content to serve commercial, political or criminal interests? These questions are as complex as they are necessary to address, which makes it essential to equip ourselves with ethical principles and governance standards.

In the last decade, many proposals have been published on the ethical principles to be respected if artificial intelligence is widely used, including The Menlo Report;[1] the ethical principles included in the national AI strategies of more than 50 countries around the world; the European Commission’s guidelines for trustworthy artificial intelligence;[2] the OECD principles for the development of artificial intelligence;[3] and various professional proposals, such as those of the Institute of Electrical and Electronics Engineers (IEEE)[4] and the Association for Computing Machinery (ACM).[5] Most of the principles outlined above could be grouped under the acronym FATEN, which is broken down below and is an extension of the four basic principles of medical ethics.[6]

F for fairness

That is, devoid of discrimination. AI systems based on learning from data can discriminate for a number of reasons, including biases in the data used to train the algorithms, an inappropriate choice of algorithm or model, and a biased interpretation of the results. In the last seven years, many high-profile cases of algorithmic discrimination have been made public in areas such as criminal justice,[7] credit granting,[8] human resources and hiring,[9] education,[10] and health care.[11] The discovery and measurement of algorithmic biases and the development of fair machine learning algorithms are fertile areas of research, as illustrated by the recently established ACM FAccT Conference on Fairness, Accountability and Transparency,[12] the ELLIS research programme on human-centric machine learning, and the newly established Institute for Human-Centric Artificial Intelligence,[13] which is one of the 39 units in the ELLIS network of scientific excellence in artificial intelligence.
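
To make the idea of measuring bias concrete, here is a minimal sketch in Python that computes two common group-fairness metrics, the demographic parity difference and the disparate impact ratio, for a binary classifier. The decisions and group labels are invented for illustration; a real audit would use the actual model outputs and protected attributes.

```python
import numpy as np

def demographic_parity(y_pred, group):
    """Compare positive-outcome rates across two groups.

    y_pred: array of 0/1 model decisions (e.g. "grant credit").
    group:  array of 0/1 protected-attribute membership.
    Returns (difference, ratio) of positive rates; a difference
    near 0 and a ratio near 1 suggest parity on this metric.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return rate_a - rate_b, rate_b / rate_a

# Hypothetical decisions for ten applicants from two groups.
decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
diff, ratio = demographic_parity(decisions, groups)
print(f"parity difference: {diff:.2f}, disparate impact ratio: {ratio:.2f}")
```

In US employment law, a disparate impact ratio below 0.8 (the "four-fifths rule") is commonly treated as evidence of adverse impact; the toy ratio of 0.25 above would therefore flag the classifier for review. Demographic parity is, of course, only one of several, sometimes mutually incompatible, definitions of fairness.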

A for autonomy, accountability and intelligence augmentation

The principle of autonomy is at the heart of Western ethics. According to this principle, each person should be free to choose their own thoughts and actions. However, we can build AI computer models of our personalities, interests, tastes, needs, strengths and weaknesses, and behaviours, and these models can be – and probably are – used to subliminally influence our decisions, choices and actions.

The letter A also stands for accountability, i.e. clarity in assigning responsibility for the consequences of using AI methods. Lastly, A stands for intelligence augmentation rather than replacement: AI systems should be used to support and complement human decision-making, not to replace humans.

T for trust and transparency

Trust is a fundamental pillar in our relationships and arises when three conditions are met: competence, the capacity to successfully execute the task undertaken; reliability, understood as competence sustained over time; and honesty and transparency. Hence the T in FATEN also stands for transparency. A data-based decision-making system is transparent when non-experts can easily observe and understand it. Transparent, interpretable and explainable AI models are needed, especially in applications with social impact.
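
As a small sketch of what an interpretable model can look like in practice, the Python snippet below fits a depth-limited decision tree with scikit-learn and prints its decision rules as plain text. The loan-screening features and labels are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [income (k EUR), years employed] -> loan approved?
X = [[20, 1], [35, 4], [50, 10], [28, 2], [60, 8], [22, 0], [45, 6], [30, 3]]
y = [0, 1, 1, 0, 1, 0, 1, 0]

# A depth-2 tree trades some accuracy for rules a non-expert can read.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(model, feature_names=["income", "years_employed"]))
```

Each printed branch is a human-readable if/then rule, which is the kind of inspectability the transparency principle calls for; deep neural networks, by contrast, typically require post-hoc explanation techniques.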

E for bEneficence and equity

The principle of bEneficence is about maximising the positive impact of AI, with sustainability, diversity and veracity as key criteria. We cannot ignore the environmental cost of technological development, particularly in the case of deep learning algorithms, given the large amounts of computation and data they need in order to learn.

Diversity is also of paramount importance. Firstly, in the composition of the teams developing AI systems, to maximise the likelihood of finding innovative solutions to the immense challenges we face and to develop inclusive solutions[14] that are relevant to the communities where they will be deployed. Secondly, we need to incorporate diversity criteria into the algorithms themselves, as sketched below, to minimise the prevalence of filter bubbles and echo chambers, which contribute, at least in part, to the polarisation of public opinion.
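
One concrete way to incorporate diversity criteria into a recommendation algorithm is greedy re-ranking in the style of maximal marginal relevance (MMR), where each newly selected item trades off relevance against similarity to the items already chosen. In the Python sketch below, the items, relevance scores, similarity function and the lam trade-off parameter are all invented assumptions for illustration, not a prescription from the article.

```python
def rerank_with_diversity(items, relevance, similarity, k, lam=0.7):
    """Greedy MMR-style re-ranking.

    items:      candidate item ids.
    relevance:  dict item -> relevance score for this user.
    similarity: function (a, b) -> similarity in [0, 1].
    lam:        1.0 = pure relevance (filter-bubble risk),
                0.0 = pure diversity.
    """
    selected = []
    pool = list(items)
    while pool and len(selected) < k:
        def mmr(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy example: news articles; similarity 1.0 if same topic, else 0.0.
topic = {"a1": "politics", "a2": "politics", "a3": "science", "a4": "sports"}
rel = {"a1": 0.9, "a2": 0.85, "a3": 0.6, "a4": 0.5}
sim = lambda x, y: 1.0 if topic[x] == topic[y] else 0.0
print(rerank_with_diversity(rel.keys(), rel, sim, k=3))
```

With lam=0.7 the toy example returns one article per topic, whereas pure relevance ranking would serve two politics pieces in a row; lowering lam pushes the ranking further toward topical diversity at the cost of raw relevance.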

We must also ensure the veracity of data and content. Today, fake texts, audio, photos and videos can be generated algorithmically with deep generative neural network models (producing so-called deepfakes) that humans cannot distinguish from real content. Such fake content, shared by millions of people, can shape public opinion on issues of great social relevance, such as vaccines or electoral processes.

E also stands for equity. The principles of universal access to knowledge and the democratisation of technology are in jeopardy, due to the extreme dominance of technology giants in the United States (Apple, Amazon, Microsoft, Facebook and Alphabet/Google) and China (Tencent, Alibaba and Baidu). This market dominance leads to data dominance. In fact, most of these tech giants are data companies that earn billions of dollars by analysing and monetising the information they collect about their users. A significant share of valuable human behavioural data that could be used for social good is generated and captured by these tech companies.

Moreover, the 21st century is witnessing a polarisation in the distribution of wealth. According to Credit Suisse's Global Wealth Report, the 100 richest people in the world hold more wealth than the poorest 4 billion. This accumulation in the hands of the very few has been attributed, at least in part, to technology and the Fourth Industrial Revolution. One could argue that data, and especially the capacity to harness and make sense of data, is the asset that generates the most wealth (the data economy). Therefore, if we seek to maximise the positive impact of this abundance of data, we must develop and promote new models of data ownership, management, exploitation and regulation.

N for non-maleficence

That is, minimising the negative impact that could result from the use of AI methods. This principle encompasses prudence in the development of AI-based systems; guarantees of reliability, robustness and reproducibility; data security; and the preservation of privacy. Safety verification and thorough evaluation of AI models, particularly deep neural networks, before their deployment in the real world are imperative.
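
As a deliberately small illustration of the reliability, robustness and reproducibility requirements, the Python sketch below fixes all random seeds and then measures how a trained classifier's accuracy degrades under increasing input noise. The model, synthetic data and noise levels are assumptions made for the example; a real pre-deployment evaluation would be far more extensive.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Reproducibility: fix every source of randomness used below.
SEED = 42
rng = np.random.default_rng(SEED)

X, y = make_classification(n_samples=500, n_features=8, random_state=SEED)
model = LogisticRegression(random_state=SEED, max_iter=1000).fit(X, y)

# Robustness check: accuracy under increasing Gaussian input noise.
for noise in [0.0, 0.1, 0.5, 1.0]:
    X_noisy = X + rng.normal(scale=noise, size=X.shape)
    print(f"noise std {noise:.1f}: accuracy {model.score(X_noisy, y):.3f}")
```

A sharp drop in accuracy at small noise levels is a warning sign that the system may behave unpredictably on real-world inputs; fixing the seeds makes the evaluation itself reproducible, so the same warning can be verified by an external auditor.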

In light of the complex nature of these challenges, bringing together experts from different disciplines is necessary to ensure a project's success. This multidisciplinary composition adds complexity, but it is necessary and particularly beneficial when teams include ethicists alongside technologists.

External monitoring bodies are also necessary to ensure compliance with ethical principles. In recent years, several measures have been proposed: the creation of data stewards, responsible for data quality and governance in each organisation; external ethics oversight committees; and/or the appointment of an advisor with oversight responsibilities to ensure that projects with social impact are aligned with the FATEN principles.

An alternative is to require the use of open processes, code and systems, whether through regulations mandating compliance with ethical principles or by encouraging knowledge sharing. Furthermore, it is essential to understand the cultural and social characteristics of the societies in which projects are rolled out. Working with local institutions and civil society in the countries and regions where projects will be implemented is therefore absolutely necessary.

It is time to take action as a society, collectively defining the ethical and political horizons of AI, because we are talking about science and technology, but also about rights, economy, democracy, equality, inclusion, citizenship, peace and power.

It is time to educate, to learn, to rise above apocalyptic sensationalism, to own our fate, to regulate AI wisely, and to focus on ending the abusive practices and social harm caused by the companies behind the advances in artificial intelligence, which in the last decade have amassed unprecedented power and have contributed to social inequality.

It is time to invest in artificial intelligence that contributes to progress, leaving no one behind and not destroying the planet in the process. To quote Theodore Roosevelt, “a revolution is sometimes necessary”. Since there is no planet B, I invite you to join the AI revolution for and by the people and the planet.

[1] Dittrich, D. and Kenneally, E. The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research. U.S. Department of Homeland Security, 2012.

[2] European Commission. Ethics guidelines for trustworthy AI. European Union, 2019.

[3] OECD. OECD Principles on Artificial Intelligence. OECD, France, 2019.

[4] Institute of Electrical and Electronics Engineers. Ethically Aligned Design. Piscataway, NJ: IEEE, 2017.

[5] Association for Computing Machinery. Code of Ethics and Professional Conduct. ACM, New York, 2018.

[6] Gillon, R. "Medical ethics: four principles plus attention to scope". British Medical Journal. United Kingdom, 1994.

[7] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. Machine Bias: There's software used across the country to predict future criminals and it is biased against blacks. ProPublica, New York, 2016.

[8] Blattner, L. and Nelson, S. How Costly is Noise? Data and Disparities in Consumer Credit. 2021. http://ow.ly/WpF250OkhuO

[9] Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, 2018. http://ow.ly/vRcB50Okhro

[10] Perry, A. M. and Turner, N. AI is coming to schools, and if we're not careful, so will its biases. Brookings, 2019. http://ow.ly/wgXN50OkhsJ

[11] Ledford, H. Millions of black people affected by racial bias in health-care algorithms. Nature, 2019. http://ow.ly/nQiH50Okhti

References

Oliver, N. "Artificial Intelligence for Social Good: The Way Forward", chapter 11 of the SRIP Report 2022. SRIP Report of the European Commission, 664-707. 2022.

Oliver, N. "Governance in the Era of Data-driven Decision-making Algorithms". Women Shaping Global Economic Governance. Center for Economic Policy Research Press (CEPR), 2019.

Geschke, D., Lorenz, J. and Holtz, P. "The triple-filter bubble: Using agent-based modelling to test a meta-theoretical framework for the emergence of filter bubbles and echo chambers". British Journal of Social Psychology, 58, 129-149. 2019.

Shorrocks, A. and Hechler-Fayd'herbe, N. Global Wealth Report 2019: Global wealth rises by 2.6% driven by US & China, despite trade tensions. Credit Suisse, Zurich, 2019.

Verhulst, S. G. "The Three Goals and Five Functions of Data Stewards. Data Stewards: a new Role and Responsibility for an AI and Data Age". Medium and The Data Stewards Network, New York, 2018.
