Algorights: "We want a more democratic artificial intelligence"

20 January 2022

Technology and society. Drets de Ciutadania interviews this collective to talk about the role of society in the development of artificial intelligence.

Drets de Ciutadania has interviewed Algorights, a group of people from different fields and with varied knowledge who believe that civil society must play a key role in the development of artificial intelligence (AI). They pursue this through critical thinking, collaboration and the creation of spaces of care: spaces with and for people.

How and when was Algorights born?

Algorights is a multidisciplinary, diverse and plural space. We debate a wide range of topics in order to learn from and discuss different visions, as a tool for empowering every member and the community itself.

It started as a learning group (it all began with about twenty people doing the “Elements of AI” course together in the summer of 2020). That original group grew to more than 50 people from different disciplines and sectors; we had already been meeting one another in various forums on AI and human rights.

What (or why) are you fighting for?

The goal of the community is to democratise access to AI knowledge, as well as its creation and application, from a human rights perspective.

We aim to ensure that human rights are respected in the development and application of AI technologies. We strongly believe that civil society needs to take part not only in the debate about the creation of these technologies, but also in the decision-making processes around their design, implementation and evaluation.

What exactly is AI?

This is one of the “trick” questions. AI is a line of thought and work that brings together different disciplines such as philosophy, neuroscience, mathematics and computer science. It is about trying to reproduce human intelligence with a machine. That very approach raises several problems, such as the fact that we have no concrete definition of intelligence; the working consensus is the ability to “reason”. But we don’t know for sure how the brain works either!

So we don’t really have AI today. We have machines with very large processing capacities that are able to “learn” very quickly to identify patterns and, based on that processing and pattern identification, make predictions for a given problem.

This also means we do not have an artificial general intelligence capable of doing any task; what we have are AI systems that are experts in one particular domain. For example, the content recommendation system of a video platform takes a lot of data about a person (not only what they have watched, but also when they watched it, how long they watched it, what people around them or people they are connected to have watched, even what they have bought recently!). Based on this, it predicts what they might like and recommends specific content. But these same systems are not able to identify objects in a photograph; they are not designed to do so.
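To make the idea concrete, here is a minimal, hypothetical sketch of this kind of prediction from watch data. It is not any platform's actual algorithm; the user names, video titles and scoring rule are invented purely for illustration: videos watched by users with similar histories are suggested first.

from collections import Counter

# Hypothetical watch histories (invented data): user -> videos they have watched.
watch_history = {
    "ana":   {"cooking_101", "garden_tips", "news_daily"},
    "marc":  {"cooking_101", "garden_tips", "travel_vlog"},
    "joana": {"news_daily", "travel_vlog", "tech_review"},
}

def recommend(user, history, top_n=2):
    """Suggest videos the user has not seen, weighted by how much
    their history overlaps with each other user's history."""
    seen = history[user]
    scores = Counter()
    for other, videos in history.items():
        if other == user:
            continue
        overlap = len(seen & videos)      # crude similarity: number of shared videos
        for video in videos - seen:       # only score videos the user hasn't watched
            scores[video] += overlap
    return [video for video, _ in scores.most_common(top_n)]

print(recommend("ana", watch_history))    # e.g. ['travel_vlog', 'tech_review']

A real system works on the same principle at a vastly larger scale, combining many more signals (viewing times, purchases, social connections) and far more sophisticated models, which is why its predictions can feel so accurate while remaining narrow in scope.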

What do algorithms and AI have to do with human rights?

With the above example we have explained, in a very simple way, how one of the algorithms (which is what AI systems are) that recommend content to us works. We are not always aware that virtually all the digital services we use apply these technologies to one degree or another. They decide which job offers we see, which route we take to get somewhere, which content we are offered… These applications may seem innocuous, and we may even think they improve our interaction and experience with the services we use, but there are cases in which the decisions they make can violate our rights and leave us unprotected and at a disadvantage. There are outrageous examples, such as Facebook’s interference in different elections, or facial recognition algorithms that do not recognise racialised people. At other times, what algorithms do is reproduce and amplify structural social problems, as in the cases of crime prediction or the allocation of public aid.

Do algorithms discriminate and do you have examples in Spain or Catalonia?

There are a variety of cases showing that automated processes risk replicating and amplifying the oppressions and power dynamics of our societies. In Spain we have the example of the Bosco programme, used by the Spanish government to grant the social electricity subsidy. The organisation CIVIO denounced that people entitled to the subsidy were being denied it and took the case to the Transparency Council, but it was refused access to the algorithm’s source code on intellectual property grounds. This opacity is repeated in other algorithms operating in our society, such as Veripol, the police algorithm for identifying false reports, or Viogén, which assesses the risk of repeated assaults on victims of gender violence. To ensure that rights are not violated, it is very important that these systems are audited and that the results are published for transparency and accountability.

You explain on your website that you work to apply a human rights-based approach to AI technologies. What does that mean in practice?

We focus on four aspects: transparency, accountability, social justice and citizen participation. For example, we see that most current attempts to make AI fairer take techno-solutionist approaches: they try to solve social problems through technology, setting aside the structural problems in society that cause them. We advocate broadening and improving citizen participation in the design, conceptualisation, development and evaluation of automated and/or AI-based processes. We want a more democratic AI.

What is the main challenge you face right now?

In a society ruled by the hands of the clock, starting and consolidating a community of people who dedicate part of their time to building a collective space is already a victory in itself. We defend the need not to rush, to be inclusive, and to approach diversity and difference with respect. We want to build proposals to improve our society from a perspective in which care is at the heart of the community. Everyone, whether they contribute a lot or a little, helps make this a real space where different kinds of knowledge meet and from which we can build knowledge collaboratively.
