Artificial intelligence and equality
The Non-Discrimination Act also applies to the use of artificial intelligence. Artificial intelligence systems and algorithmic decision-making are increasingly used, and equality must therefore be given more consideration in their use. The use of artificial intelligence may have discriminatory impacts even if those designing and commissioning the systems did not intend this. At the same time, however, artificial intelligence also offers opportunities for promoting equality.
What makes artificial intelligence potentially discriminatory?
There are many reasons why artificial intelligence, such as automated algorithmic decision-making, can be discriminatory:
- errors and gaps in the data used to train the algorithm
- poorly chosen prediction variables and classification criteria
- an algorithm designed to give weight to a ground for discrimination, such as age, language or gender
For example, algorithms used in facial recognition have been reported to identify white people more accurately, because the artificial intelligence systems in question were trained largely on images of white faces. As a result, dark-skinned people are more often subjected to false suspicions and unnecessary measures on a discriminatory basis.
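To make the mechanism concrete, the following minimal sketch (in Python, with synthetic data and hypothetical group labels; it does not model any real facial recognition system) shows how a group that is underrepresented in the training data can end up with a clearly higher error rate:

```python
# Illustrative sketch with synthetic data: when one group dominates the
# training data, the model learns that group's patterns and makes more
# errors on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a toy group whose true class boundary sits at `shift`."""
    X = rng.normal(0, 1, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > shift).astype(int)
    return X, y

# Group A is heavily overrepresented in the training data.
Xa, ya = make_group(5000, shift=0.0)   # majority group
Xb, yb = make_group(100, shift=1.0)    # underrepresented group, different boundary

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: the error rates diverge.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    Xt, yt = make_group(2000, shift)
    err = (model.predict(Xt) != yt).mean()
    print(f"{name}: error rate {err:.1%}")
```

The model fits the majority group's decision boundary almost perfectly, so the minority group, whose data follows a different pattern, is systematically misclassified more often.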
In addition, even if an artificial intelligence system is not allowed to give direct significance to specific grounds for discrimination, and this information is removed from the data, the system may still find personal information in the data that is strongly associated with a ground for discrimination. For example, information on language and place of residence may in some situations correlate strongly with origin. Thus, by combining personal data, artificial intelligence may indirectly end up producing discriminatory predictions and conclusions without its designers or users intending it.
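One common way to audit for such proxy variables is to test how well the remaining, seemingly neutral features predict the removed protected attribute. The sketch below (synthetic data and hypothetical column names, not any specific organisation's dataset) illustrates the idea:

```python
# Illustrative proxy audit: even after the protected attribute is dropped,
# a correlated "neutral" feature can reconstruct it, so decisions based on
# that feature can remain indirectly discriminatory.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

# Synthetic population: postal district correlates strongly with origin.
origin = rng.integers(0, 2, size=n)                            # protected attribute
district = np.where(rng.random(n) < 0.85, origin, 1 - origin)  # proxy, 85% aligned
income = rng.normal(3.0 + 0.5 * origin, 0.8, size=n)           # weaker correlate

features = pd.DataFrame({"district": district, "income": income})

# The audit: train a probe to predict the protected attribute from the
# "neutral" features alone. High accuracy flags them as potential proxies.
X_train, X_test, y_train, y_test = train_test_split(features, origin, random_state=0)
probe = LogisticRegression().fit(X_train, y_train)
print(f"protected attribute recoverable with {probe.score(X_test, y_test):.0%} accuracy")
```

If the probe recovers the protected attribute far better than chance, simply deleting that attribute from the data has not removed the risk of indirect discrimination.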
It is the responsibility of humans to ensure that artificial intelligence does not cause discrimination
The parties responsible for artificial intelligence systems and those using them, such as public authorities, service providers and employers, are always responsible for ensuring that their activities comply with the Non-Discrimination Act. From the perspective of responsibility for discrimination, it is therefore irrelevant whether a decision is made by a human or by an algorithm.
To prevent and identify discrimination, artificial intelligence applications must be monitored and tested regularly. The equality impacts (how the use of artificial intelligence affects different population groups and members of minorities) should be assessed before an artificial intelligence system is put into use. Transparency of artificial intelligence systems is key to preventing and monitoring discriminatory effects.
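As one way to put regular monitoring into practice, the sketch below compares the rate of favourable decisions across groups and flags large gaps for human review. The data and column names are hypothetical, and the 0.8 cut-off mirrors the "four-fifths" rule of thumb known from US practice; it is a screening heuristic, not a test set by the Non-Discrimination Act:

```python
# Illustrative monitoring sketch: compare favourable-decision rates across
# groups on a regular schedule and flag large gaps for human review.
import pandas as pd

def selection_rates(decisions: pd.DataFrame) -> pd.Series:
    """Share of favourable decisions per group."""
    return decisions.groupby("group")["approved"].mean()

def disparate_impact_ratio(decisions: pd.DataFrame) -> float:
    """Lowest group rate divided by the highest group rate."""
    rates = selection_rates(decisions)
    return rates.min() / rates.max()

# Example run on a hypothetical batch of automated decisions.
log = pd.DataFrame({
    "group":    ["A"] * 6 + ["B"] * 6,
    "approved": [1, 1, 1, 0, 1, 1,   1, 0, 0, 1, 0, 0],
})
ratio = disparate_impact_ratio(log)
print(selection_rates(log))
print(f"disparate impact ratio: {ratio:.2f}" + ("  <- review!" if ratio < 0.8 else ""))
```

A flagged gap is a signal to investigate, not proof of discrimination in itself; the legal assessment always depends on the grounds and justification for the difference in treatment.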
Under the Non-Discrimination Act, public authorities, private actors performing public administrative functions, employers, education providers, organisers of early childhood education and care, and service providers must promote equality. This obligation also applies to the use of artificial intelligence. In other words, an equality impact assessment must be prepared as early as the planning stage of artificial intelligence use. The obligation also extends to cooperation with private actors and to outsourced activities. For example, public authorities must ensure that they do not purchase or use privately designed discriminatory algorithms.
From the perspective of equality and non-discrimination, the essential question is not only whether a specific method or context of using artificial intelligence is discriminatory under the law. Artificial intelligence must also benefit everyone equally. The use of artificial intelligence can reinforce existing power structures and inequality in society. Its impacts must therefore be examined, especially by the authorities, from the broader perspective of promoting equality, rather than solely through the lens of the legal definition of discrimination.
The equality impacts of artificial intelligence must also be considered as part of equality planning.
The Non-Discrimination Ombudsman supervises compliance with non-discrimination provisions in the use of artificial intelligence and algorithms
The Non-Discrimination Ombudsman also supervises compliance with non-discrimination provisions in the use of artificial intelligence and algorithms. If discrimination is suspected in the use of artificial intelligence, a complaint may be submitted to the Ombudsman, or the Ombudsman can investigate the matter on their own initiative.
In 2017, the Non-Discrimination Ombudsman took a case concerning automated decision-making in lending to the National Non-Discrimination and Equality Tribunal. The automated system scored loan applicants on the basis of their place of residence, gender, mother tongue and age. In its decision, the Tribunal concluded that the practice was discriminatory and imposed a substantial conditional fine on the party found to have discriminated.