Among the prominent features of artificial intelligence and machine learning, which are now used across many sectors, the most important is the ability to analyze data far faster than either conventional programmatic tools or human beings, and to learn how to process data on its own.
In recent years, profiling and automated decision-making systems, which are frequently used in both the public and private sectors, have brought individuals and corporations benefits in terms of increased productivity and resource savings, but they have also introduced risks.
Because of their complex nature, the decisions these systems make about individuals often cannot be explained or justified. For example, artificial intelligence can lock a user into a specific category and restrict them to the suggested preferences, reducing their freedom to choose particular products and services, such as books, music or news articles. (Article 29 Data Protection Working Party, WP251, p. 5)
The GDPR, which comes into force in Europe in May 2018, contains provisions on profiling and automated decision-making intended to prevent these techniques from being used in ways that adversely affect the rights of individuals.
The GDPR defines profiling in Article 4 as follows: "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements". (WP251, p. 6)
Profiling is used to make predictions about people from data gathered about them from various sources. From this point of view, it can also be seen as an evaluation or classification of individuals based on characteristics such as age, gender, and weight.
Automated decision-making is the ability to make decisions with technological tools (such as artificial intelligence) without human intervention. Automated decision-making can be based on any type of data.
For example:
- data provided directly by the individuals concerned (such as responses to a questionnaire);
- data observed about the individuals (such as location data collected via an application);
- derived or inferred data, such as a profile of the individual that has already been created (e.g. a credit score).
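As a hypothetical sketch, these three kinds of data might be combined into a single profile like this (all field names and values are invented for illustration):

```python
# Hypothetical sketch: a profile combining provided, observed, and inferred data.
# All field names and values are invented for illustration.

def build_profile(questionnaire, location_events, credit_score):
    """Combine the three kinds of data distinguished above into one profile."""
    return {
        "provided": questionnaire,            # given directly by the individual
        "observed": {                         # collected via an application
            "cities_visited": sorted({e["city"] for e in location_events}),
        },
        "inferred": {"credit_score": credit_score},  # derived by a prior system
    }

profile = build_profile(
    questionnaire={"age": 34, "interests": ["music", "news"]},
    location_events=[{"city": "Berlin"}, {"city": "Munich"}, {"city": "Berlin"}],
    credit_score=710,
)
print(profile["observed"]["cities_visited"])  # ['Berlin', 'Munich']
```

The point of the sketch is that a single profile can mix data of very different provenance, which matters later when assessing its accuracy.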
There are potentially three ways in which profiling may be used:
- (i) general profiling;
- (ii) decision-making based on profiling; and
- (iii) solely automated decision-making, including profiling (Article 22)
The difference between (ii) and (iii) is best demonstrated by the following two examples where an individual applies for a loan online: a human decides whether to agree the loan based on a profile produced by purely automated means (ii); an algorithm decides whether the loan is agreed and the decision is automatically delivered to the individual, without any meaningful human input (iii). (WP251, p. 8)
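The two paths can be sketched in a few lines of Python (the risk score, threshold, and function names are invented for illustration; a real credit system is far more involved):

```python
# Hypothetical sketch of the loan example: the same automated profile can feed
# either a human decision (ii) or a solely automated decision (iii).
# The score, threshold, and names are invented for illustration.

def automated_profile(score: int) -> str:
    """Purely automated profiling step: classify the applicant."""
    return "low_risk" if score >= 650 else "high_risk"

def decide_loan(score: int, human_review) -> bool:
    profile = automated_profile(score)
    if human_review is not None:
        # (ii) decision-making based on profiling: a human makes the final call
        return human_review(profile)
    # (iii) solely automated decision-making, without meaningful human input
    return profile == "low_risk"

# (ii): a loan officer sees the automated profile but may still approve
print(decide_loan(620, human_review=lambda profile: True))  # True
# (iii): the algorithm alone decides, and the answer is delivered automatically
print(decide_loan(620, human_review=None))                  # False
```

Only the second call falls under Article 22, because no meaningful human input intervenes between the profile and the decision.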
The important questions that arise here are:
- How does the algorithm access this data?
- Is the source of data correct?
- Does the algorithm's decision produce legal effects for the person?
- Do individuals have rights concerning decisions based on automated processing?
- What measures should data controllers take in this case?
Nowadays, most companies are able to analyze their customers' behavior by collecting their data. For example, through automated decision-making, an insurance company can set insurance premiums by tracking a driver's driving behavior. In addition, profiling and automated decision-making systems, especially in advertising and marketing applications, can have significant effects on other individuals.
Hypothetically, a credit card company could reduce a customer's card limit not on the basis of the customer's own payment history, but by analyzing other customers who live in the same area and shop at the same stores. In other words, based on the actions of others, an individual is deprived of an opportunity.
Data controllers will be held responsible for mistakes
It is important to note that mistakes or biases in collected or shared data may lead to evaluations based on incorrect classifications and unreliable outcomes in the automated decision-making process, with negative effects on the individual.
Decisions can be based on out-of-date data, or externally sourced data can be misinterpreted by the system. If the data used for automated decision-making is incorrect, the resulting decision or profile will be incorrect as well.
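One minimal safeguard a controller might implement is a freshness check that blocks automated decisions on out-of-date records; this is an illustrative sketch, and the 180-day window is an invented example threshold:

```python
# Hypothetical sketch: refuse to make an automated decision on stale data.
# The 180-day freshness window is an invented example threshold.
from datetime import date, timedelta

MAX_AGE = timedelta(days=180)

def decision_allowed(record_date: date, today: date) -> bool:
    """Only decide on data recent enough to be presumed accurate."""
    return today - record_date <= MAX_AGE

today = date(2018, 5, 25)
print(decision_allowed(date(2018, 3, 1), today))  # True: data is fresh
print(decision_allowed(date(2017, 1, 1), today))  # False: out-of-date data
```

A real system would also need to verify the data at its source, not just its age, but a staleness gate of this kind is a cheap first line of defense.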
In the face of possible mistakes in systems using artificial intelligence and machine learning, certain obligations fall on "data controllers". The data controller should take adequate measures to ensure that the data used, whether collected directly or obtained indirectly, are accurate and up to date.
In addition, the data controller should reconsider long data-retention periods, since keeping data for a long time can conflict with accuracy and currency, as well as with proportionality. Another important issue is the processing and use of special categories of personal data in these systems. The GDPR requires explicit consent for the processing of special categories of personal data.
However, the data controller should remember that profiling can create special categories of personal data by inference from, or by combining, data that does not itself belong to a special category. For example, a person's health status may be inferred from food purchase records combined with data on food quality and energy content. (WP251, p. 22)
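This inference risk can be sketched as follows; the product names and the simple rule are invented for illustration, but the point stands: the inferred label is health data even though no single purchase record is.

```python
# Hypothetical sketch: profiling can *create* special-category (health) data
# from ordinary purchase records. Product names and the rule are invented.
from typing import Optional

GLUTEN_FREE = {"gluten-free bread", "gluten-free pasta"}

def infer_health_attribute(purchases: list) -> Optional[str]:
    """Infer a possible health condition from food purchase records alone."""
    gf = sum(1 for p in purchases if p in GLUTEN_FREE)
    if purchases and gf / len(purchases) > 0.5:
        # The inferred label is special-category personal data under the GDPR,
        # even though each individual purchase record is not.
        return "possible gluten intolerance"
    return None

print(infer_health_attribute(
    ["gluten-free bread", "gluten-free pasta", "apples"]))  # possible gluten intolerance
print(infer_health_attribute(["apples"]))                   # None
```

A controller running logic like this is processing special-category data and needs a lawful basis for it, regardless of how innocuous the inputs look.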
The GDPR also provides that people affected by automated decision-making have certain rights with respect to the data used. Given the transparency principle underlying the GDPR, under Articles 13 and 14 the data controller should clearly explain to the individual how the profiling or automated decision-making process works.
Profiling may include a predictive element, which increases the risk of error. Input data may be incorrect, irrelevant, or taken out of context. Individuals may want to challenge the validity of the data and the grouping used. Here, under Article 16, the data subject also has the right to rectification.
Similarly, the right to erasure under Article 17 may be claimed by the data subject in this context. If the profiling is based on consent and that consent is subsequently withdrawn, the data controller must delete the data subject's personal data, provided there is no other legal basis for the profiling.
The importance of children's personal data
Another point that needs attention regarding profiling and automated decision-making is the use of children's personal data. Children can be more vulnerable and more easily influenced, especially in online environments. For example, in online games, profiling can be used to target players who are more likely to spend money in the game, as well as to serve them more personalized advertising. Article 22 of the GDPR does not distinguish between processing that concerns children and processing that concerns adults.
Nevertheless, since children are easily influenced by such marketing efforts, the data controller must take appropriate measures for children and ensure that those measures effectively protect children's rights, freedoms, and legitimate interests.
In conclusion, profiling and automated decision-making based on systems such as artificial intelligence and machine learning can have significant consequences for individuals. Data collected in connection with these technologies must be collected with the consent of the persons concerned or on another legal basis.
It is also important that the data subsequently be used only in connection with the purpose for which it was collected. If the system suddenly begins to make unusual decisions, the data controller should take the necessary precautions, including defining what roadmap to follow, and safeguard the rights and freedoms of the persons involved.