Artificial Intelligence, Data Security and GDPR

August 9, 2018

Among the prominent features of artificial intelligence and machine learning, which are now used across many sectors, the most important is the ability to analyze data far faster than conventional programmatic tools or human beings, and to learn to manipulate data on its own. In recent years, profiling and automated decision-making systems, frequently used in both the public and private sectors, have brought individuals and corporations benefits such as increased productivity and resource savings, but they have also introduced risks. Because of the complex nature of these systems, the decisions they make about individuals often cannot be explained or justified. For example, artificial intelligence can lock a user into a specific category and restrict them to the suggested preferences, reducing their freedom to choose particular products and services, such as books, music or news articles (Article 29 Data Protection Working Party, WP251, p. 5).

The GDPR, which came into force in Europe in May 2018, contains provisions on profiling and automated decision-making intended to prevent them from being used in ways that adversely affect the rights of individuals. Article 4 of the GDPR defines profiling as follows: "Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements." (WP251, p. 6) Profiling is used to make predictions about people from data obtained about them from various sources. In this sense, it can also be seen as an evaluation or classification of individuals based on characteristics such as age, gender, and weight.
Automated decision-making is the ability to make decisions by technological means (such as artificial intelligence) without human involvement. It can be based on any type of data: data provided directly by the individuals concerned (such as responses to a questionnaire); data observed about the individuals (such as location data collected via an application); or derived or inferred data, such as a profile of the individual that has already been created (e.g. a credit score).

There are potentially three ways in which profiling may be used: (i) general profiling; (ii) decision-making based on profiling; and (iii) solely automated decision-making, including profiling (Article 22). The difference between (ii) and (iii) is best demonstrated by two examples in which an individual applies for a loan online: a human decides whether to grant the loan based on a profile produced by purely automated means (ii); an algorithm decides whether the loan is granted and the decision is automatically delivered to the individual, without any meaningful human input (iii). (WP251, p. 8)

The important questions that arise here are: How does the algorithm access this data? Is the source of the data correct? Does the algorithm's decision produce legal effects on the person? Do individuals have rights regarding a decision based on automated processing? What measures should data controllers take in this case? Nowadays, most […]
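The distinction between (ii) and (iii) can be sketched in code. This is a purely illustrative example, with hypothetical function names and a made-up scoring rule, not drawn from the article or from any real lending system: in both scenarios the profile (credit score) is produced automatically, and the only difference is whether a human makes the final decision.

```python
# Illustrative sketch of GDPR categories (ii) and (iii) for an online
# loan application. All names and the scoring rule are hypothetical.

def profile_applicant(income: float, existing_debt: float) -> int:
    """Produce an automated credit score (profiling, used in both scenarios)."""
    score = 600
    score += min(int(income / 1000), 150)        # reward higher income
    score -= min(int(existing_debt / 500), 200)  # penalize existing debt
    return score

def decide_solely_automated(score: int) -> str:
    """(iii) Solely automated decision-making: the algorithm decides and
    the outcome reaches the applicant without meaningful human input."""
    return "approved" if score >= 650 else "rejected"

def decide_with_human_review(score: int, reviewer_approves) -> str:
    """(ii) Decision-making based on profiling: the automated profile
    merely informs a human reviewer, who makes the final call."""
    return "approved" if reviewer_approves(score) else "rejected"

score = profile_applicant(income=45000, existing_debt=10000)
print(decide_solely_automated(score))                       # no human in the loop
print(decide_with_human_review(score, lambda s: s >= 700))  # human sets the bar
```

The structural point is that in (iii) the algorithm's output *is* the decision, whereas in (ii) it is only one input to a human judgment; this is the difference Article 22 turns on.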
