The use of Artificial Intelligence (AI) must be advanced responsibly and develop new value-added potential for the benefit of society. That’s where the General Data Protection Regulation (GDPR) comes into play, since data is the key ingredient of AI applications. The GDPR influences data legislation worldwide, pushing toward a more regulated data market, and Article 22 deserves particular attention since it generally restricts automated decision-making and profiling.
The dilemma is that AI needs large amounts of data to deliver proper results, while the protection of personal data must be guaranteed at the same time. Evidently, the GDPR is only a first step in this regard; in the future, national laws should specifically regulate the use of personal data for AI—without neglecting data protection.
Currently, China and the USA are seen as the leaders in the use of personal data for AI purposes, which is attributed, among other things, to the large amount of data available and collected. Both countries make the processing of personal data legally easy: massive amounts of data are collected by the U.S. government, and in China by the state through its “social scoring” system.
However, the GDPR forms a reliable legal framework for innovative technologies and applications, including AI. It contains regulations for the protection of individuals when processing personal data as well as for the free movement of such data. The revision of the ePrivacy Regulation is intended to round off this protection concept.
In everyday business life, AI is particularly relevant in connection with machine learning and process optimization. In 2016, for instance, the self-learning program AlphaGo attracted attention after defeating a professional Go world champion, having analyzed millions of board positions. With slight modifications, such a system could also be used outside of games to improve processes in companies.
However, the GDPR introduces the right to data portability as a completely new right, enabling data subjects to request the complete transfer of their personal data to another company. Consequently, the question arises how this right can be implemented in the case of machine learning: a self-learning system cannot completely "transfer" previously used data, because that data has already become the basis of its self-learning processes and remains present in the system even when the output data is transferred. Here it depends on the technical possibilities of the company as well as on concrete guidelines for the implementation of this right. These guidelines must strike a fair balance between the economic interest of companies in processing personal data (likewise protected by the GDPR), the data subject's right to informational self-determination, and the principles of data minimization and transparency.
The same problem arises with the right to deletion, since it can also be difficult for companies to guarantee this right in the case of machine learning. It is therefore necessary to develop technical possibilities and requirements through jurisprudence that protect the right to information and the transparency requirement as well as the economic interests of companies—always considering that AI is based on the use of big data analysis and automated decision-making.
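The tension described above can be sketched in a few lines of Python. This is a deliberately toy illustration, not a real machine-learning system: the "model" is just an average of user values, standing in for learned parameters. It shows why deleting a raw record does not, by itself, remove that record's influence from what the system has already learned.

```python
# Toy illustration: a "model" trained on three users' data.
# The single learned parameter here is just the mean of their values.
records = {"alice": 10.0, "bob": 20.0, "carol": 60.0}

# "Training": the parameter absorbs every record.
model_param = sum(records.values()) / len(records)  # 30.0

# A GDPR deletion request removes bob's raw record ...
del records["bob"]

# ... but the trained parameter still reflects his data.
assert model_param == 30.0  # bob's influence persists

# Honouring the deletion right at the model level would require
# retraining on the remaining data.
retrained_param = sum(records.values()) / len(records)  # 35.0
```

In a real system the "retraining" step can be prohibitively expensive, which is exactly why the article calls for technical solutions and concrete guidelines rather than treating deletion as trivial.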
Another central concern of the GDPR is transparency and the information obligations toward data subjects. If the person concerned agrees to the processing of personal data using AI, the following principles must be observed: in general, the person concerned must be informed about all data subject rights, such as the rights to information, correction, deletion, restriction, objection, and data portability; in addition, the person concerned must be informed of the extent to which decision-making is based exclusively on automatic data processing—especially profiling. In any case, courts and companies must develop feasible solutions.
But while implementing the data subject rights and the non-discrimination requirements of the GDPR in connection with AI can be a challenge, the GDPR offers opportunities, too. The data protection impact assessment it mandates allows the data protection and economic risks of using AI to be evaluated as early as the planning phase (e.g., during software development), and it reveals risks that result from the design and technical defaults of products or applications. This means that the GDPR's requirements of 'privacy by design' and 'privacy by default', i.e., data protection-friendly design and data protection-friendly technical defaults, can also be recognized and used accordingly.
Despite the challenges, especially around data subject rights, automated decision-making, and transparency, the GDPR is a benefit for AI-using industries: data scandals such as those involving the NSA, Facebook, and Cambridge Analytica have shown that user confidence in AI can quickly decline and, with it, the amount of data made available.
Trust in the secure handling of citizens' personal data on the basis of the GDPR can therefore represent a significant competitive advantage in the future. The lead of China and the USA in the sheer amount of available data could be offset by higher-quality data in a market built on greater security.
Just like in real life, we need arbitrators of absolute trustworthiness, since AI has the power to endanger the social order. We have worked hard for the principles and laws by which civilization can succeed in the long term, and if we want to keep this society, we should stay on watch.
Kalliopi Spyridaki, Chief Privacy Strategist, SAS Europe, puts it in a nutshell: “The GDPR and AI are neither friends nor foes. The GDPR does in some cases restrict (or at least complicate) the processing of personal data in an AI context. But it may eventually help create the trust that is necessary for AI acceptance by consumers and governments as we continue to progress toward a fully regulated data market. When all is said and done, GDPR and AI are lifelong partners. Their relationship will mature and solidify as we see more AI and data-specific regulations arising in Europe, and globally.”
By Daniela La Marca