The age of digitization pervades almost all areas of life, from the world of work and road traffic to healthcare and communication.
Artificial intelligence is fundamentally changing our society, our economy, and our everyday lives, and it offers great opportunities for how we live together. Current forecasts assume that the number of AI applications will grow exponentially in the next few years. McKinsey, for example, predicts up to $13 trillion in additional global value from artificial intelligence by 2030. At the same time, it is becoming clear that AI applications must be designed carefully so that we can seize the opportunities of AI in harmony with our social values and ideas.
To ensure that developments based on artificial intelligence (AI) are technically, ethically, and legally sound, an interdisciplinary team of experts from the Universities of Bonn and Cologne, led by the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, is developing a test catalog for the certification of AI applications. In a whitepaper, the team presents the fields of action from a philosophical, ethical, legal, and technological point of view.
The certificate is intended to attest to a quality standard that allows technology providers to design AI applications that are verifiable, technically reliable, and ethically acceptable.
"With the certification, we want to help set quality standards for 'AI Made in Europe', ensure that the technology is used responsibly, and promote fair competition between different providers," says Prof. Dr. Stefan Wrobel, director of the Fraunhofer IAIS and professor of computer science at the University of Bonn.
In addition to technical suitability, fundamental philosophical and ethical aspects as well as legal issues must be clarified. To ensure that people always remain at the center of this development, a close exchange between computer science, philosophy, and law is therefore necessary. From this interdisciplinary approach, several fields of action are emerging from an ethical, legal, and technological point of view.
The criteria for certification should be fairness, transparency, autonomy and control, data protection, and security and reliability. Information on the correct use of an AI application should be available, and users should be able to interpret, trace, and reproduce the results produced by the artificial intelligence. Conflicting interests, such as transparency versus the confidentiality of trade secrets, would have to be weighed against each other.
A first version of the test catalog is in the making so that the first AI applications can be certified. The project managers are working closely on this with the Federal Office for Information Security (BSI), which has many years of experience in the development of secure IT. However, since AI is constantly evolving, the test catalog will always be a "living document" that needs to be updated continually. (Source: Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS)
By Daniela La Marca