What are the best artificial intelligence certifications?


Artificial intelligence (AI) and learning systems will change our everyday lives and our economy, and they are already doing so today. They help us manage the deluge of digital images, improve vaccines against COVID-19, and navigate while driving. In the future, intelligent systems will be able to optimize traffic, detect diseases, or assist robots in care and rescue operations. Precisely because the possible applications of AI systems are so diverse, it is essential that they are protected against attacks, designed to be ethically sound, and guaranteed to function reliably.

According to a recent survey by Bitkom, a large majority of Germans want secure AI and demand that AI systems be checked particularly thoroughly before they are used. The extent to which the benefits of AI systems can unfold therefore depends on the trust that people place in them. One path to trustworthy artificial intelligence is the certification of AI systems: typically a time-limited confirmation by an independent third party that specified ethical and technical standards, norms, or guidelines are being met. Certification is thus a way of ensuring the quality of AI systems.

AI certification with a sense of proportion

Nevertheless, certification of intelligent software and machines is a tool that should be used with a sense of proportion. Overly strict certification criteria impose excessively high hurdles on developers and users and can stifle innovation. Not every AI application needs to be certified: the majority of AI systems, such as algorithms for identifying spam e-mail, are harmless, but some applications deserve close scrutiny. These certainly include medical assistance systems that make diagnoses and treatment recommendations for patients, as well as software for autonomous vehicles. A central challenge is therefore to agree on a form of certification that guarantees security while preserving our capacity for innovation.

Reading tips:

Only secure AI systems create trust

Autonomous driving - everything about the future of mobility

The Learning Systems platform takes on the challenge of certifying AI. Founded in 2017 by the Federal Ministry of Education and Research (BMBF) and acatech, the National Academy of Science and Engineering, the platform examines how artificial intelligence can be designed in the interests of humans, and what business, society, and politics can contribute to that goal.

The EU Commission's White Paper on Artificial Intelligence provides a good basis for developing certification procedures for AI. The proposals published in early 2020 follow a risk-based approach:

  • High-risk AI systems should be subject to specific requirements, for example with regard to discrimination and data protection.

  • Voluntary labeling should be made possible for AI systems without high risk.

In a statement, the German Federal Government has largely endorsed the EU Commission's approach.

Considering risk potential and application context

The Learning Systems platform extends this approach with the concept of criticality: AI systems are evaluated according to their risk potential in order to determine whether, and to what extent, regulation is necessary. The higher the criticality, the easier it is to justify strong regulation.

Assessing criticality means, on the one hand, taking into account the possible immaterial or physical damage, such as danger to people or their personal rights, as well as other hazards, for instance to the environment. On the other hand, people's options for action must be examined: what means of control do they have, and how easily can an individual avoid the application in question, for example by switching to other products? On this basis, state testing bodies can determine which products or processes need to be certified and which do not. In non-regulated areas, companies can opt for voluntary certification.
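To make the idea concrete, the two dimensions could be combined into a simple score. The following Python sketch is purely illustrative: the scales, the weighting, and the threshold are assumptions for this example, not part of the platform's proposal.

```python
from dataclasses import dataclass

@dataclass
class CriticalityAssessment:
    """Hypothetical two-axis criticality model: damage potential vs. options for action."""
    damage_potential: int    # 0 (none) .. 3 (danger to life, limb, or fundamental rights)
    options_for_action: int  # 0 (user can easily avoid or switch) .. 3 (no control, no alternatives)

    def criticality(self) -> int:
        # The greater the possible damage and the fewer options people have,
        # the higher the criticality and the stronger the case for regulation.
        return self.damage_potential * (self.options_for_action + 1)

    def needs_certification(self, threshold: int = 4) -> bool:
        return self.criticality() >= threshold

# A spam filter: low damage, easy to replace -> voluntary certification at most.
spam_filter = CriticalityAssessment(damage_potential=0, options_for_action=0)
# A medical diagnosis assistant: high damage potential, little patient control.
diagnosis_ai = CriticalityAssessment(damage_potential=3, options_for_action=2)

assert not spam_filter.needs_certification()
assert diagnosis_ai.needs_certification()
```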

The application context is decisive for criticality: the same system can be unproblematic in one context and highly critical in another. A robot vacuum cleaner, for example, can initially be regarded as comparatively unproblematic despite its high degree of autonomy; but if it collects data and makes it available to its manufacturer, the assessment may turn out more critical.

In addition, the question arises as to which criteria certification should be based on. In its impulse paper, the Learning Systems platform developed a catalog of minimum criteria and a catalog of additional criteria. Under the minimum criteria, AI systems should be examined for, among other things, their transparency and verifiability as well as their security and reliability. Unintended consequences for people, other systems, or the environment must be ruled out. Privacy and personal rights must be protected under all circumstances, and equality and freedom from discrimination must be preserved. Furthermore, humans must be able to use the AI system in a self-determined manner.

Additional criteria, in the sense of a “certification plus”, include the human-centeredness and user-friendliness of an AI system, as well as its operability and the deliberate limitation of its functionality. Sustainability could also serve as a further criterion.
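A certification checklist based on these two catalogs could be represented as structured data. The sketch below is one hypothetical encoding: the criteria names follow the article, but the data layout and the pass/fail logic are assumptions.

```python
# Hypothetical encoding of the two criteria catalogs as a simple checklist.
MINIMUM_CRITERIA = [
    "transparency and verifiability",
    "security and reliability",
    "no unintended consequences for people, other systems or the environment",
    "protection of privacy and personal rights",
    "equality and freedom from discrimination",
    "self-determined use by humans",
]

ADDITIONAL_CRITERIA = [  # "certification plus"
    "human-centeredness / user-friendliness",
    "system operability",
    "limitation of system functionality",
    "sustainability",
]

def certify(results: dict[str, bool]) -> str:
    """Return the certification level implied by the audit results."""
    if not all(results.get(c, False) for c in MINIMUM_CRITERIA):
        return "not certifiable"     # every minimum criterion is mandatory
    if all(results.get(c, False) for c in ADDITIONAL_CRITERIA):
        return "certification plus"  # all additional criteria met as well
    return "certified"
```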

Reading tip: Sustainability in companies

Challenge: Dynamics of AI systems

The ability of learning systems to develop continuously and independently poses a particular challenge for certification. Certification takes place before an AI system is used in practice, yet some time after it has been put into operation, the system may no longer meet the certification criteria. AI systems should therefore be recertified at regular intervals.

The question arises, however, of when certification has to be repeated. Having to recertify a system after every update or further development should be avoided. Existing certification schemes for information technology are often too sluggish; as a result, IT systems are sometimes not developed further because the renewed certification this would entail is too time-consuming. Learning AI systems, moreover, change not only through updates. A good AI certificate must take these dynamics into account and retain its validity as the technology progresses. Recertification could, for example, be limited to individual parts or modules.
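One way to implement module-level recertification would be to record a fingerprint of each certified component and re-examine only those whose fingerprints have changed. This is a minimal sketch under that assumption; the module names, file paths, and record format are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Content hash of a module artifact (e.g. model weights or source files)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def modules_to_recertify(modules: dict[str, Path], record_file: Path) -> list[str]:
    """Compare current fingerprints against those stored at the last certification."""
    certified = json.loads(record_file.read_text()) if record_file.exists() else {}
    return [name for name, path in modules.items()
            if certified.get(name) != fingerprint(path)]

# Hypothetical layout: only modules changed since the last audit are re-examined.
modules = {
    "perception": Path("models/perception.onnx"),
    "planning": Path("models/planning.onnx"),
}
# changed = modules_to_recertify(modules, Path("certification_record.json"))
```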

Certification should be a permanently open process, so that it can respond to technological developments. This requires feedback mechanisms that can cope with further development; in addition, the certification criteria themselves must be formulated dynamically. It is also helpful to document the test procedures, so that experience can be accumulated and new trends and developments identified at an early stage.
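Documenting test procedures in a machine-readable form would make it easier to accumulate such experience across recertifications. The record structure below is one possible shape for this, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestRecord:
    """One documented test run, kept so trends can be spotted across recertifications."""
    system: str
    system_version: str
    criterion: str   # e.g. "security and reliability"
    procedure: str   # how the criterion was tested
    passed: bool
    tested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_trail: list[TestRecord] = []
audit_trail.append(TestRecord(
    system="diagnosis-assistant",   # hypothetical system name
    system_version="2.1",
    criterion="freedom from discrimination",
    procedure="bias evaluation on held-out patient cohorts",
    passed=True,
))
```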

Create trust, promote innovation

AI systems can only develop their full benefit for society and individuals if people have confidence in them. The certification of AI systems can therefore help ensure that the opportunities of artificial intelligence are used safely and in the public interest. For this to succeed, certification must keep both ethical and economic requirements in view and must not hinder innovation.

Reviewing the criticality of an AI system enables certification with a sense of proportion, because harmless applications do not need to be certified. Certification must also take into account the dynamic development typical of AI systems. Under these conditions, certification can be a way of ensuring the quality of AI systems and strengthening Germany's capacity for innovation. (bw)