Safeguarding of AI algorithms
How safe is artificial intelligence (AI)?
The IABG innovation project safeAI develops solutions for safeguarding artificial intelligence, taking into account both the state of scientific research and current developments in national and international standardisation bodies. These developments are combined with many years of experience with certification processes in the aviation, automotive and rail industries.
The challenge – safeguarding artificial intelligence
The increasing use of AI in safety-critical applications, such as driving a vehicle, landing a drone, automated damage detection, infrastructure monitoring or condition-based maintenance, requires a quantitative reliability assessment.
However, due to the complexity of artificial neural networks, their decision-making process is no longer analytically transparent. Furthermore, representative training and test data, especially for safety-relevant scenarios, are not available in sufficient quantity.
Suitable evaluation methods are still under research. National and international standards and norms are therefore not yet available, which makes the certification of AI-based systems more difficult.
We make the safeguarding of AI quantitatively measurable
- With safeAI we are developing a method that can quantitatively measure the reliability of AI.
- For safeAI, realistic virtual simulation environments, whose properties can be precisely controlled, are used to assess the performance and reliability of algorithms.
- safeAI focuses on reliable image and object recognition.
- Active participation in standardisation committees on the reliability, quality and certification of AI (DIN / ISO / IEC, EUROCAE) ensures that our developments conform to future standards.
- Other IABG activities, such as safeHAD (SOTIF, Safety of the Intended Functionality) and safeHumanFactors, are combined with safeAI to form a framework for safeguarding autonomous systems.
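To illustrate what a quantitative reliability measure on controlled simulation scenarios could look like in its simplest form, the sketch below scores a detector on hypothetical per-scenario results and reports a Wilson score lower confidence bound on the success rate. The scenario names and figures are invented for illustration; this is a minimal sketch of the general idea, not the safeAI method itself.

```python
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a success rate.

    Yields a conservative, quantitative reliability figure: with roughly
    95% confidence (z = 1.96), the true success rate is at least this value.
    """
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (centre - margin) / denom

# Hypothetical per-scenario results from a controlled simulation run:
# (scenario description, correct detections, total test cases)
scenario_results = [
    ("daylight, clear", 4980, 5000),
    ("night, rain",     4510, 5000),
    ("fog, backlight",  4305, 5000),
]

for name, ok, n in scenario_results:
    print(f"{name:18s} rate={ok / n:.3f} lower_bound={wilson_lower_bound(ok, n):.3f}")
```

Reporting a lower confidence bound rather than the raw success rate accounts for the finite number of test cases, which matters when arguing reliability for certification.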
Approach and added value of the safeAI method
- safeAI supports and accompanies the development process in order to fulfil the requirements for safe AI: from the selection and evaluation of training data, through the generation of application-specific test data, to a detailed performance analysis of the implemented algorithm.
- Development of evaluation criteria for AI reliability.
- Certifiability of your AI solutions.
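As a minimal sketch of the kind of per-class performance analysis such an assessment could include (not the actual safeAI tooling), the snippet below computes per-class recall from paired ground-truth labels and predictions; the class names and data are hypothetical.

```python
from collections import defaultdict

def per_class_recall(labels, predictions):
    """Per-class recall from paired ground-truth labels and predictions."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(labels, predictions):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {cls: correct[cls] / total[cls] for cls in total}

# Hypothetical ground truth and model output from a test-data run:
labels      = ["car", "car", "drone", "drone", "drone", "sign"]
predictions = ["car", "sign", "drone", "drone", "car",   "sign"]
print(per_class_recall(labels, predictions))
```

Breaking performance down per class (and, in practice, per scenario) exposes weaknesses that a single aggregate accuracy figure would hide.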