Organisations need reliable evidence of whether an AI system is trustworthy and future-proof, under which conditions it can be deployed, and how uncertainties, errors or operational changes should be managed.
AI Assurance is a structured approach to the assessment, assurance and verification of AI systems in real-world operation. It provides the foundation for making informed decisions about the deployment of AI by creating transparency around risks and demonstrating the trustworthiness of a system in a traceable and verifiable manner.
The focus is not solely on the model itself, but on the actual behaviour of the AI system in interaction with data, processes, people and organisations. We assess AI systems consistently within their specific operational context, combining technical evaluation, organisational integration and regulatory requirements into one integrated overall assessment.
AI Assurance is not a one-off test, but a structured, lifecycle-oriented process. Its goal is to identify risks at an early stage, define clear requirements and systematically demonstrate the trustworthiness of an AI system.
Our approach consists of five key steps:
1. We analyse the operational context, risks and relevant boundary conditions, including regulatory requirements, standards and domain-specific constraints. These are consolidated into one consistent set of requirements.
2. Based on the operational context, we define concrete and verifiable requirements, including metrics, thresholds and release criteria. These are operationalised in a way that makes them both technically testable and organisationally applicable.
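To illustrate what "technically testable" release criteria can look like in practice, the following is a minimal, hypothetical sketch: metric names, thresholds and the `evaluate_release` helper are purely illustrative assumptions, not part of any real assessment toolchain.

```python
# Hypothetical sketch: operationalising release criteria as automated checks.
# All metric names and threshold values below are illustrative assumptions.

RELEASE_CRITERIA = {
    "accuracy": ("min", 0.95),                    # must not fall below 0.95
    "false_negative_rate": ("max", 0.02),         # must not exceed 0.02
    "mean_prediction_uncertainty": ("max", 0.10), # must not exceed 0.10
}

def evaluate_release(metrics: dict) -> tuple:
    """Return (passed, violations): an overall pass/fail flag and a
    human-readable list of criteria that were missed or not reported."""
    violations = []
    for name, (kind, threshold) in RELEASE_CRITERIA.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"{name}: metric not reported")
        elif kind == "min" and value < threshold:
            violations.append(f"{name}: {value} below required minimum {threshold}")
        elif kind == "max" and value > threshold:
            violations.append(f"{name}: {value} above allowed maximum {threshold}")
    return (not violations, violations)
```

A system reporting, say, `{"accuracy": 0.97, "false_negative_rate": 0.01, "mean_prediction_uncertainty": 0.08}` would pass all three checks, while a missing or out-of-range metric would block release with an explicit reason, which is what makes such criteria auditable.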
3. AI systems are systematically assessed, from simulations and controlled test environments to realistic operational scenarios. We deliberately analyse rare and critical situations as well as system behaviour under uncertainty.
4. All results are structured and translated into auditable evidence. At the same time, we support release and alignment processes with internal and external stakeholders. Depending on the outcome, different release decisions and follow-up measures may result.
5. AI Assurance does not end with the initial assessment. Even after release, the system remains under continuous review: we monitor system behaviour, detect changes and perform targeted re-evaluations. This ensures that AI systems remain reliable, traceable and responsibly deployable over the long term.
AI systems are often deployed as part of complex cyber-physical and mechatronic systems. For this reason, we do not consider AI Assurance in isolation, but always in interaction with sensors, software, hardware, people and operational processes.
AI Assurance provides the foundation not only for technically assessing AI systems, but also for making informed decisions about their deployment. Risks become visible at an early stage, requirements remain traceable and evidence becomes robust, even in complex and regulated environments.
To ensure that AI Assurance goes beyond abstract principles, we translate requirements into concrete assessment and verification procedures. The safeAI Kit supports this process as a modular toolbox for technical analysis, evidence generation and monitoring.
The safeAI Kit combines standardised methods, established best practices and proprietary approaches to systematically assess key characteristics such as robustness, uncertainty, explainability and system limitations. Our methodological work is based not only on practical experience, but also on the active development of standards and guidelines. This creates robust and traceable evidence that is both technically sound and suitable for decision-making and audits.
We combine current insights from AI research, standardisation and regulation with many years of experience in testing, analysis and certification processes.
Through our active participation in national, European and international standardisation committees, we contribute to the further development of requirements and assessment approaches for AI systems.
With the initiation of DIN SPEC 92005 on the “Quantification of Uncertainties in Machine Learning”, we have made a concrete contribution in this field. This work also served as the foundation for the ISO/IEC standardisation project ISO/IEC TS 25223 on uncertainties in AI systems. The standard is being developed under IABG leadership and is expected to be completed by early 2028.
You can also read the interview with our standardisation expert Dr Lukas Höhndorf: “Normung macht den AI Act greifbar” (“Standardisation makes the AI Act tangible”; DIN e.V., 11 December 2025).
