Christian Agrell


Christian has 10+ years of experience developing machine learning solutions for various high-risk applications. At DNV, he works to ensure that AI-enabled systems are sufficiently safe, reliable, fair, transparent, explainable, and generally trustworthy for the affected stakeholders. Christian holds a Ph.D. in probabilistic machine learning, a field concerned with building AI that can represent uncertainty correctly and say “I don’t know” in situations where accurate predictions cannot be made.



Assurance of AI-enabled systems



As AI plays an increasingly important role in decision-making processes that impact both people and the environment, ensuring that AI is trustworthy and managed responsibly is a priority. For many businesses, building trust in AI is essential to attracting customers, and the upcoming EU AI Act will likely set a de facto global standard for how the use of AI is regulated.


In this talk, I will present DNV’s upcoming recommended practice for assurance of AI-enabled systems. We will see how to build warranted trust in a system containing AI by demonstrating compliance with ethical norms and sufficient capability with respect to technical characteristics such as accuracy, robustness, bias, explainability, and interpretability.