VIDEO: The human understanding of machine learning

– We live in a marvellous time where machines are solving our problems, states Inga Strümke, PhD, from the Norwegian University of Science and Technology (NTNU) and the Norwegian Open AI Lab.

– But what I find most interesting is when machines solve problems that we cannot solve ourselves, or when we don’t know what the machines have learned, she continues.

Inga Strümke was one of the speakers at last year’s AI+ conference. From the stage in Halden, she led the audience through some examples of her fascination with AI – and some of the challenges.

– The better we make machines at explaining themselves, the better they will become at justifying themselves. I don’t want to enter a future where machines are excellent at justifying bad decisions.

Magnus Carlsen and SHAP

She also gave a glimpse of the complexity of explainable AI (XAI).

– You cannot test your way to understanding a sizeable machine-learning model because of the Curse of Dimensionality, which refers to a set of problems that arise when working with high-dimensional data.
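To get a feel for why exhaustive testing is hopeless, consider how quickly the input space grows with the number of features. The sketch below is not from Strümke’s talk; it is a minimal illustration assuming, purely for the sake of argument, that each input feature is discretised to just ten values.

```python
# Curse of dimensionality: even a coarse grid of 10 values per input
# feature yields 10**d possible inputs for a model with d features.
for d in (2, 10, 80, 1000):
    print(f"{d:>5} features -> 10^{d} grid points to test exhaustively")

# For scale: the observable universe contains roughly 10^80 atoms, so a
# model with ~80 such features already has more test points than atoms.
```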

Strümke also presented a problem that, although she has been researching it for several years, she considers possibly unsolvable: machines will model non-human concepts.

Chess players like Magnus Carlsen and Garry Kasparov, stop signs and speed limit signs, and SHAP (a mathematical method for explaining the predictions of machine-learning models) also featured in Strümke’s presentation.
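For readers curious about what SHAP looks like in practice, here is a minimal sketch using the open-source `shap` Python package together with a scikit-learn model. The data and model are invented for illustration and were not part of the presentation.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data and model, invented purely for illustration.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 2.0 * X[:, 0] + X[:, 1] - X[:, 2]  # feature 3 is irrelevant

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: each value is one feature's
# contribution to pushing a single prediction away from the baseline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # shape: (5 samples, 4 features)

# Additivity: baseline + per-feature contributions recover the prediction.
reconstructed = explainer.expected_value + shap_values.sum(axis=1)
print(np.allclose(reconstructed, model.predict(X[:5])))  # True
```

The additivity check at the end is the core idea: SHAP decomposes each individual prediction into per-feature contributions that sum exactly back to the model’s output.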

Why attend AI+ 2023?

The AI+ Media Team asked Dr Strümke a few questions after her speech. One of them was “Who would you like to see on stage or in the audience at AI+ 2023?”, and this is what she answered: