Glenn Henriksen argued in his speech that we expect a higher level of precision from machines and computers than we do from humans. Is this fair, or are we forsaking efficiency for no good reason?
By Lars Johannessen, May 12, 2022
Glenn Henriksen, Head of Technology and System Development at Justify, gave us an interesting look into the use of artificial intelligence in processing legal forms and documents. He set the scene by proclaiming that Justify aims to make law accessible to more people: easier to understand, but more importantly, cheaper. The way to do this, according to Glenn, is by scaling without adding more humans to the tasks, and this is where AI comes in. Software and machine learning have the potential to free up much of the time spent reviewing standardized forms and documents. There is, however, a challenge connected to this: humans can make errors and fill out forms incorrectly in an almost infinite number of ways.
This poses a challenge for AI and machine learning, because a document can suddenly contain an error unlike anything the software has seen or trained on. Glenn went on to explain that perhaps this is not the greatest issue after all.
– Ok, so let us say it (the software) discovers an anomaly. We will have it flag that document and send it over to a human who can go through it. The software goes back to processing thousands of documents, and the human resource will go through the few that are flagged, Glenn explained.
This led to the point Glenn was making: we want our software to be without flaw, when we know that humans are not. If we expect AI and machine-learned software to be perfect right out of the gate, we might be let down. Glenn summarized it by saying, “AI is overestimated in the short term, and underestimated in the long term.” AI and machine learning work best over time, with substantial amounts of data to train on, so the best way to get there is to start using them.
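The flag-and-route workflow Glenn described can be sketched in a few lines. This is a toy illustration, not Justify's actual system: the function names, the confidence field, and the 0.9 threshold are all assumptions made for the example.

```python
# Human-in-the-loop routing: process documents automatically, and flag
# anything the model is unsure about for a human reviewer.
# All names and the 0.9 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9

def score_document(doc: dict) -> float:
    """Stand-in for a trained model's confidence score.
    Here we just read a precomputed value for illustration."""
    return doc["model_confidence"]

def route(documents: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split documents into an auto-processed queue and a human-review queue."""
    auto, flagged = [], []
    for doc in documents:
        if score_document(doc) >= CONFIDENCE_THRESHOLD:
            auto.append(doc)      # software handles it
        else:
            flagged.append(doc)   # anomaly: send it to a human
    return auto, flagged

docs = [
    {"id": 1, "model_confidence": 0.98},
    {"id": 2, "model_confidence": 0.42},  # unlike anything seen in training
    {"id": 3, "model_confidence": 0.95},
]
auto, flagged = route(docs)
print([d["id"] for d in auto], [d["id"] for d in flagged])  # [1, 3] [2]
```

The point of the pattern is throughput: the software keeps churning through the bulk of the documents while humans only touch the small flagged minority.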
Day two of AI+ started with two parallel sessions – AI for beginners at Brygga kultursal, and AI for experts at the Institute for Energy Technology (IFE). Ruth Astrid Sæter, the hostess of this year's AI+ conference, opened the session at Brygga kultursal and introduced Karl-Magnus Haugen, CEO at airMont, to the stage for his talk on how simple, stupid data becomes intelligent with AI.
At the same time in IFE's auditorium, Tomas Norlander, Division Director for Digital Systems at IFE, welcomed the audience who joined the AI for experts session and gave a short introduction to his background in AI and to IFE's history in nuclear science and the use of AI. Tomas then invited Alexandre de Oliviera e Sousa, Solutions Manager at Cognite AS, to take the floor for his talk on AI in an industrial context.
The two sessions then continued in parallel until lunch was served. The AI for beginners crowd got to hear Jan Erik Gausdal, senior advisor at Eye-share, talk about “How can AI simplify accounting tasks in your business?”, followed by Lars Vidar Magnusson, Associate Professor at Østfold University College, explaining AI in “Top-down image analysis”. Elisabeth Haugsbø, Head of Data at HUB Ocean, rounded off the session by explaining how “Data grooming for AI” is done.
The audience at AI for experts was treated to a talk on “Trust and AI” by Roberto V. Zicari, affiliated professor at Arcada University of Applied Sciences. He was followed by Jørgen Torgersen, CTO of Railway Robotics, and Christian Svalesen, Data Scientist at BearingPoint, talking about their project “AI for railway maintenance”. Andreas Risvaag, full-stack developer at Heimdall Power, finished the expert session with his talk on “A startup approach to AI for power grid efficiency” and how AI is used to prevent breaks and issues in the power grid.
The post-lunch session was opened by Henrik Fagerholt, Product Manager at Gyldendal Rettsdata, with a talk about “Lawyers, Law Tech and AI”. Henrik was followed by Dr. Inga Strümke, XAI researcher, TEDx speaker, and particle physicist at NTNU, who delivered an entertaining and somewhat unnerving talk on explainable AI. Inga stated that “Machines might be, and probably are, modelling non-human concepts”, and that self-driving cars have been shown to be misled by pieces of tape strategically placed on stop signs, making them read a stop sign as a speed limit sign instead. Following Inga was Glenn Henriksen, focusing on the possibilities of AI in processing legal documents and on regulations for AI.
As the last speaker, Torgeir Andrew Waterhouse, Founder and Partner at Otte, took the stage to deliver his views on “Security and society” and AI. Torgeir gave an enthralling speech on how we could merge democracy, economy, autonomy, and technology into our daily lives. He also pointed out that the need for cyber security has never been greater, and that we as a society need to be more aware of this.
– One in four leaders think that they cannot be exposed to cyber-attacks. One in four leaders needs to leave their job, Torgeir proclaimed.
The day ended with a panel debate consisting of Torgeir Waterhouse, Elisabeth Haugsbø, Henrik Fagerholt, and Dr. Inga Strümke. The panel debated issues related to how AI will affect society and how we as humans should relate to AI and machine-learning software, and answered questions from the audience. A long day of captivating talks, inspiring ideas, and some possibly spine-chilling outcomes of AI marked the end of this year's AI+ conference. We hope to see you again in Halden for next year's conference!