Sometimes a Gambler Will Do

A customer recently asked me if I thought AI was going to change the direction of the insurance industry, and they were surprised when I said "Not any time soon." When we dug a little deeper, I realized they didn't understand the difference between deterministic and probabilistic approaches and their inherent strengths and weaknesses. I worry that many people who are excited about AI may not know the difference either.

Deterministic programming using a pre-defined algorithm is limited to problems within the boundaries of that algorithm, but it will reliably return the correct answer, and the same answer, every time it is queried.
When a customer goes to your app to ask what premium is due, they want a definite answer, and the insurance department wants to know you can show your math.
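Here is a minimal sketch of what I mean, in Python. The rate table and factors are invented for illustration, but the point is that the algorithm's boundaries, and its math, are fully visible:

```python
# Deterministic rating: a fixed algorithm over a fixed table.
# The rate table and factors below are invented for illustration.
BASE_RATE = {"auto": 500.00, "home": 800.00}
AGE_FACTOR = {"under_25": 1.40, "25_to_65": 1.00, "over_65": 1.15}

def premium_due(product: str, age_band: str) -> float:
    """Same inputs always produce the same premium -- and the
    'math' is right here for a regulator to audit."""
    return round(BASE_RATE[product] * AGE_FACTOR[age_band], 2)

# Query it a thousand times; the answer never changes.
assert premium_due("auto", "under_25") == 700.00
```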

Probabilistic approaches using machine learning have much wider boundaries and can solve problems not conceived of by the designer, but there is always a level of uncertainty with any answer. That makes them a poor choice for calculating an insurance bill, but a good assistant to an actuary doing pricing research, collating lots of data to price a new insurance product. They can also help you think outside the box when brainstorming, as long as you don't take the answer as necessarily correct or ask it to show its math.
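Contrast that with a toy probabilistic estimate, again in Python with invented numbers: it yields a useful point estimate with an uncertainty band, and a slightly different answer on every run.

```python
import random
import statistics

# Probabilistic sketch: estimating expected claim cost for a new
# product from a small, noisy sample. All numbers are invented.

def simulated_claims(n: int = 200) -> list[float]:
    # Stand-in for messy historical data the actuary collated.
    # No fixed seed, so each run draws a different sample.
    return [max(0.0, random.gauss(1200.0, 400.0)) for _ in range(n)]

claims = simulated_claims()
estimate = statistics.mean(claims)
std_error = statistics.stdev(claims) / len(claims) ** 0.5

# A point estimate plus uncertainty -- useful for research,
# not something you'd print on a customer's bill.
print(f"expected claim cost ≈ ${estimate:,.0f} ± ${1.96 * std_error:,.0f}")
```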

If you believe the press, AI hallucinations have been largely mitigated, but I recently found two examples of AI veracity issues. The first was a set of AI-generated meeting minutes that assigned imaginary tasks to imaginary people in the session. This was easily caught by the meeting's owner, who then had to read the summary and compare it against their own notes.

The second was a journalist using the latest Gemini version to research an article; when she fact-checked it, she found it had fabricated cases that backed her position. When she asked where Gemini got these cases, it replied, "I made them up." At least it's honest when challenged, but for a reporter or an attorney, where precise language and reputation are critical, fact-checking your tool could be more risk and work than doing your own research.

I'm not saying we need to stop using AI, but I am saying that, like a lot of the tools we use, AI still requires supervision and critical thinking.

Sometimes you need a scientist, sometimes a gambler will do.
