Data science, machine learning, big data…

Actuaries are used to analysing large historical databases to identify the factors influencing the risk borne by the insurance provider. While the number of policyholders remains roughly stable over time, the amount of information available about each of them has increased dramatically over the last decade. Web navigation and the tools of the digital revolution, such as smartphones and wearable devices, provide actuaries with more data than ever before. Mining this growing volume of data and developing innovative techniques to evaluate risks offers a clear competitive advantage.

Besides technical aspects, the availability of such massive data sets also raises interesting ethical questions. Some authors have even predicted the end of insurance markets, arguing that randomness merely reflects a lack of information and would therefore disappear.

 

What Detralytics can do for you

At Detralytics, we firmly believe that these massive databases offer huge opportunities to the insurance industry. Today, premiums often depend on a limited number of easy-to-observe policyholder characteristics and do not account for behavioural aspects, opening the door to statistical discrimination.

Mining these new, massive databases offers great opportunities:

  • In terms of prevention;
  • In terms of matching insurance offers to the policyholders’ real needs;
  • In terms of a customer-centric, multi-product approach.

Of course, this requires new tools, as the actuaries’ favourite GLMs, at least in their standard form, cannot cope with such data volumes.
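
To fix ideas, the following minimal R sketch shows what such a standard GLM looks like; the simulated portfolio, the rating factors (age band, car power) and the 10% base claim frequency are purely illustrative assumptions.

  # Illustrative only: a standard Poisson frequency GLM on a simulated
  # portfolio; rating factors and frequencies are assumptions, not real data.
  set.seed(1)
  n <- 10000
  portfolio <- data.frame(
    age_band  = factor(sample(c("18-30", "31-50", "51+"), n, replace = TRUE)),
    car_power = factor(sample(c("low", "medium", "high"), n, replace = TRUE)),
    exposure  = runif(n, 0.5, 1)
  )
  portfolio$nclaims <- rpois(n, lambda = 0.10 * portfolio$exposure)

  # Classical actuarial formulation: claim counts modelled with a log link,
  # the exposure entering as an offset
  freq_glm <- glm(nclaims ~ age_band + car_power + offset(log(exposure)),
                  family = poisson(link = "log"), data = portfolio)
  summary(freq_glm)

In this standard form the model relies on a handful of hand-picked rating factors, which is precisely the limitation mentioned above.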

 

Detralytics’ experience

Detralytics actuaries have reviewed the methodologies underlying pricing models that rely on machine learning techniques such as regression trees, bagging, random forests and gradient boosting.

We have also developed training courses on these techniques, including random forests and gradient boosting, with R code and applications to real datasets.
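
As a flavour of these trainings, the sketch below fits a random forest and a Poisson gradient boosting model to the simulated portfolio from the GLM example above; the package choices (randomForest, gbm) and the tuning values are illustrative assumptions rather than a recommended production set-up.

  # Illustrative only: tree-based alternatives to the GLM, fitted to the
  # simulated portfolio created in the GLM sketch above.
  library(randomForest)
  library(gbm)

  # Random forest regression on claim counts
  rf_fit <- randomForest(nclaims ~ age_band + car_power,
                         data = portfolio, ntree = 500)

  # Poisson gradient boosting with an exposure offset; shrinkage and depth
  # are illustrative tuning values
  gbm_fit <- gbm(nclaims ~ age_band + car_power + offset(log(exposure)),
                 data = portfolio, distribution = "poisson",
                 n.trees = 1000, interaction.depth = 3, shrinkage = 0.01)

  # Expected claim frequencies per unit of exposure for the portfolio
  head(predict(gbm_fit, newdata = portfolio, n.trees = 1000, type = "response"))

Because they combine many trees, these models capture interactions and non-linear effects automatically, which is what makes them attractive for the richer feature sets discussed above.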