“Software will eat the world.” Marc Andreessen famously made this claim in a 2011 Wall Street Journal essay, and he was right: software has changed almost every aspect of industry and business forever.
Insurance was no exception; it was eaten by software long ago. While many like to pretend that insurance is a backwards industry – it is certainly slow to adapt – the truth is that insurance has been using technology, software and even machine learning (albeit in a light form) for a long time.
Actuarial tables are a cornerstone of insurance and a very basic form of machine learning. Take mortality tables, for instance. These tables produce highly specific and shockingly accurate projections of how long people born within a certain period will live. Those projections don’t materialize out of thin air; they are formed and refined by millions of data points and real-life examples fed into the tables and then evaluated at an aggregate level. To oversimplify: if 1,000 people died at age 65, the insurance industry will project that out to the population (apologies to all of you actuaries for such a terrible example).
In this example, all of those ages were entered into software with one specific purpose: to crunch the numbers and predict the age at which John Doe will probably die.
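To make that oversimplified example concrete, here is a minimal sketch of the idea in Python. The data points and the averaging approach are invented for illustration – real actuarial work is far more sophisticated – but it shows how individual observations get aggregated into a population-level projection:

```python
from collections import Counter

# Hypothetical observed ages at death (made-up data points).
observed_deaths = [65] * 1000 + [72] * 800 + [80] * 600 + [88] * 400

def project_expected_age(ages):
    """Aggregate individual data points into a population-level projection."""
    counts = Counter(ages)
    total = sum(counts.values())
    # Expected age at death, weighted by how often each age was observed.
    return sum(age * n for age, n in counts.items()) / total

print(project_expected_age(observed_deaths))  # prints 73.5
```

Feed the same function more (and more varied) observations and the projection refines itself – which is the "lite" machine learning the industry has quietly done for decades.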
Software was designed to do just that: take variables, do the “math” and spit out an answer. Software is an efficient, electronic mousetrap – a systematic way to collect data and process it for a specific, linear purpose. For example, if Bob Smith from Kansas applies for car insurance online, software makes sure the information from his application flows to the correct place.
But there’s a problem with software: it’s incomplete. When software ate the world, it produced an explosion of data. Every day, digital information platforms produce eight times more data than the amount of information held in all of the libraries across the United States. By 2020, data will be created at a pace of 1.7 megabytes every second for every person on earth. Once everyone started using software for all sorts of purposes, people realized it had quietly collected a lot of valuable data that was just sitting there, ignored and unutilized.
Let’s go back to the car insurance example. When Bob Smith and millions of others enter information on a car insurance application, that information has historically been used to create and bind a policy – and then it gets ignored. The insurance company owns all of this information, but existing solutions and software haven’t given companies the keys to unlock insights from it.
Machine learning not only lets companies harvest that ancillary data; it provides the tools to aggregate and process the data, turning it from raw form into insightful, useful information.
Google is one of the best examples of a business model built on transforming software usage into business insights through machine learning. Google created Gmail so that your email – the data you type – could be turned into actionable insights. Google provides a free, convenient service and, as a happy (for Google) byproduct, gets to use a vast amount of your data, harvested from your emails and user habits. The software does its job (letting you email mom), while the machine learning component takes notes that you’ll be meeting her and dad next weekend for Mexican food. And voilà: when next weekend comes around, you get served an ad for a Mexican restaurant.
The insurance industry should pay attention. Insurance companies house an overwhelming amount of raw data. Unfortunately, for many companies, it’s just stuck. Bob Smith’s car insurance application resulted in car insurance, but what about other uses for the information on Bob’s application? What if there are a million applications like Bob’s stuck in a database somewhere, full of valuable information? Doesn’t anyone want to use it to chart trends? What if all of that information could be used to predict and prevent fraud, or to provide more cost-effective products?
This is the part where machine learning will eat software and the data contained in these applications.
Right now, a handful of insurance technology systems dominate the market, and they all rely on the old way of doing software: data goes in, gets chewed up, and an answer is spit out (leaving behind all sorts of valuable information). That old model was built simply to drive efficiency and complete workflows. In reality, though, it is inefficient: with no machine learning component at its core, it leaves vast amounts of valuable data untouched, unable to harvest or utilize that information for purposes beyond the original intent.
New solutions are emerging (ahem, including RiskGenius) that deliver efficiency and completed workflows through machine learning. I came across a great example recently called Digital Genius (no relation to RiskGenius). The data scientists behind Digital Genius have figured out how to take millions of chat logs from customer support software (like Salesforce) and turn the data into robo-customer support.
Here’s another self-serving example: RiskGenius first breaks an insurance policy down into categorized clauses and buckets of information. That information is then organized and nicely packaged for delivery to insurance professionals, who receive it in a form that’s manageable and easy to work with for efficient review in our software application.
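As an illustration only – this is not RiskGenius’s actual method, and the category names and keywords below are invented – a naive version of clause bucketing could look like simple keyword matching in Python, with a trained classifier standing in where the keywords are:

```python
# Hypothetical clause categories and trigger phrases (invented for illustration).
CATEGORIES = {
    "exclusion": ["shall not cover", "excluded", "does not apply"],
    "limit": ["aggregate limit", "per occurrence", "maximum"],
    "definition": ["means", "is defined as"],
}

def bucket_clause(clause):
    """Assign a clause to the first category whose trigger phrase it contains."""
    text = clause.lower()
    for category, phrases in CATEGORIES.items():
        if any(p in text for p in phrases):
            return category
    return "uncategorized"

policy_clauses = [
    "This policy shall not cover losses arising from flood.",
    "'Insured' means the person named in the declarations.",
]
print([bucket_clause(c) for c in policy_clauses])
# prints ['exclusion', 'definition']
```

In practice the keyword lookup would be replaced by a machine learning classifier trained on labeled clauses, but the workflow is the same: raw policy text in, organized buckets out.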
The question remains: will the big software players in the industry recognize the need for machine learning before it’s too late to rebuild and integrate machine learning features, or will they be pushed out of the market by new, nimble and efficient solutions like RiskGenius?
I have my opinion. But that’s for another blog post and the comments section!