We’re just a few days away from Independence Day and what most would consider the height of the summer. When many Americans think of this time of year, their minds are filled with images of backyard barbeques, fireworks displays, dips in the pool and trips to the beach. Those are all great things – but the summer does come with some not-so-great things, as well.
The hot summer months mark the start of hurricane season, which begins in June and runs all the way to the end of November. The hot, dry weather we experience in the summer months creates another problem as well – it can fuel wildfires that damage property and possessions.
Summer’s two distinct and disparate faces probably leave insurers a bit conflicted. While they most likely enjoy a beach vacation and fresh summer produce as much as the next warm-blooded American, they’re invariably glued to the Weather Channel, anxiously waiting for the other shoe to drop. But they don’t have to be.
A new white paper released by Forbes Insights and Pitney Bowes looks at the importance of location data in insurance underwriting. The white paper shows that more accurate location intelligence could not only take the guesswork out of underwriting, but also help ensure that insurance companies are properly assessing risk and – subsequently – pricing their policies accurately.

The impact of poor policy pricing
According to the white paper, location is an essential factor in determining risk, but insurance companies aren’t always getting location right:
“…studies found that around 5% of homeowner policies and a portion of auto policies—as many as 10% when looking at zip-code level data—could be priced incorrectly because of imprecise location data. Crucially, the research discovered that the range of incorrect pricing—in both under- and overpriced premiums—could vary significantly.”
This is compounded by the fact that companies are more likely to underprice their policies than to overprice them. According to the white paper, “…when the underlying data leads to policies that are priced at a lower rate than they should be, not only does it open an insurer up to paying out on a policy it hasn’t received adequate premiums for, but underpriced policies may also end up constituting a larger and larger portion of the overall book. This is essentially adverse selection.”
This is a major issue for insurers: it could leave them underpricing higher-risk policies while losing business on lower-risk policies they’ve overpriced. It also leaves insurers exposed to massive losses should a natural disaster – such as a hurricane or wildfire – ravage an area that carried more risk than they anticipated. In that situation, the affected policies were most likely underpriced, which would result in immense losses for the company.
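The adverse-selection dynamic can be illustrated with a toy calculation. Everything below is hypothetical – the premiums and expected-loss figures are invented for the sketch, not taken from the white paper:

```python
# Hypothetical illustration of adverse selection from mispriced policies.
# All figures are invented for this sketch; none come from the white paper.

policies = [
    # (expected annual loss, premium charged)
    (1200, 1000),  # high-risk home, underpriced by $200/yr
    (1200, 1000),
    (400, 600),    # low-risk home, overpriced; likely to defect to a cheaper insurer
    (400, 600),
]

def expected_margin(book):
    """Total premiums collected minus total expected losses."""
    return sum(premium - loss for loss, premium in book)

# Mixed book: the overpriced low-risk homes subsidize the underpriced high-risk ones.
print(expected_margin(policies))  # 0

# If the overpriced customers leave, only underpriced policies remain
# and the expected margin turns negative.
remaining = [p for p in policies if p[1] < p[0]]
print(expected_margin(remaining))  # -400
```

With these made-up numbers, the book breaks even only as long as the overpriced customers stay; once they shop around and leave, the insurer is left holding exactly the policies it is losing money on.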
How did we get here?
If location is so essential to identifying risk and pricing policies, how did insurance companies allow themselves to continue doing business on bad data? The most obvious reason is that it’s how business has always been done, and it was the best alternative available at the time.
Insurance companies have traditionally derived location data for homeowner policies from interpolated street data. Streets are split into segments of varying length, and every home within a segment is assigned the same risk. But there can be a lot of variation within a single segment.
The more precise method is to use latitude and longitude measured at the center of the parcel, where the house actually sits. Compared side by side, interpolated street data and parcel-level geolocation can yield vastly different results. According to Jay Gentry, insurance practice director at Pitney Bowes, the difference “can be a few feet…or it can be a difference of 500 feet, a mile or more.”
When you put it that way, it’s easy to see how location could impact risk assessments. Two places a mile apart could have vastly different risk profiles. In a natural disaster situation, one mile could be the difference between properties that are total losses and those that survive unscathed.
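To make the contrast concrete, here is a minimal sketch in Python of how the two geocoding approaches can disagree. The coordinates, address ranges, and segment endpoints are all invented for illustration – none of them come from the white paper:

```python
import math

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in feet."""
    r_feet = 20_902_231  # Earth's mean radius, in feet
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r_feet * math.asin(math.sqrt(a))

def interpolate_along_segment(seg_start, seg_end, house_number,
                              range_start, range_end):
    """Estimate a geocode the way interpolated street data does:
    place the address proportionally along the street segment."""
    frac = (house_number - range_start) / (range_end - range_start)
    lat = seg_start[0] + frac * (seg_end[0] - seg_start[0])
    lon = seg_start[1] + frac * (seg_end[1] - seg_start[1])
    return lat, lon

# Hypothetical street segment covering house numbers 100-198
seg_start = (29.7604, -95.3698)
seg_end = (29.7604, -95.3640)
interp = interpolate_along_segment(seg_start, seg_end, 150, 100, 198)

# Hypothetical parcel-centroid geocode for the same house,
# set back from the street on a large lot
parcel = (29.7591, -95.3668)

print(f"offset: {haversine_feet(*interp, *parcel):,.0f} feet")
```

With these made-up coordinates, the interpolated point lands roughly 475 feet from the parcel centroid – squarely in the range Gentry describes, and potentially enough to put a home on the wrong side of a flood zone or fire line.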
So, why is this still a problem? Now that the technology exists to deliver better location awareness and location intelligence, you would think that insurers would be clamoring to embrace it. But that hasn’t been the case. The white paper does a good job of identifying some of the reasons for this, but it ultimately comes down to a few factors. First, insurers may not even know they have a problem. And the ones that do may not know how to approach it.
As Michael Reilly of Accenture explained, “…[insurance] carriers…don’t realize it, don’t realize there is an opportunity for better location [data] and how much that better location [could] actually contribute to their pricing.”
And even if they do realize it, change can be hard.
“The other piece about this is the fact that replacing a geospatial location is not [a matter of] taking one vendor [out] and plugging in a new one, right?” Michael asked. “We do have all these policies that are on the books, and we’ve got to figure out how do we handle that pricing disruption so I don’t lose customers that are underpriced.”
To learn more about the importance of accurate location data in insurance, and for an excellent list of considerations for embracing a location intelligence solution within the enterprise, click HERE to download a complimentary copy of the white paper.