Publicized warnings about the consequences of filing false insurance claims haven’t changed many fraudsters’ plans to do exactly that—or so it seems. Just do the math: More than 60 percent of insurers saw increases in false insurance claims from 2013 to 2016, according to research released last year by the Coalition Against Insurance Fraud (which expects this figure to increase once again in 2017). All told, the research indicates, fraudulent claims cost insurers a “conservative” $80 billion annually.
But change is occurring when it comes to investigating potentially fraudulent insurance claims. One new approach involves using web-based software tools that harness artificial intelligence (AI) and machine learning to take a deep dive into claimants’ social media activities and online presence. This leaves insurers with legally defensible information they can consider in determining whether their company will show claimants the money—or the door to the prosecutor’s office.
The ABCs of Artificial Intelligence
Traditional investigations of suspicious insurance claims, which require manually trawling through online data and even covertly following claimants in an attempt to discredit their “story,” often fail. The culprit is simple: there aren’t enough hours in the day to get the job done. Hiring more investigators isn’t a viable solution because it adds to insurers’ costs and leaves them with a team of well-paid professionals doing “grunt work”: trawling the Internet for data, including the ever-increasing volume of personal data now being shared on social media.
When AI and machine learning are in the picture, however, insurers can marshal their investigator resources more effectively and at lower cost. Smaller teams of these professionals can focus on areas where information that may signal fraud isn’t clear-cut or easily located, with AI-based social media and web investigation tools as the linchpin.
This is how it all unfolds: Salient data from Facebook, Instagram, and other social media platforms are first extracted to build a timeline of events that may discredit (or support) a claim. The artificial intelligence component cross-references social media profiles and aggregates the data, using algorithms to draw correlations between data elements. The software also automatically looks for clues that indicate fraudulent activities and behavior, clues it has “learned” as more and more information about false claims is codified in the system to supplement pre-programmed fraud indicators.
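The workflow described above can be sketched in simplified form. Everything in this sketch is illustrative: the post structure, the platform names as dictionary keys, and the indicator phrases are assumptions for the example, not the actual implementation of any commercial product.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Post:
    """A simplified stand-in for one captured social media post."""
    platform: str            # e.g. "facebook" or "instagram"
    timestamp: datetime
    location: Optional[str]  # geotag, if the post has one
    text: str

def build_timeline(profiles):
    """Merge posts extracted from several platforms into one
    chronological timeline of the claimant's activity."""
    merged = [post for posts in profiles.values() for post in posts]
    return sorted(merged, key=lambda p: p.timestamp)

# Illustrative pre-programmed fraud-indicator phrases; a deployed system
# would also learn indicators from the outcomes of past investigations.
INDICATOR_PHRASES = {"cash for crash", "easy payout", "staged accident"}

def flag_indicators(timeline):
    """Return posts whose text contains a known fraud-indicator phrase."""
    return [p for p in timeline
            if any(phrase in p.text.lower() for phrase in INDICATOR_PHRASES)]
```

A real pipeline would of course deal with API access, deduplication, and entity resolution across profiles; the sketch only shows the aggregate-then-correlate shape of the approach.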
To illustrate, here are examples of AI-based web and social media investigation tools at work. Suppose a policyholder files a claim stating that he had been badly injured in a car accident on a Tuesday. The claim appears valid until an extraction and cross-indexing of data from the “accident victim’s” Facebook and Instagram accounts reveals that he had gone dancing at a club two days afterward. At this point, the investigator would have reason to question the validity of the claim, and the insurer would have to seriously consider its next course of action.
Alternatively, say the insured party reports that the accident occurred in New York City, but data drawn from his social media put him squarely in Philadelphia on the Tuesday in question. That is more fodder for declining to accept the claim at face value. An investigator might draw the same conclusions manually, but only at considerable expense in time and money, given the volume of personal data now shared and available on social media.
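Both scenarios reduce to the same mechanical check: compare each extracted data point against the claim’s stated time, place, and injury. A minimal sketch, with hypothetical record types and an illustrative activity list chosen for this example:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Claim:
    incident_time: datetime
    incident_city: str
    serious_injury: bool

@dataclass
class Observation:
    """One data point extracted from a claimant's social media activity."""
    timestamp: datetime
    city: Optional[str]
    activity: str

# Illustrative activities inconsistent with a claimed serious injury.
STRENUOUS = {"dancing", "skiing", "basketball"}

def contradictions(claim, observations):
    """Flag observations that conflict with the claim's time, place, or injury."""
    flags = []
    for obs in observations:
        # Location conflict on the day of the incident (the New York vs.
        # Philadelphia scenario).
        if (obs.city is not None and obs.city != claim.incident_city
                and obs.timestamp.date() == claim.incident_time.date()):
            flags.append(f"claimant placed in {obs.city} on the incident date")
        # Strenuous activity soon after an allegedly serious injury (the
        # dancing-two-days-later scenario).
        days_after = (obs.timestamp - claim.incident_time).days
        if claim.serious_injury and obs.activity in STRENUOUS and 0 <= days_after <= 14:
            flags.append(f"{obs.activity} {days_after} days after the alleged injury")
    return flags
```

The flags are leads for a human investigator, not verdicts; a post about dancing may have an innocent explanation that only follow-up can surface.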
Applying artificial intelligence and machine learning to investigations of potentially fraudulent insurance claims has other benefits as well. For instance, many people aren’t shy about using social media or other web platforms to discuss or reveal their intention to engage in illegal activities. Filing false insurance claims for monetary gain is no exception. Through algorithms and searches for other clues, AI-based web and social media investigation tools can identify and bring these plans to light.
Similarly, some fraudsters have begun to participate in so-called “jump-in claims.” In these scenarios, friends or relatives of a person who files a claim for injuries sustained in an accident try to trick the system by stating that they, too, were involved in and injured as a result of that same incident. AI-based web and social media investigation tools can be used to determine whether these additional “victims’” activities and locations support or disprove their claims.
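For a jump-in claim, the core question is simply whether the extra claimant’s own digital footprint places them at the scene. A minimal sketch, assuming geotagged posts have already been extracted as (city, timestamp) pairs; the function name and the six-hour window are arbitrary choices for the example:

```python
from datetime import datetime, timedelta

def supports_presence(geotagged_posts, incident_city, incident_time,
                      window_hours=6):
    """Return True if any geotagged post places this additional claimant
    in the incident city within window_hours of the incident.

    geotagged_posts: list of (city, timestamp) pairs extracted from the
    claimant's social media accounts (an illustrative structure).
    """
    window = timedelta(hours=window_hours)
    return any(city == incident_city and abs(ts - incident_time) <= window
               for city, ts in geotagged_posts)
```

A post placing the person in another city around the time of the accident would not prove fraud on its own, but it tells the investigator exactly where to dig.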
Another benefit of these tools is consistency of claim investigations, even when large volumes of claims are waiting to be processed. A human investigator may inadvertently neglect to look for one or more connections between data points. With AI and machine learning, that oversight can’t happen, because the tools automatically make identical data correlations and check for the same clues on every claim.
These advantages aside, insurers will reap their greatest rewards from a move to AI-based social media and web investigation tools as they continue to charge their most expert investigators with training the systems. The more the systems “learn” about the outcome of suspected false insurance claims, the more effective they will become and the bigger the bang for the technology buck.
This article was written by Mark Williamson, the Co-Founder and CTO of Hanzo, the most trusted platform for legally defensible capture, preservation, and analysis of content for eDiscovery, compliance, and risk. Mark has over twenty years of commercial software-engineering experience. Previously, he wrote one of the first programming editors for Windows, developed AI and educational packages for the Electric Brain Company, and implemented telemetry systems for Williams Formula One. He also served as the technical lead on the British Library Web Archiving program.