Cloud or API deployment - no software layer
Anomaly detection and compliance checks
Risk identification using predictive analytics
Our tool analyzes all critical claims data, which helps it detect low-incidence (0.001%) events
We score every insurance claim to look for signs of fraud
1. TRUE FRAUD/ANOMALY: 68%
2. TRUE NO FRAUD/ANOMALY: 67%
Traditionally, insurance companies use statistical models to identify fraudulent claims.
These models have their own disadvantages. First, they rely on sampling rather than analyzing the full dataset, so some frauds inevitably go undetected; not analyzing all the data carries a real penalty. Second, they depend on previously identified fraud cases, so each time a genuinely new fraud scheme appears, the insurer bears the cost of its first occurrence before the models can catch up. Finally, traditional methods work in silos and are not well suited to integrating the ever-growing sources of information across different channels and business functions.
This statistical law states that in many naturally occurring collections of numbers, the leading significant digit is likely to be small. The digit 1 appears as the leading significant digit about 30% of the time, while 9 appears less than 5% of the time; if leading digits were distributed uniformly, each would occur about 11.1% of the time. Benford's law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.
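These expected frequencies follow directly from the formula P(leading digit = d) = log10(1 + 1/d); a minimal Python sketch:

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in sorted(benford.items()):
    print(f"digit {d}: {p:.1%}")
# digit 1 occurs about 30.1% of the time, digit 9 about 4.6%
```

Note that the nine probabilities sum to exactly 1, since the intervals [log10(d), log10(d+1)) tile [0, 1).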
We apply this rule to the claims dataset to find unusual digit patterns, narrowing the list of potentially anomalous items and making the audit process more manageable.
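The text does not specify how digit patterns are scored. One common way to flag a batch of claim amounts for review is a chi-squared goodness-of-fit statistic comparing observed leading digits with the Benford frequencies; the function names below are illustrative, not the product's actual API:

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    """Return the leading significant digit of a nonzero amount."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_chi2(amounts) -> float:
    """Chi-squared statistic of observed leading digits vs. Benford's law.

    Large values suggest the digit pattern deviates from Benford
    (e.g. exceeds ~15.5, the 5% critical value at 8 degrees of freedom).
    """
    digits = [first_digit(a) for a in amounts if a != 0]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        observed = counts.get(d, 0)
        chi2 += (observed - expected) ** 2 / expected
    return chi2
```

A batch whose statistic exceeds the chosen critical value would go on the shortlist for manual audit, which is how the overall review workload stays manageable.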
Our key differentiator is that we take relevant subsets of a Benford-distributed claims dataset and, by applying a business lens together with bootstrapping, substantially improve our results.
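The exact method is proprietary, so as a generic illustration of the bootstrapping idea only: one could resample a claims subset with replacement and report an interval for a Benford-conformity statistic (here the mean absolute deviation, MAD, of digit frequencies) instead of a single point score, making the flag more robust for small subsets. All names and thresholds below are assumptions for the sketch:

```python
import math
import random
from collections import Counter

# Expected Benford frequencies for leading digits 1-9
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x: float) -> int:
    """Return the leading significant digit of a nonzero amount."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def mad_statistic(amounts) -> float:
    """Mean absolute deviation between observed and Benford digit frequencies."""
    digits = [first_digit(a) for a in amounts if a != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum(abs(counts.get(d, 0) / n - p) for d, p in BENFORD.items()) / 9

def bootstrap_mad(amounts, n_boot=1000, seed=0):
    """Bootstrap the MAD statistic: resample with replacement and
    return an approximate 95% interval rather than a point score."""
    rng = random.Random(seed)
    stats = sorted(
        mad_statistic(rng.choices(amounts, k=len(amounts)))
        for _ in range(n_boot)
    )
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]
```

A subset whose entire bootstrap interval sits above a conformity threshold is a stronger anomaly signal than a single borderline score.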
We are now pursuing other algorithms to improve our accuracy.