FRAUD DETECTION

Improve fraud detection and optimize fraud sample selection with our powerful screening tool.

OUR UNIQUE VALUE PROPOSITION

  Cloud or API deployment - no software layer

  Anomaly detection and compliance checks

  Alert investigation

  Risk identification using predictive analytics

Our tool runs through all critical claims data, which helps us detect low-incidence (0.001%) events.

We score every insurance claim for signs of fraud.

KEY BENEFITS

  • After you deliver the first dataset (the ETL for the full data set usually takes 10 weeks), you have the first results within 24 hours (in a PoC)
  • In subsequent runs you are fully independent with our cloud solution, with results within 24 hours on each and every dataset
  • Automated & Repeatable Analysis
  • Input New Analytics with Ease
  • Remediation Workflow & Resolution Guidelines
  • KPIs (Root Cause Analysis)
  • No setup cost
  • Fast results
  • No IT layer
  • We do not sell software and a manual
  • We can work on a success-fee basis

MAGIC METRIC

1. TRUE FRAUD/ANOMALY: 68%

2. TRUE NO FRAUD/ANOMALY: 67%

WHY DATA XL IS DIFFERENT

Traditional Approach

Traditionally, insurance companies use statistical models to identify fraudulent claims.

These models have their own disadvantages. First, they use sampling methods to analyze the data, so one or more frauds inevitably go undetected; there is a penalty for not analyzing all the data. Second, they rely on previously known fraud cases, so every time a new type of fraud occurs, insurance companies bear the consequences of its first occurrence. Finally, the traditional method works in silos and is not capable of handling the ever-growing sources of information from different channels and different functions in an integrated way.

Our Main Approach Is Based on Benford's Law

This statistical law states that in many naturally occurring collections of numbers, the leading significant digit is likely to be small. The digit 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. If leading digits were distributed uniformly, each would occur about 11.1% of the time. Benford's law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.
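The frequencies above follow directly from the law, which gives P(d) = log10(1 + 1/d) for leading digit d. A minimal check in Python:

```python
import math

def benford_prob(d: int) -> float:
    """Benford's law: probability that the leading significant digit is d."""
    return math.log10(1 + 1 / d)

probs = {d: benford_prob(d) for d in range(1, 10)}
print(f"P(1) = {probs[1]:.3f}")  # 0.301 (about 30%)
print(f"P(9) = {probs[9]:.3f}")  # 0.046 (under 5%)
print(f"sum  = {sum(probs.values()):.3f}")  # 1.000 (the nine digits exhaust all cases)
```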


We apply this rule to the claims dataset to find unusual digit patterns, narrowing the list of possibly anomalous items and making the whole audit process more manageable.
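One way to implement such a screen is to compare a dataset's observed leading-digit frequencies against Benford's expectation with a chi-square-style statistic. This is a minimal sketch: the claim amounts, the geometric "natural" series, and the comparison are illustrative assumptions, not DATA XL's actual pipeline.

```python
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the leading significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(amounts) -> float:
    """Chi-square distance between observed leading digits and Benford's law."""
    n = len(amounts)
    counts = Counter(leading_digit(a) for a in amounts)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        observed = counts.get(d, 0)
        chi2 += (observed - expected) ** 2 / expected
    return chi2

# Fabricated or rounded claim amounts cluster on a few leading digits...
suspicious = [5000, 5200, 5100, 900, 950, 5300] * 50
# ...while multiplicative growth tends to follow Benford's law closely.
natural = [1.35 ** k for k in range(1, 300)]
print(benford_chi_square(suspicious) > benford_chi_square(natural))  # True
```

A higher score means the subset's digit pattern deviates more from Benford's expectation, so it is escalated for review first.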


Our main secret is that we take relevant subsets of a Benford-distributed claims dataset and, through a business lens and bootstrapping, substantially improve our results.
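The bootstrapping step mentioned above can be sketched as resampling a subset with replacement and measuring how consistently its digit distribution deviates from Benford's expectation. The deviation measure and the cutoff below are illustrative assumptions, not the proprietary method:

```python
import math
import random
from collections import Counter

random.seed(0)

# Benford's expected leading-digit frequencies
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x: float) -> int:
    """Leading significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def deviation(amounts) -> float:
    """Mean absolute gap between observed and Benford digit frequencies."""
    n = len(amounts)
    counts = Counter(first_digit(a) for a in amounts)
    return sum(abs(counts.get(d, 0) / n - p) for d, p in BENFORD.items()) / 9

def bootstrap_anomaly_score(subset, n_boot=1000, cutoff=0.02) -> float:
    """Fraction of bootstrap resamples whose deviation exceeds the cutoff.

    cutoff is an illustrative threshold, not a calibrated one.
    """
    hits = 0
    for _ in range(n_boot):
        sample = random.choices(subset, k=len(subset))
        if deviation(sample) > cutoff:
            hits += 1
    return hits / n_boot
```

A subset that scores near 1.0 deviates robustly, not just by sampling luck, which is what lets a reviewer prioritize it.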


We are now pursuing other algorithms to improve our accuracy.
