
Security & Fairness Analysis

In finance, machine learning is key to detecting fraudulent transactions and curbing illegal activity such as money laundering. Yet keeping these systems secure against hacking and data poisoning is a major concern. Moreover, bias inherent in historical data can cause these systems to produce false positives, especially for underrepresented groups. This is where Validaitor comes into play, enabling a comprehensive evaluation of the AI systems used in finance.

Validaitor is the #1 platform designed to build trust between AI developers and society.

The Problem

Fraud detection systems are designed to catch as many true positives as possible at the cost of some false positives. Underrepresented groups, such as members of a particular ethnicity or religion, can bear these false positives disproportionately. In addition, so-called adversarial and poisoning attacks raise concerns about the security of fraud detection systems.
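
To make this trade-off concrete, the sketch below computes the false positive rate separately for two demographic groups; a gap between the groups is exactly the disproportionate burden described above. The data, the group labels "A" and "B", and the flagging rates are all illustrative assumptions for this sketch, not Validaitor output.

    # Minimal sketch: per-group false positive rate on synthetic data.
    # All numbers here are hypothetical, chosen only to illustrate the gap.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic ground truth (1 = fraud) and group labels for 10,000 transactions;
    # group "B" stands in for an underrepresented population.
    y_true = rng.binomial(1, 0.02, size=10_000)
    group = rng.choice(["A", "B"], size=10_000, p=[0.9, 0.1])

    # Simulate a model that flags all true fraud but is more trigger-happy
    # on legitimate transactions from group "B".
    flag_prob = np.where(group == "B", 0.08, 0.04)
    y_pred = np.maximum(y_true, rng.binomial(1, flag_prob))

    def false_positive_rate(y_true, y_pred):
        # FPR = legitimate transactions wrongly flagged / all legitimate ones.
        legit = y_true == 0
        return (y_pred[legit] == 1).mean()

    for g in ("A", "B"):
        mask = group == g
        print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.3f}")

Running this prints a clearly higher FPR for group "B", the kind of disparity a fairness audit is meant to surface.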

The Challenge

Bias in AI systems can manifest very subtly. Unfair decision making arises when proxy variables escape the developers' notice: for example, a postal code can stand in for ethnicity even when ethnicity is never used as a feature. Similarly, poisoned data and adversarial examples tend to evade manual scrutiny and inspection. Hence, a structured and systematic approach is needed to mitigate bias and ensure the safety of these systems.
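
One systematic way to surface hidden proxies is to test how well each candidate feature predicts the protected attribute on its own. The sketch below uses synthetic tabular data; the feature names (postal_region, txn_amount) and the idea of screening by AUC are hypothetical illustrations, not Validaitor's method.

    # Minimal sketch: flag potential proxy variables by checking whether a
    # feature predicts a protected attribute that is excluded from the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 5_000
    protected = rng.binomial(1, 0.3, size=n)               # e.g. group membership
    postal_region = protected * 2 + rng.normal(0, 0.5, n)  # correlates with the group
    txn_amount = rng.lognormal(3, 1, n)                    # roughly independent

    for name, feature in [("postal_region", postal_region), ("txn_amount", txn_amount)]:
        auc = cross_val_score(
            LogisticRegression(), feature.reshape(-1, 1), protected,
            cv=5, scoring="roc_auc",
        ).mean()
        # AUC near 0.5 means the feature carries little group information;
        # well above 0.5 flags a candidate proxy worth human review.
        print(f"{name}: AUC for predicting protected attribute = {auc:.2f}")

Here postal_region scores far above 0.5 while txn_amount stays near chance, so only the former would be escalated for review.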

The Solution

Validaitor comes with different forms of bias and adversarial vulnerability tests. Automating these tests with a few clicks enables a systematic evaluation of AI models used in fraud detection. Our platform generates realistic test data with different techniques, tailored to the use case at hand. Based on these test data, Validaitor can run a myriad of tests to sketch a realistic picture of the quality and risk profile of fraud detection models.
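
As one illustration of what an adversarial vulnerability test can look like (a generic FGSM-style sketch, not Validaitor's platform API), the code below perturbs the inputs of a simple fraud classifier and measures how many fraud flags flip to "legitimate". The model, features, and epsilon are all assumptions made for the sketch.

    # Minimal sketch: FGSM-style robustness probe against a linear fraud model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2_000, 5))                  # synthetic feature matrix
    y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)    # synthetic fraud labels

    model = LogisticRegression().fit(X, y)

    # For logistic regression the input gradient of the fraud score is the
    # coefficient vector, so the FGSM step is eps * sign(w), subtracted to
    # push flagged transactions toward the "legitimate" side.
    eps = 0.3
    perturbation = eps * np.sign(model.coef_[0])

    flagged = model.predict(X) == 1
    evaded = model.predict(X[flagged] - perturbation) == 0
    print(f"flags flipped by eps={eps} perturbation: {evaded.mean():.1%}")

A high flip rate at a small epsilon signals that the model's decisions rest on fragile feature directions, which is the kind of finding such a test is meant to expose.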

Get Free Access to the Platform