Validaitor is the #1 platform for building trust between AI developers and society.
The Problem
Fraud detection systems are designed to catch as many true positives as possible, at the cost of an increased number of false positives. Underrepresented groups, such as members of an ethnic or religious minority, may be affected by these false positives disproportionately. In addition, so-called adversarial and poisoning attacks raise concerns about the security of fraud detection systems.
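As an illustration (not part of Validaitor's product, and using entirely hypothetical labels and group assignments), the short Python sketch below shows how such a disparity can be quantified by comparing the false positive rate per group:

```python
# Hypothetical example: how false positives are distributed across two groups.
# Labels: 1 = fraud, 0 = legitimate; "group" is a hypothetical demographic attribute.
y_true = [0, 0, 0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [0, 0, 0, 1, 1, 1, 1, 0, 1, 1]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def false_positive_rate(y_true, y_pred):
    """Share of legitimate transactions that were wrongly flagged as fraud."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

for g in sorted(set(group)):
    yt = [t for t, gg in zip(y_true, group) if gg == g]
    yp = [p for p, gg in zip(y_pred, group) if gg == g]
    print(f"group {g}: false positive rate = {false_positive_rate(yt, yp):.2f}")
```

In this toy data, group B's legitimate transactions are flagged three times as often as group A's, even though the overall detection rate looks reasonable.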
The Challenge
Bias in AI systems can manifest itself very subtly. Unfair decision making occurs when proxy variables escape the developers' notice. Similarly, poisoned data and adversarial examples tend to evade manual scrutiny and inspection. Hence, a structured and systematic approach is needed to mitigate bias and ensure the safety of these systems.
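To make the proxy-variable point concrete, here is a minimal sketch, assuming synthetic data and a hypothetical "postal_code_risk" feature, of the kind of systematic check that can surface a hidden proxy for a protected attribute where manual inspection would not:

```python
import numpy as np

# Hypothetical data: "postal_code_risk" looks like a neutral feature, but it was
# derived from regions that correlate strongly with a protected attribute.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)              # 0/1 protected-group membership
postal_code_risk = protected * 0.8 + rng.normal(0, 0.3, size=1000)
transaction_amount = rng.normal(100, 25, size=1000)     # genuinely neutral feature

# A simple systematic check: correlate every candidate feature with the
# protected attribute instead of relying on manual inspection alone.
for name, feature in [("postal_code_risk", postal_code_risk),
                      ("transaction_amount", transaction_amount)]:
    corr = np.corrcoef(feature, protected)[0, 1]
    flag = "possible proxy" if abs(corr) > 0.3 else "ok"
    print(f"{name}: correlation with protected attribute = {corr:+.2f} ({flag})")
```

Real bias testing goes far beyond simple correlations, but even this basic check illustrates why automated, repeatable tests catch what ad-hoc review misses.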
The Solution
VALIDAITOR comes with different forms of bias and adversarial vulnerability tests. Automating these tests with a few clicks enables a systematic evaluation of the AI models used in fraud detection. Our platform generates realistic test data using different techniques, tailored to the use case at hand. Based on this test data, Validaitor can run a myriad of tests to draw a realistic picture of the quality and risk profile of fraud detection models.
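The following is only a generic sketch of what an automated adversarial vulnerability test can look like, not Validaitor's actual implementation: a hypothetical fraud model (a logistic regression on synthetic data) is probed with small FGSM-style input perturbations, and its accuracy before and after is compared.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for a fraud detection model, trained on synthetic features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm(model, X, y, eps=0.2):
    """Perturb each input a small step in the direction that increases the loss."""
    p = model.predict_proba(X)[:, 1]                  # predicted P(fraud)
    grad = (p - y)[:, None] * model.coef_             # d(log-loss)/dx for logistic regression
    return X + eps * np.sign(grad)

X_adv = fgsm(model, X, y)
print(f"accuracy on clean data:       {model.score(X, y):.2f}")
print(f"accuracy under perturbation:  {model.score(X_adv, y):.2f}")
```

A sharp drop in accuracy under such small perturbations is one signal that a fraud model is vulnerable to evasion; an automated test suite runs checks of this kind, along with bias tests, across many attack techniques and data variations.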