How unfair are LLMs really? Evidence from Anthropic’s Discrim-Eval Dataset
Artificial Intelligence | Benchmark | Blog | Fairness

Fairness is an essential criterion for trustworthy, high-quality AI, whether it's a credit scoring model, a hiring assistant, or a simple chatbot. But what does it mean for an AI to be fair? Fairness has several aspects. First, it means all humans should be treated equally. Stereotypes or any other form of prejudice…

Bias in Legal Trials: The Role of Validaitor in Enhancing Judicial Fairness
Artificial Intelligence | Blog | Use Cases | Validaitor

Legal trials epitomize fairness and justice: everyone is treated equally before the law. However, conscious and unconscious biases can infiltrate the judicial process, affecting outcomes and undermining public trust. With the advent of technology, particularly large language models, there is potential to address these biases, but it comes with its own challenges. The Presence of Bias…

The Hidden Challenges of LLMs: Tackling Hallucinations
Artificial Intelligence | Blog

Large Language Models (LLMs) have recently gained acclaim for their ability to generate highly fluent and coherent responses to user prompts. However, alongside their impressive capabilities come notable flaws. One significant vulnerability is the phenomenon of hallucinations, where the models generate incorrect or misleading information. This issue poses substantial risks, especially in sensitive fields such…

Introduction to how to jailbreak an LLM
Artificial Intelligence | Blog | Security | Validaitor

Detailed instructions on how to build a bomb, hate speech against minorities in the style of Adolf Hitler, or an article explaining why Covid was just made up by the government: such threatening, toxic, or fake content can be generated by AI. To prevent this, some Large Language Model (LLM)…

The Nuances of AI Testing: Learnings from AI red-teaming
AI Act | Artificial Intelligence | Blog

Artificial Intelligence (AI) testing is a complex field that transcends the boundaries of traditional performance testing. While AI developers are well-versed in performance testing due to its prevalence in the educational system, it is crucial to understand that AI encompasses much more than performance alone. In this post, I'd like to list some key principles…