How unfair are LLMs really? Evidence from Anthropic’s Discrim-Eval Dataset

Fairness is an essential criterion for trustworthy, high-quality AI, whether the system is a credit scoring model, a hiring assistant, or a simple chatbot. But what does it mean for an AI to be fair? Fairness has several aspects. First, it means that all people should be treated equally. Stereotypes or any other form of prejudice…

Bias in Legal Trials: The Role of Validaitor in Enhancing Judicial Fairness

Legal trials are meant to epitomize fairness and justice: everyone is treated equally before the law. However, conscious and unconscious biases can infiltrate the judicial process, affecting outcomes and undermining public trust. With the advent of technology, particularly large language models, there’s potential to address these biases, but it comes with its own challenges. The Presence of Bias…

The Hidden Challenges of LLMs: Tackling Hallucinations

Large Language Models (LLMs) have recently gained acclaim for their ability to generate highly fluent and coherent responses to user prompts. However, alongside their impressive capabilities come notable flaws. One significant vulnerability is the phenomenon of hallucinations, where the models generate incorrect or misleading information. This issue poses substantial risks, especially in sensitive fields such…

Introduction to how to jailbreak an LLM

Detailed instructions on how to build a bomb, hate speech against minorities in the style of Adolf Hitler, or an article claiming that Covid was simply made up by the government: such threatening, toxic, or fake content can be generated by AI. To eliminate this, some Large Language Model (LLM)…

The Nuances of AI Testing: Learnings from AI red-teaming

Artificial Intelligence (AI) testing is a complex field that transcends the boundaries of traditional performance testing. While AI developers are well versed in performance testing because of its prevalence in their education, it is crucial to understand that testing AI encompasses much more than performance alone. In this post, I’d like to list some key principles…

Towards Quality Assurance in Machine Learning

I had the chance to attend PyConDE & PyData Berlin this year, where I gave a talk on machine learning (ML) testing and validation. The recording is now available on YouTube, and if you’re interested in how to bring “quality management” into the machine learning pipeline, you may find the talk interesting. I also…

Model Validation and Monitoring: New phases in the ML lifecycle

Validation/testing and monitoring of ML models may have been a luxury in the past, but with the enforcement of regulations on artificial intelligence, they are now indispensable parts of the machine learning pipeline. In the last decade, machine learning (ML) research and practice have come a long way toward establishing a common framework for designing systems and applications…