Fair Adversarial Networks

Remove bias from your data

How Rosa helps

Rosa removes bias from your datasets with outstanding accuracy and fairness. It stops analysis or Machine Learning from producing biased algorithms.

Algorithms replicate human biases around race, gender and other characteristics. Rosa removes these biases accurately and fairly. It is easy to implement, understand and act on.

Enforcing fairness boosts your business's present and future performance. Understand your data now and predict future behaviour better.

I want unbiased and objective data

Get your free trial

How Rosa works

Rosa is a data de-biasing solution. It removes bias from any dataset before it is analysed, leaving no opportunity for the analysis or Machine Learning to produce biased algorithms, while avoiding the negative discrimination that is often so damaging.

Algorithms affect our daily lives more and more. However, instead of being objective arbiters, these algorithms replicate human biases because they are trained on biased data. What should be impartial algorithms become skewed with respect to race, gender and other factors.

Bias with respect to characteristics such as race is illegal under regulations around the world. It also makes automated decision-making suboptimal.

Current solutions tend to replace existing discrimination with negative discrimination, which creates more problems than it solves. Rosa eliminates bias without introducing negative discrimination of its own.
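
The page does not spell out Rosa's internals, but the title points to the general fair-adversarial-network idea: learn a representation of the data from which an adversary cannot recover the protected attribute, then hand that de-biased representation to any downstream analysis. The sketch below is a minimal, purely illustrative version of that idea, assuming a simple encoder/adversary architecture and synthetic data; it is not Rosa's implementation.

```python
# Minimal fair-adversarial-network sketch (illustrative only).
# An encoder learns a representation of the data; an adversary tries to recover
# the protected attribute from it; the encoder is penalised whenever the
# adversary succeeds. The resulting representation can be passed to any
# downstream analysis or Machine Learning.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 8 features, a binary label y, and a binary protected
# attribute s that leaks into both the features and the label.
n = 2000
s = (torch.rand(n) < 0.5).float()
X = torch.randn(n, 8) + 0.5 * s.unsqueeze(1)
y = ((X[:, 0] + 0.8 * s + 0.3 * torch.randn(n)) > 0).float()

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))  # de-biased representation
task_head = nn.Linear(8, 1)                                             # keeps the representation useful
adversary = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))  # tries to recover s

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness/utility trade-off (illustrative value)

for step in range(2000):
    # 1) Train the adversary to predict the protected attribute from the representation.
    z = encoder(X).detach()
    opt_adv.zero_grad()
    adv_loss = bce(adversary(z).squeeze(1), s)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the encoder to stay predictive of y while fooling the adversary.
    opt_main.zero_grad()
    z = encoder(X)
    task_loss = bce(task_head(z).squeeze(1), y)
    leak_loss = bce(adversary(z).squeeze(1), s)
    (task_loss - lam * leak_loss).backward()
    opt_main.step()

# encoder(X) is now a de-biased view of the data for downstream models to use.
```

In this kind of setup the weight lam controls how strongly the representation is pushed away from the protected attribute: raising it removes more bias, usually at some cost to task accuracy.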

How Rosa has helped customers

How Rosa found and reduced racial bias in a criminal risk assessment tool

The challenge

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a 137-question assessment. Its purpose is to determine the likelihood that a convict will reoffend.

It produces a risk score between 1 and 10, with 10 indicating the highest possible risk. However, there is no guidance on how this score translates into an actual likelihood of reoffending.

One indicator of bias is the difference between black and white defendants in the ratio of False Negative Rate to False Positive Rate. Even though overall predictive accuracy is similar across races, the algorithm makes its mistakes in different ways depending on the race of the defendant.
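
As a concrete illustration of that check, the sketch below computes False Positive and False Negative Rates separately for each racial group from a scored dataset. The column names, group labels and the threshold treating scores of 5 or more as "high risk" are assumptions made for illustration, not details taken from this page.

```python
# Compare per-group error rates on a dataset of COMPAS-style scores.
import pandas as pd

def group_error_rates(df: pd.DataFrame, group: str) -> dict:
    """False Positive / False Negative Rates for one racial group."""
    g = df[df["race"] == group]
    predicted_high = g["decile_score"] >= 5      # assumed "high risk" threshold
    reoffended = g["two_year_recid"] == 1
    fpr = (predicted_high & ~reoffended).sum() / (~reoffended).sum()
    fnr = (~predicted_high & reoffended).sum() / reoffended.sum()
    return {"FPR": fpr, "FNR": fnr, "FNR/FPR": fnr / fpr}

# df = pd.read_csv("compas-scores.csv")   # hypothetical file of scored defendants
# print(group_error_rates(df, "African-American"))
# print(group_error_rates(df, "Caucasian"))
# A large gap between the two results is the kind of bias described above.
```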

The solution

COMPAS tends to overestimate the likelihood of reoffending for black defendants and to underestimate it for white defendants. This is clear evidence that the algorithm is biased with respect to race.

illumr was set the same task as COMPAS - to predict reoffending. We used Rosa to de-bias the data.

The result

While statistical tests still suggested significant racial bias, the bias in the Rosa model had an effect size 30x smaller than COMPAS's. Its predictions were also more accurate.

It is possible to remove bias without seriously compromising predictive performance, avoiding the pitfalls of more common methodologies.
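
The page does not state which effect-size measure underlies the 30x comparison. As one standard way of quantifying how large a gap in error rates is, the sketch below uses Cohen's h, the conventional effect size for a difference between two proportions, applied to made-up False Positive Rates for two groups.

```python
# Illustrative effect-size calculation (the figures are invented, not Rosa's).
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h effect size for the difference between two proportions."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

# Hypothetical False Positive Rates for two groups under two models.
biased_h = cohens_h(0.45, 0.23)     # large gap between groups -> large effect size
debiased_h = cohens_h(0.26, 0.24)   # small gap between groups -> small effect size
print(f"biased model:    h = {biased_h:.3f}")
print(f"de-biased model: h = {debiased_h:.3f}")
```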

Why Rosa?

Rosa Parks was an iconic figure in the fight for equality and impartiality. We couldn’t have asked for a stronger name to try to live up to.

This tool is a step towards treating people fairly, today.