Human Rights vs. Biased Algorithms in The Netherlands

By Bianca Pană

For the past few years, Chinese start-ups have built algorithms used by the government to track and target a largely Muslim minority group, a major ethical failure dressed up as technological progress.

In 2021, residents of the Netherlands were scandalized by false accusations of childcare benefit fraud. The Dutch government had charged over 11,000 parents with committing fraud to obtain childcare benefits, and thousands had to repay significant sums of money because of this awful mistake by the Tax Office's algorithm. An algorithm is, in essence, a step-by-step procedure for solving a problem or performing a computation. In this case, the Tax Office relied on an algorithm to carry out a consequential task, assigning childcare benefits to families, and that algorithm was biased: it unfairly profiled residents of poor neighborhoods and members of ethnic minorities, with terrible consequences for the families involved.
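To make the term concrete, here is a minimal, hypothetical sketch of an algorithm in this everyday sense: a short, step-by-step eligibility check. The rules and the income threshold below are invented for illustration and do not reflect the Tax Office's actual criteria.

```python
# A minimal, hypothetical algorithm: a step-by-step childcare-benefit
# eligibility check. The rules and threshold are invented for illustration
# and are NOT the Tax Office's actual criteria.

def eligible_for_benefit(income: int, has_child_in_daycare: bool) -> bool:
    # Step 1: the applicant must actually use childcare.
    if not has_child_in_daycare:
        return False
    # Step 2: income must fall under a (hypothetical) threshold.
    if income > 60_000:
        return False
    # Step 3: all checks passed.
    return True

print(eligible_for_benefit(35_000, True))   # True
print(eligible_for_benefit(35_000, False))  # False
```

Each step is explicit and mechanical; the trouble begins when the steps themselves encode unfair assumptions about who is likely to cheat.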

Some parents lost their incomes, savings, marriages, and even their homes. People wondered how such a reputable institution, well known for its fairness, could authorize so tremendous a miscalculation. So far, more than 1,500 children have been removed from their parents' custody as a result of this serious algorithmic error. At the heart of the matter is the artificial intelligence system used by the Dutch Ministry, which profiled residents of poor neighborhoods and ethnic minorities. Discrimination was only one of the algorithm's problems: citizens were hounded with official tax correspondence, reports, and outstanding invoices until it was too late for the authorities to repair the damage.

In 2020, another AI headline blew people's minds when a Dutch court found that the SyRI algorithm crossed human rights boundaries. In this case, the System Risk Indication (SyRI) algorithm was used to detect social welfare fraud. As with the 2021 child allowance error, this algorithm targeted low-income and minority citizens. Specialists discovered that the intake form contained a polar (yes-or-no) question about people's citizenship. At first it seemed a reasonable, even innocent, question, but it revealed the wrongful use of nationality as a variable in the risk classification model.
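To illustrate the mechanism, consider a minimal sketch of a risk-scoring model in which that yes-or-no citizenship field enters as a weighted input. This is not the actual SyRI model, which was never disclosed; every feature name, weight, and threshold below is hypothetical.

```python
# Hypothetical sketch: how a yes/no nationality field can bias a risk score.
# This is NOT the actual SyRI model (never made public); the feature names,
# weights, and threshold are invented for illustration.

AUDIT_THRESHOLD = 50  # hypothetical: scores above this trigger an audit

def fraud_risk_score(application: dict) -> int:
    """Return a 0-100 risk score; higher means 'more suspicious'."""
    score = 0
    # Plausible-looking signals:
    if application["income"] < 20_000:
        score += 30  # low income raises the score
    if application["previous_corrections"] > 0:
        score += 20  # past paperwork errors raise the score
    # The problematic feature: the polar (yes/no) citizenship question.
    # Weighting it turns nationality itself into a mark of suspicion.
    if not application["dutch_citizen"]:
        score += 40
    return score

# Two applicants identical in every respect except citizenship:
applicant_a = {"income": 18_000, "previous_corrections": 0, "dutch_citizen": True}
applicant_b = {"income": 18_000, "previous_corrections": 0, "dutch_citizen": False}

print(fraud_risk_score(applicant_a))  # 30 -> below the audit threshold
print(fraud_risk_score(applicant_b))  # 70 -> flagged for investigation
```

The point of the sketch is that once nationality enters the model as a weighted input, two otherwise identical families receive different scores, and the one pushed over the audit threshold inherits the burden of proof.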

Biased algorithms severely impacted thousands of people over the past two years. The examples above highlight the problem, but the Dutch government still relies on many other algorithmic systems. From healthcare to education to taxes, algorithms interfere in people's lives and violate human rights by targeting racial and ethnic profiles instead of real threats to the Netherlands' welfare.

Nevertheless, the authorities responded to each accusation with an openness to improving how AI is used in the Netherlands. Yet their reaction raised more questions than it answered. Will the Ministry acknowledge its actions and take accountability for them? Will the algorithms' developers be held responsible? Will the new regulations be inclusive and fair? The Dutch government has yet to answer these questions and implement proper solutions.

The Sustainable Media Lab brings together researchers, practitioners, and students to explore the rapidly changing technical environment we find ourselves in. These articles were written by students in the Fall 2022 cohort.

Image by Donald Trung Quoc Don (Chữ Hán: 徵國單) on Wikimedia (CCSA4)