
New Report Calls for Greater Transparency on the Use of Algorithms

Michael Behr


CDEI algorithmic bias report

With intelligent data usage key to fighting algorithmic bias, the report called for additional clarity around data protection laws.

The Centre for Data Ethics and Innovation (CDEI) has called for more transparency from the public sector over how it uses algorithms.

In a new report, the CDEI analysed four sectors – financial services, local government, policing, and recruitment – to determine the risks posed by algorithmic bias.

It warned that relying on algorithmic systems risks embedding biased decision-making into organisations. However, the report also found that it was unclear whether algorithmic decision-making is more or less biased than human decision-making.

“Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care,” the report said.

To ensure that appropriate care is taken, the report’s main recommendation was to place mandatory transparency obligations on public sector organisations when they use algorithms to make significant decisions.

The report also recommended that organisations actively look to identify and mitigate bias in their algorithms, and that the UK Government issue guidance to clarify how the Equality Act applies to algorithmic decision-making.

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases,” said CDEI Board Member Adrian Weller.

“Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.

“Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

There is a famous saying that an algorithm is an opinion embedded in code; as such, an algorithm is only as unbiased as the people who write it. Accordingly, the CDEI report includes several recommendations to prevent bias from entering algorithms when they are created.

Key among these is ensuring sufficient diversity in the workforce. This helps ensure that potential issues of bias and the problems they cause are understood at the point of conception.

Also crucial are access to the right data to understand bias in data and models, and the right tools and approaches to identify and mitigate it.

The report noted that this requires an ecosystem of expert individuals and organisations able to support groups looking to fight bias, as well as governance structures to consider the wider impact of algorithmic tools.

Better use of data is one of the report’s key recommendations, as extensive and accurate data can help tackle current and historic bias.

“Data gives us a powerful weapon to see where bias is occurring and measure whether our efforts to combat it are effective; if an organisation has hard data about differences in how it treats people, it can build insight into what is driving those differences, and seek to address them.”
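The report’s framing of data as a measurement tool lends itself to a simple illustration. The sketch below is a toy example, not taken from the CDEI report: it computes per-group selection rates from a hypothetical decision log and derives a disparity ratio. The group labels, the records, and the 0.8 rule-of-thumb threshold mentioned in the comments are all illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the fraction of favourable outcomes per group.

    records: iterable of (group, outcome) pairs, where outcome is
    True for a favourable decision (e.g. shortlisted, approved).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += bool(outcome)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decision log: (group label, favourable outcome?)
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rates = selection_rates(records)
# Disparity ratio: lowest group rate divided by highest group rate.
# A common (illustrative) rule of thumb flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(f"disparity ratio: {ratio:.2f}")  # 0.33
```

A measurement like this does not explain *why* groups are treated differently, but, as the report argues, it gives an organisation the hard numbers needed to investigate what is driving the difference.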

However, the report also warned that data usage is a double-edged sword, as data on protected characteristics is frequently unavailable or insufficient. It is these characteristics, such as race, gender, sexuality, and wealth, that can creep into algorithms unnoticed and create bias.

As such, the CDEI called for government and regulators to provide guidance on how data protection laws can enable companies to collect and use data legally and responsibly to monitor and address discrimination.


The use of algorithms has been under scrutiny after controversy over how exam results were allocated in both Scotland and England due to disruption from the coronavirus pandemic.

In lieu of exams, pupils across the UK had teacher-estimated grades and a comparative ranking against other students fed into an algorithm, which also factored in their school’s previous performance to determine their final grades.

However, the final grades produced were frequently lower than teacher assessments, with many pupils arguing that the algorithm had unfairly and negatively affected their future job and education prospects.
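To make that mechanism concrete, here is a minimal sketch of how such a standardisation step might work, assuming only the two inputs most criticised at the time: the teacher’s within-school ranking and the school’s historical grade distribution. This is an illustrative reconstruction, not Ofqual’s or the SQA’s actual model, and it deliberately simplifies by ignoring the estimated grades that the real process also ingested; all names and figures are hypothetical.

```python
def standardise_grades(ranked_students, historical_distribution):
    """Assign grades by mapping a school's within-cohort ranking
    onto the grade distribution of that school's past results.

    ranked_students: list of names, best first (teacher ranking).
    historical_distribution: dict mapping grade -> fraction of the
    school's past cohorts achieving it, listed highest grade first
    (fractions sum to 1).
    """
    n = len(ranked_students)
    grades = {}
    cutoff = 0.0
    index = 0
    for grade, share in historical_distribution.items():
        cutoff += share
        # Everyone ranked within the cumulative share gets this grade.
        while index < n and (index + 1) / n <= cutoff + 1e-9:
            grades[ranked_students[index]] = grade
            index += 1
    # Any remainder from rounding falls into the lowest grade listed.
    for name in ranked_students[index:]:
        grades[name] = grade
    return grades

# Hypothetical school whose past results skew toward middling grades.
ranking = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9", "P10"]
history = {"A": 0.2, "B": 0.3, "C": 0.4, "D": 0.1}
print(standardise_grades(ranking, history))
# {'P1': 'A', 'P2': 'A', 'P3': 'B', ..., 'P10': 'D'}
# In this simplified mapping, a student's own ability plays no role
# beyond their rank: a strong pupil at a historically weak school is
# capped by that school's past results, which is the unfairness
# pupils complained about.
```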

Michael Behr

Senior Staff Writer

