Government Pushing Algorithms to Be More Transparent


The UK government has set about creating an algorithmic transparency standard, driven by concerns that biased algorithms are having an undue impact on the way Britons are treated by the state.

Little Insight a Big Problem

Algorithms are systems that combine wide ranges of data to produce a single answer, but because people have very little insight into the logic behind how those answers are generated, there is concern that flaws in the process can go completely unnoticed.
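As a toy illustration of why such systems can be opaque, consider the minimal scoring sketch below: many inputs are collapsed into a single answer, and nothing about the output reveals how each input was weighted. The function, its weights, and the threshold here are invented for illustration, not drawn from any real system.

```python
# Toy illustration: many inputs collapse into one yes/no answer.
# The weights and threshold are invented; real systems may have
# thousands of learned parameters, making the logic far harder to audit.

def risk_score(applicant: dict) -> bool:
    weights = {"age": -0.02, "prior_claims": 0.5, "postcode_band": 0.3}
    score = sum(weights[k] * applicant.get(k, 0) for k in weights)
    return score > 0.4  # the single answer the applicant sees

# The applicant sees only True/False, not which feature drove the result,
# nor that a proxy like postcode can encode indirect discrimination.
print(risk_score({"age": 30, "prior_claims": 2, "postcode_band": 1}))
```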

Flaws in the process can have major impacts on individuals’ lives: biases have been detected in algorithms used for advertising and content curation, as well as in those used to set insurance rates and inform prison sentences.

Algorithmic decision-making attracted considerable controversy and distrust after thousands of students received downgraded A-level results in 2020, when exams were cancelled due to Covid. Added to this, privacy campaigners have raised alarms that huge databases are being used to predict whether children are at risk of involvement in crime and to profile benefit applicants.

Used for Profiling

Reports earlier in 2021 on the hidden algorithms of Britain’s welfare state showed that local authorities’ issuance of housing benefits had become a government laboratory for testing the usefulness of predictive models. These predictive systems were deemed no more than glorified poverty-profiling systems, satisfying long-held prejudices that government authorities hold against the poor.

The common thread running through all of these automated systems was a complete lack of due attention from councils to the serious risks posed by bias and indirect discrimination.

The new algorithmic transparency standard set out by the government was developed by the Cabinet Office’s Central Digital and Data Office, with input from the Centre for Data Ethics and Innovation (CDEI), and follows the review into bias in algorithmic decision-making.

Mandatory Planning

The CDEI recommended that the UK government place a mandatory transparency obligation on public sector organisations that use algorithms to support decisions affecting individuals. The standard will require those organisations to declare which tools they use, why, and how, and to fill out a table fully explaining how their algorithms work. It will also see these organisations share the data used to train their machine learning models, along with a description of the categories used in training, all crucial steps in detecting bias.
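To make the reporting requirement concrete, here is a minimal sketch of what such a declaration might look like as a data structure. The record type, its field names, and the example values are all illustrative assumptions; the standard’s actual template is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of the kind of record the standard describes:
# which tool is used, why and how, and what data trained it.
# Field names are illustrative assumptions, not the official template.

@dataclass
class AlgorithmicTransparencyRecord:
    organisation: str          # public sector body using the tool
    tool_name: str             # the algorithmic tool being declared
    purpose: str               # why the tool is used
    how_it_works: str          # plain-language explanation of the logic
    decision_role: str         # how outputs feed into human decisions
    training_data_source: str  # where the training data came from
    training_categories: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """Return a short human-readable summary of the declaration."""
        return (
            f"{self.organisation} uses '{self.tool_name}' to {self.purpose}. "
            f"Trained on {self.training_data_source} using categories: "
            f"{', '.join(self.training_categories) or 'none declared'}."
        )

# Example usage with made-up values:
record = AlgorithmicTransparencyRecord(
    organisation="Example Council",
    tool_name="Housing Benefit Risk Model",
    purpose="prioritise benefit claims for manual review",
    how_it_works="regression model scoring claims by declared features",
    decision_role="advisory only; a caseworker makes the final decision",
    training_data_source="historical benefit claims",
    training_categories=["claim history", "household size", "tenancy type"],
)
print(record.summary())
```

Publishing the training categories alongside the tool’s purpose is what makes bias detectable from the outside: an auditor can see at a glance whether a proxy for a protected characteristic has crept into the model.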

The standard will be piloted by several government departments and public sector bodies over the coming months.

For more information on public sector IT and upcoming IT Security events in 2022, check out the upcoming events from Whitehall Media.