The UK government’s use of algorithms to grade student exam results sent students onto the streets and generated swathes of negative media coverage. Many grades were seen as unfair, even arbitrary. Some argued the algorithms and the grades they produced merely reflected a broken educational system.
The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.
Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.
And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.
Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high-quality communication with schools and parents but criticised for poor transparency about the model’s limitations, risks, and appeals process.
Ofqual is no outlier. Much talked about as an ethical principle and imperative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.
The UK exam grade meltdown shows that good communication (that is, openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. Each is redundant without the other, and they must be consistent.
- First published in AIAAIC Alert
- Watch/listen to the Ada Lovelace Institute’s webinar on the OSR review