
Justice blind to algorithms? SSE project ranks among most promising

Algorithms increasingly guide public sector decisions, yet their impact on fairness is often overlooked. Researchers from SSE have examined how public institutions may ignore the injustices of algorithmic decision-making. The research project has been selected by the Royal Swedish Academy of Engineering Sciences (IVA) as one of the most promising projects this year.

The integration of algorithms in public administration promises efficiency, but there’s a hidden cost when the technology falls short on fairness. Researchers Anna Essén and Magnus Mähring from the Stockholm School of Economics, along with affiliated researcher Charlotta Kronblad from Gothenburg University, are investigating a pressing question: What happens when algorithms make decisions that cause harm?

Their study highlights how public agencies sometimes fail to address, or even recognize, the adverse effects of algorithmic decision-making (ADM) systems, ultimately putting social and legal justice at risk. The research's significance earned it recognition from IVA as one of the 103 projects judged most promising in their potential to create value through business and method development or societal impact.

The work goes beyond examining technical biases, focusing instead on how institutions might "blackbox" the operation and consequences of these systems, shielding problematic ADM from scrutiny. "Our study reveals that organizations may ignore or dismiss the impacts of ADM errors on social justice, which can lead to widespread uncorrected injustices," explains Associate Professor Anna Essén. For instance, the researchers reference cases where ADM placed children in public schools in breach of local regulations, an oversight that went largely unaddressed by public authorities.

"Organizational ignoring"

The research team coined the term "organizational ignoring" to describe a pattern in which public institutions prevent themselves and others from seeing algorithmic faults rather than addressing them. The problem has wide implications: it results in blackboxing at multiple layers, as well as social and legal injustice. Social injustice arises from the misallocation of public resources; legal injustice arises when public service recipients are blocked from legal recourse and restitution. The team's framework aims to help public institutions assess and respond to ADM's legal and social implications.

"Our hope is that this framework can help create more accountable practices and guide legal protections to counter algorithmic injustices," says Professor Magnus Mähring. “It is about time that we update our legal infrastructure for the digital reality,” Charlotta Kronblad adds. Thus, the research suggests, institutions can move toward fairer, more transparent ADM systems that prevent long-term harm and present a legal framework to help in such endeavors.

The researchers' work is expected to pave the way for more robust standards in public sector ADM, combining insights from social justice and legal frameworks. Their framework proposes proactive steps, including legal recourse options, to ensure algorithmic decisions respect citizen rights.

For more information, please contact:

Charlotta Kronblad
Email: charlotta.kronblad@ait.gu.se

Magnus Mähring
Email: magnus.mahring@hhs.se

Anna Essén
Email: anna.essen@hhs.se

Authors and affiliations:
Anna Essén (Stockholm School of Economics)
Charlotta Kronblad (Gothenburg University)
Magnus Mähring (Stockholm School of Economics)

Link to study:
When justice is blind to algorithms: Multilayered blackboxing of algorithmic decision-making in the public sector
