
Scott Dylan – The Promise and Peril of AI in Britain’s Prisons

AI-powered surveillance and inmate monitoring in a modern UK prison. Scott Dylan Investigates.

The UK government’s AI Action Plan for Justice, unveiled on 31 July 2025, promises to harness artificial intelligence across prisons, probation and courts. At its core is a bold goal: use AI to predict and prevent violence in prisons before it happens. Shabana Mahmood, then Lord Chancellor and Secretary of State for Justice (she has since been succeeded by David Lammy), heralded the plan as transformative, insisting these AI tools will help fight prison violence, track offenders, and free up staff “to focus on what they do best” in cutting crime.

Shabana Mahmood, then Lord Chancellor and Secretary of State for Justice, said at the time:

“Artificial intelligence will transform the justice system. We are embracing its full potential as part of our Plan for Change.”

“These tools are already fighting violence in prisons, tracking offenders, and releasing our staff to focus on what they do best: cutting crime and making our streets safer.”

As a prison reform advocate who has experienced the system first-hand, I find this an exciting moment. In this piece, I’ll examine how AI could improve safety and efficiency in British prisons, while also weighing the ethical and legal safeguards needed to uphold human rights and public trust.

AI as a Guardian: Predicting and Preventing Prison Violence

A flagship initiative in the plan is an AI-powered “violence predictor” to identify prisoners at risk of causing harm. By analysing factors like an inmate’s age and past involvement in incidents, the system can assess threat levels and flag those who might turn violent. The intent is for prison officers to intervene early – for example, relocating a volatile prisoner or increasing supervision on a wing – so that fights or attacks are defused before they escalate. This kind of proactive risk assessment could be a game-changer in reducing the roughly 20,000 annual assaults recorded in UK prisons (a persistent problem in recent years), making daily life safer for both inmates and staff.
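
To make the idea concrete, here is a deliberately simplified sketch of how such a predictor might score risk. The features, data and threshold below are invented for illustration; the MoJ has not published its model, and nothing here should be read as its actual design.

```python
# Illustrative only: a toy risk model on invented data, not the MoJ's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per prisoner: [age, prior violent incidents, months in custody]
X = np.array([
    [22, 4, 6],
    [35, 0, 24],
    [19, 2, 3],
    [47, 1, 60],
    [28, 5, 12],
    [31, 0, 18],
])
# 1 = involved in a violent incident in the following month, 0 = not
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new arrival; the flagging threshold is a policy choice, not a statistical one.
new_prisoner = np.array([[24, 3, 2]])
risk = model.predict_proba(new_prisoner)[0, 1]
print(f"Predicted risk of violence: {risk:.0%}")
if risk > 0.7:
    print("Flag for review by a prison officer")
```

Even in this toy form, the key design questions are visible: which factors are allowed in, where the flagging threshold sits, and who reviews the flag before anything happens to the prisoner.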

Another AI tool will tackle the dangerous communications that fuel prison crime. The Ministry of Justice (MoJ) revealed plans to scan prisoners’ illicit phone messages using AI, automatically spotting coded language about gang activity, escape plans, violence or contraband smuggling. Notably, trials of this language-analysis system have already sifted over 8.6 million messages from 33,000 seized phones, uncovering threats and criminal plots that human officers might have missed. Mobile phones are a major source of illicit coordination in prisons, so intercepting them with machine efficiency could prevent stabbings, drug inflows and escape attempts.
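
The message-scanning idea can be illustrated with an equally stripped-down sketch. A crude watch-list filter like the one below is far simpler than the MoJ’s language-analysis system, which reportedly learns coded and evolving slang, but it shows the basic triage step: machines surface candidate messages, humans judge them. All terms and messages here are invented.

```python
# Illustrative only: a crude keyword triage, far simpler than the MoJ's
# language-analysis system. Watch-list terms and messages are invented.
import re

WATCH_LIST = {
    "contraband": re.compile(r"\b(package|parcel|drop|sim)\b", re.IGNORECASE),
    "violence":   re.compile(r"\b(shank|tool him|sort him out)\b", re.IGNORECASE),
}

def triage(messages):
    """Return (message, matched categories) pairs for human review."""
    flagged = []
    for msg in messages:
        hits = [label for label, pattern in WATCH_LIST.items() if pattern.search(msg)]
        if hits:
            flagged.append((msg, hits))
    return flagged

seized = [
    "parcel coming over the east wall tonight",
    "tell mum I love her and I'll call Sunday",
]
for text, categories in triage(seized):
    print(categories, "->", text)
```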

Officials describe these AI measures as a clampdown on violence and contraband, part of a broader “Plan for Change” to cut reoffending. Indeed, if such systems prove accurate, they could save lives – stopping a revenge attack or riot before it starts. However, treating AI as a quasi-“guardian” also raises questions: How reliably can an algorithm distinguish genuine danger from everyday inmate tensions? There is a fine line between prudent prevention and over-surveillance. Prisons in Singapore already use AI monitoring on CCTV feeds to detect fights and unusual behavior, and while it has improved response times, inmates report feeling constantly watched and “dehumanised” by unblinking digital eyes. The UK’s violence predictor must therefore be deployed with care, transparency, and human oversight, or it could create an atmosphere of mistrust on the wings even as it aims to make them safer.

Beyond Bars: AI’s Broader Role in the Justice System

The AI Action Plan for Justice is not limited to prison yards – it envisions a justice system upgraded at every level. One key reform is a single digital offender ID that uses AI to link records across police, courts, prisons and probation. Today, an offender might be listed under slightly different names or details in separate databases; the new system will use machine learning (via the MoJ’s open-source Splink algorithm) to deduplicate and connect these files. This means judges and caseworkers could finally see an individual’s full history at a glance – reducing errors like missed prior offenses and ensuring sentencing and supervision are informed by complete information. Greater data integration, done responsibly, translates to smarter tracking of offenders and hopefully fewer cracks for high-risk individuals to slip through.
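
A rough illustration of the record-linkage idea helps here. The toy pairwise match below, written against the Python standard library, is not Splink’s actual API; Splink does this probabilistically and at scale. It simply shows how slightly different records for the same person can be scored and linked under a single offender ID.

```python
# Illustrative only: a toy pairwise match on name and date of birth. This is not
# Splink's API; Splink performs probabilistic linkage at scale across full datasets.
from difflib import SequenceMatcher

records = [
    {"id": "police-0192", "name": "Jon Smith",  "dob": "1990-03-14"},
    {"id": "courts-5521", "name": "John Smith", "dob": "1990-03-14"},
    {"id": "prison-0007", "name": "Jane Doe",   "dob": "1985-11-02"},
]

def match_score(a, b):
    """Blend name similarity with an exact date-of-birth check."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    dob_same = 1.0 if a["dob"] == b["dob"] else 0.0
    return 0.6 * name_sim + 0.4 * dob_same

# Compare every pair; links above the threshold would share one offender ID.
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        score = match_score(records[i], records[j])
        if score > 0.8:
            print(f"Likely same person: {records[i]['id']} <-> {records[j]['id']} ({score:.2f})")
```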

Meanwhile, AI is slated to ease administrative burdens that bog down frontline staff. In probation services, for example, pilots of an AI note-taking assistant showed a 50% reduction in time officers spent on writing up reports. By automatically transcribing and summarising case notes, the tool frees probation officers to spend more time on “risk management, monitoring and face-to-face meetings with offenders”. The government plans to roll out such productivity aids to all 20,000 probation staff, and even to prison officers and court clerks, allowing humans to focus on complex work while AI handles routine paperwork. Imagine prison officers spending less time filling forms and more time mentoring prisoners – that’s the opportunity.

The courts will also get a tech upgrade. The Action Plan highlights digital assistants for the public, including one under development to help families resolve child custody disputes without going to court. This AI assistant, akin to an online mediator, could guide parents through common arrangements and suggest fair solutions, reducing the backlog in family courts. The Law Society has even urged government to create a free “NHS 111 for law” – an AI-driven legal help service to direct people to the right support for issues like divorce or housing. Such tools could democratise access to justice by helping people navigate the system themselves. Additionally, AI could streamline court scheduling and case management behind the scenes (for instance, optimising listing of cases to make best use of courtroom time), which would help tackle chronic delays. From digital case file triage to AI-assisted transcription of hearings, the potential efficiencies across the justice system are considerable.

Importantly, ministers stress that innovation will be “responsible and proportionate”. The plan comes with guardrails: a new Justice AI Unit led by a Chief AI Officer will oversee implementation, and the framework was developed with input from judges, regulators and even trade unions. This suggests the MoJ recognizes the need to bring practitioners on board. Prison and probation unions will be watching closely – if AI genuinely reduces workloads and improves safety, staff may welcome it; but if it feels like a top-down tool to monitor workers or replace their judgment, expect pushback. As always with technology in public services, success will depend on frontline buy-in and training. The plan accordingly pledges investment in AI training for staff so that the workforce can confidently use these new systems.

Opportunities vs. Challenges: Striking a Balance

The opportunities presented by AI in the justice system are undeniably compelling. In broad terms, smarter automation could free up professional time, increase efficiency, and even personalise justice. The MoJ envisions AI helping deliver “swifter, fairer, and more accessible justice for all” – from reducing court backlogs to tailoring rehabilitation plans. For prisons specifically, predictive analytics can lead to a more preventative approach: rather than reacting to violence after the fact, prison staff can act on early warnings. More accurate risk assessments might also ensure the most dangerous offenders are kept under tighter controls while low-risk inmates get more opportunities for education and reform, a nuance that could reduce reoffending rates over time. Furthermore, automating tedious tasks (like sifting contraband messages or writing reports) means skilled professionals can spend energy on human-centred work – mentoring offenders, engaging with victims, or devising rehabilitation strategies – which no algorithm can replace. If implemented well, AI could thus contribute to both public safety and rehabilitation, a twin win that has often proved elusive in penal policy.

That said, these gains will only be realised if we navigate the risks and challenges with eyes wide open. One major concern is surveillance and privacy. Prisons are closed environments, but inmates do not forfeit all rights at the gate. Constant AI monitoring – whether through cameras, phone taps, or data mining – can create a feeling of perpetual surveillance. As seen in Singapore, prisoners complained that round-the-clock AI camera systems made them feel “like a dangerous terrorist who had to be watched all the time” even for minor offenses. Such an atmosphere could undermine rehabilitation; people who feel excessively policed may become more resentful or agitated. There are also data protection questions: what happens to all the sensitive data AI systems collect (phone call transcripts, behavior logs)? How long is it kept, and who has access? Without clear policies, there’s risk of abuse or leaks of personal information – a point raised by Singaporean rights groups who note a lack of clarity on data use and retention in high-tech prisons. The UK plan must ensure robust data governance and independent oversight of these tools to prevent function creep or misuse.

Another challenge is algorithmic bias and fairness. Criminal justice data often reflects societal biases – for example, minority and disadvantaged communities are overrepresented in prisons. If an AI tool is trained on historical data of violent incidents, it might unintentionally place extra scrutiny on inmates from certain racial or social groups due to biased patterns in the input. We’ve seen cautionary tales abroad. In the United States, risk assessment algorithms (like the COMPAS system) used in bail and sentencing have been criticised for racial bias, erroneously labeling Black defendants as higher risk more often than white defendants in some analyses. And in the Netherlands, the probation service’s OxRec tool – while transparent and based on research – drew concern for including socioeconomic factors (like education and employment) that correlate with race and class, potentially baking in structural bias to recidivism predictions. The UK must be proactive in addressing this. Each AI application should be vetted for bias, with diverse test data and ongoing audits. The government says it will use an AI and Data Ethics Framework and engage regulators – those commitments need to be ironclad. We might even require external algorithmic audits or an oversight body to regularly review outcomes for disparities. Fairness is not optional in justice; an AI tool that unjustly tags someone as “high-risk” due to their postcode or ethnicity would be not only unethical but could violate equality laws.
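
What would such an audit look like in practice? At its simplest, it means regularly comparing how often the tool flags people from different groups and investigating any large gap. The snippet below is a minimal sketch on invented audit data; a real review would control for legitimate risk factors before drawing conclusions.

```python
# Illustrative only: a minimal disparity check on invented audit data.
import pandas as pd

audit = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "flagged": [1,   0,   0,   1,   1,   1,   0,   1],
})

rates = audit.groupby("group")["flagged"].mean()
print(rates)

# One rule of thumb borrowed from employment law: a ratio below ~0.8 warrants investigation.
ratio = rates.min() / rates.max()
print(f"Disparity ratio: {ratio:.2f}", "<- investigate" if ratio < 0.8 else "")
```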

Linked to bias is the issue of accountability. Decisions in criminal justice carry life-changing weight – loss of liberty, years added to sentences, parole denied or granted. Who is accountable if an AI’s advice leads to a flawed decision? The plan emphasizes that AI will “support” human decisions, not replace them. That is critical. Judges, magistrates, prison governors and parole boards must ultimately exercise their judgement, with AI as an assistant, not an arbiter. We should avoid any scenario where someone can say “the computer says you’re high risk, so no parole” without the opportunity for explanation or challenge. Transparency is key here: if AI flags a prisoner as dangerous, authorities should be able to explain the reasoning in plain terms. Opaque “black box” algorithms have no place determining justice outcomes. Encouragingly, the OxRec tool in the Netherlands was built with a fully published methodology, which experts praised for transparency. The UK should similarly favor AI models that are explainable and open to scrutiny, rather than proprietary secret algorithms.
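
Explainability need not be exotic. For a transparent, linear-style model, a flag can be decomposed into per-factor contributions and presented in plain language, the kind of breakdown a prisoner or parole board could actually challenge. The sketch below uses invented coefficients purely to show the shape of such an explanation.

```python
# Illustrative only: decomposing a linear risk score into per-factor contributions.
# Coefficients and feature values are invented for the example.
import math

features = {"prior_violent_incidents": 3, "age": 24, "months_in_custody": 2}
coefficients = {"prior_violent_incidents": 0.9, "age": -0.05, "months_in_custody": -0.02}
intercept = -1.5

contributions = {name: coefficients[name] * value for name, value in features.items()}
score = intercept + sum(contributions.values())
probability = 1 / (1 + math.exp(-score))

print(f"Predicted risk: {probability:.0%}")
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "raises" if contribution > 0 else "lowers"
    print(f"  {name} = {features[name]} {direction} the score by {abs(contribution):.2f}")
```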

Finally, we must consider the legal and human rights framework. The European Union is moving to regulate high-risk AI systems (which would likely include those used in law enforcement and prisons) under its upcoming AI Act, requiring strict standards of transparency, risk management, and human oversight. The UK, outside the EU, has its own approach but should not fall behind on safeguards. A telling example comes from an unlikely quarter: welfare fraud detection. The Netherlands deployed an algorithmic system (SyRI) to predict benefit fraud, but in 2020 a Dutch court halted it for violating human rights, citing privacy and discrimination concerns. That ruling resonates widely – it shows that even well-intentioned government AI can cross legal lines if not carefully balanced against individual rights. Due process and the rule of law must remain paramount. If AI is used to gather evidence or intelligence in prisons, it should be subject to the same scrutiny and disclosure in court as any other source. And prisoners who feel an AI-based decision has wronged them (say, an unjust risk score keeping them in higher security) must have a way to appeal or contest that decision through human review.

Global Perspectives on AI in Prisons: Learning from the US, Netherlands, and Singapore

The United Kingdom is not alone in trying to inject AI into criminal justice, and it can learn from the experiences of other countries. In the United States, various AI and analytics tools have been trialed in corrections and policing. A notable use case is the monitoring of inmate phone calls. Several U.S. states have adopted AI software that automatically transcribes and analyzes prisoners’ telephone conversations, flagging key words or patterns that might indicate criminal activity or planned violence. There have been success stories – in Alabama, one such system reportedly helped solve a cold-case murder after picking up an inmate bragging about the crime, and officials say the technology has even prevented suicides by alerting staff to distress signals. However, American civil liberties groups have pushed back on these tools. They argue that relying on AI interpretations of speech is fraught with the potential for error and bias, and that incarcerated people (and their families) have virtually no recourse if an algorithm mishears slang as a threat. In fact, studies have found that popular speech-to-text AI services have higher error rates for Black voices, raising alarms that such monitoring could disproportionately punish minorities for misunderstandings. The U.S. experience underscores a vital lesson: AI can greatly assist in surveillance and investigations, but it needs rigorous accuracy testing and human verification before authorities act on its alerts. Moreover, the introduction of any AI system should be accompanied by clear policies to prevent over-disciplining inmates based on dubious machine evidence – a point U.S. reformers have emphasized in calling for oversight on prison AI projects.

The Netherlands, often a pioneer in criminal justice innovation, offers a more measured example. As mentioned, Dutch probation services use the OxRec algorithm to help assess recidivism risk for offenders on probation or leaving prison. What’s instructive is how they use it: OxRec provides a probability of reoffending based on factors like criminal history and personal circumstances, but it is explicitly kept as an aid to, not replacement for, professional judgement. Probation officers incorporate the score into their reports to judges, who remain free to decide on sentencing or release conditions. The Dutch were transparent about OxRec’s design (publishing its methodology and validation studies) and even then, local experts debated its fairness because it includes socio-economic variables that could mirror social biases. This demonstrates the importance of continued scrutiny – an algorithm that is statistically accurate in aggregate may still raise ethical questions about disparate impact. Notably, the Netherlands also trialed predictive policing algorithms in some cities, but these faced criticism and tighter regulation due to privacy concerns and lack of demonstrable benefits. The takeaway is that Europe tends to apply a precautionary approach: AI can be explored in justice, but under watchful eyes of regulators, researchers, and the courts. The UK, sharing similar legal values, should likewise ensure that any predictive tools are evidence-backed and regularly evaluated for accuracy and fairness. Pilot programs with independent academic evaluation could help prove effectiveness (or reveal shortcomings) before scaling up nationwide.

Looking to Singapore, we see a high-tech vision of incarceration – one that shines a light on both the promise and perils of AI. Singapore’s prison service has tested a comprehensive suite of AI surveillance: facial recognition cameras to count inmates, sensor networks to track movements, and behavior analysis software to detect fights or even if an inmate falls or loiters unusually. The authorities there laud these systems for improving security and allowing officers to focus on “more value-added work” like rehabilitation. There’s evidence that AI monitoring in Singapore has indeed enabled a smaller guard force to manage a large inmate population with quick incident response. However, former prisoners and rights advocates describe the experience as “dehumanising” – every moment under electronic watch, often with false alarms (one inmate recalls exercise routines triggering fight alerts) and an acute loss of privacy. Crucially, Singapore’s approach pushes the limit of how far technology can intrude on inmates’ lives, and it has prompted debate about dignity and mental health in custody. The lesson for the UK is to be cautious about over-automation. While a degree of monitoring is necessary in any prison, there is a balance to strike between security and humane treatment. The UK would do well to engage with civil society and ethicists when implementing AI surveillance – to draw red lines around practices that are too intrusive. As Phil Robertson of Human Rights Watch noted regarding Singapore, even if violence prevention is the goal, some uses of AI like ubiquitous facial recognition can be “overly intrusive and unnecessary” in prisons. British justice policy should seek a middle ground where technology aids safety but does not strip away all privacy or treat inmates solely as data points.

Transforming Prisons: Rehabilitation and Workforce Implications

What could these AI innovations mean for the future of prison reform and the people at the heart of it? From a rehabilitation perspective, there is hopeful potential. If AI systems help identify which prisoners are at risk of violence or self-harm, they can also help identify those who might benefit most from interventions – whether it’s conflict resolution training, mental health support, or transfer to a specialist unit. The MoJ’s plan specifically mentions using AI to enable “personalised education and rehabilitation,” such as tailoring training programs for offenders. For instance, imagine an AI that analyzes an inmate’s learning history, literacy level, and behavioral patterns to suggest vocational courses or therapy that have the highest success rate for someone with that profile. This could make rehabilitation efforts more targeted and effective, rather than one-size-fits-all. Technology might also improve continuity of care: the single digital offender ID could ensure that when someone is released on probation, their risks and needs – as identified in custody – are clearly communicated to probation officers, so support doesn’t lapse. As Scott Dylan, I have long advocated that prisons should be measured not just by security, but by how well they prepare individuals to rejoin society. Properly used, AI could assist in that mission by highlighting rehabilitative opportunities (as the plan puts it) alongside risk indicators. For example, an AI analysis might reveal that an inmate with a history of addiction is nearing release and flag that enrolling them in a drug treatment program could significantly cut their reoffending risk – a prompt for prison staff to act. In this way, AI can serve as a kind of guide towards a more constructive, less reactive prison environment.

The workforce implications are also significant. Prisons and probation services in the UK have been under strain, with staffing shortages and high burnout rates. Introducing AI could transform the nature of some jobs. Ideally, mundane tasks (paperwork, data entry, basic monitoring) will be offloaded to algorithms, while human staff focus on interpersonal roles that only they can do – like counseling, conflict de-escalation, and dynamic security (building rapport to gather intelligence). This shift could make careers in criminal justice more rewarding and skill-based. However, change can be unsettling. Some prison officers might worry that AI monitoring tools are there to scrutinise their performance or reduce the need for as many staff. It’s encouraging that the plan was developed in consultation with unions, but as implementation unfolds, maintaining an open dialogue will be essential. Training programs must not only teach staff how to use new tools but also reassure them of their continued, central role. After all, technology should augment, not alienate, the human element in prisons. If an AI flags a prisoner for potential violence, it’s a human officer who must approach and talk to that prisoner and defuse the situation – skills that require experience and empathy. In short, the workforce will need to evolve into tech-assisted rehabilitators. This may even attract new talent with blended expertise in criminal justice and IT. Over time, we might see roles like prison data analysts or probation tech-specialists emerging, to translate AI outputs into practice on the ground.

Of course, the drive for efficiency must not eclipse the human touch. Prisons fundamentally deal with people – often damaged and vulnerable people – and no algorithm can show compassion or exercise moral judgement. Scott Dylan’s perspective is that empathy and justice go hand in hand. We should use AI to reduce drudgery and improve safety, so that officers and support staff have more bandwidth to engage with inmates positively. That means KPIs for these AI projects should include measures of staff-inmate interaction quality and rehabilitation outcomes, not just cost savings or surveillance metrics. It’s notable that the Tony Blair Institute’s Director of Innovation Policy praised the Action Plan’s ambition, saying if implemented well it could “help offenders receive the personalised support they need for effective rehabilitation, making streets safer”. This encapsulates the dual benefit we must aim for: safer communities and better chances for offenders to turn their lives around. Technology can assist, but the heart of justice reform remains a human endeavor.

A Smarter, Safer, Fairer Future – If We Get It Right

The UK’s new AI Action Plan for Justice marks a pivotal step towards modernising a justice system often seen as antiquated. It’s a vision of smart justice – one where data and algorithms help spot danger before it strikes, streamline cumbersome processes, and allocate resources where they’re most needed. As we’ve explored, the potential upsides are significant: fewer violent incidents behind bars, more efficient courts, and more personalised rehabilitation that could ultimately reduce reoffending. Embracing innovation is not just about efficiency; it’s about building a justice system that is responsive and preventative rather than reactive.

However, as with any powerful tool, implementation is everything. This is where UK legislators, technologists, and criminal justice leaders must step up together. First, we need clear ethical guidelines and transparency at every turn. The public has a right to know what AI systems are being used, what data feeds them, and how decisions are made. The MoJ should publish results of pilots and ongoing audits – for instance, releasing statistics on the accuracy of the violence predictor and any biases detected, as well as the outcomes (e.g. number of violent incidents averted). Independent oversight will be crucial: this could take the form of an advisory panel including technologists, ethicists, prisoner representatives, and legal experts to continually review AI projects in justice. Parliament too should keep a close eye, perhaps via committees scrutinising the rollout and ensuring it remains evidence-driven. If an AI tool doesn’t actually deliver on its promise or has unacceptable error rates, it should be revised or scrapped – no clinging to tech for tech’s sake.

Secondly, legislation and regulation may need updating. The UK could consider a statutory framework for algorithmic decision-making in criminal justice, enshrining principles like human oversight, non-discrimination, data protection, and the right to appeal an automated decision. Proactive alignment with emerging international standards (such as the EU’s AI Act) would position Britain as a leader in ethical AI governance. We should also encourage a domestic ecosystem of researchers to stress-test government algorithms (“red-teaming” them for weaknesses) and of startups focusing on ethical AI solutions for the public sector. The tags I’ve used – AI in justice, algorithmic governance – signify that this is as much about governance innovation as technology innovation.

Finally, this moment calls for a cultural mindset shift within the justice system. Leaders in policing, prisons and probation must champion a vision where AI is there to empower, not replace. Frontline staff should be involved in design and feedback – after all, a tool is only as good as its fit with reality. And we must engage the broader public, including former prisoners and victims’ groups, in discussion about these changes to build public trust. If citizens understand that, say, an AI is being used to better protect prison officers and inmates – and that it’s rigorously checked for fairness – they are more likely to support it. Conversely, any secrecy or dismissiveness about concerns will breed suspicion. In a justice system, legitimacy is everything.

To bring this to a close with a final thought: AI can indeed help redesign the future of British prisons and the wider justice system – but it must be done the right way. As someone passionate about prison reform, I am optimistic that with the proper checks and balances, technology can accelerate much-needed changes: making prisons safer and more rehabilitative, making courts more accessible, and making the public safer in turn. The key is ensuring that our use of AI is smart, transparent, and humane. We have the opportunity to lead the world in integrating AI with justice ethically. Let’s seize that opportunity, but temper boldness with wisdom. In the pursuit of cutting-edge solutions, we must never lose sight of the core values of justice – fairness, dignity, and second chances. With that compass in hand, AI could indeed herald a brighter era for British justice.

Scott Dylan is the founder of NexaTech Ventures and an advocate for prison reform. He writes on AI, justice, and social impact.
