
Weighing up the reliability of expert criminal evidence

Updated Thursday, 18 May 2023

Caroline Derry and Ian M. Kennedy debate the effects of forensic and linguistic distinctions on real-life decision making.


 

This content is part of our Jury Hub.

 


The reliability of expert criminal evidence is a hot topic of debate these days. Depending on who you ask, the decision makers tasked with evaluating evidence for trials are either too sceptical of it or too trusting. In England and Wales, the Law Commission and the Court of Appeal have both suggested that a ‘more rigorous approach’ to it is needed. What is certain is that its reliability is contested.

 

A great deal of work has been done in recent years to address public confidence in the forensic evidence relied upon in criminal trials. The Law Commission’s recommendations led to guidance for the criminal courts (see 19A.5-6) on assessing the reliability of expert evidence. The Forensic Science Regulator (FSR) continues to push for statutory powers to govern the use of forensic evidence in the criminal justice system, and through its Codes of Practice and Conduct (the Codes) recognises how factors such as bias and uncertainty of measurement can undermine the reliability of evidence. How these issues are mitigated varies between forensic disciplines: some would argue, for example, that uncertainty of measurement is not relevant to digital forensics, where little measurement science is involved.

 

Nevertheless, the final step in communicating a forensic expert’s analysis, whatever the discipline, is the report and the words it uses to express their conclusions.

 

As reported elsewhere, phrases such as ‘is consistent with’ and ‘likely’ are problematic, yet remain in common use in the courts. A study of 500 participants across a variety of forensic disciplines found that ‘questionable communication practices’ were common and that ‘little information’ was provided to support practitioners’ ‘absolute conclusions’.

 

Other studies have also found problems with ambiguous language. Sunde, for example, examined multiple expert reports and concluded that digital forensics relied on ‘strength of support’ style conclusions that were one-sided and rested on only one hypothesis. Crucial information for assessing credibility was often absent or poorly presented, and descriptions of the validity or reliability of the methods used were lacking.

 

Even the standards designed to support practice have attracted criticism. Marshall found that the FSR’s reliance on standards such as ISO/IEC 17025, and even its own guidance, suffers from imprecise language and “may cause an over-emphasis on establishing requirements”. This, they concluded, encouraged the development of overly complex methods, which would risk making it harder for a jury to evaluate their fitness for purpose.

 

On the use of language, the FSR provides guidance on opinion evidence, acknowledging that different fields of forensic science have “evolved their own ways of assessing that probative value and use their own terminology.” In support of this, the FSR promotes the use of the Case Assessment and Interpretation (CAI) framework, one of four guides to the appropriate use of statistics in the courtroom published by the Royal Statistical Society (RSS).
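At the heart of the CAI framework is likelihood-ratio reasoning: asking how probable the evidence is under each of two competing propositions, rather than asking how probable a proposition is. A minimal sketch in Python, using entirely hypothetical numbers (not drawn from any real case or from the FSR/RSS guidance itself), shows the shape of the calculation:

```python
# Toy illustration of likelihood-ratio reasoning.
# All probabilities below are invented for illustration only.

def likelihood_ratio(p_given_prosecution: float, p_given_defence: float) -> float:
    """LR = P(evidence | prosecution proposition) / P(evidence | defence proposition)."""
    return p_given_prosecution / p_given_defence

# Suppose a trace would be observed with probability 0.95 if the suspect
# was present (prosecution proposition), but with probability 0.01 by
# coincidence in the relevant population (defence proposition).
lr = likelihood_ratio(0.95, 0.01)
print(f"Likelihood ratio: {lr:.0f}")
# An LR of 95 means the evidence is 95 times more probable under the
# prosecution proposition than under the defence proposition; an LR of 1
# would mean the evidence does not discriminate between them at all.
```

The point of expressing weight this way is that the expert reports only the strength of the evidence, leaving the overall assessment of guilt or innocence to the jury.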

 

The Court of Appeal has given less guidance on the language in which opinions should be expressed. Rather, it has focused on how experts should give their evidence as a whole. It has emphasised the need for an expert to provide objective ‘independent assistance’ to the court and to state the basis of their opinion. They should acknowledge the limits of their expertise and of the data.

 

You might be forgiven for thinking this is all wrapped up nicely, so why bother to blog about it? Despite the sound advice from the RSS, the FSR concedes there remains “a philosophical debate” between the advocates of quantified probability and those who believe probability is “personal and incorporates uncertainty”. The courts, meanwhile, have adopted an uneven approach: DNA matches are routinely expressed using quantified probabilities, but this approach is resisted for other kinds of evidence. So once again the jury is faced with making decisions in the classic scenario of disagreeing experts.

 
