Using a case study for illustration, this presentation proposes that meaningful review of probabilistic genotyping system results by an analyst is challenging, if not impossible, where the likelihood ratio is low. The author will posit that the vast range of possible likelihood ratio values permits “forgiveness” of troubling data, making it likely that exculpatory data will be missed.
Probabilistic genotyping systems (PGS) purport to rely on the experience and judgment of analysts at three significant stages. First, and separate from the PGS analysis, the analyst will interpret the DNA data and make appropriate edits to the electropherogram. Second, the analyst will determine the number of contributors (“NOC”) present in the sample. The data and NOC are then input into a PGS (here, we will be discussing STRmix), which will attempt to resolve the mixture, deconvoluting it and identifying the possible contributing genotypes. Finally, the analyst will review the PGS output, including diagnostics, genotype weightings, and contribution ratios, endeavoring to ensure the results make “intuitive sense.” According to the developers, this critical check on the algorithm relies on the interpreting analyst’s experience and training to affirm that the deconvolution comports with the data observed.[1]
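To make concrete how the genotype weightings feed into the reported statistic, the PGS literature commonly presents a simplified per-locus likelihood ratio for the contributor position the person of interest is proposed to occupy. The expression below is an illustrative simplification (ignoring subpopulation corrections and relatedness), not a quotation of any case-specific formula:

\[
\mathrm{LR}_{\text{locus}}
  = \frac{\sum_{j} w_j\,\mathbb{1}\!\left[g_j = g_{\mathrm{POI}}\right]}
         {\sum_{j} w_j\,\Pr(g_j)},
\qquad
\mathrm{LR} = \prod_{\text{loci}} \mathrm{LR}_{\text{locus}},
\]

where \(w_j\) is the weight the deconvolution assigns to genotype set \(j\), \(g_j\) is the genotype in the contributor position of interest within that set, \(g_{\mathrm{POI}}\) is the person of interest’s genotype, and \(\Pr(g_j)\) is the population probability of \(g_j\). The point relevant here is that even a small weight on the matching genotype can yield a ratio above 1 when the competing genotype probabilities in the denominator are themselves small.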
However, one case study illustrates that in some circumstances the analyst will be unlikely to question the PGS results, either because doing so would undermine the laboratory’s preliminary interpretation or, more likely, because a low likelihood ratio may appear to have incorporated the uncertainty the analyst identifies, rendering amendment seemingly unnecessary. This apparent thoroughness of review may thus mask exclusionary (and exculpatory) results.
The issues identified could include improper interpretation of the data, an erroneous NOC determination, insufficient or less-than-ideal underlying data, or failures of the PGS itself. However, when a person of interest is compared to the mixture and a very low likelihood ratio is produced, these potential failures may not be closely examined or recognized, and exculpatory information could be overlooked.
In this case study, a sample is determined to comprise two contributors: a significant major contributor (99+%), likely from a blood stain, and a very minor contributor (less than 1%), likely low-template DNA from skin cells. The resulting likelihood ratio is very low (though still admissible in court), notwithstanding various factors that suggest full exclusion. A closer manual review of the data and of the person-of-interest comparison reveals that more than half of the second contributor’s alleles distinct from the major contributor do not align with the person of interest. Rather than producing the exclusion that might be expected for a single-source data set, however, the likelihood ratio remains inclusionary. It is worth examining this case to investigate what factors may or may not have been considered, to acknowledge the increased uncertainty surrounding low-level minor contributors, and to raise awareness that a potentially exclusionary result can be reported as an inclusionary, albeit low, likelihood ratio.
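A minimal numerical sketch, using entirely hypothetical weights and genotype frequencies (none drawn from the case study), shows how this pattern can arise at a single locus under the simplified LR form given above:

```python
# Minimal sketch with hypothetical numbers (not from the case study) of the
# simplified per-locus LR form: weights supporting the POI's genotype divided
# by the weighted population probabilities of all candidate genotypes.

# Hypothetical deconvolution output for the minor contributor at one locus:
# (genotype, weight assigned by the deconvolution, population frequency of that genotype)
genotype_sets = [
    ("12,14", 0.70, 0.02),  # most of the weight sits on genotypes that do NOT match the POI
    ("11,13", 0.25, 0.03),
    ("15,16", 0.05, 0.01),  # the POI's genotype carries only 5% of the weight
]
poi_genotype = "15,16"  # hypothetical person-of-interest genotype

numerator = sum(w for g, w, _ in genotype_sets if g == poi_genotype)
denominator = sum(w * f for _, w, f in genotype_sets)

lr_locus = numerator / denominator
print(f"Per-locus LR = {lr_locus:.2f}")  # 0.05 / 0.022 ≈ 2.27: low, but still above 1
```

Multiplied across loci, values like this can accumulate into a likelihood ratio that is modest yet still inclusionary, even though most of the deconvolution weight, and more than half of the distinct minor-contributor alleles, do not align with the person of interest.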
[1] Russell, L., Cooper, S., Wivell, R., Taylor, D., Buckleton, J., & Bright, J.-A. (2019). A guide to results and diagnostics within a STRmix™ report. Wiley Interdisciplinary Reviews: Forensic Science, 1(6), e1354.