This is the Final Research Report for a project with the goal of relating matching probabilities to the error rates that arise in common individuality studies and proposing a paradigm for evidence interpretation based on error-rate studies.
The research sought to determine 1) the formal sampling and statistical experiments for source and sub-source propositions for questioned documents, fingerprints, and facial recognition; 2) whether a paradigm can be proposed for reasoning about the source of traces based only on the error rates used to characterize the performance of automated identification systems; 3) whether a set of methods can be developed for characterizing the uncertainty about estimated error rates; and 4) how to present the results for the uncertainty associated with error-rate-based methods for quantifying evidence.

This report indicates the project has implications for shifting forensic practice paradigms in several significant ways. First, formalizing source and sub-source propositions builds the foundation for forensic evidence interpretation. Second, the proposed paradigm based on error rates interprets forensic evidence in two stages: a similarity stage and an exclusion stage. This paradigm is more intuitive for forensic scientists than reporting a single number such as a likelihood ratio, and interpreting the evidence in separate stages also provides the jury with more information than a likelihood ratio alone. Third, visualization tools developed in the project present forensic examiners with intuitive statistical graphics about the uncertainty associated with error-rate-based methods for quantifying evidence.

6 figures, 4 tables, and 50 references
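To make the error-rate ideas above concrete, the following is a minimal sketch, not taken from the report itself: it computes a Wilson score confidence interval for an estimated error rate (one standard way to characterize uncertainty about a proportion estimated from a validation study), and an error-rate-based likelihood ratio of the form (1 - false-negative rate) / (false-positive rate). All counts and rates here are hypothetical illustration values, and the report's actual methods may differ.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. an error
    rate estimated as k errors observed in n validation comparisons.
    z=1.96 gives an approximate 95% interval."""
    p_hat = k / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical validation study: 3 false positives in 1,000
# same-source/different-source comparisons.
lo, hi = wilson_interval(3, 1000)

# Error-rate-based likelihood ratio for a reported "match":
# P(match | same source) / P(match | different source),
# approximated by (1 - FNR) / FPR with hypothetical rates.
fnr, fpr = 0.02, 0.003
lr = (1 - fnr) / fpr
```

The interval, rather than the point estimate alone, is the kind of quantity the report's visualization tools would present: with only 3 errors in 1,000 trials, the plausible range for the true error rate spans several-fold, which directly affects how strongly the resulting likelihood ratio should be interpreted.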