Forensic Data Science Laboratory


The Forensic Data Science Laboratory conducts research and development aimed at improving casework capabilities within and across multiple branches of forensic science, particularly casework conducted within the new paradigm for the evaluation of forensic evidence, i.e.:

  • quantification of strength of evidence as a likelihood ratio
  • calculation of likelihood ratios using relevant data, quantitative measurement, and statistical models
  • validation of system performance under conditions reflecting those of the case under investigation
  • reduction of the potential for cognitive bias.


The likelihood-ratio framework is the logically correct framework for evaluation of forensic evidence. The potential for cognitive bias can be reduced by restricting subjective judgements to matters such as selection of appropriate data to enter into the system and by directly reporting the output of the statistical model as the strength of evidence.
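As a minimal numerical sketch of the framework (the feature value and the Gaussian models below are hypothetical, not drawn from any casework system), a likelihood ratio is the probability density of the measured evidence under the same-origin hypothesis divided by its density under the different-origin hypothesis:

```python
import math

def gaussian_pdf(x, mean, sd):
    """Probability density of a normal distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# Hypothetical measured feature value from the questioned-origin sample.
x = 1.2

# Hypothetical models: distribution of the feature for the known source
# (same-origin hypothesis) and for the relevant population
# (different-origin hypothesis).
p_same = gaussian_pdf(x, mean=1.0, sd=0.5)   # p(E | H_same-origin)
p_diff = gaussian_pdf(x, mean=0.0, sd=1.0)   # p(E | H_different-origin)

likelihood_ratio = p_same / p_diff
```

Here the resulting likelihood ratio is greater than 1, i.e., the measurement is more probable if the questioned and known samples have the same origin than if they have different origins.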

In terms of adoption of the new paradigm, two of the most advanced branches of forensic science are forensic DNA and forensic voice comparison. We have particular expertise in these branches of forensic science, plus expertise in forensic inference and statistics and in machine learning. We collaborate with researchers and practitioners who have expertise in other branches of forensic science.

We also work on increasing the understanding of forensic inference and statistics among forensic scientists and lawyers.

FDSL also supports the development of international standards in forensic science and forensic speech science. Dr Morrison chairs the Forensic Science Committee of the British Standards Institution (BSI) and, through the Forensic Science Committee of the International Organization for Standardization (ISO), is contributing to the development of ISO 21043 Forensic Science. In the Speaker Recognition Subcommittee of the Organization of Scientific Area Committees for Forensic Science (OSAC), he leads a task group developing a multi-part standard for forensic speaker recognition.

Structure and formation

The Forensic Data Science Laboratory (FDSL) is part of the Computer Science Department, School of Engineering and Applied Science, and part of the Aston Institute for Forensic Linguistics (AIFL). A subunit of FDSL is the Forensic Speech Science Laboratory (FSSL).

The Laboratory was established in 2019 as part of the formation of AIFL. Initial funding for AIFL came from a GBP 5.4 M grant from Research England’s Expanding Excellence in England (E3) programme and a GBP 0.6 M strategic investment by Aston University.



The director of the laboratory is Dr Geoffrey Stewart Morrison. Dr Morrison was a leading applicant on the E3 grant that established AIFL. Prior to the formation of the Laboratory, Dr Morrison and his colleagues spent more than a decade promoting and developing implementation of the new paradigm, primarily in forensic speech science.


Dr Morrison’s former appointments include Director of the Forensic Voice Comparison Laboratory, School of Electrical Engineering & Telecommunications, University of New South Wales, and Scientific Counsel, Office of Legal Affairs, INTERPOL. He is author of more than 50 academic publications, and has been a subject editor and a guest editor for the journals Speech Communication and Science & Justice. He is Chair of the Forensic Science Committee of the British Standards Institution (BSI), and an active member of the Forensic Science Committee of the International Organization for Standardization (ISO) and of the Speaker Recognition Subcommittee of the Organization of Scientific Area Committees for Forensic Science (OSAC). He has forensic casework experience in Australia, Canada, Northern Ireland, Sweden, and the United States.

Dr Roberto Puch-Solis will join the Laboratory as Deputy Director in May 2020. Dr Puch-Solis has almost two decades of experience in forensic data science. During this time he has conducted research on glass, fibres, fingerprints, and DNA. His primary research is in evaluation of DNA profiles. He has led the development of probabilistic systems for the evaluation of DNA profiles, including a probabilistic genotyping system that is currently used in casework.

Dr Puch-Solis’s former appointments include Statistician in the Interpretation Group of the Forensic Science Service, and Lead Statistician at LGC Forensics and Eurofins Forensic Services. His experience in these roles includes conducting research and development, conducting forensic casework, providing casework support, providing training, and providing internal and external consulting services.

Our People

AIFL Members
Dr Nabanita Basu
Research Associate in Forensic Data Science 
Dr Roberto Puch-Solis    
Deputy Director of the Forensic Data Science Laboratory, Senior Lecturer in Forensic Data Science
Dr Geoffrey Stewart Morrison
Director of the Forensic Data Science Laboratory
Dr Philip Weber 
Research Fellow in Forensic Data Science
Additional Aston Members
Dr Diego Faria
Senior Lecturer in Computer Science
Dr George Vogiatzis
Senior Lecturer in Computer Science
Dr Patrick Geoghegan
Lecturer in Biomedical Engineering
Honorary and Adjunct Staff
Dr Rachel Bolton-King
Associate Professor of Forensic Science, Department of Criminal Justice and Forensic Science, Staffordshire University
Dr Rolf Ypma
Forensic Data Scientist, Netherlands Forensic Institute
Dr Ewald Enzinger
Senior Research Engineer, Eduworks Corporation
Prof Cuiling Zhang
Director, Chongqing Institutes of Higher Education Key Forensic Science Laboratory
Dr Claudia Rosas
Associate Professor, Instituto de Lingüística y Literatura, Universidad Austral de Chile


Research and development projects

The following research and development projects are in progress or under development.

Forensic speech science: Development of a forensic voice comparison system

Updated 2020-02-09.


Laboratory members and adjunct members working on this project: Dr Morrison, Dr Weber, Ms Szczekulska, Dr Enzinger, Prof Zhang, Dr Rosas.

We are developing a forensic voice comparison system that can be used for research and casework. We view a system for conducting forensic voice comparison not simply as a collection of software tools, but as also comprising protocols, databases suitable for training and testing under casework conditions, documentation, validation reports, and well-trained practitioners. We aim to develop a system that will meet legal admissibility requirements such as those of Federal Rule of Evidence 702 and the Daubert trilogy of Supreme Court rulings in the United States, and of Criminal Practice Directions 19A in England & Wales.

In forensic voice comparison casework, the relevant population and the recording conditions vary greatly from case to case. Researchers and practitioners need protocols, tools, and data that provide them with the flexibility to deal with this case-to-case variability. Practitioners need to be able to train (or adapt) the system for the conditions of the case, and they need to be able to empirically validate the performance of the system under conditions reflecting those of the case. In order to inform practice, researchers need to explore which options and settings give best performance under particular conditions, and explore the robustness of systems to variability in conditions.

Commercially marketed software tools often lack flexibility, and may be too expensive for researchers and practitioners in lower-GDP countries. Many researchers and practitioners in the field lack the programming skills to make use of existing open-source automatic speaker recognition toolsets, and licensing restrictions may prevent such toolsets from being used for casework, which counts as commercial activity. For different reasons, existing commercial and open-source tools are often insufficiently well documented for end users and others to easily understand what the tools are actually doing. This conflicts with the transparency that may be required by the courts. Researchers and practitioners therefore need software tools that are low cost, flexible, and easy to use (controllable via a GUI or requiring only very limited programming skills), that are very well documented, that are designed to facilitate validation, and that have code that is open to inspection (we envisage open source, but not open distribution).

In the context of this research and development project, we define two groups of end-users: 1. Researchers and practitioners who will (potentially) use the system to do research and to conduct casework. 2. Service users, i.e., organizations that commission practitioners to perform forensic voice comparison analyses. Potential service users include defence lawyers and law-enforcement agencies.

This project is conducted in collaboration with several partner organizations, including:

Universidad Austral de Chile 
Policía de Investigaciones
Principal collaborator: Dr Claudia Rosas

Southwest University of Political Science and Law
Principal collaborator: Prof Cuiling Zhang

German Federal Police, Bundeskriminalamt (BKA) 
Principal collaborator: Dr Michael Jessen

Netherlands Forensic Institute (NFI) 
Principal collaborator: Mr David van der Vloed

Swedish National Forensic Centre (NFC) 
Principal collaborator: Ms Fanny Carlström Plaza

United States of America:
Federal Bureau of Investigation (FBI) 
Principal collaborator: Mr David Marks


2018–2019: We worked with partners and collaborators on end-user needs assessments in order to identify the data that need to be collected, and the software tools, protocols, and training programmes that need to be developed.

2018–2019: We developed prototypes for core software tools.

2020 Jan–Feb: Consolidated end-user needs assessment and draft of functional requirements completed and distributed to partners and collaborators for feedback.


Funding: Research England, Expanding Excellence in England (E3).

Forensic speech science: Consensus on validation of forensic voice comparison

Updated 2020-03-14.


Laboratory members and adjunct members working on this project: Dr Morrison, Dr Enzinger, Dr Ypma.

Since the 1960s, there have been calls for forensic voice comparison to be empirically validated under casework conditions. Since around 2000, an increasing number of researchers and practitioners have conducted forensic-voice-comparison research and casework within the likelihood-ratio framework. In recent years, this community of researchers and practitioners has made substantial progress toward validation under casework conditions becoming a standard part of practice: they have developed procedures, metrics, and graphics for validating forensic voice comparison systems and reporting the results. The Speech Communication virtual special issue on validation of forensic voice comparison systems was completed in 2019. There is also an ongoing effort through the Organization of Scientific Area Committees for Forensic Science (OSAC) to develop a standard for validation of forensic voice comparison systems.
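One of the metrics developed in this literature is the log-likelihood-ratio cost, Cllr (Brümmer & du Preez, 2006). A minimal sketch of how it is calculated, using hypothetical likelihood-ratio values rather than real validation results:

```python
import math

def cllr(lrs_same, lrs_diff):
    """Log-likelihood-ratio cost (Cllr).
    lrs_same: likelihood ratios from same-origin validation trials.
    lrs_diff: likelihood ratios from different-origin validation trials.
    Lower is better; a system that always outputs LR = 1 scores Cllr = 1.
    """
    penalty_same = sum(math.log2(1.0 + 1.0 / lr) for lr in lrs_same) / len(lrs_same)
    penalty_diff = sum(math.log2(1.0 + lr) for lr in lrs_diff) / len(lrs_diff)
    return 0.5 * (penalty_same + penalty_diff)

# Hypothetical validation results: a good system gives large LRs on
# same-origin trials and small LRs on different-origin trials.
c = cllr([10.0, 50.0, 3.0], [0.1, 0.02, 0.5])
```

Cllr penalizes not only misleading likelihood ratios but also their magnitude, which is why it is preferred over simple error rates for systems whose output is a strength-of-evidence value.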

An outstanding question is:

  • Given the results of an empirical validation of a forensic-voice-comparison system, how can one decide whether it is good enough to be used in court?

In September 2019 we held a two-day meeting to discuss this issue. The meeting was organized and sponsored by the Forensic Speech Science Laboratory (Aston University), and hosted by the Netherlands Forensic Institute (NFI).

We invited a group of participants who could potentially produce a document that would be seen as representing what is “generally accepted within the relevant scientific community”. Invitees were individuals who have knowledge and experience of validating forensic voice comparison systems in research and casework contexts, and individuals who have actually presented validation results to courts. Also invited were individuals who could bring a legal perspective on the issue, and individuals with knowledge and experience of validation of forensic science more broadly.

The issue was discussed:

  • from an admissibility perspective, e.g., in the US with respect to FRE 702 and the Daubert trilogy, and in England & Wales with respect to CPD 19A.
  • with respect to jurisdictions and situations in which formal admissibility criteria do not apply but in which knowledge of validation results should be relevant for the court.
  • from a laboratory procedure or best practice perspective – should the practitioner proceed to analyze the questioned- and known-speaker recordings?

During the meeting, a consensus was reached among those in attendance.

An initial draft of a document describing the consensus was written, and in December 2019 was distributed to meeting attendees. The draft is now being revised through a cyclical process of videoconference meetings and document revision. The process will later be expanded to include invitees who were unable to attend the original meeting.

The plan is to submit the final version of the document for publication open-access in a reputable forensic science journal.


Funding: Research England, Expanding Excellence in England (E3).

DNA: Statistical evaluation of DNA profiles

Updated 2020-02-19.


Laboratory members and adjunct members working on this project: Dr Puch-Solis, Dr Morrison.

This project will extend Dr Puch-Solis’s existing work on evaluation of DNA profiles. It will include comparison of different approaches to probabilistic genotyping.

It will also include transfer of concepts, techniques, and models (e.g., for calibration) from forensic speech science to evaluation of DNA profiles.
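One such technique is logistic-regression calibration, widely used in forensic voice comparison, in which a shift and a scale are fitted so that a system's scores behave as log likelihood ratios. A stdlib-only sketch under stated assumptions: the function name and the scores are hypothetical, and plain gradient descent stands in for the optimizers normally used:

```python
import math

def calibrate(scores_same, scores_diff, epochs=5000, step=0.1):
    """Fit shift-and-scale weights (a, b) by logistic regression, so that
    a + b * score can be interpreted as a log likelihood ratio.
    Plain gradient descent on cross-entropy; a sketch, not production code."""
    a, b = 0.0, 1.0
    data = [(s, 1.0) for s in scores_same] + [(s, 0.0) for s in scores_diff]
    n = len(data)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for s, y in data:
            p = 1.0 / (1.0 + math.exp(-(a + b * s)))  # sigmoid
            grad_a += (p - y) / n
            grad_b += (p - y) * s / n
        a -= step * grad_a
        b -= step * grad_b
    return a, b

# Hypothetical uncalibrated log-LR scores from validation trials.
a, b = calibrate([2.0, 3.0, 1.5, 2.5], [-1.0, -2.0, -0.5, -1.5])
```

After fitting, the decision boundary (where the calibrated log LR is zero) lies between the same-origin and different-origin training scores.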


Funding: Research England, Expanding Excellence in England (E3).

Firearms: Calculation of likelihood ratios from forensic comparison of fired cartridge casings

Updated 2020-02-08.


Laboratory members and adjunct members working on this project: Dr Bolton-King, Dr Basu, Dr Vogiatzis, Dr Morrison.

During 2020, Dr Bolton-King and her collaborators are creating a database of scans of 9 mm Luger type cartridge casings fired from semi-automatic pistols. The aim is to scan 10 cartridges fired from each of 1000 pistols (10,000 cartridges total). The scans are being made using an Evofinder Data Acquisition System.

We will exploit the database to develop and validate a system that uses image-processing and machine-learning techniques to calculate likelihood ratios addressing the hypotheses that questioned- and known-origin cartridge casings were fired from the same pistol versus from different pistols (pistols that fire 9 mm Luger type ammunition).
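The conversion from a machine-learning comparison score to a likelihood ratio can be sketched as follows (the scores and the Gaussian score models are hypothetical illustrations; casework systems typically use calibrated models fitted on much larger training sets):

```python
import math
import statistics

def gaussian_pdf(x, mean, sd):
    """Probability density of a normal distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def score_to_lr(score, same_scores, diff_scores):
    """Convert a comparison score to a likelihood ratio by modelling
    training scores from same-source and different-source pairs as
    Gaussians, then taking the ratio of their densities at the score."""
    mu_s, sd_s = statistics.mean(same_scores), statistics.stdev(same_scores)
    mu_d, sd_d = statistics.mean(diff_scores), statistics.stdev(diff_scores)
    return gaussian_pdf(score, mu_s, sd_s) / gaussian_pdf(score, mu_d, sd_d)

# Hypothetical similarity scores from pairs of scanned cartridge casings.
same_source = [0.80, 0.85, 0.90, 0.75, 0.88]
diff_source = [0.20, 0.35, 0.30, 0.25, 0.40]

lr = score_to_lr(0.82, same_source, diff_source)
```

A score typical of same-pistol pairs yields a likelihood ratio well above 1, while a score typical of different-pistol pairs yields a likelihood ratio below 1.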


Funding: Research England, Expanding Excellence in England (E3).

Gait analysis: Calculation of likelihood ratios from forensic comparison of video recordings of walkers

Updated 2020-02-08.


Laboratory members and adjunct members working on this project: Ms Szczekulska, Dr Geoghegan, Dr Faria, Dr Morrison.

In collaboration with Prof Egbert Otten and Ms Marie Wiedemeijer, Center for Human Movement Sciences, University of Groningen, we are developing a project on forensic gait analysis.

We will build on Prof Otten and Ms Wiedemeijer’s existing work that calculates likelihood ratios using features that are visually extracted from video images by human coders. We plan to collect larger forensically relevant databases, attempt to make improvements in statistical modelling, and conduct validation studies. We also plan to explore automatic extraction of features.

Fingerprints: Calculation of likelihood ratios from fingermark and fingerprint images

Laboratory members and adjunct members working on this project: Dr Puch-Solis, Dr Basu, Dr Vogiatzis, Dr Morrison.

We are developing a research project on calculating likelihood ratios for comparisons of fingermarks and fingerprint images.


Recent forensic science publications authored by Laboratory members

Updated 2020-02-09.

  • Morrison G.S., Enzinger E., Ramos D., González-Rodríguez J., Lozano-Díez A. (2020). Statistical models in forensic voice comparison. In Banks D.L., Kafadar K., Kaye D.H., Tackett M. (Eds.), Handbook of Forensic Statistics (Ch. 21). Boca Raton, FL: CRC.
  • Rosas C., Sommerhoff J., Morrison G.S. (2019). A method for calculating the strength of evidence associated with an earwitness’s claimed recognition of a familiar speaker. Science & Justice, 59, 585–596.
  • Morrison G.S., Enzinger E. (2019). Multi-laboratory evaluation of forensic voice comparison systems under conditions reflecting those of a real forensic case (forensic_eval_01) – Conclusion. Speech Communication, 112, 37–39.
  • Morrison G.S., Kelly F. (2019). A statistical procedure to adjust for time-interval mismatch in forensic voice comparison. Speech Communication, 112, 15–21.
  • Morrison G.S., Enzinger E. (2019). Introduction to forensic voice comparison. In Katz W.F., Assmann P.F. (Eds.), The Routledge Handbook of Phonetics (Ch. 21, pp. 599–634). Abingdon, UK: Taylor & Francis.
  • Morrison G.S., Ballantyne K., Geoghegan P.H. (2018). A response to Marquis et al (2017) What is the error margin of your signature analysis? Forensic Science International, 287, e11–e12.
  • Morrison G.S. (2018). Admissibility of forensic voice comparison testimony in England and Wales. Criminal Law Review, (1), 20–33.
  • Morrison G.S., Enzinger E., Zhang C. (2018). Forensic speech science. In Freckelton I., Selby H. (Eds.), Expert Evidence (Ch. 99). Sydney, Australia: Thomson Reuters.
  • Morrison G.S., Poh N. (2018). Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios / Bayes factors. Science & Justice, 58, 200–218.
  • Morrison G.S. (2018). The impact in forensic voice comparison of lack of calibration and of mismatched conditions between the known-speaker recording and the relevant-population sample recordings. Forensic Science International, 283, e1–e7.
  • Morrison G.S., Enzinger E. (2018). Score based procedures for the calculation of forensic likelihood ratios – Scores should take account of both similarity and typicality. Science & Justice, 58, 47–58.