Aston Institute for Forensic Linguistics Past Events
Many of our events are recorded, and our library of videos can be accessed along with additional material such as audience Q&A sessions and write-ups by Institute members.
Please find a selection of talks in the playlist below.
1. Cathy Basterfield: Analysis of written information for vulnerable communities
2. Ana-Maria Jerca: Agreement among adversaries
3. Deise Ferreira Viana de Castro: Women and motherhood in criminal justice
4. Ma. Kaela Joselle Madrunio: Epistemic and evidential markers and stancetaking of courtroom interactants in crime docuseries
5. Marie Bojsen-Møller: When genres collide: The uptake of threats in legal documents
6. Joao Pedro Padua: Narrative devices for moral construction of character in judicial decisions: examples from two high-profile Brazilian cases
7. Matthew Adegbite: Persuasion through code-switching in police-suspect interrogation in multilingual Nigeria
8. Andrea Nini and Shunichi Ishihara: The likelihood of lexicogrammatical overlap
9. Rebecca Reglin and Tatjana Scheffler: Emoji and truthfulness
10. Shaomin Zhang: From flaming to incited crime: recognising cyberbullying on Chinese WeChat account
11. Qing Zhang & Xuyi Tang: Contaminated narrative: a case study of a juvenile victim’s narration in police interviewing in China
12. Peter Gray: The modal muddle in jury directions
13. Chris Heffer: Poisoning, partisanship and the regulation of misinformation
14. Tim Grant and Malcolm Coulthard: Authorship and arbitration: the importance and unimportance of multiple authorship in the $50bn Yukos Award
15. Lauren Devine: Through a glass darkly: considering issues of legal language and interpretation for linguists
Bio: Janet Ainsworth is the John D. Eshelman Professor of Law at Seattle University and has an honorary appointment at China University of Political Science and Law in Beijing. Before joining the faculty of Seattle University, she practiced criminal trial and appellate law as a public defender in Seattle and continues to assist in Innocence Project cases. Her scholarly interests lie at the intersection of law, language, and culture. Together with Lawrence Solan, she co-edits the Oxford University Press series, Studies in Language and the Law.
Is there a Method to our Madness?: Some Considerations regarding Methodologies in Forensic Linguistics, Past, Present, and Future
Forensic linguistics as a field has utilized a variety of linguistics-based methodologies in its research program and, as a result, achieved significant success in improving the quality of justice systems. This presentation will consider some of the challenges faced by our field as it matures and continues to develop, and will suggest ways in which forensic linguists can work collaboratively with scholars based in other disciplines to continue the forensic linguistic program of making legal systems more just, fairer, and more accurate through an understanding of language issues in the law.
Bio: Tammy Gales is an Associate Professor of Linguistics and the Director of Research at the Institute for Forensic Linguistics, Threat Assessment, and Strategic Analysis at Hofstra University, NY. She received her Ph.D. from the University of California, Davis, where she was awarded a fellowship to examine authorial stance in threatening communications at a behavioral analysis firm of retired FBI agents in Washington D.C. Her subsequent research has applied corpus and discourse analytic methods to the examination of stance markers in parole board hearings, cross-examinations of assault victims, and confession statements; and she has applied corpus methods to the interpretation of disputed meanings in legal statutes and trademark cases. Gales has trained law enforcement from agencies across Canada and the U.S.; she currently serves on the Executive Committee for the International Association of Forensic Linguists; and she is co-editor of the new Elements in Forensic Linguistics series from Cambridge University Press.
"I need a forensic linguist to help with my problem": The missing link between experts and those who need them
In addition to several research and social justice goals, the IAFL’s aims include 1) promoting the use of linguistic evidence in a range of criminal and civil cases, 2) furthering the interchange of ideas and information between legal and linguistic communities, 3) improving public understanding of the interaction between language and the law, and 4) disseminating knowledge about language analysis and its forensic applications among relevant professionals around the world. In each of these aims there is a shared theme – connecting those with linguistic expertise to those who may benefit from that expertise. Yet, in this very goal, we currently have a missing link.
Bio: Jennifer Glougie has been a member of the British Columbia Law Society since 2004 and practices primarily in labour and employment law. She was appointed to the BC Labour Relations Board in 2016 and has been the Board's Associate Chair of both Mediation and Adjudication since 2018. She received her Ph.D. in Linguistics from the University of British Columbia in 2016. Her dissertation examined the semantics and pragmatics of English evidential expressions, using police interview transcripts as a data source. Jennifer was part of the IAFL Executive, as an Ordinary Member, between 2015 and 2019.
Linguistic hurdles to accessing positive rights: a case study
Forensic linguists often identify and expose linguistic hurdles that compromise an individual’s capacity to defend themselves against state and other actions. True access to justice, however, requires more than just a protection against unfair state intervention; individuals must be free to, and capable of, using the legal system to enforce positive rights they have against others.
Bio: Chris Hutton's research focuses on political issues in language, linguistics, and law. Following his 1999 study of linguistics and ideology in Nazi Germany, Linguistics and the Third Reich (Routledge) he pursued various projects at the intersection of linguistics, law and intellectual history. The focus in this work has been on issues of legal definition and classification. Publications include Race and the Third Reich (2005), Definition in Theory and Practice (with Roy Harris, 2007), and Language, Meaning and the Law (Edinburgh, 2009). More recently he has published two books: Integrationism and the Self. Reflections on the Legal Personhood of Animals (Routledge, 2019) and The Tyranny of Ordinary Meaning: Corbett v Corbett and the Invention of Legal Sex (Palgrave, 2019). His current research concerns US naturalization law between 1874 and 1945, looking at how it was caught between the categories of race science and those of ordinary language.
Ordinary language as a legal construct: remarks on classification and self-classification
The ordinary meaning doctrine states that words and phrases in statutes, contracts, and other legal texts are given their ordinary meaning, unless they are deemed to be technical terms of law, or to belong to a distinct commercial, professional, dialectal, or sub-cultural variety. The category of ordinary language is central to debates in the philosophy of language and literary theory. Within linguistics, in particular sociolinguistics, it has no particular resonance. In law, ordinary language (and its near-synonyms) is a pivotal concept, both in judicial reasoning and in jurisprudential discussions of legal interpretation. It is contrasted at the most fundamental level with legal language, that is, words and phrases that are recognized as belonging to law’s specialized register. An implicit claim to stability and communality makes ordinary language a plausible reference point in legal argumentation, in that it evokes a bridge between the legal domain and the community of speakers to whom law is addressed. It implies a substantial overlap between the language of law and the language of ‘ordinary speakers’, and thereby offers an implicit defence against arguments that law is an alienated or ‘foreign’ discourse domain. But the category of ordinary language suffers from a sociological and sociolinguistic deficit. It is arguably not possible to give empirical substance to the category, even while it possesses powerful intuitive plausibility. Yet law cannot in practice dispense with this category, though it is manifestly open to manipulation in judicial reasoning. Typical cases concern words like vehicle, sandwich, or building. This presentation focusses on self-classification, looking in particular at naturalization cases in the United States, where questions of ordinary language arise in relation to categories of human identity. Here the interpretative politics of law become much more contentious.
Law is then faced with a difficult choice between strategies of objectification and the recognition of self-classification and self-designation.
Bio: Nicci MacLeod is a Senior Lecturer in English Language and Linguistics at Northumbria University, having spent many years working at the Centre for Forensic Linguistics, the predecessor of AIFL. Her work at Aston with Tim Grant was concerned with the linguistic aspects of identity disguise in online investigations into child sex abusers, and she is co-author of a book on this topic Language and Online Identities (Cambridge). She has conducted research on a wide range of other topics from police interviews with rape victims through to narratives of atrocity in 17th century Ireland, and has also carried out a large volume of forensic linguistic casework including in cases of blackmail, sexual abuse, and murder. Her current research interests centre variously on investigative interviewing, the status of expert evidence, and representations of criminality in a wide range of contexts.
Dealing with sensitive topics in forensic linguistics teaching, research, and casework: striking the balance
As the latest BAAL guidelines (BAAL 2021) attest, understanding of research ethics in applied linguistics has noticeably broadened in recent years, now encompassing not only our responsibilities to research participants but also to researchers, students, and the general public throughout the research and teaching processes. As our deeply missed former president Ron Butters highlighted for us back in 2011, we have committed ourselves to ‘deepening and understanding the nature of the ethical implications of our testimony and writing’ (p.352): and it is thanks to the work of him and others that we now have the IAFL Code of Practice to guide us in the uniquely forensic linguistic endeavour of casework. He noted then that there was still much work to be done, so how far have we come in the ten years since he gave his retiring presidential address? In this paper I illuminate the complex issues that our dealings with sensitive topics and data potentially present to researchers, students, and the public at large, with specific reference to three distinct areas of work in which I have been involved over my career: research into highly controversial and disputed narratives of atrocity collected in 17th century Ireland; teaching and dissemination around (i) the language of online child sexual abusers and (ii) investigative interviews with women reporting rape; and, finally, criminal defence casework dealing with the determination of meaning in Urban British English. Each of these contexts presents a unique set of challenges to be negotiated, encompassing everything from community cohesion and racial equality through to potential triggers for individual past trauma. I conclude with some thoughts on how we might confront these difficulties in a way that does not, and is not seen to, compromise our impartiality or academic freedoms. The turn in applied linguistics towards a broader understanding of ethics is of course to be applauded.
I suggest that by positioning ourselves firmly within an approach that gives ground to student concerns, prioritises researcher welfare, attaches importance to engaging with the public ramifications of our work, and shamelessly promotes our discipline to those who need us most – whether they know it or not – we can best serve our organising principle of improving justice through language analysis.
Bio: Isabel Picornell, PhD CFE, is a consultant forensic linguist and Director of QED Limited, providing forensic linguistic services to the corporate, investigative, and intelligence sectors. She holds a PhD in Forensic Linguistics from Aston University (UK) and is currently a Visiting Research Fellow at the Aston Institute for Forensic Linguistics, with a research interest in authorship and deception in faked contexts. Isabel is Vice President of the International Association of Forensic Linguists (IAFL), a member of the Germanic Society for Forensic Linguistics (GSFL), and a Certified Fraud Examiner (ACFE).
IAFL at 30: where we are, and where do we go from here?
The International Association of Forensic Linguists is on the cusp of turning 30 years old. In that time, Forensic Linguistics has become much more popular, research has flourished, and the Association has grown tenfold. However, this success has raised questions about our identity today: What is ‘Forensic Linguistics’? And what does it really mean to be a ‘forensic linguist’? Looking forward, where does the Association fit in addressing issues of research and professionalism in practice in the years to come? In this plenary talk, I present my own thoughts on these issues and a vision for the future of the Association.
Date: 21 October 2021
Prof Monika Schmid: Language analysis for the determination of origin: Linguistic and legal problems (Essex University)
Asylum seekers often do not have the required documentation (birth certificate, passport) to prove the truth of their claimed origin and persecution. In cases where the narrative is in doubt, many countries have been employing the controversial instrument of Language Analysis for the Determination of Origin (LADO). This consists of an interview being conducted by a native or proficient speaker of one (or more) of the languages the refugee can be expected to speak, based on their claimed origin, and an analysis of this interview by someone with native and/or specialised knowledge of that language.
Dr Marc Alexander: Calling for Help (Keele University, UK)
"My talk focuses on an overarching interest I have exploring social problems in our communities. Through the qualitative approach of discursive psychology, underpinned by conversation analytic methods, I investigate telephone calls between people who are in crisis (e.g., through homelessness, or neighbour disputes) and institutions that have a remit to help. Findings from my research supplement and extend our knowledge (both in theory and application) regarding how people formulate their concerns and the ways in which organisations manage those concerns through their institutional remits."
Durrell Malik Washington Sr. and Toyan Harper: Achieving Juvenile Justice Through Abolition: A Review of Social Work’s Role in Shaping the Juvenile Legal System and Steps Toward an Abolitionist Future (University of Chicago, USA)
"The first juvenile court was created in 1899 with the help of social workers who conceptualized their actions as progressive. Youth were deemed inculpable for certain actions since, cognitively, their brains were not as developed as those of adults. Thus, separate measures were created to rehabilitate youth who exhibited delinquent and deviant behavior. Over one hundred years later, we have a system that disproportionately arrests, confines, and displaces Black youth. Our paper critiques social work’s role in helping develop the first juvenile courts, while highlighting the failures of the current juvenile legal system. We then use P.I.C. abolition as a theoretical framework to offer guidance on how social work can once again assist in the transformation of the juvenile legal system as a means toward achieving true justice"
The Aston Institute for Forensic Linguistics (AIFL) hosted its Annual Symposium in September 2020. Due to the restrictions in place at the time, the event was hosted virtually, yet it still attracted a large number of delegates.
The five centres of AIFL showcased the diversity in cutting-edge research taking place. Each of the centres had the opportunity to introduce themselves, their speakers and the exciting research projects their teams are currently working on.
The two-day online event featured numerous presentations and poster sessions from the Institute’s researchers. Throughout, AIFL facilitated rich discussions and created exciting interactions between delegates from law enforcement, legal, and academic backgrounds. The poster sessions, delivered digitally through a new online environment, Welcome.me, allowed attendees to experience the ‘social spaces’ whilst interacting over coffee (albeit homemade!) and discussing the topics raised.
The event was exceptionally well attended with over six hundred delegates from across five continents signing up for the event, and around three hundred delegates attending the busiest session.
Forensic linguistics is at the cutting edge of the undercover policing of child sexual abuse on the open internet and dark web, and language and identity is a fundamental part of this. The authors have drawn on their extensive experience in training undercover officers to develop innovative methods in identifying the creation and performance of online personas, crucial in detecting identity disguise online.
Incorporating the launch book: Grant, T. & MacLeod, N. (2020) Language and Online Identities: The Undercover Policing of Internet Sexual Crime. CUP
10:00 - 11:00 Dr Nicci MacLeod, Northumbria University. Where it all Began: Linguistic Training for Online Identity Assumption
11:00 - 11:30 Dr Andrea Nini, University of Manchester. Authorship clustering for the dark web: Methodological and theoretical remarks.
11:30 - 12:00 Daniela Schneevogt - AIFL, Aston University. “we were just really in love”: referentiality and clusivity of the pronoun ‘we’ in a Dark Web community of child sex abusers.
12:00 - 1:00 Prof Nuria Lorenzo-Dus, Swansea University. Developing Resistance against Online Grooming: From Linguistic Analysis to Practice-based Interventions.
2:00 - 2:45 Matt Sutton, Senior Manager, National Intelligence Hub (CEOP), National Crime Agency. The linguistic contribution to the investigation of online sexual crime – Operation CACAM, the investigation into Matthew Falder.
2:45 - 3:15 Dr Emily Chiang, AIFL, Aston University, Dr Dong Nguyen, Alan Turing Institute and Prof Jack Grieve, University of Birmingham. Rhetorical analysis of suspected child sexual offenders’ interactions in a dark web image exchange chatroom
3:45 - 4:45 Prof Tim Grant, AIFL, Aston University. Linguistic identities: theory and practice in dark web child abuse fora.
4:45 Book launch celebration – with light refreshments.
Dr Nicci MacLeod - Northumbria University. Where it all Began: Linguistic Training for Online Identity Assumption
The monograph launched during this event is the end product of almost ten years of involvement in the training of online undercover operatives (UCOs) in the linguistic aspects of identity disguise – a task that is required as part of a wide range of types of investigation, including those into the online sexual abuse and exploitation of children.
This talk tracked the development of linguistic input into this kind of training, from the initial discussions and back-of-an-envelope thoughts on what might be the most relevant theories and ideas for trainees, all the way through to the theoretically sophisticated approach to identity that has arisen from these research projects. Heeding Roberts's (2003) plea that “the design and implementation of [applied linguistics] research needs to be negotiated from the start with those who may be affected by it”, Nicci described how she worked in close partnership with practitioners in order to ensure the collaborative research was maximally impactful. Drawing on observations of trainee UCOs preparing for a live operation, and on a series of trials she and her team had run in which trainees had their performances assessed before and after linguistic training, Nicci demonstrated the measurable changes that her research and input have made to professional practice.
As well as addressing some stereotyped beliefs about the way particular groups of people use language online, linguistic input based on Nicci and colleagues’ research also raised trainees’ awareness of higher levels of linguistic analysis, such as pragmatics and interactional patterns. Nicci showed this enhanced knowledge being put into practice in a simulated operation carried out by an experienced UCO as part of the project. She concluded with some thoughts on how the work influenced her team’s thinking around language and identity, a theme picked up by Professor Grant in the afternoon session.
Dr Andrea Nini – University of Manchester. Authorship clustering for the dark web: Methodological and theoretical remarks
An important problem in dark web investigations is how to link usernames that belong to the same person in web forums. Combining data that belongs to the same offender can significantly help investigations, but often the only evidence available to link usernames is linguistic. In the field of computational authorship analysis, the task of grouping texts in a corpus by authors is called ‘author clustering’ and it relies on cluster analysis techniques using frequency of linguistic items as features. This task is related to ‘authorship verification’, or the task of confirming that a certain text was written by a specific suspect, which is one of the most difficult tasks in authorship analysis. This talk covered the methodological problems in applying these techniques to dark web forum data and proposed some theoretical solutions. Andrea included remarks on how studying this problem can shed more light on our understanding of linguistic individuality for forensic linguistics.
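Andrea's actual methods are not reproduced here, but the core idea of author clustering can be sketched in a few lines: represent each text as a vector of linguistic-feature frequencies and link the usernames whose vectors are most similar. The minimal sketch below uses character bigram counts and cosine similarity; the usernames, messages, and feature choice are all invented for illustration, not the speaker's data or approach.

```python
# Illustrative sketch only: linking usernames by stylistic similarity.
from collections import Counter
from math import sqrt

def style_vector(text):
    """Character bigram counts: a crude proxy for habits of spelling
    and punctuation that tend to persist across an author's texts."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(a, b):
    """Cosine similarity between two bigram count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented usernames and messages: two 'authors', two usernames each.
texts = {
    "user_a1": "i dont think thats right tbh... see u later",
    "user_a2": "i dont reckon thats true tbh... catch u later",
    "user_b1": "Please find attached the requested documents.",
    "user_b2": "Kindly find attached the documents you requested.",
}
vecs = {name: style_vector(t) for name, t in texts.items()}

# Pairs of usernames with the most similar style vectors are candidate
# links to a single underlying author.
for x in sorted(vecs):
    for y in sorted(vecs):
        if x < y:
            print(f"{x} ~ {y}: {cosine(vecs[x], vecs[y]):.2f}")
```

In practice, forensic authorship work uses far richer feature sets and far more data than this toy example, and, as the talk stressed, dark web forum data raises methodological problems of its own.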
Prof Nuria Lorenzo-Dus, Swansea University. Developing Resistance against Online Grooming: From Linguistic Analysis to Practice-based Interventions.
The internet enriches children’s lives, providing learning, creative, entertainment and social opportunities. Yet it has a dark side, too, potentially exposing them to abuse and harm. This includes sexual grooming, known instances of which are increasing rapidly, with many more cases going unreported. Children who have been / are being groomed via the internet may not tell anyone because they feel ashamed or guilty; some may not even realise that they are being groomed, given offenders’ manipulative tactics.
How can we better tackle the problem of online child sexual grooming? In this talk, Nuria advocated the importance of understanding both offenders’ linguistic modus operandi and child victims’ discourse within what is essentially a communicative process of entrapment. Firstly, she introduced the data (pseudo- and real- online grooming conversations) and methods (primarily Corpus Linguistics) that over a series of inter-connected studies have enabled the identification of complex communicative patterns within online child sexual grooming. Secondly, she focused on key results regarding offenders’ strategic use of ‘vague language’ and children’s attempts to resist grooming. Finally, Nuria discussed two interventions geared towards combating online child sexual grooming: a hybrid Artificial Intelligence - Corpus Linguistics tool for detecting groomer language and a prevention-oriented training resource for professionals designed to raise their awareness of groomers’ communicative tactics and children’s discourse in response to them. Both interventions are based upon multi-disciplinary academic work, integrating Linguistics, Computer Sciences, Criminology and Public Policy, and are being developed in collaboration with stakeholders, including child protection and law enforcement agencies.
Daniela Schneevogt, Aston Institute for Forensic Linguistics. “we were just really in love”: referentiality and clusivity of the pronoun ‘we’ in a Dark Web community of child sex abusers
Criminals use the Dark Web to build networks for conversation and support (Holt et al. 2015). For those with a sexual interest in children, the internet facilitates the abuse of children, the distribution and consumption of illicit imagery and the exchange of ideas and advice (Durkin et al. 2006; Cohen-Almagor 2013; Holt et al. 2015). Such communities create dense linguistic layers of meaning which are difficult for persons outside the community to penetrate. Drawing on Bell’s (1984) notion of audience design, Van Leeuwen’s (2013) social actor framework and Scheibman’s (2004) concept of clusivity, Daniela’s study aimed to investigate how users of a Dark Web child sex abuse forum use the first person plural pronoun ‘we’ by carrying out a two-fold annotation for semantic referents and clusivity. In these texts, first person pronouns are used in a much wider array of contexts than first anticipated. In addition to the well-studied variation in clusivity – that is, differences between exclusive and inclusive referents – large variation across two further axes was identified: group and function. For example, abusers normalise their actions when referring to both a child and a forum user together as ‘we’, portraying children as active and equal partners in those pseudo-intimate relationships. Scheibman’s (2004) clusivity categories are therefore not sufficient to explain the different pragmatic functions of the pronoun ‘we’ in child abuse forum communication. Applications of these findings include online undercover policing, such as infiltration of crime-related fora, as discussed by Grant and MacLeod (2017, 2020).
Dr Emily Chiang, Aston Institute for Forensic Linguistics, Dr Dong Nguyen, Alan Turing Institute, Prof Jack Grieve, University of Birmingham. Rhetorical analysis of suspected child sexual offenders’ interactions in a dark web image exchange chatroom
Child sexual offenders regularly convene in online spaces to exchange illicit imagery and advice about abusive practices (Davidson & Gottschalk, 2011; Westlake & Bouchard, 2016). In response, law enforcement agencies around the world are increasingly deploying undercover officers who pose as offenders to gather intelligence and evidence on offending communities. Currently, however, little is known about how offenders interact online, raising significant questions around how undercover officers should ‘authentically’ portray the child sexual offender. Emily presented a linguistic description of authentic offender-offender interactions taking place on a dark web image exchange chatroom. She analysed the rhetorical moves and strategies of chatroom users and visualised users’ move structures using Markov chains, enabling a comparison of the linguistic behaviours of specific user ‘types’. Emily and colleagues found that the predominant moves characterising this chatroom were Offering Indecent Images, Greetings, Image Appreciation, General Rapport and Image Discussion, and that these moves (and others) were employed differently by users of seemingly greater and lesser offending experience. Based on their findings, Emily suggested some practical take-home messages for undercover agents working in this domain.
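The Markov-chain component of such an analysis can be illustrated with a minimal sketch: given annotated sequences of rhetorical move labels, count which move follows which, then normalise the counts into transition probabilities. The move names below are taken from the summary above, but the sequences themselves are invented for illustration and do not reflect the study's data or findings.

```python
# Illustrative sketch only: a first-order Markov chain over move labels.
from collections import Counter, defaultdict

# Invented move sequences, one per (hypothetical) chatroom user.
sequences = [
    ["Greetings", "Offering Indecent Images", "Image Appreciation", "Image Discussion"],
    ["Greetings", "General Rapport", "Offering Indecent Images", "Image Appreciation"],
    ["Greetings", "General Rapport", "General Rapport", "Image Discussion"],
]

# Count how often each move is immediately followed by each other move.
counts = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        counts[current][nxt] += 1

# Normalise counts into transition probabilities P(next | current).
transitions = {
    move: {nxt: n / total for nxt, n in followers.items()}
    for move, followers in counts.items()
    for total in [sum(followers.values())]
}
print(transitions["Greetings"])
```

Comparing the transition matrices estimated from different user groups is one simple way to make visible how, say, more and less experienced users structure their contributions differently.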
Prof Tim Grant, Aston Institute for Forensic Linguistics. Linguistic identities: theory and practice in dark web child abuse fora
Tim addressed the idea of a linguistic individual, and how as individuals we draw on an array of resources to perform a variety of online identities. In a theoretical aspect of this discussion Tim explored how the resources we draw on enable but also constrain our identity performances, and he showed in practical terms how this has two implications for online undercover officers (UCOs). The first implication is that in attempting to perform as another person the most convincing route will be to acquire the resources that a target individual draws on in their identity performances; and that these resources can be identified through a detailed linguistic analysis of chat logs (as demonstrated by Dr MacLeod in the first session). The second implication is that undercover officers need to learn to suppress those resources which they commonly use to perform their everyday identities where these resources are not also shared by the targeted individual. Failure to achieve this suppression of identity resources can lead to the performance of hybrid identities somewhere between the UCO and their target identity. Tim illustrated these points with a series of examples from the dataset used in the book he was launching that day, and he concluded by considering next steps in linguistic research in assisting police in the investigation of online sexual crime.
This event explored the characteristics of linguistic varieties in urban contexts, and their implications for forensic linguistic research.
Reflections from the event below are by Natascha Rohde, PhD student, Aston Institute for Forensic Linguistics.
As one of the big themes within sociolinguistics, (new) urban varieties of language(s) have been observed and studied by many linguists in various contexts. In the case of forensic linguistics, however, linguistic variation, especially in multicultural urban contexts, brings about new and different questions, which this event offered an insight into.
The day started with lexicographer and UBE (Urban British English) expert Tony Thorne from King's College London giving an insight into his extensive experience working with police in interpreting drill lyrics and other texts written in UBE. Under the title 'Translating the language of violence: gang slang and Drill lyrics', he gave an overview of the various aspects to consider when dealing with UBE in a forensic or legal context and shared examples from his long-standing career as an expert in urban slang and rap lyrics.
The presentation of the following speaker, Yaron Matras from the University of Manchester, focused on the 'Structural and social aspects of cryptolects' and illustrated the many functions cryptolects can take on in their respective communities of practice. Matras emphasized the multidimensionality and complexity of cryptolects, highlighting their role in making everyday conversation inaccessible to people outside the group in order to coordinate the logistics of transactions, but also how cryptolects overlap with ethnolects, illustrated by the example of Shelta, the language of the Irish Traveller community. While marginalised ethnic minorities use their own variety to flag solidarity and to speak in the presence of bystanders without being understood, their language also serves to aid group bonding, perform identity and talk about taboos in a way their community accepts as respectful.
He also illustrated the wide range of linguistic features of various cryptolects, highlighting that they can be seen primarily as lexicons rather than fully fledged languages, and that they are symbiotic in nature, meaning they exist in symbiosis with another language. Using the languages around them, most cryptolects work by manipulating part of the lexicon to make meaning inaccessible, which can be done by semantic extensions, by adding or swapping syllables, or by using a heritage language, like Irish in the case of Shelta.
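As a toy illustration of the syllable-insertion mechanism described above, the sketch below inserts a fixed syllable into every word while leaving the host language's grammar intact. This is an invented scheme for demonstration purposes only, not Shelta or any real cryptolect.

```python
# Toy illustration: lexical disguise by syllable insertion.
def disguise(sentence, syllable="ag"):
    """Insert a fixed syllable after the initial consonant cluster
    of each word, rendering the lexicon opaque to outsiders while
    word order and grammar stay untouched."""
    out = []
    for word in sentence.split():
        i = 0
        # Find the end of the word-initial consonant cluster (if any).
        while i < len(word) and word[i].lower() not in "aeiou":
            i += 1
        out.append(word[:i] + syllable + word[i:])
    return " ".join(out)

print(disguise("meet me at the market"))  # -> "mageet mage agat thage magarket"
```

Even this trivial transformation makes utterances hard for bystanders to parse in real time, which hints at why the far richer, conventionalised mechanisms of real cryptolects are so effective at restricting understanding to the in-group.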
The final talk of the day, delivered by Eithne Quinn and Latoya Reisner, also from the University of Manchester, was titled 'Procedural unfairness and racially loaded misunderstandings in the use of rap lyrics in UK criminal cases' and gave a remarkable insight into how the judicial system uses drill rap lyrics in criminal proceedings. They provided a rich account of their experience of how drill lyrics have been used as evidence within the criminal justice system, often without the expert knowledge needed to contextualise and understand them, and showcased examples of how drill lyrics have been taken out of their original textual context, fundamentally changing their meaning. Quinn made a convincing argument for how linguistic expertise can help address the inequality of arms and tackle injustice. They concluded with a call to combat racial injustice within the system by providing expert knowledge to defence counsel. This, they argued, would address the inequality of resources when police and prosecution use drill lyrics as evidence in court to secure a conviction, often using inadequate approaches that lack linguistic expertise, fail to appropriately contextualise the lyrics, and risk prejudicing juries.
The event concluded with a vivid and inspiring roundtable discussion about the role(s) (forensic) linguists can and should play in ensuring fair(er) trials and thereby making a well-founded “attempt to improve the delivery of justice” (Tim Grant) using linguistic methods and expertise.
Reflections on Dawn Knight's talk, ‘Ethical considerations for corpus construction: a Welsh language case study’, below are by Fiona Klecher, PhD student, Aston Institute for Forensic Linguistics.
Dawn Knight discussed the ethical considerations, from data collection through to publication, which affected each stage of creating the open-source CorCenCC corpus, the first national corpus of contemporary spoken, written and digital Welsh. One consideration was explaining the concept of a corpus and gaining informed consent, particularly with children. They were helped in this task by “Cor-pws-the-cat”, the project’s mascot, who was used to explain the idea of ‘sharing words’.
Some contributors were reluctant to be recorded because of concerns about being identified. Anonymisation is a complicated issue, compounded by the relatively small numbers of Welsh speakers. Even when identifying features such as names and addresses were removed, there were still concerns that people could be ‘re-identified’ through accent, dialect or recognisable situations. Knight questioned whether it is possible to be truly anonymous in a minority language context. This led to an interesting discussion about finding the right balance between protecting contributors’ anonymity, and retaining as much linguistic detail as possible.
You can read an excellent Event Summary written by Debbie Loakes, University of Melbourne, over at the Research Hub for Language in Forensic Evidence here.
Transcription is almost always an institutional practice (Park & Bucholtz 2009). Across a range of institutional settings, ‘practitioners’ are eliciting and capturing spoken talk from ‘clients’ (Sarangi 1998), transcribing that talk, and later repurposing the transcripts in place of the original interaction. The transcription provides a written record of the spoken interaction, to be used by another party at a later date, in another setting or context.
Our point of departure for this event was that written records, and hence transcripts, are certainly necessary. However, we acknowledge that no transcript of spoken interaction can be exact. Over the three sessions we highlight how transcripts are only ever representations of the spoken talk, never direct copies, and that they inevitably result in a loss of detail. While these ideas are well established in branches of linguistics that deal with transcription, they are not always clearly understood within the law. This has implications for the administration of justice, as our speakers will demonstrate in relation to transcription of police interviews and of indistinct forensic recordings. In organizing this event we invite and encourage further linguistic input into this area of professional practice.
Dr Martha Komter, Netherlands Institute for the Study of Crime and Law Enforcement (NSCR)
In the Netherlands, police reports are drawn up by the interrogators in the course of the interrogations. These reports eventually serve to be quoted or summarised as pieces of evidence in court. Thus, the reports are removed from the context of their production, and inserted into the context of the court proceedings. These de- and recontextualisations inevitably entail changes of meaning. A more detailed inspection of relevant contexts reveals that de- and recontextualisations also occur in the process of transforming talk into text, in transforming this text into an official document, and in inserting that document into the case file.
Changes of meaning are a result of selective reporting and of the transformation of the interaction in the police interrogation. However, legal practitioners appear to rely on the assumption that what is reported represents 'the suspect's own words'. This can be associated with language ideologies that are deeply engrained both in the law books and in legal practice.
Dr Kate Haworth, Dr Felicity Deamer & Dr Emma Richardson, Centre for Spoken Interaction in Legal Contexts (SILC), Aston Institute for Forensic Linguistics, Aston University
In our presentation we introduce the ‘For the Record: applying linguistics to improve evidential consistency in police investigative interview records’ project: an examination of evidential consistency in investigative interview records, asking whether the records serve as an accurate representation of the spoken interaction. Investigative interviews with suspects in England and Wales are audio recorded as standard procedure and a Record of Taped Interview, or ROTI, is produced. The original spoken data are (necessarily) substantially altered through the process of being converted into written format, yet little attention is paid to this. The extent to which the ROTI is an accurate representation of the audio recording is worthy of examination, as the ROTI is routinely presented in court as part of the prosecution case and heavily relied upon in place of the original audio recording. We share findings from an experimental study exploring the extent of variation in interpretations of police interviews when we manipulate the medium in which subjects (potential jurors) are exposed to the interview (i.e. as a written transcript or as the original audio recording). We also consider why records are not routinely standardised, considering a set of influencing factors which collectively result in varied records of spoken interaction.
Professor Helen Fraser, Research Hub for Language in Forensic Evidence, The University of Melbourne
Transcription of indistinct forensic audio – and a framework for understanding factors affecting the creation and evaluation of transcripts
This talk discusses transcription of indistinct covert recordings – conversation captured without the knowledge of the speaker(s), and used as forensic evidence in a criminal trial. This type of transcription is unusual in several ways. First, the audio is often of extremely poor quality, to the extent it is hard for independent listeners to make out what is said. Second, the content, and often the context, of the recording is unknown or contested. Third, the transcript is used, not as a record of what was said, but as assistance to the trier of fact (judge or jury) in hearing what is said, and thus in reaching a verdict.
These and other factors make forensic transcription even more difficult and problematic than other forms of transcription discussed in this symposium. Paradoxically, however, the transcripts are often produced and evaluated by personnel lacking specialised expertise in transcription (police and lawyers). Unsurprisingly, this causes significant problems (forensictranscription.net.au).
Of course, linguistic scientists are keen to help create a better process – but how should that process work? Answering that question well requires a broad understanding of transcription in general. I present (as a starting point for discussion) a framework within which different types of transcript can be located, and suggest how this might form a useful tool for understanding factors that affect the creation and evaluation of transcripts.
The event was followed by a panel discussion.
This event was hosted by the Centre for Forensic Text Analysis and provided a platform for scholars interested in the linguistic individual and how empirically-based findings can inform and improve methods of forensic authorship analysis.
Lars Bülow (University of Vienna): Systematic and non-systematic idiolectal variation from a variationist perspective
This talk introduces not only systematic but also non-systematic idiolectal variation in spoken language from a variationist perspective. Whereas in variationist sociolinguistics, attention has always been given to those cases in which individuals systematically vary across different discourse types or styles, i.e. intra-speaker variation (cf. e.g. Labov 1966, 1972; Bell 1984; Coupland 2001; Hernández-Campoy 2016), very few studies have focussed specifically on idiolectal variation which occurs in the same style of speech irrespective of the context, the situation, or the communication partner (cf. Bülow et al. 2019: 98; Bülow and Pfenninger 2021). It will be argued that both types of idiolectal variation need to be considered in relation to the dimension of time. In addition to the theoretical background, this talk will also present a panel study spanning more than 40 years that deals with idiolectal variation across two discourse types (formal and informal speech) in Austria.
Neus Alberich, Andrea Batel, Krzysztof Kredens and Piotr Pezik (Aston University): Idiolectal variation in Spanish across four discourse types
The idea of authorship attribution is based on two assumptions: that every language user has a unique linguistic style, or 'idiolect', and that features characteristic of that style will recur with a relatively stable frequency. Hundreds of style markers and a variety of attribution techniques have been proposed over the years with some recent studies reporting very high attribution success rates for the less complex closed-set tasks. However, one problem with such studies has been their tendency to use sociolinguistically homogeneous data, whereas a forensically useful author identification system needs to be able to capture stylistic similarities between texts created in different genres and contexts, and for different purposes and audiences.
This paper reports on a study involving nine participants providing linguistic input in Spanish in four discourse types (interview, all-group meeting, email, Whatsapp messages). Using word n-grams as the basic classification tool, we have measured within-author and between-author variability, and identified features that appear to be stable in some of the idiolects across all four discourse types. We ask why this should be the case and discuss the potential of those features to be used in authorship attribution tasks beyond our study.
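The within-author and between-author comparison described above can be sketched in miniature. The following toy example (not the authors' actual pipeline; the texts, the bigram choice and the cosine measure are illustrative assumptions) shows how word-n-gram profiles can be compared within and across authors writing in different discourse types:

```python
from collections import Counter
from itertools import combinations
import math

def word_ngrams(text, n=2):
    """Return a frequency profile of word n-grams for a text."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Toy corpus: (author, discourse type) -> text. Purely illustrative.
texts = {
    ("A", "email"): "thanks a lot for the quick reply see you soon",
    ("A", "chat"):  "thanks a lot for that see you soon ok",
    ("B", "email"): "many thanks for your message kind regards to all",
    ("B", "chat"):  "many thanks for your help kind regards",
}
profiles = {k: word_ngrams(t) for k, t in texts.items()}

within = [cosine(profiles[a], profiles[b])
          for a, b in combinations(profiles, 2) if a[0] == b[0]]
between = [cosine(profiles[a], profiles[b])
           for a, b in combinations(profiles, 2) if a[0] != b[0]]

# For attribution to work across discourse types, within-author
# similarity should on average exceed between-author similarity.
print(sum(within) / len(within) > sum(between) / len(between))  # True
```

In a real study the profiles would of course be built from much larger samples and the classification step would be more sophisticated; the sketch only illustrates the within- versus between-author variability measurement the abstract refers to.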
Malvina Nissim (University of Groningen): Do author traits survive variation? Profiling across genres and languages
Author profiling is the task of predicting some of the author’s traits, like gender or age, disclosed through writing. To perform profiling automatically we develop systems that from existing data learn to make predictions over new, unseen examples. How similar should existing and new examples be in order for systems to be successful? This obviously depends on a more general question: what's the persistence of author traits across different texts, different genres, and even different languages? I will unpack this core point through the presentation of a series of cross-genre and cross-lingual profiling experiments. By discussing not only results but also experimental choices and settings, I will end the talk with reflections on what the optimal experiment to answer our question should look like.
Tatiana Litvinova (Voronezh State Pedagogical University): Idiolect identification in cross-genre and multi-genre scenarios using an approach from bioinformatics
Despite one and a half centuries of research effort, identification of an idiolect based on quantifiable linguistic features remains a challenging task in practice. The complexity of the task increases when the training documents and the documents in question differ in topic and/or genre, although this scenario is not uncommon in forensic settings, where small training corpora are typical (Kredens and Coulthard 2012). To address this type of idiolect identification problem, a corpus of multiple texts per author is needed. The texts should represent the author’s idiolect in various ways, i.e. should differ in topic, genre, mode (written/oral), type, way of production (hand-written, typed on a physical or touchscreen keyboard), etc. The authorship of all the texts in such a corpus should be unquestionable.
A team at the Corpus Idiolectology Lab has collected the first freely available resource of this type, RusIdioStyle, which is now a part of the RusIdiolect database (Litvinova 2021). RusIdiolect has metadata related to both text and author.
Three datasets were derived from RusIdioStyle, each containing texts by four different authors. Each author’s idiolect was represented by four genres: picture description, essay, narrative, and description of the day. Two idiolect identification scenarios were constructed: 1) multi-genre, i.e. both training and test sets were compiled from texts in all four genres; 2) cross-genre, i.e. the classifier was trained on picture descriptions, stories and essays, and tested on descriptions of the day. A range of stylometric markers was used: most frequent word forms and punctuation marks, most frequent lemmas with and without punctuation marks, character n-grams, POS n-grams, full morphological tags, indices of lexical diversity, etc. Each feature type was used separately to test its efficiency. As an analytical tool, methods for multivariate data analysis as implemented in the R package mixOmics (Rohart et al. 2017) were used, namely PCA for assessing the main sources of variation and its supervised version, PLS-DA, used as the classifier.
Using the above methodology, it was shown that genre was the major source of variation for most feature types, despite the general claim about their context-independent nature. Nevertheless, for all the datasets and for both scenarios, idiolect identification accuracies above the baselines were obtained (the significance of the results was tested), although the performance of the classifiers, as well as the most efficient features, differed across datasets.
A possible explanation of the results is discussed, and directions of further research are outlined.
Kredens, K. and Coulthard, M. (2012). Corpus Linguistics in Authorship Identification. In: Solan, L. M. and Tiersma, P. M. (eds) The Oxford Handbook of Language and Law. Oxford: Oxford University Press.
Litvinova, T. (2021). RusIdiolect: A New Resource for Authorship Studies. In: Antipova, T. (ed.) Comprehensible Science. ICCS 2020. Lecture Notes in Networks and Systems, vol 186. Springer, Cham.
Rohart, F., Gautier, B., Singh, A. and Lê Cao, K.-A. (2017). mixOmics: An R package for ‘omics feature selection and multiple data integration. PLoS Comput Biol 13(11): e1005752.
Abstract for symposium
In the first decade of the 2000s, procedures and statistical models were developed for calibrating the likelihood-ratio output of automatic-speaker-recognition systems. These procedures and models were quickly adopted for calibrating the likelihood-ratio output of human-supervised-automatic forensic-voice-comparison systems. Since at least the early 2010s, recommendations have been made to use the same calibration procedures and models in other branches of forensic science. Interest in doing this is now growing. Published examples can be found in the context of multiple branches of forensic science, including fingerprints, DNA, mRNA, glass fragments, and mobile telephone colocation. There are also published examples of the use of these procedures and models to calibrate human judgements. The 2021 Consensus on validation of forensic voice comparison and the Forensic Science Regulator of England & Wales’s 2021 Development of evaluative opinions both recommend/require the use of calibration.
This symposium brings together some of the leading researchers in the calibration of the likelihood-ratio output of automatic-speaker-recognition systems and of forensic-evaluation systems. They explain what calibration is and why it is important. They present algorithms used for calibrating likelihood-ratio systems, and metrics used for assessing the degree of calibration of likelihood-ratio systems. They discuss aspects of calibration on which there is consensus, aspects on which there is disagreement, and aspects requiring additional research. They also discuss how to encourage wider adoption of calibration of likelihood-ratio systems in forensic practice.
You can view the presentation slides here.
Forensic Data Science Laboratory, Department of Computer Science & Aston Institute for Forensic Linguistics, Aston University
Calibration in forensic science
You can view the presentation slides here.
Geoffrey Stewart Morrison
Forensic Data Science Laboratory & Forensic Speech Science Laboratory, Department of Computer Science & Aston Institute for Forensic Linguistics, Aston University
In the first decade of the 2000s, procedures and statistical models were developed for calibrating the likelihood-ratio output of automatic-speaker-recognition systems. These calibration procedures and models were quickly adopted for calibrating the likelihood-ratio output of human-supervised-automatic forensic-voice-comparison systems. They were adopted in both research and casework. The 2021 Consensus on validation of forensic voice comparison recommended that “In order for the forensic-voice-comparison system to answer the specific question formed by the propositions in the case, the output of the system should be well calibrated” and that “forensic-voice-comparison system should be calibrated using a statistical model that forms the final stage of the system”. Since at least the early 2010s, recommendations have been made to use the same calibration procedures and models in other branches of forensic science. Interest in doing this is now growing. Published examples can be found in the context of multiple branches of forensic science, including fingerprints, DNA, mRNA, glass fragments, and mobile telephone colocation. There are also published examples of the use of these procedures and models to calibrate human judgements. In this presentation I answer the questions: What is calibration? Why is it important? and How is it performed? I also discuss how this approach to calibration relates to the calibration requirements in the Forensic Science Regulator of England & Wales’s 2021 appendix to the Codes of Practice and Conduct: Development of evaluative opinions.
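A widely used way of performing the calibration described here is linear logistic regression, which maps raw system scores to calibrated log-likelihood-ratios. The following is a minimal sketch on synthetic scores, not any particular laboratory's implementation; it assumes equal numbers of same-speaker and different-speaker training trials, so that the fitted posterior log-odds equal the log-likelihood-ratio:

```python
import math
import random

def train_calibration(ss_scores, ds_scores, lr=0.5, epochs=2000):
    """Fit a two-parameter logistic-regression mapping from raw scores
    to calibrated log-likelihood-ratios via gradient descent.
    With balanced classes, P(same | s) = sigmoid(a*s + b), so the
    calibrated LLR (natural-log base) is simply a*s + b."""
    a, b = 1.0, 0.0
    data = [(s, 1.0) for s in ss_scores] + [(s, 0.0) for s in ds_scores]
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for s, y in data:
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            grad_a += (p - y) * s
            grad_b += (p - y)
        a -= lr * grad_a / len(data)
        b -= lr * grad_b / len(data)
    return a, b

# Synthetic uncalibrated scores: same-speaker trials score higher on average.
random.seed(0)
ss = [random.gauss(2.0, 1.0) for _ in range(200)]   # same-speaker trials
ds = [random.gauss(-2.0, 1.0) for _ in range(200)]  # different-speaker trials
a, b = train_calibration(ss, ds)

def llr(score):
    """Calibrated natural-log likelihood ratio for a raw score."""
    return a * score + b

# A clearly same-speaker-like score should yield a positive LLR,
# a different-speaker-like score a negative one.
print(llr(2.0) > 0, llr(-2.0) < 0)
```

In practice such a calibration model is trained on scores from trials with known ground truth, under conditions reflecting those of the case, and then applied as the final stage of the system, as the Consensus recommendation quoted above describes.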
Dr Morrison is Director of Aston University’s Forensic Data Science Laboratory & Forensic Speech Science Laboratory. Since 2008, he has published multiple papers related to calibration of forensic-evaluation systems, including a 2013 tutorial paper on the topic. He was lead author of the 2021 Consensus on validation of forensic voice comparison.
Calibration in automatic speaker recognition
You can view the presentation slides here.
Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET
Most modern speaker verification systems produce uncalibrated scores at their output. Although these scores contain valuable information to separate same-speaker from different-speaker trials, their values cannot be interpreted in absolute terms – they can only be interpreted in relative terms. A calibration stage is usually applied to convert scores to useful absolute measures that can be interpreted, and that can be reliably thresholded to make decisions. In this presentation, I review the definition of calibration and explain its relationship with Bayes decision theory. I then present ways to measure quality of calibration, discuss when and why we should care about it, and show different methods that can be used to fix calibration when necessary.
Dr Ferrer is a researcher at the Computer Science Institute, affiliated with the University of Buenos Aires and with the National Scientific and Technical Research Council of Argentina (CONICET). She received her PhD in Electronic Engineering from Stanford University in 2009. Her primary research focus is machine learning applied to speech processing tasks.
Calibration in forensic voice comparison
You can view the slides from the presentation here.
AUDIAS Lab, Escuela Politécnica Superior, Universidad Autónoma de Madrid
In this presentation, I describe the role of calibration in forensic voice comparison, focusing on the use of automatic systems in a Bayesian decision framework. I describe computation of calibrated likelihood ratios in the context of scenarios and recording conditions typically encountered in forensic casework. I present algorithms commonly used for calibration. I also discuss the importance of calibration in the process of validating forensic-voice-comparison systems, and discuss recommendations and guidelines published by the European Network of Forensic Science Institutes (ENFSI).
Dr Ramos is an Associate Professor at the Audio, Data, Intelligence and Speech (AUDIAS) Laboratory of the Autonomous University of Madrid. He is author of numerous publications on applying and measuring calibration, especially in the context of forensic problems. He has served on scientific committees, and has often been invited to present on the role of calibration in forensic science.
Measuring calibration of likelihood-ratio systems
You can view the slides from the presentation here.
Netherlands Forensic Institute
In this presentation, I explain the concepts of what constitutes well-calibrated probabilities and well-calibrated likelihood ratios. I briefly describe graphical representations for assessing degree of calibration. I then focus on several metrics designed to assess degree of calibration, and present the results of a study comparing the performance of different metrics. Three metrics are taken from the existing literature, and one is a novel metric. One existing metric is based on the expected value of different-source likelihood-ratio values and the expected value of the inverse of same-source likelihood-ratio values (after Good, 1985), another is based on the proportion of different-source likelihood ratios above 2 and the proportion of same-source likelihood ratios below 0.5 (after Royall, 1997), and the third is Cllrcal (Brümmer & du Preez, 2006). The novel metric is devPAV (Vergeer et al., 2021).
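The Cllr metric mentioned above has a simple closed form: it averages log2(1 + 1/LR) over same-source trials and log2(1 + LR) over different-source trials. The following is a minimal sketch of the overall Cllr only; the calibration component Cllrcal additionally involves a pool-adjacent-violators (PAV) transformation not shown here:

```python
import math

def cllr(ss_lrs, ds_lrs):
    """Log-likelihood-ratio cost (after Brümmer & du Preez 2006).
    ss_lrs: likelihood ratios from same-source trials;
    ds_lrs: likelihood ratios from different-source trials.
    Lower is better: 0 is perfect, and a system that always outputs
    LR = 1 (uninformative) scores exactly 1."""
    ss_term = sum(math.log2(1 + 1 / lr) for lr in ss_lrs) / len(ss_lrs)
    ds_term = sum(math.log2(1 + lr) for lr in ds_lrs) / len(ds_lrs)
    return 0.5 * (ss_term + ds_term)

# An uninformative system: every trial gets LR = 1.
print(round(cllr([1.0] * 5, [1.0] * 5), 6))  # 1.0

# A well-performing system: large LRs for same-source trials,
# small LRs for different-source trials.
print(cllr([100.0, 50.0, 200.0], [0.01, 0.02, 0.005]) < 0.1)  # True
```

Because both terms penalise misleadingly strong likelihood ratios heavily, a badly miscalibrated system can score well above 1, which is what makes the metric useful for assessing calibration as well as discrimination.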
Dr Vergeer is a research scientist in forensic statistics at the Netherlands Forensic Institute. His research focuses on computer-based methods for evaluation of strength of evidence, and on measuring and improving the performance of human experts. He has published multiple research papers on calibration of likelihood-ratio systems and on measuring the degree of calibration of likelihood-ratio systems.
Moderator: Rolf J.F. Ypma
Principal Scientist, Netherlands Forensic Institute
Forensic Data Science Laboratory, Department of Computer Science & Aston Institute for Forensic Linguistics, Aston University
The presenters will discuss aspects of calibration on which there is consensus, aspects on which there is disagreement, and aspects requiring additional research. They will also discuss how to encourage wider adoption of calibration of likelihood-ratio systems in forensic practice.
During this presentation, we will explore interactions on a non-emergency online police chat (Digital 101). Specifically, we are interested in the sequential difference between Digital 101 and telephone calls to 101. This presentation is part of a larger project that investigates how online chat can be (better) used as the medium for non-emergency ‘calls’ to the police. The data for this project was collected over a two-year period from a UK police force. Half of the data was collected in 2019 before the Covid pandemic and the other half was collected in 2020, during the periods of lockdown and other restrictions.
The findings (both quantitative and qualitative) shed light on, and provide a sound evidence base for, this claim, rather than leaving it as an untested assumption. Participants were presented with a police-suspect interview from a murder enquiry, either in the form of a transcript or the original audio recording. Participants who read the transcript of the interview drew significantly different conclusions to those who had heard the original audio recording with respect to the interviewee's emotions and behaviour, as well as the degree of truth in their version of events. Open text box responses provide a rich insight into features of the interviewee's language and/or delivery that influenced participants' perceptions and interpretations of the interview.
Date: 12 Nov 2020
This presentation was given to the Preston Linguistics Circle, hosted by UCLAN, at the invitation of Dr Dominik Vajn. It considers a vitally important but generally undervalued aspect of investigative interviewing, namely the process of converting the spoken interview interaction into an institutionally approved, written evidential document. Formal interview records have significant legal standing in the criminal justice system.
Around the world, various different approaches are taken to producing them, some significantly more reliable than others. The UK system of routinely audio-recording and transcribing all police-suspect interviews is often regarded as an example of best practice. However, this paper demonstrates that even such an apparently robust method of processing linguistic evidence is still problematic, and argues that contamination and bias are currently institutionally embedded in the system.
The full abstract for the presentation:
This presentation considers a vitally important but generally undervalued aspect of investigative interviewing, namely the process of converting the spoken interview interaction into an institutionally approved, written evidential document. Formal interview records have significant legal standing in the criminal justice system. Around the world, various different approaches are taken to producing them, some significantly more reliable than others. The UK system of routinely audio-recording and transcribing all police-suspect interviews is often regarded as an example of best practice. However, this paper demonstrates that even such an apparently robust method of processing linguistic evidence is still problematic, and argues that contamination and bias are currently institutionally embedded in the system.
I will present the key findings of my research into the production of police interview records in England & Wales, grounded in both academic linguistic theory and professional practice. Uniquely, it includes interviews with interview transcribers at a major English police force, offering a perspective which has hitherto received scant attention, despite the enormous practical impact of their work. Indeed, a key part of this research project is to give voice and recognition to this much under-valued group of workers, whose very existence is often entirely overlooked, yet whose work holds the key to fairer representation of interviewees’ voices in the criminal justice process.
I will show how transcribers deal with aspects known from linguistic research to convey substantial meaning, but for which there are currently no standards regarding their representation in official transcripts. These include pauses, discourse markers, ‘no comment’ interviews, and transcription of video-recorded data. This is combined with linguistic analysis of authentic interviews and their official transcripts, and legal analysis of potential consequences in court of the representational choices which transcribers are tasked with making on a daily basis.
The presentation will conclude with practical recommendations as to how to improve evidential consistency in investigative interview data, thereby setting out a manifesto for the new centre for Spoken Interaction in Legal Contexts (SILC) within Aston’s Institute for Forensic Linguistics.
Date: 13 Nov 2020
In this presentation, Joanne Traynor, a PhD student at Anglia Ruskin University, presented her work exploring the factors which may influence communications officers as they code and interpret police incident logs. Her experience of working in this context, combined with her mixed-methods approach, made for an extremely insightful presentation.
Joanne seeks to bring voice to the communications officers, highlighting how linguistic focus in this area – often undertaken by Conversation Analysts – does not examine how communications officers perform and interpret their role outside of the telephone calls in the police control room.
Date: 8 Oct 2020
This presentation was based on Annie’s recently completed PhD research, which investigated discursive manifestations of the statutory child-adult divide in police interviews with 17- and 18-year-old suspects. In the context of her police interview research, Annie is particularly interested in language ideologies in connection with age, the administration of the police caution, and the discursive role of appropriate adults in interviews with vulnerable suspects.
Date: 17 February 2022
In our presentation, we will present our recently started project entitled “Interactional Patterns in Swedish Police Interviews. ‘Doing objectivity’ when Asking Information Seeking Questions”, funded by the Swedish Research Council.
In a state governed by the rule of law, police interviews (PIs) must be carried out in an objective and impartial manner. Stakeholders’ rights must be considered in every aspect of the interview, yet criticism has been raised concerning the Swedish police’s ability to conform to such principles. In this project we will examine Swedish PIs as a site where principles of the rule of law are managed in situ, through interaction. We will analyze audio recordings with particular focus on how objectivity is enacted and performed through information-seeking questioning, and in oral summaries of the interviewee’s statement. Data will be analyzed using Conversation Analysis (CA).
In the seminar, we will present our project and the planned studies. We would also like to take the opportunity to share our experiences regarding the process of ethical approval. We will explain the Swedish procedure and would very much like to hear how the seminar participants have dealt with potential difficulties while collecting and working with sensitive data, and how they have overcome various hurdles.
For a recording of this talk, please contact Lina.Nyroos@sh.se
Date: 14 October 2021
In the aftermath of the 2013 trial in which George Zimmerman was prosecuted for, and subsequently acquitted of, the murder of Trayvon Martin, an African American teenager, the public perception/evaluation of African American English (AAE) in the United States has resurfaced as a key issue within sociolinguistics. For example, Rickford and King (2016: 949), in a high-profile article in the journal Language, argue that it was AAE that was ‘on trial’ in this case, and that linguists must assume some responsibility for ‘dispelling fictions and prejudices against vernacular speech.’ In this paper, I am also interested in the social evaluation of AAE in a U.S. courtroom; however, building on Rickford and King’s work, I suggest that a focus on ‘language and linguistics’ may not go far enough in investigations of linguistic differentiation and social hierarchies.
This paper considers intertextual practices in an American rape trial, Maouloud Baby v. the State of Maryland, a trial in which both the accused and the complainant were African American. While there were no overt references to race at any point in the trial, the prosecuting lawyer spent a considerable amount of the cross-examination of the accused engaged in an intertextual exercise—he quoted extensively from a written transcript of the accused’s police interrogation, quoting and animating the accused’s utterances from the interrogation, many of which contained linguistic features of AAE. Indeed, I argue that a significant part of the prosecutor’s efforts to undermine the credibility of the accused involved the lawyer drawing attention to, and discursively foregrounding, the accused’s use of AAE. However, the complainant in this case was also a speaker of AAE (and used many of the same features as the accused), raising the question of how her use of AAE may have been insulated from the discriminatory discursive work of the prosecuting lawyer. In resolving this puzzle, I have found it useful to move away from approaches to language and race that, as Lo and Chun (2020: 32) say, view ‘racialized linguistic signs as objective facts’ in favour of approaches that investigate the processes by which race and language ‘come to be co-naturalized.’ In particular, I draw on work by Agha (2005) and Rosa and Flores (2017) on enregisterment and, specifically, raciolinguistic enregisterment—work that reverses the more standard way of understanding the relationship between linguistic varieties and social categories. 
That is, rather than viewing racialized varieties as stable, empirically-observable objects that ‘emanate’ from racialized subjects, Rosa and Flores maintain that, in understanding the negative evaluation of a racialized linguistic variety such as AAE, it is more productive to focus on the ideological work that ‘listening subjects’ (Inoue 2006) do in linking particular linguistic forms to particular social types or ‘figures of personhood’ (Agha 2005: 39).
Within a rape trial, then, where cultural ideologies of masculine and feminine sexualities are highly salient, I argue that the figure of personhood mobilized by the prosecutor’s foregrounding of AAE was a deeply-rooted racist version of African-American masculinity as hypersexual and physically violent. Collins (2005) has argued that, in spite of the heterogeneity of black masculinities in the United States, it is this figure of an African American man that is viewed as ‘authentically’ black and, as such, constitutes an important mechanism by which anti-black racism is perpetuated and justified. There are good reasons that a prosecuting lawyer would want to strategically recruit this kind of figure in this setting, given how prejudicial such a characterization would be to a man accused of rape. Moreover, the gendered nature of these meanings indexed by AAE, at least in this context, meant that the complainant, who also had features of AAE in her speech, would be protected from the discriminatory discursive work of the prosecuting lawyer.
The negative evaluation of stigmatized varieties in courtroom settings is well-documented (see Eades 2010 for discussion) with speakers of such varieties generally being viewed as less credible and less trustworthy than speakers of so-called standard varieties. In this paper, I attempt to draw attention to the limitations of focusing exclusively on linguistic forms in understanding the stigmatization of racialized varieties in the courtroom given that social meanings (e.g., racist meanings) can be expressed through the figures of personhood that become ‘emblematic’ (Agha 2005) of linguistic varieties.
Agha, A. (2005) Voice, footing and enregisterment. Journal of Linguistic Anthropology 15: 38-59.
Collins, P.H. (2005) Black Sexual Politics: African Americans, Gender, and the New Racism. New York: Routledge.
Eades, D. (2010) Sociolinguistics and the Legal Process. Bristol: Multilingual Matters.
Inoue, M. (2006) Vicarious Language: Gender and Linguistic Modernity in Japan. Berkeley, CA: University of California Press.
Lo, A. and Chun, E. (2020) Language, race, and reflexivity: A view from linguistic anthropology. In H. S. Alim et al. (eds) The Oxford Handbook of Language and Race. New York: Oxford University Press.
Rickford, J. and King, S. (2016) Language and linguistics on trial: Hearing Rachel Jeantel (and other vernacular speakers) in the courtroom and beyond. Language 92: 948-988.
Rosa, J. and Flores, N. (2017) Unsettling race and language: Toward a raciolinguistic perspective. Language in Society 46: 621-647.
Date: 10 June 2021
Forensic linguistics as a field has been divided by Coulthard and Johnson (2007, pp. 8–10) into two (or, more recently, three; cf. Coulthard and Johnson, 2013, p. 7) sub-fields. The first deals mostly with the language of legal contexts and settings—dubbed “The language of the legal process”. The second deals mostly with language data that becomes relevant as evidence in legal proceedings—dubbed “Language as evidence”.
The methods traditionally associated with the former are mostly qualitative, drawing on the concepts, analytical procedures and ontological assumptions about language stemming from linguistic and discourse-analytic subfields such as pragmatics, interactional sociolinguistics, conversation analysis, critical discourse analysis, and so on. Specifically, concepts such as conversational implicatures, sequential organization of face-to-face interaction, textual/discursive genres/registers, information organization of utterances and the like form the bread-and-butter of the (forensic) linguistic approach to the language as used in legal settings (Coulthard and Johnson, 2007, chap. 1; Shuy, 2015).
In this talk, I want to draw upon this tradition of consolidated qualitative analytical concepts, procedures and methods, but also propose to expand it to incorporate quantitative methods that might fill the gaps that qualitative methods leave, especially on the issues of generalizability and of dealing with substantial amounts of linguistic data. To do this, I will present the findings of a recent pilot study I did on data from judicial decisions from the Brazilian High Court (“Superior Tribunal de Justiça”) on the legal issue of the validity of suspect identification. Drawing on qualitative concepts and methods stemming from ethnomethodology and pragmatics (Pádua, 2019) and on quantitative concepts stemming from corpus linguistics and natural language processing—in this case, N-gram language models (Jurafsky and Martin, 2019, chap. 3)—I proposed that the Court performed what I called a “deontic transformation” of the legal norms relevant to the issue. This deontic transformation differs from a more general interpretive formulation of meaning, in that it negotiates the illocutionary force of the norms. I propose, further, that the linguistic data allow us to formulate a strong hypothesis that this transformation was carried out in order to artificially loosen the legal requirements of suspect identification and, because of that, validate convictions that might otherwise be annulled.
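The quantitative side of the approach rests on N-gram language models in the sense of Jurafsky and Martin (2019, chap. 3). As a rough illustration only (this is a generic sketch, not the author’s actual pipeline, and the toy “decision” sentences are invented), a maximum-likelihood bigram model can be built from tokenised court texts like this:

```python
from collections import Counter

def train_bigram_model(sentences):
    """Count context unigrams and bigrams over tokenised sentences,
    padding each sentence with start/end markers."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]
        unigrams.update(padded[:-1])  # contexts for the bigram estimates
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

# Invented toy 'decisions' standing in for judicial texts
corpus = [
    "the court upheld the conviction".split(),
    "the court annulled the conviction".split(),
]
uni, bi = train_bigram_model(corpus)
print(bigram_prob(uni, bi, "the", "court"))  # 0.5 on this toy corpus
```

In a real study the counts would come from the Court’s full decision corpus, and recurring high-probability sequences around deontic expressions could then be compared across decisions.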
I discuss the relevance and usefulness of mixed methods, already pervasive on the “Language as evidence” side, for the “Language of the legal process” side as well. And I discuss the implications that this type of research can have for both the linguistic and the legal analyses of legal interpretation.
Coulthard, M., Johnson, A., 2013. Introduction: Current debates in forensic linguistics, in: Coulthard, M., Johnson, A. (Eds.), The Routledge Handbook of Forensic Linguistics. Routledge, London, pp. 1–15.
Coulthard, M., Johnson, A., 2007. An introduction to forensic linguistics: Language in evidence. Routledge, London.
Jurafsky, D., Martin, J.H., 2019. Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition [WWW Document]. URL https://web.stanford.edu/~jurafsky/slp3/edbook_oct162019.pdf
Pádua, J.P., 2019. Discursive devices for inserting morality into law: initial exploration from the analysis of a Brazilian Supreme Court decision. Lang. Law=Linguagem e Direito 6, 11–29. https://doi.org/10.21747/21833745/lanlaw/6_1a1
Shuy, R.W., 2015. Discourse analysis in the legal context, in: Tannen, D., Hamilton, H.E., Schiffrin, D. (Eds.), The Handbook of Discourse Analysis. Wiley Blackwell, Oxford, UK, pp. 822–840.
Date: 22 April 2021
In this talk, I presented an overview of key findings from a linguistic ethnographic case study exploration of legal advice communication about UK refugee and asylum law. Taking as my starting point the conceptualisation of the lawyer in legal advice consultations as translating and transposing between ‘two competing world views and two associated and competing discourses’ (Maley et al. 1995: 42), I showed how in the context of my research site, such mediatory activity extended well beyond the legal-lay dimensions usually discussed in the literature (e.g. Conley and O’Barr 1990), into linguistic, cultural and other institutionally-connected practices of communicative mediation.
The study examined one lawyer’s communication in face-to-face legal advice meetings with a diverse range of clients seeking immigration advice. Through analysis of data from this rarely-accessed legal communicative context, I illustrated how, at the interactional level, lawyer and client negotiate understanding and build rapport through second language use and sometimes with the support of interpreters; whilst at the level of discourse, the genre of legal advice interaction supports a meaningful dialogic exchange of information and perspectives that is fundamental to successful advice-giving. I then drew on the study to critically consider how legal advisors function as mediating professionals at a structural level, both supporting and regulating clients’ interactions with legal actors and institutions. I ended with some of the study’s implications for research and for legal practice.
Date: 18 March 2021
In Heffer 2020, I outline the TRUST framework for analyzing untruthfulness in everyday life. The key planks of that framework are, firstly, that the concept of untruthfulness needs to include not just insincerity (typified by lying) but also epistemic irresponsibility (typified by bullshit) and, secondly, that it is possible to systematically analyse putative cases of untruthfulness through a simple heuristic. Though simple, that heuristic brings out the categorical and ethical complexity of untruthfulness in situated context. I exemplify the framework primarily through examples from the media and politics, partly because the book was written during the years of Brexit and Trump but also because the forensic contexts merit another book.
In this talk, then, I explain how the TRUST framework can be applied to forensic contexts and consider whether the framework needs to be adapted to these contexts. On the one hand, we can easily see at work the major categories of insincerity, e.g. the withholding of information in police interrogation; misleading in cross-examination; and the lying enshrined in perjury.
On the other, the legal process, like science, is meant to be a bullshit-free zone where the ultimate aim is to achieve evidential accuracy. To some extent this is achieved in court because the highly strategic nature of trial discourse means that untruthfulness is usually deliberate. Yet rape cases flounder due to a first level of epistemic irresponsibility: dogma in the form of entrenched rape myths, which are sincerely but irresponsibly held. And police forces have fallen victim all too often to bullshit techniques and technologies proposed by ‘experts’ who might sincerely believe their inventions.
Date: 4 February 2021
Date: 26 Nov 2020
In her presentation, Dr Tkacukova conducted a linguistic and socio-legal analysis of online forums for Litigants in Person (LiPs), people representing themselves in court. Her study focuses on forums and social media groups run by McKenzie Friends (MFs), litigation friends who help people represent themselves in court on a voluntary basis or for a fee. She uses corpus linguistics (Sketch Engine) and a qualitative approach (content analysis) to explore the corpora and LiPs’ concerns and advice needs. Her study highlighted the functions performed by MFs on social media and the quality of MFs’ advice.
Her quali-quantitative analysis reveals many characteristics that shed light on the positioning of both LiPs and MFs. For example, the LiP subcorpus shows a preponderance of N-grams expressing negation of abilities or wishes, lack of knowledge, and the like. Conversely, MFs use expressions of support, certainty, advice, etc. The MFs’ subcorpus also shows that MFs highlight the difficulties LiPs have with legal discourse and construct their professional image by advertising their services and positioning themselves as trusted experts.
Reflections from the event below are by Leigh Harrington, Research Associate, Aston Institute for Forensic Linguistics.
This interdisciplinary project focussed on issues of access to justice for the general public, namely legal advice provision on social media. Recent cuts to legal aid in civil and family cases have led to an increase in people representing themselves in court, known as Litigants in Person (LiPs). Consequently, there has also been increased use of alternative sources of DIY law online which provide legal information and advice, including McKenzie Friends (MFs): lay advisers or litigation friends (mostly without a formal legal background) who provide LiPs with help and support on a voluntary or paid basis.
This talk explored the forums and social media groups run by MFs, where these lay advisers interact with LiPs seeking help. Dr Tkacukova’s data for this study comprised a corpus of exchanges between these different users on these online platforms. Using a combination of corpus analysis and content analysis, the talk explored the concerns and advice needs of LiPs, the functions of MFs, and the quality of their advice.
Initial corpus analysis of the LiP subcorpus revealed N-grams which negated abilities or wishes and expressed lack of knowledge. These highlighted the reduced legal capabilities of LiPs, their potential vulnerability, and their need for expert advice. Corpus techniques were also used to define the roles and functions of MFs, which mainly entailed explaining court procedures and the role of social services, setting expectations for LiPs, and advising them. The analysis found that MFs perform these functions from a closer socially and linguistically defined position than would be expected from legal professionals. Dr Tkacukova showed that whilst MFs construct their professional image as trusted experts and often provide useful procedural information and clarifications, their legal advice is often problematic in terms of its substantive content (it can obstruct justice, mislead, or be technically correct whilst giving unrealistic advice) and linguistic framing (for instance, through defamatory comments).
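A very rough sketch of how such negation N-grams might be surfaced from a tokenised subcorpus (the negator list and the example utterance below are my own illustrative assumptions, not Dr Tkacukova’s actual Sketch Engine queries):

```python
from collections import Counter

# Hypothetical negator list; a real study would use a curated set
NEGATORS = {"not", "n't", "no", "cannot", "never"}

def negation_ngrams(tokens, n=3):
    """Count the n-grams in a token list that contain a negator,
    a crude proxy for 'negated ability/wish' patterns."""
    grams = zip(*(tokens[i:] for i in range(n)))
    return Counter(g for g in grams if NEGATORS & set(g))

# Invented LiP-style utterance for illustration
tokens = "i do not know what to do i can not afford a lawyer".split()
hits = negation_ngrams(tokens)
```

On real forum data, the most frequent hits (e.g. patterns like “do not know” or “can not afford”) would be the kind of N-grams the talk describes as signalling reduced legal capability and vulnerability.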
Dr Tkacukova commented that whilst these lay advice platforms have their advantages, they can also give rise to unfounded trust between users, which can be to the detriment of LiPs who are socially and emotionally vulnerable. She recommended an increase in public awareness of the advantages and limitations/potential risks of MFs, the roles and functions of MFs, and strategies for how to identify biased advice.
Date: 23 Jan 2020
The criminal trials of direct action protesters are in many ways extraordinary episodes within the criminal justice process. Here, the protection of philosophical belief required by the ECHR and the Human Rights Act 1998 vouchsafes the sincerity of the defendants’ commitment: as a result, remorse displays are not central to mitigation decisions, while defendants may seek to justify their actions through pleas of necessity.
Yet socio-legal discussion of these issues is often absent, and discussions of necessity are characteristically highly normative; whilst practical questions of the protection of philosophical belief at trial typically focus on sentencing, at the expense of the conduct of the trials themselves. Graeme applied a forensic sociology lens to the trials of activists, discussing how justification and excuse, remorse and recidivism are constructed, and how processes of separation and remediation are performed and imposed in the courtroom space. He applied this lens to two recent high-profile activist trials in the English courts: the trial of the Stansted 15 and the appeal trial of the Frack Free Three.
In both cases, the charges brought against the activists were much more severe than those typically experienced by non-violent direct action protesters. Yet both ultimately resulted in lenient sentencing, thus apparently upholding ECHR-protected rights to freedom of speech and assembly. Through ethnographic observation and discussion of legal decisions, he argued, however, that the conditions and conduct of these trials, particularly concerning questions of remorse, necessity, duress, and good character, effectively serve to narrow rather than uphold the expression of Convention freedoms. Specifically, he argued that the emphasis on remorse in the Frack Free Three ruling, and the interpretation of necessity as duress (and its consequent effective unavailability) in the Stansted trial, effectively force activists to divest their political beliefs as the cost of securing their liberty. As such, he argued that these prosecutions and the terms of their outcomes should be considered serious acts of the chilling of dissent.
Date: 18 February 2021
In this seminar the speakers presented the results of a technical consultancy carried out for the defence in a case of misidentification of a speaker during preliminary investigations. Using a real case consultancy, the speakers addressed the issues connected with the use of noisy audio of limited duration in forensic settings, and considered the possibilities of cross-disciplinary work matching linguistic and engineering skills.
Date: 24 June 2021
In this talk I will be looking at some of the interactional mechanisms which underlie the construction of “legal truth” in jury trials and which form the discursive processes of turning facts and expert opinions into evidence. Using a case study, I will examine the interactional behaviour of expert witnesses and counsel acting within the constraints of the Anglo-American adversarial system.
Adopting a discourse-analytic perspective, I will demonstrate what stances they adopt and what interactional resources they employ to position themselves vis-à-vis their interactants and their knowledge claims. Building on such linguistic concepts as stance, speaker commitment, epistemicity and evidentiality, I will show how expert knowledge is claimed, disclaimed, attributed and contested. To this end, I will consider the interplay of the pronouns I, you and we with verbal markers of experiential, cognitive and communicative stance (Marín-Arrese, 2009), demonstrating a correlation between the participants’ roles and communicative goals and the type of stance they adopt during testimony.
Date: 28 October 2021
Violent extremist messaging is not created in isolation from the broader social and political context from which it emerges. Few, however, have investigated how violent extremist messaging, and specifically online messaging, relates to broader political and social narratives. Does the messaging reflect mainstream narratives? Does it offer “extreme” versions of mainstream narratives and, if so, how? This paper is the result of a methodological enquiry carried out by an interdisciplinary team of forensic linguists (Booth and Schneevogt), a computer scientist (Ribeiro), and a political violence scholar (Toros) directing the team. The team was tasked with developing a methodology as part of a GCHQ-funded research project investigating the gendered narratives in online violent extremist messaging, focusing in particular on gender constructions around the cases of Shamima Begum, the teenage British woman who left for Syria to join DAESH and was later stripped of her citizenship when she tried to return, and Brenton Tarrant, who killed 51 people in an attack on two mosques in Christchurch, New Zealand. The aim of the project was to investigate the relationship between messaging on social networks known for extremist content and mainstream networks. Results demonstrated a complex dialogue between the extremist and the mainstream environments, with overlapping narratives in terms of gender constructions. The project also goes beyond an analysis of influencers, moving away from mainstream platforms such as Twitter to examine the gender narratives that emerged and dominated among users of platforms such as 4Chan and on specific channels known to attract violent extremist views.
The project also investigates how these gender constructions relate to: a) the metanarrative that dominates understandings of gender and violence, in particular political violence; b) the gender constructions dominating mainstream online platforms (in this case, comments on articles related to the two cases in The Independent and The Daily Mail); and c) those dominating state narratives on violent extremism (the Action Counters Terrorism YouTube video campaign).
Date: 20 May 2021
In this talk, Julien presented a linguistic and discursive model for analyzing meaning, based on a methodology that falls within the wider framework of the digital humanities and is equipped with digital tools. Julien illustrated this approach with various corpora composed of political discourse extracted from Twitter, YouTube, or the wider web. He also presented a case study which illustrates the contribution of the digital humanities to forensic science.
Date: 4 March 2021
News is understood to be a way for individuals to inform themselves of current important events, a way to gain information upon which we form our global outlook and opinions (Gelfert 2018). What if that information is false? Or worse: What if you can’t tell if that information is false? Researchers are attempting to tackle fake news from different angles, but it’s possible we are talking at cross purposes (Markines et al. 2009; Horne & Adali 2017; Yang et al. 2017).
Using the term “fake news” doesn’t allow for a distinction between disinformation and misinformation: intentionally and factually false information, respectively (Lewandowsky et al. 2017). Depending on the data, the results will either encompass misinformation and coincidentally include disinformation, or only apply to misinformation. The issue is that misinformation occurs without there necessarily being intent: mistakes happen. With fact-checked corpora increasingly used as an easy data source, the research has been pigeonholed and we are only able to address the known side of the problem, misinformation (Markines et al. 2009; Horne & Adali 2017; Yang et al. 2017; Tacchini et al. 2017; Conroy et al. 2015).
In this presentation Helena presented a case study to address the issue of disinformation, exploring whether communicative intent (in terms of deception but also journalism) can be measured through the assessment of the linguistic choices made by the author. The study analyses a single author, Jayson Blair, from a single news source, the New York Times, producing a consistent linguistic style and news-type. These controls are used to explore whether it is possible to identify a deceptive style within a single author. We analyse his communicative purpose through the application of a register analysis (Biber 1988) and a focussed corpus linguistic approach. The results demonstrate that where his communicative purpose varies (intent to deceive or to tell the truth), his linguistic style also varies. This shows a way forward for the analysis of fake news. Next steps? To apply this to more individuals to see if the results are transferable and so answer the question posed above more fully.
Biber, D., 1988. Variation across speech and writing. Cambridge University Press.
Conroy, N.J., Rubin, V.L. and Chen, Y. 2015. Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52(1), pp.1-4.
Gelfert, A., 2018. Fake news: A definition. Informal Logic, 38(1), pp.84-117.
Horne, B.D. and Adali, S. 2017. This just in: fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. arXiv preprint arXiv: 1703.09398.
Lewandowsky, S., Ecker, U.K. and Cook, J., 2017. Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), pp.353-369.
Markines, B., Cattuto, C. and Menczer, F., 2009, April. Social spam detection. In Proceedings of the 5th International Workshop on Adversarial Information Retrieval on the Web (pp. 41-48). ACM.
Yang, F., Mukherjee, A. and Dragut, E. 2017. Satirical News Detection and Analysis using Attention Mechanism and Linguistic Features. arXiv preprint arXiv:1709.01189.
Tacchini, E., Ballarin, G., Della Vedova, M.L., Moret, S. and de Alfaro, L. 2017. Some like it hoax: Automated fake news detection in social networks. arXiv preprint arXiv:1704.07506.
Date: 28 January 2021
Reflections from the event by Felicity Deamer, Aston Institute for Forensic Linguistics.
Tanya’s presentation allowed us to take a step back, and gain some conceptual clarity over harmful forms of speech online. Although the primary focus of the talk was to foreground and tease apart three distinct forms of illegal speech online, Tanya shone the light a little wider, and drew our attention to the psychology of online communication, and how hateful and harmful communication is born out of and allowed to proliferate in the virtual world.
Tanya introduced us to threats, hate speech, and incitement, and showed us how they are closely connected, but also how they come apart, and as such need to be considered separately. Referencing a unique data set of reports of hate speech in Denmark (provided by the Center for Prevention of Exclusion), Tanya picked apart the grammatical profile of a large set of hateful and threatening online communication to reveal specific features, which Tanya argues are suggestive of specific devices being employed by authors of online hate speech, in order to distance themselves from the harm they wish upon their victims (and others by association).
Tanya’s presentation shed light on the form and function of online harmful communications by mapping linguistic analysis onto contemporary thinking surrounding the motivations behind the damaging use of social media platforms.
Date: 19 Nov 2020
Reflections from the event by Felicity Deamer, Aston Institute for Forensic Linguistics.
In his presentation, William Dance from Lancaster University discussed social media users’ motivations for sharing false content online. William first explained the difference between disinformation, misinformation, and fake news, arguing that the concept of disinformation can be best defined as intentionally factually incorrect news that is published to deceive and misinform its reader. The second part of the talk introduced the corpus that William built from Tweets containing URLs of disinforming news. Finally, William gave an overview of the semantic fields identified in the corpus and discussed some strategies social media users utilise when sharing disinformation, including concession, dramatic amplification, hypotheticals, presupposition violation, and omission.
Date: 19 Nov 2020
Hybrid political campaigns can be influenced by gamification strategies, that is, the use of video game elements in non-gaming domains. In recent years, gamification has been applied to the realm of politics with the intended goal of increasing voter engagement and citizens’ participation.
In Francesco’s talk, he presented the results of collaborative research on the last EU elections campaign in the Italian Twittersphere. He focused on the online activity of Matteo Salvini, former Italian Interior Minister and leader of the League (Lega), and a specific social media contest, Vinci Salvini! (“Win Salvini!”), whose second edition was launched three weeks before the 2019 EU Elections.
He discussed the impact of this gamification strategy, clarifying how it increased the volume of Salvini’s retweeted tweets and, in turn, the visibility of his message. Francesco identified a small but particularly active group of suspect users, which he labelled devotees for their commitment and intense retweeting activity in the run-up to the elections. They share some characteristics (account creation date, type and number of followers and friends, etc.) and, most fundamentally, reveal an ambivalent nature. On the one hand, they show quasi-automated behaviour, systematically liking and retweeting any content shared by Salvini. On the other hand, they manifest distinctively human features (e.g. the type of replies they post). Being driven by political affinity and the potential reward of a social media contest, they represent, we think, a peculiar type of crowdsourced political agent.
Date: 30 Jan 2020
One of the most unfortunate consequences of the internet is that child sexual abusers have various online channels through which to communicate with victims and other offenders. Emily’s talk demonstrated the use of move analysis (Swales, 1981; 1990) as applied to two types of online child abuse interaction and how it might assist in online police investigations. Emily first considered offenders' moves in 'grooming' conversations, revealing patterns in move use, which pointed to individual ‘styles’ of grooming. Secondly, she focused on interactions between suspected offenders and undercover police officers posing as offenders, showing how interactants' moves work towards the performance of the offender identity, and how this performance compares across the two groups.
Date: 25 March 2021
In this seminar we were guided through real romance fraud communications and we explored how these interactions progress from innocuous beginnings to the financial devastation of the victims, without causing alarm. Revealing how the use of language can be akin to the tactics of coercive control and domestic violence and abuse, this session also showed how traditional approaches to prevention and awareness-raising could lull victims into a false sense of security in the fraudulent relationship in which they are unknowingly engaged. This research is being used to inform police protect and prevent strategies, financial institutions’ practices in stopping the harm sooner, and dating service approaches to user protection.
Date: 25 February 2021
In this talk, I explained how social norms are changed by particular kinds of speech, notably, but not only, oppressive speech. I take slurs as a case study and then argue that the mechanism can be extended to other data. The starting point for the mechanism is a speech-act model of slurs (Popa-Wyatt & Wyatt, 2017). This model says that a slur is a move in a conversational game that assigns a low-power role to the target. The new idea in this paper is that we can use game theory to explain how the slur alters the conversational dynamics in a way that also alters social norms. Specifically, I describe the mechanisms by which the low-power conversational role leaks out into the larger social game.
As an example, consider a case where a slur is used against a person and this slurring use alters the social norms that are subsequently applied to them by other audience members. My model assumes that a conversational game is embedded in a social game, and argues that a move in the former can change the norms in the latter. Key to this is the idea of a two-way inheritance rule. This rule has an import component and an export component. Conversational roles are typically imported from the social game. Once a low-power role is accommodated in the conversation, it may be exported to the social game, thus changing the norms associated with the corresponding target group. There are several candidate mechanisms for this export rule. I consider these and suggest the mechanism of inferential presupposition has the most explanatory power. I argue that this inferential presupposition shifts norms by changing the social roles associated with the target.
Popa-Wyatt, M. and Wyatt, J.L. (2017) Slurs, roles and power. Philosophical Studies 175: 2879-2906.
Date: 11 February 2021
Legal systems around the world assume that violent intent is not only real, but that it is also detectable in threatening language. However, empirical studies examining how, or even whether, violent intent is encoded in language are rare, and tend to explore the issue primarily through psychological theory. This linguistic analysis hypothesizes that authorial intent is indeed detectable in the language of threats, if only obliquely, because the functional aim of a threat issued with true violent intent is different from that of one issued for other communicative purposes, e.g., to cause fear. A novel combination of frameworks is employed to test this hypothesis on a dataset of six realized and eight non-realized threats.
First, Audience Design Theory and Speech Act Theory delimit the investigation to the most common kind of threatening language, called ‘leakage’ in the threat assessment literature and a ‘pledge to harm’ in Speech Act Theory. Next, the Folk Concept of Intentionality and Biological Naturalism theorize which cognitive elements of intent may be expressed by pledges to harm. Finally, Systemic Functional Linguistics, and the discourse semantic method of Appraisal in particular, identify the various attitudinal and interpersonal meanings in the pledge dataset. Non-realized pledges are discovered to contain significantly more violent ideation, creating a prosody of heightened menace, while the realized pledges are more concerned with ethical evaluations. Hypothetically, these patterns of stance taking show that the non-realized and realized texts are engaged in divergent ‘fields of activity’, that of announcing and explaining respectively. Different communicative purposes point to different psychological intentions spurring the production of each pledge type, potential evidence that violent intent is indeed detectable in the language of pledges to harm.
Date: 13 Feb 2020
Threats constitute what may be termed an illicit genre (Bojsen-Møller, Auken, Devitt & Christensen, 2019), since they are often socially and sometimes legally proscribed (Fraser 1998; Gales 2010; Muschalik 2018). The (il)legality of a threat is dependent on legislation (Solan & Tiersma 2005), and, notably, on the emphasis legislation and precedent have placed on threateners’ intent.
However, since intent is ultimately a psychological state, it is notoriously difficult to assess (cf. Hurt & Grant 2018), and in court, defendants may claim that they never intended to threaten. Furthermore, they can use more or less persuasive linguistic strategies to distance themselves from the language crime they are accused of committing, particularly if the wording of the threat was indirect. Indirect threats are particularly difficult to prosecute and penalize, since reasonable doubt may be raised regarding their intended meaning, possibly allowing the sender recourse to ‘plausible deniability’ (Solan & Tiersma 2005).
In this seminar, Marie Bojsen-Møller discusses her new paper, which takes as its starting point a comparison of the role of ‘intent’ (mens rea) in Danish, UK and US legislation and case law on threats.
Bojsen-Møller further examines several Danish court cases which have threatening messages at the heart of the cases and focuses on the appeals to defendants’ intent as argued by prosecutors, defence lawyers, defendants and judges.
Q&A with Marie Bojsen-Møller
1. Is there a specific threat letter or message you came across during your research that really stood out to you? And why?
I would say the teenage girl who on Instagram wrote “I’ll be the next school shooter LMAO watch out”. That was one of the threats that stuck with me the most because there is an aspect of youth, just trying to test the limits of discourse. It’s difficult to say what their purpose really was, if it was just to be a part of a group that has a very extreme way of talking to each other or if it was the beginning of thoughts about actually doing a school shooting. We don’t know, and that really interests me because a lot is at stake. It’s the difference between ignoring someone who’s going to kill people or over-reacting and maybe creating people who are angrier because she must be angrier now that she has been in jail for a long time and she was kicked out of her high school and so on. So yeah, there’s a lot at stake. There’s a lot on the line.
2. In your opinion, what is one of the most common misconceptions laypeople (in DK) have about threats and threatening communications?
People misconceive threats, believing that the only force they have is the potential of something else happening, which is the thing they are threatening to do. Threats in themselves are also an act of violence, a linguistic act of violence. I think that’s very important to understand because it underlines a very basic thought in linguistics, that language acts. It’s not something abstract where you don’t do anything. That’s why I like to show these examples where people say “I didn’t do anything” and they did. I’m not saying they were going to do something else, but they did try to intimidate someone and that’s a crime in itself in many countries.
3. Are lawyers in DK aware of the work forensic linguists do and its benefits for the legal system?
No, not at all. Lawyers know a lot about language; we just know other aspects of it, and it’s difficult for them to understand. It’s not the same as saying that we know more about how to do a proper cross-examination or something; it’s more about analysing the ways that language works.
Date: 16 Dec 2021
In Italy, during the so-called Years of Lead, the Red Brigades (an organized far-left terrorist group) disseminated written statements summarizing their ideological intentions and their plan for the “Armed Struggle”. In addition to propagandizing the group's aims, the Red Brigades used these written communications to claim responsibility for crimes and kidnappings targeting figures from industry and politics. In particular, on March 18, 1978, a first statement was sent announcing the kidnapping and secret detention of Aldo Moro (a member of the Italian Christian Democracy party). During the period of Moro's detention (March 16 – May 9, 1978), this statement was followed by nine others, one of which was a fake.
The famed Italian linguist Tullio De Mauro was the first to analyze this famous first statement. The day after its appearance (March 19, 1978), he published the article Tentativo di lettura filologica del messaggio Br. Non è come gli altri: sembra tradotto dal francese (‘An attempt at a philological reading of the BR message. It is not like the others: it seems translated from French’) in the Italian newspaper Paese Sera, proposing that the statement contained French stylistic elements. Subsequently, the statements have been the subject of various linguistic analyses conducted by writers, journalists and forensic professionals (Marchetti, 2017).
To the best of our knowledge, a linguistic analysis of the Red Brigades’ written statements using the techniques of Computational Stylometry (CS) has not yet been attempted. CS techniques automatically process texts by analyzing the style in which they are written. Some of the main tasks in CS include authorial profiling, identification of textual authorship, and recognition of plagiarism and textual interpolations. According to Daelemans (2013), each author has a unique stylistic signature that no other individual shares; these stylistic features can at best be imitated, and indeed attempts to replicate an authorial style have been made on several occasions (Heiser, 2007). In the context of our analysis, a statement that appeared on April 18, 1978 was disowned by the terrorists, denounced as a fake, and replaced by the authentic statement some days later.
Our aim is to show the possibilities of linguistic investigation in forensic science using CS techniques. Taking as an example a tragically well-known case in Italian history, we show a posteriori to what extent the fake statement differs from the original ones. Through stylometric analysis we are able, within a short time, to identify and distinguish the characteristics of the fake text.
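As an illustration of the kind of stylistic comparison CS performs, the sketch below compares the relative frequencies of a handful of function words, which authors tend to use unconsciously, in two texts and scores their similarity. This is not the speakers' actual pipeline: the word list, the sample texts and the choice of cosine similarity are all hypothetical placeholders chosen for brevity.

```python
# Minimal stylometric sketch: function-word frequency profiles
# compared with cosine similarity. All inputs are illustrative.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is"]

def profile(text):
    """Relative frequencies of the chosen function words in a text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two frequency vectors (0 to 1)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Placeholder texts standing in for a known and a questioned statement.
known = "the struggle of the movement and the aims that it declares"
questioned = "it is in the interest of the state that the message is read"

similarity = cosine(profile(known), profile(questioned))
print(f"stylistic similarity: {similarity:.2f}")
```

A real stylometric study would use far longer texts, hundreds of features (function words, character n-grams, punctuation habits) and a distance measure such as Burrows' Delta, but the underlying logic of comparing style profiles is the same.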
Date: 4 November 2022
This talk draws on an analysis of 110 authentic telephone-mediated debt collection encounters collected from a British credit union. Debt collection calls take place between a debt collector and indebted individuals and are usually outbound, meaning they are initiated by the creditor, typically with the main intent of collecting the money owed to them. As such, they can often be complex and demanding interactions. They are arguably even more so when initiated by credit unions, which are small, non-profit, financially inclusive and responsible organisations that explicitly place their members (and debtors) at the centre of their practices.
I apply concepts and theories from (im)politeness research, especially facework, to explore how debt collectors and indebted individuals manage the complex dynamics and different priorities at play in these interactions. The findings demonstrate an approach to debt collection that overall does not match the stereotypical conception of being aggressive and threatening. Instead, these interactions are managed cooperatively and with empathy and compassion.
The talk highlights that linguistic analysis is well-equipped for assessments of whether people in positions of financial instability or vulnerability are treated with respect, integrity, and fairness. These assessments are valuable to individuals, organisations, and regulatory bodies alike.
Date: 18 November 2021
My talk will discuss findings from observations of screening and substantive interviews at the Home Office in Croydon, which form part of a wider academic research project, Linguistic and Intercultural Mediations in a Context of International Migrations, focusing on interaction and mediation between governmental actors, charity workers and displaced people.
Language is a key factor in the lives of migrants and in encounters between displaced and “native” people, yet it is too often underexplored. This presentation will focus on the translation and interpretation of the Arabic language in the asylum process, and on how non-verbal communication also plays an important role during the interviews and, consequently, in the decision made on the asylum claim.
This talk will highlight the representation of the Arabic language for native and non-native speakers of Arabic throughout the asylum process and reflect on the researcher’s position in this particular space and time. The presentation will conclude with Migralect, an innovative research project launching in January 2022.
Date: 24 March 2022
As part of the response to the COVID-19 pandemic, many jurisdictions across the world introduced remote hearings as an alternative way of continuing to offer access to courts. The presentation draws on the report prepared for a judicial review of an immigration tribunal appeal case, revolving around the claim that the quality of interpreting conducted in fully online hearings is, by default, inferior to interpreting in face-to-face hearings. In the absence of pre-existing research comparing the impact of the physical and fully online settings on interpreting in legal contexts, the expert witness report in the analysed case drew on linguistic principles governing conversation and turn-taking management, power relations and narrativisation and discursive practices in the two distinct environments.
The presentation reflects on the investigations conducted in preparation for the expert witness report and pursues the following aims: (1) explore the importance of effective communication in immigration settings; (2) challenge common misconceptions in relation to how narratives are elicited, shared and perceived; (3) propose safeguarding strategies for enhancing discursive practices in fully remote hearings.
Date: 10 December 2020
In her excellent chapter on forensic linguistics in O’Keeffe and McCarthy’s Routledge Handbook of Corpus Linguistics, Janet Cotterill (2010) outlines the various ways in which corpora and corpus linguistic methodologies have been (and can be) applied in forensic linguistics. She includes in her discussion the use of already existing general reference corpora by forensic linguists, the building of specialised corpora in forensic contexts and the use of web-as-corpus. The chapter concludes by detailing some challenges of using corpora for forensic purposes and predicting future challenges. This talk revisits Cotterill’s chapter and takes stock of the work that has been done in the ten years since its publication. It considers the potential that has been fulfilled, the opportunities that remain and the new challenges that have emerged in the application of corpus methods in forensic linguistics.
Date: 20 May 2021
In this talk, Julien presented a linguistic and discursive model for analyzing meaning, based on a methodology that falls within the wider framework of the digital humanities and is equipped with digital tools. Julien illustrated this approach with various corpora composed of political discourse drawn from the social networks Twitter and YouTube, or from the web. He also presented a case study illustrating the contribution of the digital humanities to forensic science.
Date: 9 December 2021
The BBC drama “Vigil” and Channel 5’s short series “Life Beneath The Waves” have brought life onboard submarines into sharp focus. Operating in hostile environments, whether the ocean itself or in close proximity to other hazards, submarines are by their very nature dangerous places to work. Using open source materials and drawing on his own experiences, retired Submarine Commander Mark Williams will discuss how and what submarines communicate, the challenges of communicating whilst avoiding detection, and effective communication between submariners, including the use of language in critical or urgent situations.