Reproducibility in lie detection research: A case study of the cue called complications
Abstract
Purpose
This review examined reproducibility in verbal lie detection research, wherein studies typically involve coding statements to identify deception cues. Such coding is prone to analytic flexibility that can invite false positives. I focused on the cue called complications as a case study. The variable emerged in the literature concurrently with the availability of open science resources, providing a reasonable expectation that the relevant materials would be archived in accessible repositories if not in the publications themselves.
Methods
I reviewed 30 relevant publications to assess whether complications research is amenable to auditing.
Results
The findings indicated sufficient consistency in the definitions of complications and little ambiguity regarding what the variable denotes. Additionally, numerical estimates indicated that the extant results in the literature might be replicable, albeit with a significant caveat: such replicability depends entirely on acquiring the coding protocols and anonymized raw data of published studies. However, that critical information is not publicly available. I discuss the ramifications of this barrier to reproducibility: it prevents the auditing of published findings, which in turn allows null findings to be explained away with post hoc accounts that depend on inaccessible information.
Conclusions
At a minimum, journal editors and reviewers must insist that authors provide the codebooks of their coding protocols. Providing the corresponding anonymized raw data should also be a requirement unless specific obstacles, such as grant agreements, prevent data sharing. The nature of verbal lie detection research necessitates this policy.