AMERICAN JOURNAL OF QUALITATIVE RESEARCH
Inter-Coder Agreement in Qualitative Coding: Considerations for its Use

Sean N. Halpin

AM J QUALITATIVE RES, Volume 8, Issue 3, pp. 23-43

https://doi.org/10.29333/ajqr/14887



Abstract

The historically quantitative-dominated field of health sciences has increasingly embraced qualitative methods. However, calls for quantitative measures of rigor, such as Inter-coder Agreement (ICA), remain. The aim of this manuscript is to demystify ICA and provide practical guidance. I begin by describing considerations when planning for ICA, including differences between various ICA tests (i.e., percent agreement, the Holsti method, Cohen’s kappa, Krippendorff’s alpha, and Gwet’s AC1 and AC2), setting the threshold of acceptability for the chosen test, deciding whether to use qualitative data analysis software, choosing the number of coders, selecting which data will be coded by more than one coder, developing a deductive codebook, creating a process for resolving coding disagreements, and establishing an audit trail for codebook changes. Next, I provide step-by-step guidance on an iterative process for enacting ICA. Finally, I discuss the importance of reporting, emphasizing clarity, conciseness, completeness, and accuracy.
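To illustrate the first two ICA statistics named above, the following minimal sketch computes percent agreement and Cohen’s kappa for two coders from first principles. The coder names and codes are hypothetical and are not drawn from the article’s data; the sketch assumes each coder assigns exactly one code per excerpt.

```python
from collections import Counter


def percent_agreement(codes_a, codes_b):
    """Share of excerpts to which both coders assigned the same code."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)


def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(codes_a)
    p_observed = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: probability of a match if each coder assigned codes
    # independently at their own observed marginal rates.
    p_expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n)
        for c in set(codes_a) | set(codes_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)


# Hypothetical codes applied by two coders to ten transcript excerpts.
coder_1 = ["barrier", "facilitator", "barrier", "neutral", "barrier",
           "facilitator", "neutral", "barrier", "facilitator", "barrier"]
coder_2 = ["barrier", "facilitator", "neutral", "neutral", "barrier",
           "facilitator", "neutral", "facilitator", "facilitator", "barrier"]

print(f"Percent agreement: {percent_agreement(coder_1, coder_2):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(coder_1, coder_2):.2f}")       # 0.70
```

The gap between the two values (0.80 versus 0.70 here) shows why chance-corrected statistics such as kappa are often preferred over raw percent agreement when the code distribution is uneven.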

Keywords: Trustworthiness, rigor, inter-rater reliability, qualitative coding
