Intercoder Agreement in Qualitative Research

Feb 28 2022
admin

In the Kappa (RK) column, the results table displays a chance-corrected percentage agreement value. It takes into account the likelihood that two people would assign the same codes in a document purely by chance (i.e. if they simply selected codes at random without looking at the data material). The calculation is only meaningful if you select the option to count codes assigned by neither coder as agreements, and it is therefore only visible when this option is selected. The results table lists all the documents evaluated and thus provides detailed information on the agreement for each individual document. The Percentage column shows the percentage agreement for each code. The Total row is used to calculate the average percentage agreement – in the example, it is 93.33%. The system checks whether the two coders “agree”, i.e. whether their coding matches on individual segments. This option is the most detailed of the three analysis options and the one most commonly used for qualitative coding. A percentage value can be set to determine when two coded segments are considered to match. Aiming for intercoder reliability is not suitable for every research study; here is what you can consider when deciding whether or not to pursue it.
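As a rough illustration of the segment-level check described above, the sketch below computes a directional percentage agreement between two codings, counting two segments as a match when they carry the same code and overlap by at least a configurable percentage. This is a minimal sketch of the idea, not MAXQDA's implementation; the segment representation, function names, and example data are assumptions made for illustration.

```python
def overlap_percent(a, b):
    """Overlap of two (start, end) character spans as a share of the longer span."""
    start = max(a[0], b[0])
    end = min(a[1], b[1])
    overlap = max(0, end - start)
    longer = max(a[1] - a[0], b[1] - b[0])
    return 100.0 * overlap / longer if longer else 0.0

def segment_agreement(coder1, coder2, min_overlap=90.0):
    """Share of coder 1's segments for which coder 2 has a matching segment."""
    matched = 0
    for start1, end1, code1 in coder1:
        if any(code1 == code2 and
               overlap_percent((start1, end1), (start2, end2)) >= min_overlap
               for start2, end2, code2 in coder2):
            matched += 1
    return 100.0 * matched / len(coder1) if coder1 else 100.0

# Invented example: segments are (start, end, code) tuples over character positions.
coder1 = [(0, 120, "opinion/positive"), (130, 250, "opinion/negative")]
coder2 = [(0, 118, "opinion/positive"), (135, 250, "opinion/neutral")]
print(segment_agreement(coder1, coder2, min_overlap=90.0))  # 50.0
```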

The second table allows a closer inspection of intercoder agreement: it shows for which coded segments the two coders do not agree. Depending on the setting you select, the table contains the segments of both coders or only those of one coder, and indicates whether the second coder assigned the same code at that point. You run a study in which you want multiple researchers to interpret the data in the same way, and there are many different metrics for calculating intercoder reliability; examples include simple percentage agreement and chance-corrected coefficients such as kappa. Objective: To illustrate how ICR assessment can be used to improve coding in qualitative content analysis. Intercoder reliability, when you choose to use it, is an important part of content analysis. In some studies, your analysis may not be considered valid if you do not achieve a certain level of consistency in how your team codes the data. Although coding requires some degree of subjective judgment, intercoder reliability shows that this judgment is shared by the researchers on your team. For the calculation of coefficients such as kappa, segments usually need to be defined in advance and given predefined codes. In qualitative research, however, it is common not to define segments a priori, but to give both coders the task of identifying all the passages in a document they deem relevant and assigning one or more appropriate codes. In this case, the probability of two coders coding the same passage with the same code by chance is lower, and kappa is therefore higher. It could also be argued that the probability of chance coding matches in a text with many pages and many codes is so small that kappa is practically equal to the simple percentage agreement.

In any case, the calculation should be considered carefully. For each code, the table shows the total number of coded segments (Total column), the number of agreements (Agreement column), and the percentage of code-specific agreement. In the Total row, the agreements and non-agreements are added up so that an average percentage agreement can be calculated – in the example, it is 93.08%. For the calculation of “P chance”, the probability of a chance agreement, MAXQDA follows a proposal by Brennan and Prediger (1981), who discussed in detail the appropriate use of Cohen's kappa and its problems with unequal marginal distributions. In this calculation, the chance agreement is determined by the number of different categories used by the two coders, which corresponds to the number of codes in the code-specific results table.
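To make the arithmetic of the code-specific results table concrete, here is a small sketch with invented counts (not the numbers from the example above): the percentage is calculated per code from the Total and Agreement columns, and the Total row pools all agreements and non-agreements to give the average percentage agreement.

```python
# Invented per-code counts: how many segments carry the code in total and
# how many of them both coders agreed on.
table = {
    "opinion/positive": {"total": 40, "agreement": 38},
    "opinion/negative": {"total": 35, "agreement": 32},
    "opinion/neutral":  {"total": 25, "agreement": 23},
}

for code, row in table.items():
    pct = 100.0 * row["agreement"] / row["total"]
    print(f"{code:18s} total={row['total']:3d} agreement={row['agreement']:3d} {pct:5.1f}%")

# The Total row adds up agreements and non-agreements across all codes,
# which yields the average percentage agreement.
total = sum(r["total"] for r in table.values())
agreement = sum(r["agreement"] for r in table.values())
print(f"Average percentage agreement: {100.0 * agreement / total:.2f}%")  # 93.00%
```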

It often happens that coders differ slightly from each other when assigning codes, e.g. because one person has included one word more or less in a segment. This is usually irrelevant in terms of content, but when absolutely identical coding is required it can lead to an unnecessarily low percentage agreement and to “false” non-agreements. The number of matching code assignments is displayed in the upper left cell of the four-field table. The upper right and lower left cells contain the non-matches, that is, cases in which one coder, but not the other, assigned the code in a document. In MAXQDA, the segment-level intercoder agreement only takes into account segments to which at least one code has been assigned; the cell at the bottom right is therefore empty by definition (because document sections are only included in the analysis if they have been coded by at least one of the two coders). The following dialog box appears, in which you can adjust the settings for checking intercoder agreement. The check of intercoder agreement covers the following: The example table at the top right shows that a total of 12 codes were analyzed. There was disagreement (marked with a stop sign in the first column) for the codes “opinion/negative” and “opinion/neutral”, and only within one document (indicated in the No Agreement column). The numbers in the Agreement, No Agreement, and Total columns refer to the number of documents. Show only disagreements – hides all rows with full agreement and provides quick access to the documents where the coders do not agree.
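The four-field table described above can be pictured as follows; the counts are invented, and the bottom-right cell stays at zero because passages coded by neither coder never enter the segment-level analysis.

```python
# Rows: coder 1 (assigned / not assigned), columns: coder 2. Invented counts
# for a single code.
four_field = {
    ("assigned", "assigned"): 18,        # agreements (upper left)
    ("assigned", "not assigned"): 2,     # only coder 1 assigned the code
    ("not assigned", "assigned"): 1,     # only coder 2 assigned the code
    ("not assigned", "not assigned"): 0, # empty by definition at segment level
}
agreements = four_field[("assigned", "assigned")]
total = sum(four_field.values())
print(f"{100.0 * agreements / total:.1f}% agreement for this code")  # 85.7%
```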

Methods: The key steps of the procedure are presented, based on data from a qualitative study on patients' perspectives on back pain. Code C was included in the intercoder agreement check, but was not assigned by either coder 1 or coder 2 in the document. If you select Ignore unassigned codes, code C is ignored and the relative number of matching code assignments is 1 of 2 = 50%. If the other option is selected, the agreement is 2 of 3 = 67%, because code C is taken into account. Discussion: The quantitative approach to ICR assessment is a practical tool for quality assurance in qualitative content analysis. Kappa values and a close inspection of the agreement rates help to estimate and increase the quality of coding. This approach promotes good coding practice and increases the credibility of the analysis, especially when large samples are analyzed, several coders are involved, and quantitative results are reported. Background: High intercoder reliability (ICR) is required in qualitative content analysis for quality assurance when more than one coder is involved in data analysis. The literature lacks standardized procedures for assessing ICR in qualitative content analysis. The Kappa (RK) value is calculated as follows:

Ac = chance agreement = 0.5 to the power of the number of codes selected for the analysis
Ao = observed agreement = the percentage agreement
Kappa (RK) = (Ao - Ac) / (1 - Ac)

Check intercoder agreement during the qualitative coding process to see whether your researchers are coding consistently.
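Applied to illustrative numbers, the formula above works out as follows (Ac is computed here as described in this post; consult the MAXQDA documentation for the exact implementation used by the software).

```python
def kappa_rk(ao, ac):
    """Kappa (RK) = (Ao - Ac) / (1 - Ac)."""
    return (ao - ac) / (1.0 - ac)

n_codes = 6            # codes selected for the analysis (invented)
ac = 0.5 ** n_codes    # chance agreement as described above, ~0.016
ao = 0.93              # observed percentage agreement as a proportion (invented)
print(round(kappa_rk(ao, ac), 3))  # ~0.929 – with many codes the chance
                                   # agreement is tiny, so kappa stays close to Ao
```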

Make adjustments as needed. In qualitative analysis, intercoder agreement analysis is mainly used to improve coding instructions and individual codes. Nevertheless, it is often desirable to calculate the percentage agreement, especially with regard to the research report to be prepared later. This percentage agreement can be viewed in the code-specific results table described above, which considers both the individual codes and the set of all codes. P observed is the simple percentage agreement as displayed in the Total row of the code-specific results table. Researchers often want to report not only percentage agreement rates but also chance-corrected coefficients. The basic idea of such a coefficient is to discount the share of agreement that would be expected if codes were randomly assigned to segments. Intercoder reliability also allows you to divide and conquer safely: if you know your team is able to code relatively consistently, you can split the work, let each researcher take a different part of the data, and trust that they will code it consistently. Since texts in qualitative analysis are usually not divided into fixed text units, the system checks the agreement for each segment coded by either of the two coders (analysis option: segments of both documents). This means that every coded segment is checked for a match. You can also choose to analyze only the segments of document 1 or only the segments of document 2.

This can be useful, for example, to test the extent to which a coder matches a reference coding. The criterion here is the frequency with which a code occurs in the document; specifically, the agreement in the frequency of code assignments. You may also want to draw on the different perspectives of several researchers. Here, too, two result tables are generated: the code-specific results table and the detailed agreement table. When assigning codes to qualitative data, it is advisable to define certain criteria; the assumption is, after all, that the coding is not arbitrary or random, but that a certain degree of reliability is achieved. The MAXQDA Intercoder Agreement function allows you to compare two people who have coded the same document independently of each other. In qualitative research, the purpose of comparing independent coders is to discuss differences, find out why they occurred, and learn from them in order to improve coding agreement in the future. In other words, the actual percentage agreement is not the most important aspect of the tool.

However, this percentage is provided by MAXQDA.
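To illustrate the option of analyzing only the segments of document 1 or only those of document 2, the sketch below computes the agreement in both directions, which is the typical setup when one coding serves as a reference. The data, helper functions, and matching rule are invented for illustration and are not MAXQDA's implementation.

```python
def same_code(a, b):
    """Two (start, end, code) segments match if the codes are equal and the spans overlap."""
    return a[2] == b[2] and not (a[1] <= b[0] or b[1] <= a[0])

def directional_agreement(base, other):
    """Share of `base` segments for which `other` contains a matching segment."""
    hits = sum(1 for seg in base if any(same_code(seg, o) for o in other))
    return 100.0 * hits / len(base) if base else 100.0

reference = [(0, 100, "A"), (100, 200, "B"), (200, 300, "C")]
trainee   = [(0, 100, "A"), (100, 200, "B")]

print(round(directional_agreement(reference, trainee), 1))  # 66.7 – the trainee missed code "C"
print(round(directional_agreement(trainee, reference), 1))  # 100.0 – every trainee segment has a match
```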