
Item Reliability (alpha and split-half) and Inter-rater agreement (Kappa)

(DR SEE KIN HAI)

1. Cronbach's alpha and split-half reliability assess the internal consistency of the items in a questionnaire (do the items tend to measure the same thing?).
2. Split-half reliability measures the correlation between the first-half and second-half items.
3. Coefficient alpha is the average of all possible split-half reliabilities for the questionnaire.
4. Inter-rater reliability is a measure of agreement between the ratings of 2 different raters. It involves the extent of agreement between raters on their ratings as compared to agreement by chance.

How to run Item Alpha reliability and Split-half

A Bahasa Melayu teacher wanted to know her students' attitudes towards Bahasa Melayu. She administered a 10-item questionnaire which uses a 4-point Likert scale, i.e. SD=1, D=2, A=3 and SA=4. The teacher wanted to determine the internal consistency (Cronbach's alpha and split-half reliability coefficient) of this questionnaire in her pilot study before it is used in the actual main study. 20 students were randomly selected from the population as respondents. The following [Variable View] and [Data View] show the students' responses to the questionnaire items from the pilot study.

SD=1, D=2, A=3, SA=4
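Before turning to the SPSS steps, it may help to see what Cronbach's alpha actually computes: alpha = k/(k-1) x (1 - sum of item variances / variance of total score). Below is a minimal Python sketch (not part of the SPSS procedure); the `data` matrix is hypothetical, standing in for the 20 x 10 matrix of student responses shown in [Data View].

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert scores
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-point Likert responses (rows = students, columns = Q1-Q10)
data = np.array([
    [3, 2, 4, 3, 2, 3, 4, 3, 2, 3],
    [2, 2, 3, 2, 1, 2, 3, 2, 2, 2],
    [4, 3, 4, 4, 3, 4, 4, 3, 3, 4],
    [1, 2, 2, 1, 2, 1, 2, 2, 1, 2],
])
print("Cronbach's alpha =", round(cronbach_alpha(data), 3))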

1. Select [Analyze] then [Scale] and [Reliability Analysis...] to open the dialogue box.
2. Move [Q1] to [Q10] into the [Items] box, then click on [Statistics] to open the sub-dialogue box.

3. Click on [Scale if item deleted], then [Continue] and [OK].

Interpreting the Output

This table shows the number of cases = 20.

The alpha reliability α = 0.721. According to George and Mallery (2003), the Cronbach's alpha coefficient α is interpreted as:
α > 0.9 (very good)
α > 0.8 (good)
α > 0.7 (acceptable)
α > 0.5 (poor)
α < 0.5 (unacceptable)

This column shows the alpha of the scale if a particular item is deleted, e.g. α = 0.721 if item 2 is deleted. As your α = 0.721, which is an acceptable reliability for the instrument, all the items are retained.
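As a cross-check on the [Scale if item deleted] column, a short continuation of the Python sketch above recomputes alpha with each item left out in turn (it reuses cronbach_alpha() and the hypothetical `data` matrix):

import numpy as np

for i in range(data.shape[1]):
    reduced = np.delete(data, i, axis=1)   # drop item i
    print("alpha if Q" + str(i + 1) + " deleted =",
          round(cronbach_alpha(reduced), 3))

An item whose deletion raises alpha noticeably is a candidate for removal; here alpha is already acceptable, so all items are kept.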

Reporting the output

The alpha reliability of the 10-item scale was 0.72, indicating that the scale had satisfactory reliability. An alpha of 0.70 or above is considered satisfactory.

How to run the Split-half reliability (using the data above)


1. Select [Analyze] then [Scale] and [Reliability Analysis...] to open the dialogue box.
2. Move all the 10 items into the [Items] box, and in the [Model] box select [Split-half], then [OK].

Interpreting the output

This table shows the Guttman split-half reliability of the 10 items as 0.596, which rounded to 2 decimal places is 0.60.

Reporting the output

The split-half reliability of the ten-item scale was 0.60, indicating that the scale had only moderate reliability.
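For reference, the Guttman split-half coefficient that SPSS reports can be sketched as follows, assuming (as SPSS does by default) that the first half of the items forms one part and the second half the other. The hypothetical `data` matrix from the earlier sketch is reused:

half = data.shape[1] // 2
part_a = data[:, :half].sum(axis=1)   # total score on first-half items
part_b = data[:, half:].sum(axis=1)   # total score on second-half items
total = part_a + part_b
guttman = 2 * (1 - (part_a.var(ddof=1) + part_b.var(ddof=1)) / total.var(ddof=1))
print("Guttman split-half =", round(guttman, 3))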

How to run the Inter-rater agreement (Cohen's Kappa coefficient)

1. Kappa measures the amount of agreement between the 2 raters, taking into account the agreement by chance.
2. k = (Po − Pc) / (1 − Pc), where Po = probability of relative observed agreement and Pc = hypothetical probability of chance agreement. k = 1 indicates complete agreement and k = 0 indicates no agreement between the raters beyond chance.
The example below shows the ratings by (a) Teacher A and (b) Teacher B of 20 students' categorical scores on a Bahasa Melayu essay in terms of (1) poor, (2) moderate or (3) good.
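A minimal Python sketch of this formula, using two hypothetical lists of ratings (1 = poor, 2 = moderate, 3 = good); the rating values below are made up for illustration:

import numpy as np

def cohen_kappa(rater_a, rater_b):
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    po = np.mean(a == b)   # observed proportion of agreement
    # chance agreement: product of the raters' marginal proportions, summed
    pc = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))
    return (po - pc) / (1 - pc)

teacher_a = [1, 2, 3, 3, 2, 1, 3, 2, 3, 1]   # hypothetical ratings
teacher_b = [1, 2, 3, 2, 2, 1, 3, 2, 3, 2]
print("kappa =", round(cohen_kappa(teacher_a, teacher_b), 3))

If scikit-learn is available, sklearn.metrics.cohen_kappa_score(teacher_a, teacher_b) gives the same value.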

1. Select [Analyze] then [Descriptive Statistics] and [Crosstabs] to open the dialogue box.

2. Move [TeacherA] into [Rows] and [TeacherB] into [Columns] and select [Statistics..] to open the sub-dialogue box.

3. Select [Kappa] then [Continue] and [OK].

Interpreting the output

The data have been arranged into a 3 x 3 contingency table. The number of cases on which Teacher A and Teacher B agree is shown in the diagonal cells: there are 3 for the rating of poor essay, 4 for moderate and 9 for the rating of good essay.
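As a worked check using these diagonal counts, the observed proportion of agreement is Po = (3 + 4 + 9) / 20 = 0.80, i.e. the two teachers give the same rating to 80% of the essays. Kappa then discounts this figure by the agreement expected from chance (the Pc term in the formula above), which is why the kappa reported below (0.673) is lower than 0.80.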

Kappa = 0.673 and is statistically significant, indicating a good agreement.

Kappa coefficient:
K < 0.20 (poor agreement)
0.20 < K < 0.40 (fair agreement)
0.40 < K < 0.60 (moderate agreement)
0.60 < K < 0.80 (good agreement)
0.80 < K < 1.00 (very good agreement)

Reporting the output

Kappa for the agreement between the ratings of Teacher A and Teacher B was 0.67, which indicates a good agreement.

COURSEWORK 1

26 students of a school completed an 18-item questionnaire on their perception of the Jawi lesson. You are to determine the internal consistency of this instrument using Cronbach's alpha and split-half reliability in this pilot study before you embark on your main study in another school. Items 5 and 15 are negatively worded (use [Recode] to change the scoring of these negative items; a sketch of this step follows below). (SD=1, D=2, U=3, A=4, SA=5)
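A minimal sketch of the reverse-scoring that [Recode] performs, assuming the 5-point scale above (SD=1 ... SA=5), where a reversed score is 6 minus the original score. The `responses` matrix is a hypothetical placeholder for the 26 x 18 data set:

import numpy as np

responses = np.random.randint(1, 6, size=(26, 18))   # placeholder data
for item in (5, 15):                  # items 5 and 15 are negatively worded
    responses[:, item - 1] = 6 - responses[:, item - 1]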

COURSEWORK 2

Two English language teachers have rated the work of 25 students from Maktab Duli using 1 = very good, 2 = moderate and 3 = poor. By using Cohen's Kappa coefficient (k = (Po − Pc) / (1 − Pc), where Po = probability of agreement from the observed values and Pc = hypothetical probability of agreement due to chance), deduce whether the two teachers have a high rate of agreement in the marking of the students' work.

Student 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25

Teacher A 1 3 2 1 3 2 1 3 2 3 2 3 3 2 1 3 1 3 2 3 1 3 2 2 1

Teacher B 2 3 1 2 3 2 2 3 2 2 2 3 3 2 1 2 1 3 2 3 2 3 2 3 1
