Re: Statistic for inter-group comparison on categorization tasks?

Jeff Adams (jeffa@kurz-ai.com)
Tue, 18 Jun 1996 09:29:01 -0400

> Clearly the kappa statistic can be used for each group of judges
> separately, i.e. you can use it to assess how the nurses' within-group
> reliability compares with the doctors' within-group reliability on the
> same task. But that doesn't show that the nurses as a group can be
> expected to generally make the *same* diagnoses. So the question is:
> what is the proper statistical analysis for the hospital to use if it
> wants to show that diagnoses made by nurses do or do not differ
> significantly from those made by doctors?

Did you get an answer to this? I'd be curious to hear what people
have told you.

I would imagine that it might follow lines similar to ANOVA: that is,
compute the kappa statistic for the combined (J1+J2) set of doctors and
nurses, taken as a single group of judges, and compare it to the kappa
statistics for the J1 doctors and the J2 nurses separately. If the
nurses are consistent with the doctors, the combined kappa should still
show good agreement, not much below the two within-group values.
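
Something like the following sketch is what I have in mind (Python with
NumPy, a hand-rolled Fleiss' kappa, and made-up example data; the array
names and sizes are just placeholders, not anything from the original
problem):

    import numpy as np

    def fleiss_kappa(ratings, n_categories):
        """Fleiss' kappa for a (subjects x raters) array of integer labels 0..k-1."""
        n_subjects, n_raters = ratings.shape
        # counts[i, j] = number of raters who assigned subject i to category j
        counts = np.zeros((n_subjects, n_categories))
        for j in range(n_categories):
            counts[:, j] = (ratings == j).sum(axis=1)
        p_j = counts.sum(axis=0) / (n_subjects * n_raters)   # overall category proportions
        # per-subject agreement, its mean, and the chance-agreement term
        P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        P_bar, P_e = P_i.mean(), np.square(p_j).sum()
        return (P_bar - P_e) / (1 - P_e)

    # Hypothetical data: rows = cases, columns = judges, entries = diagnosis codes 0..2.
    rng = np.random.default_rng(0)
    doctors = rng.integers(0, 3, size=(50, 5))     # 50 cases rated by 5 doctors (J1)
    nurses = rng.integers(0, 3, size=(50, 4))      # the same 50 cases rated by 4 nurses (J2)
    combined = np.hstack([doctors, nurses])        # both groups pooled into one panel

    for name, data in [("doctors", doctors), ("nurses", nurses), ("combined", combined)]:
        print(name, fleiss_kappa(data, 3))

This is only a way to eyeball whether pooling the judges drags kappa
down relative to the within-group values; it is not by itself a
significance test for the between-group difference.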

Jeff

-- 
Jeff Adams
Language Modeling Scientist
Kurzweil Applied Intelligence
http://www.kurz-ai.com/people/jeffa