Kottner, J., Audigé, L., Brorson, S., Donner, A., Gajewski, B. J., Hróbjartsson, A., et al. (2011). Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. Int. J. Nurs. Stud. 48, 661–671. doi: 10.1016/j.ijnurstu.2011.01.017

This report has two main objectives.
First, we combine known analytical approaches into a comprehensive assessment of the agreement and correlation of rating pairs, disentangling these often-confused concepts through a good-practice example on concrete data and a tutorial for future reference. Second, we examine whether a screening questionnaire designed for parents can also be used reliably by daycare teachers to evaluate early expressive vocabulary. The evaluation covers a total of 53 vocabulary ratings (34 parent–teacher and 19 parent–parent pairs) collected for two-year-olds (12 of them bilingual). First, interrater reliability is assessed using the intraclass correlation coefficient (ICC), both across and within subgroups. Then, building on this reliability analysis and on the test–retest reliability of the instrument, interrater agreement as well as the size and direction of rating differences are analyzed. Finally, Pearson correlation coefficients for the standardized vocabulary scores are calculated and compared across subgroups. The results highlight the need to distinguish between measures of agreement, consistency, and correlation. They also show how reliability affects the assessment of agreement. This study shows that parent–teacher evaluations of children's early vocabulary can reach agreement and correlation similar to mother–father evaluations on the vocabulary scale assessed. Bilingualism of the assessed child reduced the likelihood of rater agreement. We conclude that future reports on the agreement, correlation, and reliability of ratings will benefit from clearer definitions and stricter methodological concepts and approaches.
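To make the ICC step concrete, here is a minimal pure-Python sketch of one common ICC form, ICC(2,1) (two-way random effects, absolute agreement, single rater). The abstract does not state which ICC form the study used, so this choice, the function name, and the data layout are illustrative assumptions, not the authors' implementation:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: one row per subject, each row holding that subject's
    ratings from the same k raters (e.g. parent and teacher).
    """
    n = len(scores)        # number of subjects (children)
    k = len(scores[0])     # number of raters per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    # Two-way ANOVA decomposition: total = subjects + raters + error.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_r = ss_rows / (n - 1)                               # between-subjects
    ms_c = ss_cols / (k - 1)                               # between-raters
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With identical rating pairs the coefficient is 1; a systematic offset between raters lowers it, which is what makes this an absolute-agreement rather than a consistency measure.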
The methodological tutorial proposed here has the potential to improve comparability across empirical reports and can help improve research practice and knowledge transfer in educational and therapeutic settings.

Fleenor, J. W., Fleenor, J. B., and Grossnickle, W. F. (1996). Interrater reliability and agreement of performance ratings: a methodological comparison. J. Bus. Psychol. 10, 367–380. doi: 10.1007/BF02249609

The reliable change index (RCI) was computed as RCI = (x1 − x2) / S_diff, with x1 and x2 the scores being compared and S_diff = √(2 · SEM²). The latter is the standard error of the difference between two test scores and thus describes the distribution of differences expected when no true difference exists. SEM was calculated as SEM = s1 · √(1 − r_xx), with s1 the standard deviation and r_xx the reliability of the measurement. Another approach to agreement (useful when there are only two raters and the scale is continuous) is to calculate the differences between the two raters' observations. The mean of these differences is called the bias, and the reference interval (mean ± 1.96 × standard deviation) is called the limits of agreement. The limits of agreement indicate how much random variation may be influencing the ratings.
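The quantities just defined can be sketched in a few lines of Python. The helper names are illustrative; the formulas follow the definitions above (S_diff = √(2 · SEM²), SEM = s1 · √(1 − r_xx), limits of agreement = bias ± 1.96 × SD of the differences):

```python
import math

def sem(sd, reliability):
    # Standard error of measurement: SEM = s1 * sqrt(1 - r_xx)
    return sd * math.sqrt(1.0 - reliability)

def rci(x1, x2, sd, reliability):
    # Reliable change index: (x1 - x2) / S_diff, with S_diff = sqrt(2 * SEM^2)
    s_diff = math.sqrt(2.0 * sem(sd, reliability) ** 2)
    return (x1 - x2) / s_diff

def bland_altman(ratings_a, ratings_b):
    # Bias = mean pairwise difference; limits of agreement = bias +/- 1.96 * SD
    diffs = [a - b for a, b in zip(ratings_a, ratings_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

An |RCI| above 1.96 is conventionally read as a difference larger than expected from measurement error alone; a pair of ratings falling outside the limits of agreement is read analogously.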
As explained above, we found a notable number of diverging ratings only with the more conservative approach to calculating the RCI. We examined factors that could influence the likelihood of diverging ratings. Neither the child's sex nor whether the child was rated by two parents or by a parent and a teacher systematically influenced this likelihood. Bilingualism of the assessed child was the only factor studied that increased the likelihood of a child receiving diverging ratings. It is possible that the diverging ratings for the small group of bilingual children reflect systematic differences in the vocabulary used in the two environments: monolingual German daycares and bilingual family homes. Larger groups and more systematic variation in the characteristics of bilingual environments are needed to determine whether bilingualism has a systematic effect on rater agreement, as suggested in this report, and, if so, where this effect originates.