Background and Rationale: The assessment of clinical performance relies on raters and their ability to detect, select, and process relevant behaviors or events displayed by candidates. Raters have accordingly been identified as a significant source of error. Attempts to improve rater-based assessments through scale development or rater training have yet to yield substantial improvements in rating quality. These and other efforts to improve rater-based assessments may be limited unless we further consider the extent to which the task assigned to raters aligns with their cognitive and perceptual capacities.
Objective: The objective of this study is to explore rater cognition as a source of error in the assessment of clinical performance by considering the alignment (or lack thereof) between the cognitive demands imposed by a rating task and human cognitive architecture.
Research Question: During the assessment of clinical performance, how does mental workload affect raters' cognitive processes and rating quality?
Design: Raters will evaluate three clinical performances under different mental workload conditions, depending on group assignment, in a 2x2 factorial design. Factor A will be the number of competency dimensions to be considered; Factor B will be the presence or absence of an additional, ecologically valid task. Outcome measures will include the quality of the information detected by the examiner, the time required to complete the task, two indices of mental workload, and performance on a retention test. A post-rating interview will be used to further explore rater cognition under the different load conditions.
Anticipated Significance: The results of this study will inform assessment efforts embedded within the move towards competency-based assessment, indicating whether spreading attention over many dimensions negatively influences rater judgment. They will also shed light on the potentially detrimental impact of requiring examiners to divide their attention across multiple tasks, such as those involved in the Simulated Office Oral examinations soon to be used by the MCC as part of the harmonization project. More broadly, the results should inform scale development, rater training, assessment design, and novel perspectives emerging in the field of performance assessment.
This study is still ongoing.