Score interpretation guidelines

We have prepared these comprehensive guidelines to help interpret results from each Medical Council of Canada (MCC) examination. We provide general guidelines first, followed by specific guidelines for each exam.

Please keep the following in mind when using and interpreting scores for all MCC exams.

  • Consider the purpose of the exam, the level of knowledge and skills it assesses, and its intended use. Secondary uses of exam results should be approached with caution and remain in line with the exam’s purpose. No assessment tool is designed for all purposes.
  • All MCC exams are criterion-referenced, which means the pass/fail decision is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score), regardless of how others perform. Passing means the candidate has achieved the level of knowledge, skills, and attitudes targeted by the exam.
  • For each exam, candidates may see a different set of questions or cases than other candidates taking the same exam during the same session. These different versions or “forms” of the exams are offered for quality assurance purposes. Great care is taken when assembling these forms to ensure they meet all test specifications and are as comparable to one another as possible.
  • Additionally, psychometric analyses are performed post-exam to adjust for slight difficulty differences across forms. This is known as “linking”, which allows the comparison of scores over time across forms and sessions.
  • Each exam has a score range and a mean and standard deviation based on a “reference group”, a cohort of candidates who are representative of the candidate population. Score comparisons across time are best made by looking at how far a score falls from the mean of the reference group, in standard deviation units. This also applies to comparing scores when there are changes to score scales over time.
  • As an example, a Medical Council of Canada Qualifying Examination (MCCQE) Part I score of 310 on the 100 to 400 scale, in place as of 2018, with a mean of 250 and standard deviation of 30 is two standard deviations above the group mean from April 2018. An MCCQE Part I score of 700 on the former 50 to 950 scale with a mean of 500 and a standard deviation of 100 is two standard deviations above the group mean from spring 2015. These two scores represent similar performance (see the sketch after this list).
  • Subscores for the Medical Council of Canada Evaluating Examination (MCCEE) and the MCCQE Part I are reported on the same scale as the total and so are comparable across exam forms and sessions. However, for the National Assessment Collaboration (NAC) Examination and MCCQE Part II, subscores are reported on a different scale than the total score and are thus not comparable across examination forms or sessions.
  • Use total scores rather than subscores. Because they are based on significantly less data, subscores are not as reliable as total scores and should not be used to compare candidate performances. Subscores are provided to candidates as formative feedback on their relative strengths and weaknesses in various competency areas.
  • The MCC cautions against comparing candidates based on small total score differences. The total score is designed to be most precise around the pass score to support a reliable pass/fail decision. Total scores are not designed to provide score precision along a wide range of the score scale. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.
  • Selection decisions should not be based on a single assessment tool. We recommend that MCC exam results be used in conjunction with other sources of information (for example, results from another MCC exam, medical school transcripts, reference letters, or other credentials) to obtain a comprehensive view of a candidate’s qualifications.
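
The standardized comparison described above can be expressed in a few lines of code. The Python sketch below is illustrative only, not an MCC tool; it uses the reference-group means and standard deviations quoted in the example above.

```python
# A minimal sketch of comparing scores across score scales by expressing
# each score in standard deviation units relative to its reference group.
# The means and standard deviations are the published values quoted above;
# the helper itself is illustrative, not an official MCC conversion.

def z_score(score: float, ref_mean: float, ref_sd: float) -> float:
    """Distance of a score from the reference-group mean, in SD units."""
    return (score - ref_mean) / ref_sd

# MCCQE Part I, current scale (100 to 400; mean 250, SD 30, April 2018 cohort)
new_scale = z_score(310, ref_mean=250, ref_sd=30)   # -> 2.0

# MCCQE Part I, former scale (50 to 950; mean 500, SD 100, spring 2015 cohort)
old_scale = z_score(700, ref_mean=500, ref_sd=100)  # -> 2.0

# Equal z-scores indicate similar performance relative to each cohort,
# even though the raw numbers (310 vs. 700) look very different.
print(new_scale, old_scale)
```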

Medical Council of Canada Evaluating Examination (MCCEE) – offered until 2018

The MCCEE was a screening examination that assessed the basic medical knowledge and problem solving of a candidate at a level comparable to a minimally competent medical student completing his or her education in Canada and about to enter supervised practice. The examination was based on the MCC Objectives, which are organized under the CanMEDS roles. It was a four-hour computer-based examination that consisted of 180 multiple-choice questions (150 scored questions and 30 non-scored pilot questions).

The MCCEE was a criterion-referenced exam for which the pass/fail decision was determined by comparing an individual candidate’s score to a standard (as reflected by the pass score), regardless of how others performed. It was a prerequisite for International Medical Graduates (IMGs) to challenge the MCCQE Part I and was the minimal requirement for an IMG’s entry into postgraduate medical education in Canada. As of 2019, however, IMGs can challenge the MCCQE Part I directly, without first having to pass the MCCEE.

Each candidate who challenged the MCCEE received two score reports – a Statement of Results (SOR) and a Supplemental Feedback Report (SFR). The SOR included a total score and a final result (e.g., pass, fail). The total score was reported on a scale ranging from 50 to 500 with a mean of 271 and standard deviation of 50. As of May 2017, the pass score was 261; it was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in November 2016. Prior to May 2017, the pass score was 250 on the same 50 to 500 reporting scale. The final results of candidates who took the MCCEE prior to May 2017 remain valid.

Additional information about a candidate’s performance profile in various competency domains was provided on the SFR. Please note that the subscores in the SFR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The MCCEE subscores (though expressed on the same scale as the total score) were based on significantly less data and thus do not have the same level of precision as the total score.

The MCCEE total scores were linked across examination forms and sessions using statistical procedures. This allowed the comparison of scores over time across forms and sessions.

The MCCEE score distribution spanned a wider range of the score scale than those of clinical skills examinations such as the NAC Examination. However, as with the NAC Examination, it was designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. Small score differences should not be over-interpreted, as they may have fallen within the range of values that might reasonably have arisen as a result of measurement error.
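
To illustrate this caution about measurement error, the sketch below applies the conventional “standard error of the difference” reasoning. The SEM value is hypothetical (these guidelines do not publish one), so the numbers are placeholders; only the logic is the point.

```python
# Why small score differences should not be over-interpreted: a difference
# between two candidates is only meaningful if it exceeds the error band
# implied by the exam's standard error of measurement (SEM).
import math

SEM = 15  # hypothetical SEM, in score points; not a published MCC value

def difference_is_meaningful(score_a: float, score_b: float,
                             sem: float = SEM, z: float = 1.96) -> bool:
    """True only if the score gap exceeds a 95% band around zero based on
    the standard error of the difference between two observed scores."""
    se_difference = sem * math.sqrt(2)  # error of a difference of two scores
    return abs(score_a - score_b) > z * se_difference

# Two candidates 10 points apart on the 50 to 500 scale: with SEM = 15,
# the gap sits well inside the error band and tells us very little.
print(difference_is_meaningful(271, 281))  # -> False
```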

The MCC cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection decisions. When comparing candidates for program selection, it is generally appropriate to consider the MCCEE results in conjunction with the results from the NAC Examination as well as other relevant information (e.g., medical school transcripts, application letters, reference letters) to obtain a comprehensive view of a candidate’s qualifications.

Additionally, please note that MCCEE scores before and after 2008 should not be compared as the test design, test length, scoring method, delivery mode (computer vs. paper-and-pencil), and reporting scales were different.


National Assessment Collaboration (NAC) Examination

The NAC Examination assesses the readiness of IMGs for entry into a Canadian residency program, regardless of where they pursued their undergraduate education. It is a one-day examination that consists of 12 Objective Structured Clinical Examination (OSCE) stations — 10 scored stations and two non-scored pilot stations. The exam is designed to assess knowledge, skills, and attitudes at the level expected of a recent Canadian Medical Graduate (CMG) for entry into supervised clinical practice in a postgraduate training program. The examination is based on the MCC Objectives, which are organized under the CanMEDS roles.

The NAC Examination is a criterion-referenced exam for which the pass/fail decision is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score), regardless of how others perform.

Each candidate who challenges the NAC Examination receives two score reports – a Statement of Results (SOR) and a Supplemental Information Report (SIR).

The SOR includes a total score, the pass score and final result (e.g., pass, fail). The total score is reported on a scale ranging from 300 to 500 with a mean of 400 and a standard deviation of 25 based on all March 2019 results. The current pass score is 398 and was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in April 2019.

Prior to March 2019, the NAC Examination had a different blueprint and scoring approach (based on rating scales only). The scale ranged from 0 to 100 with a mean of 70 and a standard deviation of 8. The mean and standard deviation were established using all results from the March 2013 session.

Because the exams are different, with different blueprints and scoring approaches (the current approach combines rating scales, checklist items, and oral questions), you cannot directly compare scores from before March 2019 with those from March 2019 onward. However, you can compare your performance to the mean and standard deviation that was in place for your exam session.

As an example, a NAC Examination score of 425 on the 300 to 500 scale, in place as of March 2019, with a mean of 400 and standard deviation of 25 is one standard deviation above the group mean from March 2019. A NAC Examination score of 78 on the former 0 to 100 scale with a mean of 70 and a standard deviation of 8 is one standard deviation above the group mean from before March 2019. These two scores represent similar performance.

Additional information about a candidate’s subscores in various competency domains is provided in the SIR. Please note that the subscores as reported in the SIR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The NAC Examination subscores are based on significantly less data and do not have the same level of precision as the total score. Furthermore, they are on a different scale than the total score and are not comparable across examination forms and sessions.

NAC Examination total scores are equated across examination forms and sessions using statistical procedures. As a result, total scores are placed on the same scale, enabling score comparisons across forms and over time (i.e., March 2019 onward) and allowing the same pass score to be applied to candidates who took different forms.
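
As a rough illustration of what placing forms on a common scale involves, the sketch below applies a simple mean-sigma (linear) linking. The MCC’s actual equating procedures are not described on this page and are likely more sophisticated; all numbers in the sketch are hypothetical.

```python
# Conceptual sketch of linking a new exam form onto a reference scale with
# a mean-sigma (linear) transformation, so that equal standardized positions
# receive equal reported scores. This illustrates the general idea only;
# it is not the MCC's actual equating procedure.

def link_to_reference(score: float,
                      form_mean: float, form_sd: float,
                      ref_mean: float, ref_sd: float) -> float:
    """Map a raw score from a new form onto the reference form's scale."""
    z = (score - form_mean) / form_sd
    return ref_mean + z * ref_sd

# Hypothetical numbers: suppose a new form ran slightly harder (observed
# mean 396, SD 25) than the reference (mean 400, SD 25). A raw 398 on the
# harder form then maps to 402 on the reference scale, compensating for
# the difficulty difference before the pass score is applied.
print(link_to_reference(398, form_mean=396, form_sd=25,
                        ref_mean=400, ref_sd=25))  # -> 402.0
```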

The NAC Examination score distribution falls along a wide range of the score scale. However, as with other MCC exams, it is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.

The MCC cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection or other decisions. When comparing candidates for residency selection, it is generally appropriate to consider the NAC Examination results in conjunction with the results from the MCCQE Part I as well as other relevant information (e.g., medical school transcripts, application letters, reference letters) to obtain a comprehensive view of a candidate’s qualifications.

For purposes other than resident selection (e.g., practice-ready assessment programs), it is appropriate to consider MCCQE Part II results (if available and if required) before NAC Examination results, as the former targets a higher level of clinical skills.


Medical Council of Canada Qualifying Examination (MCCQE) Part I

The MCCQE Part I is a summative examination that assesses the critical medical knowledge and clinical decision-making ability of a candidate at a level expected of a medical student who is completing his or her medical degree in Canada. The examination is based on the MCC Blueprint and on the MCC Objectives, which are organized under the CanMEDS roles. It is a one-day computer-based examination that consists of 210 multiple-choice questions (175 scored questions and 35 non-scored pilot questions) and 38 clinical decision-making cases (30 scored cases and eight non-scored pilot cases) that include short-menu and short-answer write-in questions.

Candidates who have graduated and completed the MCCQE Part I normally enter supervised practice.

The MCCQE Part I is a criterion-referenced exam for which the pass/fail decision is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score), regardless of how others perform. Passing means the candidate has demonstrated the knowledge, skills, and attitudes required, as part of the qualifications for medical licensure in Canada, for entry into supervised clinical practice.

Each candidate who challenges the MCCQE Part I receives two score reports – a Statement of Results (SOR) and a Supplemental Information Report (SIR).

The SOR includes a total score, the pass score and final result (e.g., pass, fail). The total score is reported on a scale ranging from 100 to 400 with a mean of 250 and a standard deviation of 30 based on all April 2018 results. The current pass score is 226 and was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in June 2018.

Prior to 2018, there was a different blueprint for the MCCQE Part I. The scale ranged from 50 to 950 with a mean of 500 and a standard deviation of 100. The mean and standard deviation were set using all results from the spring 2015 session.

Because the exams are different and based on different blueprints, you cannot directly compare scores from before 2018 to those from 2018 onward. However, you can compare a candidate’s performance to the mean and standard deviation that was in place for the exam session in question.

As an example, an MCCQE Part I score of 310 on the 100 to 400 scale, in place as of 2018, with a mean of 250 and standard deviation of 30 is two standard deviations above the group mean from April 2018. An MCCQE Part I score of 700 on the former 50 to 950 scale with a mean of 500 and a standard deviation of 100 is two standard deviations above the group mean from spring 2015. These two scores represent similar performance.

Additional information about a candidate’s performance in the new Blueprint domains is provided on the SIR. Please note that the subscores as reported in the SIR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The MCCQE Part I subscores (though expressed on the same scale as the total score) are based on significantly less data and thus do not have the same level of precision as the total score.

The MCCQE Part I total scores are equated across examination forms and sessions using statistical procedures. As a result, total scores are placed on the same scale, enabling score comparisons across forms and over time (i.e., April 2018 onward) and allowing the same pass score to be applied to candidates who took different forms.

The MCCQE Part I score distribution spans a wider range of the score scale than those of clinical skills examinations such as the MCCQE Part II. However, as with the MCCQE Part II, it is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. Small score differences should not be over-interpreted, as they may fall within the range of values that might reasonably arise as a result of measurement error.

The MCC cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection or other decisions.


Medical Council of Canada Qualifying Examination (MCCQE) Part II

The MCCQE Part II assesses a candidate’s core abilities to apply medical knowledge, demonstrate clinical skills, develop investigational and therapeutic clinical plans, and demonstrate professional behaviours and attitudes at a level expected of a physician in independent practice in Canada. Based on the MCC Blueprint and MCC Objectives, the MCCQE Part II is a two-day examination that consists of 12 Objective Structured Clinical Examination (OSCE) stations — 10 scored stations and two non-scored pilot stations. Candidates are presented with eight stations on day 1 (Saturday) and four stations on day 2 (Sunday).

The MCCQE Part II is a criterion-referenced exam for which the pass/fail decision is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score), regardless of how others perform.

Each candidate who challenges the MCCQE Part II receives two score reports – a Statement of Results (SOR) and a Supplemental Information Report (SIR).

The SOR includes a total score, the pass score and final result (e.g., pass, fail). The total score is reported on a scale ranging from 50 to 250 with a mean of 150 and standard deviation of 20 based on all October 2018 results. The current pass score is 138 and was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in December 2018.

Prior to October 2018, there was a different blueprint for the MCCQE Part II. The scale ranged from 50 to 950 with a mean of 500 and a standard deviation of 100. The mean and standard deviation were established using all results from the spring 2015 session. Typically, subsequent cohorts would be compared to the session in which the score scale was established, but the composition of cohorts changed after scale development due to the implementation of capacity limits for the MCCQE Part II. Between October 2015 and May 2018, the cohort composition was more stable, with an average mean of 588 and an average standard deviation of 89.

Because the exams are different and based on different blueprints, you cannot directly compare scores from before October 2018 to those of October 2018 and beyond. However, you can compare a candidate’s performance to the mean and standard deviation that was in place for the exam session in question.

As an example, an MCCQE Part II score of 170 on the 50 to 250 scale, in place as of October 2018, with a mean of 150 and standard deviation of 20 is one standard deviation above the group mean from October 2018. An MCCQE Part II score of 677 on the former 50 to 950 scale with a mean of 588 and a standard deviation of 89 is one standard deviation above the group mean from the period after capacity limits were implemented in October 2015. These two scores represent similar performance.
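
This comparison can be made mechanical by standardizing each score against the reference statistics for its period. The sketch below uses the means and standard deviations quoted above; the period labels and helper function are illustrative, not an official MCC conversion.

```python
# Standardizing MCCQE Part II scores against the reference statistics for
# the period in which the exam was taken. The (mean, SD) pairs are the
# values quoted in this section; the lookup itself is illustrative only.

REFERENCE_STATS = {
    "from_oct_2018": (150, 20),         # 50 to 250 scale, October 2018 cohort
    "oct_2015_to_may_2018": (588, 89),  # 50 to 950 scale, post capacity limits
}

def standardized(score: float, period: str) -> float:
    """Distance from the period's cohort mean, in SD units."""
    mean, sd = REFERENCE_STATS[period]
    return (score - mean) / sd

# 170 on the current scale and 677 on the former scale both sit one
# standard deviation above their respective cohort means.
print(standardized(170, "from_oct_2018"))         # -> 1.0
print(standardized(677, "oct_2015_to_may_2018"))  # -> 1.0
```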

Additional information about a candidate’s subscores in various competency domains is provided in the SIR. Please note that the subscores as reported in the SIR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The MCCQE Part II subscores are based on significantly less data and thus do not have the same level of precision as the total score. Furthermore, they are on a different scale than the total score and are not comparable across examination forms and sessions.

The MCCQE Part II total scores are equated across examination forms and sessions using statistical procedures. As a result, total scores are placed on the same scale, enabling score comparisons across forms and over time (i.e., October 2018 onward) and allowing the same pass score to be applied to candidates who took different forms.

The MCCQE Part II score distribution falls along a wide range of the score scale. However, as with other Medical Council of Canada (MCC) exams, it is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.

The MCC cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection decisions.
