On July 1, the American Board of Ophthalmology released the article-based questions for the 2019 Quarterly Questions program. If you’re participating in the program, you know that during Quarter 3, you chose 5 journal articles to read at your own pace and then answered a total of 10 questions about them. You probably noticed that these questions were different from the knowledge-based questions released in Quarters 1 and 2 of each year. Today I hope to answer: why do we have both knowledge-based and article-based Quarterly Questions?
Knowledge-based questions support the design of a summative assessment. A summative assessment (assessment of learning) is meant to determine how much knowledge (or skill or another attribute) someone possesses at a certain point in time. The primary user is someone other than the examinee — in this case, the ABO; in other cases, whoever is using the examination results to credential, hire, or promote a person. The ABO uses the assessment to make a decision about how much knowledge someone possesses.
Article-based questions allow us to weave a formative assessment activity into the Quarterly Questions program. A formative assessment (assessment for learning) is meant to educate. The primary user is the examinee, who uses the assessment to learn new things.
When diplomates ask why Maintenance of Certification can’t be a completely formative exercise, the main rationale lies in the very definition of formative assessment. A formative assessment doesn’t tell the ABO anything about the examinee. By definition, it doesn’t tell us whether someone is keeping up, and it doesn’t verify anything. That said, formative assessment is certainly valuable in the context of lifelong learning.
Though article-based Quarterly Questions are extremely popular, they wouldn’t be a good measuring stick for making a determination about someone’s certification. Here’s why:
As a non-clinician with little exposure to ophthalmology, I took the 2019 knowledge-based Quarterly Questions and answered just 13 out of 40 questions correctly, for a score of 32.5%. This score was much lower than that of any diplomate who attempted the same Quarterly Questions. This failing result is an accurate measure of my competencies in ophthalmology (since clearly, I’m not an ophthalmologist!)
Then, I read five ophthalmic articles (one article each in core ophthalmology, neuro-ophthalmology, pediatrics, ethics, and glaucoma) and took the article-based questions. This time, I answered 9 out of 10 questions correctly (90%), which is just below the average score of 95%. This activity doesn’t tell the ABO anything about someone’s current ophthalmic knowledge (since I was able to perform well simply by reading the material).
As you can see, using a tool like the article-based questions to make a summative decision about someone’s certification would be problematic.
The first figure below depicts the distribution of diplomate performance on the 2019 knowledge-based Quarterly Questions. The light blue bars represent the number of diplomates achieving the score shown on the x-axis (out of a possible 40 questions correct). The green line depicts the score needed to pass. My score is depicted with the orange line.
The second figure (below) depicts the distribution of diplomate performance on the 2019 article-based Quarterly Questions. The light blue bars represent the number of diplomates achieving the score shown on the x-axis (out of a possible 10 questions correct). The green line depicts the score needed to obtain CME credit. My score is depicted with the orange line.
These data are current as of July 9, 2019.
What do you think about summative versus formative assessment? Have ideas about how to improve the Quarterly Questions program? Write to QuarterlyQuestions@abop.org.