Evaluation and Improvement of Multiple-choice Questions in the Doctor of Pharmacy Program

Objectives: To evaluate the quality of the multiple-choice questions (MCQs) in our question bank, to review the appropriateness of the student assessment processes used by faculty in this Doctor of Pharmacy program, and to identify barriers to the reduction of item-writing flaws (IWFs). Methods: Faculty, in teams of two, reviewed 7,620 MCQs from the question bank. Two follow-up reviews were then performed to further understand the reasons behind the IWFs: (1) 2,185 randomly selected MCQs were monitored for timely submission (defined as one week before the exam), and (2) MCQs deleted after exam administration were examined among a further 2,875 randomly selected MCQs. In both projects, the level of cognition was also investigated. Results: The IWF rate in the 7,620 MCQs was conservatively assessed at 29.5%, and it increased as the number of instructors in a given course increased. Late submission of questions in the studied sample of 2,185 MCQs was 22%; this worsened from year 1 to year 3 and implicated pharmacy practice joint faculty more than others. MCQ deletion in the 2,875-question random sample was 4.6% across all levels of cognition. A negative correlation was found between the number of teaching faculty and the percentage of flawless MCQs. Discussion: To reduce IWFs, all assessment processes must be monitored and appropriate interventions instituted: educating faculty, reviewing MCQs prior to processing, and engaging faculty in the improvement process. Full-time college-based dedicated faculty tend to create an environment of cooperation among all faculty, which in turn reduces weaknesses in the assessment processes.


Introduction
Student assessment is an essential part of academic teaching.
It reveals the quality and quantity of knowledge gained by examinees throughout the course of instruction and determines whether a student passed the course based on a preset standard of performance [1]. Most health science institutions use multiple-choice questions (MCQs), along with other assessment tools, as a conduit to making such an assessment. Writing such questions can be a challenging task, as each item must be crafted clearly and be flaw-free in order to maximize the accuracy of assessment and to minimize the adverse influence of MCQ anomalies on examinees' performance, especially that of high achievers [1,2]. The psychometric analysis of any MCQ through the calculated difficulty index and point-biserial correlation, in addition to faculty's expert judgment, are tools that can assist in identifying difficult or poorly crafted MCQs [2-4].
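For reference, the two psychometric indices mentioned above have standard textbook definitions; the formulas below are reproduced from the general psychometrics literature, not from this study's methods:

```latex
% Difficulty index: proportion of examinees answering the item correctly.
p = \frac{n_{\text{correct}}}{N}, \qquad 0 \le p \le 1

% Point-biserial correlation between the item score (correct/incorrect)
% and the total exam score X:
r_{pb} = \frac{\bar{X}_{1} - \bar{X}_{0}}{s_{X}} \sqrt{p\,q}, \qquad q = 1 - p
```

where \(\bar{X}_{1}\) and \(\bar{X}_{0}\) are the mean total scores of examinees who answered the item correctly and incorrectly, respectively, and \(s_{X}\) is the standard deviation of the total scores. A very low \(p\) flags a difficult item, and a low or negative \(r_{pb}\) flags an item that may be poorly crafted or mis-keyed.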
The Quality Assurance and Academic Accreditation (QAAA) unit members of the College of Pharmacy (COP) at King Saud bin Abdul-Aziz University for Health Sciences (KSAU-HS), Riyadh, expressed concerns over what they believed to be the questionable appropriateness of some MCQs in our assessment unit's question bank. This concern was expressed in a focus group discussion, where it was proposed to initiate a focused review of MCQs to improve their quality. The review included checking each MCQ's structure, grammar, spelling, phrasing, punctuation, language, and logic. The specific purposes of this project were to correct, improve, or eliminate inappropriate MCQs; to lay down ground rules to improve student performance in exams; to assist students in focusing their attention on the topic an MCQ is assessing, rather than being distracted by the level of language used or the content of poor MCQs; and, most importantly, to improve faculty's performance in writing MCQs.

Retrospective review of MCQs in the bank for appropriateness
A comprehensive review of 7,620 MCQs for 23 pharmacy courses (9 Pharmaceutical Sciences (PHBS, 27 credits) and 14 Pharmacy Practice (PHCS, 39 credits)) taught in the first three professional years was performed by the teaching faculty at COP-KSAU-HS.
Fifteen courses were not included in this review because student assessment in them is driven by rubrics rather than by MCQ exams (3 research and seminar courses, and 5 introductory and 8 advanced pharmacy practice experiences). All faculty reviewers were graduates of the North American and British educational systems.
The retrospective review process was planned for the 2014-2015 academic year. However, given the large number of MCQs reviewed by faculty teams of two, and the teams' daily academic duties, it was not concluded until January 2016. All reviewers received a standard set of instructions prior to commencing the MCQ vetting, reviewed the required college-based writing criteria for MCQs, and agreed to a uniform MCQ evaluation process in order to improve consistency in judging the questions and to eliminate bias. The criteria revolved around MCQ spelling and grammar, phrasing, structure, and content. The data were then tabulated as passed, corrected, rejected, or duplicated.

Results
A total of 7,620 MCQs were reviewed. Overall, 70.5% of the reviewed MCQs passed the review without changes. The vast majority of the remaining MCQs were corrected for language (12.4%), structure (9.1%), or both language and structure issues (2.3%) (Table 2).

Discussion
An average of 70.5% of the 7,620 reviewed MCQs for the 23 courses passed the vetting (range 32.7%-97.8%). Our data corroborate the item-writing flaw (IWF) rates reported in the literature, which appear to be common in most academic institutions and colleges, and even in various board exams. In a review of CME MCQs in radiology journals, 43% of the items violated at least one of the seven adapted writing guidelines [5]. Another project reviewed 3,509 MCQs and found 85% of the items flawed; others reported flaw rates of 50% and 56.8% [6,7], and several studies reported guideline violations ranging between 46% and 76% [8,9]. It has been reported in the literature that tests given to students without revision by an examinations committee or by peer review may contain many flawed questions [10]; this is clearly demonstrated in our findings. A study that assessed the performance of medical students in four examinations at an American medical school revealed that 46% of the reviewed MCQs contained IWFs, and that 10-15% of students initially classified as failing were reclassified as passing when the flawed MCQs were removed [2,3].

Student evaluation of courses was reviewed to explore whether IWFs had any effect. Eight Pharmacy Practice courses were among the 12 courses ranked by students below average that year. However, there was no cause-and-effect evidence implicating the IWFs in these courses.
All data pointed to the need for the college to tighten up its assessment processes, especially since ideal item-writing principles are in wide circulation and are discussed in our workshops [6,12]. The most common types of errors and flaws documented in the literature include late and hurried submission of questions before the exam, unfamiliarity and inexperience with MCQ-writing criteria, implausible distractors, improper language, and a low cognitive level of questions [6,8,10,13-15]. The challenge to faculty increases when course learning outcomes require higher cognitive levels in the crafted MCQs.
It is interesting to note that our College has, since its establishment six years ago, conducted yearly MCQ-writing workshops. Based on this project's findings, the College focused on mitigating the perceived weaknesses in MCQs through various interventions. It also implemented a strategic plan to increase college-based clinical faculty educated in North America through the state-supported scholarship program to fill the shortage gap [16]. The two initiatives were expected to improve our assessment processes and outcomes over time.
Our objective in expanding this project was to attempt to identify a cause and effect that would then facilitate an efficient intervention. For example, it was quite notable that three of the six Therapeutics courses, the backbone of the Pharmacy Practice program, fell below average in IWFs; two had up to 80% level-1 (recall) MCQs, contrary to the expectations for clinical courses. Of these, one had late submission of MCQs up to 21% of the time. Students evaluated these three courses at 3.24-3.62 out of 5, an unfavorable 65%-72%.
It is conceivable that the poorly crafted MCQs in these courses, their low level of cognition resulting from hurried submission, and perhaps the manner in which the lectures were delivered reflected negatively on the students' evaluations. In an effort to see whether this also affected student absenteeism, we discovered, contrary to our expectations, that these courses had below-average absenteeism for that particular year (5%-9%), while the overall average absence rate for that year was 10% (range 1%-28%).
Pharmaceutical Sciences courses fared better, presumably because fewer instructors were involved in crafting their MCQs, a factor associated with issues related to IWFs [17]. It is worth noting that the sequential model of teaching is used in our college, where some courses are taught by multiple instructors but only one instructor is present at a time. This differs substantially from other teaching models such as team teaching, where all instructors are present for all classes and thus share the lead role, and hybrid teaching, where all instructors are present at some times and one instructor at others. A previous survey study on the benefits and drawbacks of using multiple instructors at a large research university in Canada suggested that advantages are maximized and disadvantages minimized either in courses with two or more instructors interacting and collaborating in class, or when special care is taken with coordination and collaboration if the course is taught sequentially [18]. Our findings demonstrated that an increased number of instructors teaching any given course was also associated with a higher rate of IWFs (Figures 1-3).
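The reported association is a negative Pearson correlation between instructor count and the percentage of flawless MCQs. A minimal sketch of how such a check can be computed is shown below; the course-level numbers are invented for illustration and are not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: instructors per course vs. percent of flawless MCQs.
instructors = [1, 2, 3, 5, 8]
pct_flawless = [95.0, 88.0, 75.0, 60.0, 40.0]

r = pearson_r(instructors, pct_flawless)
print(round(r, 2))  # → -0.99, i.e. a strong negative association
```

A strongly negative `r` on real course data would quantify the trend described above; the study itself reports only the direction of the correlation, not its magnitude.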

Conclusion
It appears from the outcome of this project that, for any MCQ assessment in clinical programs to succeed, pharmacy faculty, especially joint practice faculty, need to go through the MCQ-writing skills workshops available in almost all large institutions. It is crucial to secure from all faculty a commitment to adhere to the established MCQ-writing guidelines. Firm enforcement of psychometric analysis standards is also helpful in guiding the assessment process, but is not more important than the final judgment of the subject-matter expert faculty. Course coordinators and support faculty need to review all submitted MCQs prior to their administration to students, and all violations must be discussed with the authoring faculty. Deleted questions must also be discussed with the students to solidify their knowledge base in relation to the disputed MCQs.
It is highly advisable that each course coordinator meet with all of the course's teaching faculty in advance of course commencement to discuss approach, teaching effectiveness, and assessment guidelines, including compliance with the cognitive level expected of MCQs per the course learning outcomes. Finally, the most qualified teaching faculty in both departments are the specialists in their fields, whether in pharmaceutical sciences or pharmacy practice. Thus, assigning non-specialists or numerous faculty to teach any given course, regardless of how easy the course may appear, and changing teaching faculty assignments frequently appear to be counterproductive to student comprehension and to the assessment program.
While we may have documented some shortcomings in the assessment program attributable to having joint faculty solely teaching all clinical courses, we strongly believe that the recruitment of full-time pharmacy practice faculty creates a more cooperative culture between the two types of faculty, leading to fuller compliance with MCQ-writing guidelines and to better teaching effectiveness. The preliminary appointment of two such full-time faculty in our college, along with our improvement processes, has supported our expectations regarding some of the issues that needed resolution.