Doctoral Degrees (Office of the Dean: Health Sciences)
Item — Open Access
Psychometric analysis as a quality assurance system in OSCEs in a resource-limited institution (University of the Free State, 2016-07) Ogah, A. O.; Jama, M. P.; Brits, H.

English: A comprehensive study was carried out to develop a guideline for psychometric analysis and to recommend its incorporation into the quality assurance examination policy of medical schools. The study was motivated by a gap in the knowledge and skills of psychometric methods at KIU-Dar and the UFS, but also in Sub-Saharan Africa and across the continent. Psychometrics involves statistics and mathematics, which many practitioners find daunting. To bridge the gap, the researcher compiled a simple, user-friendly guideline for psychometric analysis that requires only easily accessible tools such as SPSS and Microsoft Excel. Moreover, a new psychometric programme for the rapid analysis of OSCE data, developed and published by Tavakol and Doody earlier in 2016, was introduced in this study, with the recommendation that medical schools purchase it for their medical education units to quality-assure their examinations. By developing the strategy, the identified gap was bridged: the guideline can aid in training and encouraging academic staff to apply psychometric tools skilfully to their subject examinations, thereby improving both assessment and training. The study was carried out in a resource-limited medical school, the Kampala International University, Dar es Salaam campus, to objectively measure the quality of the OSCEs in order to improve and harmonise the quality of the assessments, the clinical training, the medical graduates and, consequently, patient care. The exit OSCE was conducted in four clinical departments (OBGYN, Paediatrics, Medicine and Surgery) in July 2015. The examiners were a mixture of consultants, specialists and medical officers.
The research methods comprised literature reviews and observations using checklists during the OSCEs. The examiners collected the scores by means of checklists. A total of 27 graduating clinical students were assessed by 20 examiners in 82 OSCE stations across all the departments. The key findings were as follows. In three of the departments there were too few examiners for too many OSCE stations, while OBGYN had too few stations. Of note is the very limited human resource in OBGYN, where all the internal examiners were part-time staff of the university. The examiner-to-station ratio was 1:4. Each clinical station was manned by a single examiner, and the remaining stations (50-75%) were in written format. The examiners were not very conversant with the OSCE format and were unaware of global scoring, standard setting and psychometric analysis. The medical school currently uses a fixed university pass mark of 50%, and the grading system is based on raw scores. In addition, there was no blueprint and no standardised patient for the OSCE. All the patients used were real, which prevented the researcher from obtaining their ratings of the students' performances, as this was not permitted by the hospital administration. The researcher also could not obtain the examiners' global scores: the Ministry of Health checklist used by the examiners captured neither global scores nor standardised patients' ratings. Since an ideal examination setting was practically unattainable, the raw OSCE scores were likely to be abnormally distributed, possibly with outliers, as was indeed found in this study. The findings inform recommendations for perceiving students' performances accurately and for making valid comparisons with other assessments. The study suggests that the raw scores should be converted into Z-scores and that the university grading system should be built on Z-scores.
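The recommended Z-score conversion can be sketched as follows. This is a minimal illustration, not the study's procedure or data: the student IDs and raw marks are hypothetical, and each station's scores would be standardised against that station's own cohort mean and standard deviation.

```python
# Hypothetical sketch: converting raw OSCE station marks to Z-scores.
# Student IDs and marks are illustrative, not the study's data.
import statistics

raw_scores = {  # student -> raw mark (%) on one OSCE station
    "S01": 62, "S02": 48, "S03": 71, "S04": 55, "S05": 44,
}

mean = statistics.mean(raw_scores.values())
stdev = statistics.stdev(raw_scores.values())  # sample standard deviation

# A Z-score expresses each mark in SD units from the cohort mean, so
# grades no longer depend on the raw difficulty of a particular station.
z_scores = {s: round((x - mean) / stdev, 2) for s, x in raw_scores.items()}

for student, z in sorted(z_scores.items()):
    print(student, z)
```

Grading on Z-scores rather than raw marks is what makes letter grades comparable across stations of different difficulty, as the analysis in the next paragraph illustrates.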
As the analysis also showed, letter grades based on Z-scores corresponded to different raw-score ranges in each station and subject. Moreover, to make an accurate pass-fail judgement on students' performances, it is better to use a 'gold standard' method of setting the pass mark, especially where the examination setting is not ideal; the borderline regression method is the recommended method. However, where global scores are unavailable, the borderline method, as demonstrated in this study, can be used to determine the pass mark. The variability of the OSCE scores was generally high, which is undesirable for a criterion-referenced test: in an ideal OSCE, variability should come only from the differing abilities of the students. In this study, however, most of the variability in the students' scores was contributed by the examiners and their interactions with the students, as shown by the ANOVA and the G-studies. The overall G-coefficient was 0.75, which is comparable with studies in developed countries, but the G-coefficient in each individual subject was lower, and weakest in OBGYN. The station analysis showed that the Internal Medicine OSCE performed best; the other subjects, especially OBGYN, had poor discriminating power and difficulty indices for a criterion-referenced assessment. The internal consistency and stability of the OSCEs in this resource-constrained institution, as measured by Cronbach's alpha, were low (below 0.25). Several 'hawk' (overly severe) and 'dove' (overly lenient) examiners were identified, and recommendations were made in this regard. The sound research approach and methodology ensured quality, reliability and validity, and the completed research can form the basis for further research.
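The internal-consistency figure reported above (Cronbach's alpha below 0.25) comes from the standard alpha formula: alpha = k/(k-1) × (1 − Σ item variances / total-score variance). A minimal sketch, using an invented score matrix rather than the study's data:

```python
# Hypothetical sketch: Cronbach's alpha over a set of OSCE stations.
# The score matrix is illustrative, not the study's data.
import statistics

# rows = students, columns = stations (raw marks)
scores = [
    [6, 7, 5, 8],
    [5, 6, 4, 7],
    [8, 9, 7, 9],
    [4, 5, 3, 6],
    [7, 8, 6, 8],
]

k = len(scores[0])                    # number of stations (items)
station_cols = list(zip(*scores))     # transpose: one tuple per station
item_vars = [statistics.variance(col) for col in station_cols]
totals = [sum(row) for row in scores] # each student's total score
total_var = statistics.variance(totals)

# Cronbach's alpha: internal consistency of the station set.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

Alpha near 1 indicates that the stations rank students consistently; the low values found in this study (below 0.25) would correspond to station scores that barely co-vary across students.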