Office of the Dean: Health Sciences
Browsing Office of the Dean: Health Sciences by Author "Brits, Hanneke"
Item (Open Access): Best practices for quality assessment in the clinical phase of undergraduate medical training (University of the Free State, 2020-08)
Authors: Brits, Hanneke; Bezuidenhout, J.; Van der Merwe, L. J.

Abstract:
Medical universities have a responsibility to ensure quality assessment of clinical competence when they certify that they produce competent medical practitioners who can integrate knowledge, skills and attitudes. The assessment of clinical competence is complex and can be characterised by tension between validity, reliability and fairness, owing to assessment at the "does" level. The problem addressed was that assessment in the clinical phase of the undergraduate medical programme (MBChB) at the University of the Free State had not been reviewed critically or benchmarked against local and international standards. This thesis set out to benchmark clinical assessment practices against an assessment framework and then to propose an action plan for bridging the gap between theory and practice in the assessment of clinical competence.

A pragmatic approach was followed to address the practical problem of uncertainty regarding the quality of assessment. From a theoretical perspective, an abductive approach was used to achieve inference. An explanatory sequential mixed-methods research design was used. During triangulation, alignment of, and gaps between, theory and practice were identified and solutions recommended. A proposal with an action plan was drafted to enhance the quality of clinical assessment in the undergraduate medical programme.

Firstly, an assessment framework for benchmarking clinical assessment in undergraduate medical training was compiled. A rapid literature review of local, national and international official regulations and policies, supported by best-evidence practices, was used to compile this assessment framework.
In this framework, the three components of quality assessment, namely accreditation, assessment and quality assurance, were addressed.

In the second part of the study, current assessment practices were reviewed using data collected from three sources, namely students, lecturers and student marks, to ensure that different aspects were included in the review. A questionnaire with open- and closed-ended questions was completed by clinical students in the undergraduate medical programme to obtain the students' perspectives on assessment. More than half of the students were of the opinion that current assessments were not fair, and more than 90% complained about the lack of formal feedback after assessments. Secondly, the teaching and learning coordinators and module leaders of all the clinical departments involved in undergraduate medical training completed questionnaires on the assessment methods used in their departments. They also made recommendations on ways to improve current assessment practices. Multiple-choice questions and objective structured clinical examinations were standard practice in most disciplines. Workplace-based assessment (WBA) was not well established and was used in only 30.1% of disciplines. The overemphasis on summative assessment was identified as an area for improvement. Thirdly, current assessment practices were evaluated for reliability. The decision reliability between end-of-block assessment and summative assessment was excellent, with a G-index of agreement of between 0.86 and 0.98. The use of unobserved long cases during summative assessment was shown to be unreliable and of questionable value.

During a formal focus group interview, answers were sought on how to bridge the gap between the theoretical principles of quality assessment and current assessment practices. Finally, the researcher compiled a proposal with an action plan on how to enhance quality assessment in the clinical phase of the undergraduate medical programme.
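The decision reliability reported above is quantified with the G-index of agreement, which for two sets of pass/fail decisions reduces to G = 2p - 1, where p is the proportion of decisions on which the two assessments agree. The sketch below is only an illustration of how such an index can be computed; the function name and the example data are invented for this note and are not taken from the study.

```python
# Illustrative sketch of the G-index of agreement (G = 2p - 1).
# All names and data here are hypothetical, not from the thesis.

def g_index(decisions_a, decisions_b):
    """Return the G-index of agreement between two equal-length decision lists."""
    if len(decisions_a) != len(decisions_b):
        raise ValueError("decision lists must be the same length")
    agreements = sum(a == b for a, b in zip(decisions_a, decisions_b))
    p = agreements / len(decisions_a)  # proportion of matching decisions
    return 2 * p - 1

# Invented example: end-of-block vs summative pass/fail decisions for 10 students,
# agreeing on 9 of 10 cases, so p = 0.9 and G = 0.8.
end_of_block = ["pass"] * 9 + ["fail"]
summative = ["pass"] * 10
print(round(g_index(end_of_block, summative), 2))  # 0.8
```

On this scale, perfect agreement gives G = 1.0, so the reported range of 0.86 to 0.98 corresponds to agreement on roughly 93% to 99% of decisions.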
Most of the practices that compromise the quality of assessment can be addressed at an operational level and will not be costly to implement. These include the training of assessors, the implementation of WBA, effective feedback to students, and the blueprinting and moderation of all assessments. Assessor training will improve the quality of assessments and will also contribute to the professional development of assessors. Continuous WBA will ultimately improve validity and reliability, which will benefit all stakeholders.

Item (Open Access): An evaluation of the assessment tool used for extensive mini-dissertations in the Master's degree in Family Medicine at the School of Medicine, University of the Free State (University of the Free State, 2013-08-23)
Authors: Brits, Hanneke; Bezuidenhout, J.; Steinberg, W. J.

Abstract:
Family Medicine became a speciality in South Africa in 2007. Postgraduate studies in Family Medicine changed from a part-time MFamMed to a full-time MMed(Fam) degree, with changes in curriculum and assessment criteria. The overall goal of this study was to evaluate the current assessment tool for extensive mini-dissertations in the postgraduate programme for Family Medicine at the UFS and, if necessary, to produce a valid, reliable and user-friendly assessment tool. An action research approach was used in this study, employing mixed methods. In the first phase, the current assessment tool was evaluated and the data analysed quantitatively. In phase two, the quantitative results of phase one were discussed during a focus group interview and the data were analysed qualitatively. Phase three was the production of a new, improved assessment tool; the evaluation of the new tool did not form part of this study. In phase one, 11 internal and four external assessors evaluated four extensive mini-dissertations with the current assessment tool.
In phase two, the internal assessors took part in a focus group interview in which the current tool was evaluated for validity against the regulations of the assessment bodies, and reasons were sought for the differences in marks allocated to specific assessment categories (reliability). The current assessment tool complied with all the regulations of the assessment bodies. In four of the 12 assessment categories, the median scores varied by more than 15%. During the focus group interview, reasons for this were identified and the assessment tool was adapted accordingly. A lack of training and experience in the assessment of extensive mini-dissertations was also identified as a contributing factor. The existing assessment tool, currently still in use, is valid but not reliable for all assessment categories. The new assessment tool addresses these areas and will be implemented after the training of assessors in 2012.