An evaluation of the assessment tool used for extensive mini-dissertations in the Master’s Degree in Family Medicine, University of the Free State
Abstract
Background: Family Medicine became a speciality in South Africa in 2007. Postgraduate studies in Family Medicine changed from a part-time Master of Family Medicine (MFamMed) to a full-time Master of Medicine (Family Medicine) [MMed(Fam)] degree, with changes in the curriculum and assessment criteria. The overall goal of this study was to evaluate the current assessment tool for extensive mini-dissertations in the postgraduate programme for Family Medicine at the University of the Free State and, if necessary, to produce a valid, reliable and user-friendly assessment tool.
Method: An action research approach using mixed methods was followed. In Phase 1, marks given by 15 assessors for four mini-dissertations using the current assessment tool were analysed quantitatively. In Phase 2, the regulations of the assessment bodies and the quantitative results of Phase 1 were discussed by the assessors during a focus group interview, and the data were analysed qualitatively. An adapted, improved assessment tool was developed in Phase 3 and re-evaluated in Phase 4.
Results: The current assessment tool complied with the regulations of the assessment bodies. The scores allocated to specific categories varied, with a median coefficient of variation of more than 15% in four of the 12 assessment categories. During the focus group interview, reasons for this variation were identified and the assessment tool was adapted accordingly. During reassessment of the tool, individual assessors were identified as the reason for poor reliability.
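The reliability measure used here, the coefficient of variation (CV), expresses the spread of the assessors' marks relative to their mean. A minimal sketch of the calculation, using hypothetical marks (the actual assessor scores are not reproduced in this abstract):

```python
import statistics

def coefficient_of_variation(scores):
    """CV as a percentage: sample standard deviation divided by the mean."""
    return statistics.stdev(scores) / statistics.mean(scores) * 100

# Hypothetical marks awarded by five assessors for one assessment
# category of a single mini-dissertation.
marks = [60, 70, 55, 65, 75]

cv = coefficient_of_variation(marks)
print(round(cv, 1))  # prints 12.2

# Under the study's criterion, a category with CV above 15% would be
# flagged as showing unacceptable variation between assessors.
print(cv > 15)  # prints False
```

A category's median CV across the four mini-dissertations could then be compared against the 15% threshold to identify unreliable categories.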
Conclusion: The current assessment tool was found to be valid, but was not reliable for all assessment categories. The adapted assessment tool addressed these areas, but certain assessors' lack of training and experience in the assessment of extensive mini-dissertations was identified as the main reason for unreliable assessment.