Document Type : Original Article
Authors
Affiliations:
1. Department of Infectious Diseases, School of Medicine, Infectious Diseases Research Center, Birjand University of Medical Sciences, Birjand, Iran
2. Department of Community Medicine, School of Medicine, Birjand University of Medical Sciences, Birjand, Iran
3. Department of Oral and Maxillofacial Medicine, School of Dentistry, Infectious Diseases Research Center, Birjand University of Medical Sciences, Birjand, Iran
4. e‑Learning Center, Birjand University of Medical Sciences, Birjand, Iran
5. Department of Epidemiology and Biostatistics, School of Health, Social Determinants of Health Research Center, Birjand University of Medical Sciences, Birjand, Iran
Abstract
BACKGROUND: Education and assessment changed during the COVID‑19 pandemic: online courses
replaced face‑to‑face classes to maintain social distancing. The quality of the online
assessments conducted during the pandemic is therefore an important subject to address. In this
study, the quality of online assessments held in two consecutive semesters was investigated.
MATERIALS AND METHODS: One thousand two hundred and sixty‑nine multiple‑choice online
examinations held in the first (n = 535) and second (n = 734) semesters of 2020–2021 at Birjand
University of Medical Sciences were examined. The mean, standard deviation, number of questions,
skewness, kurtosis, difficulty index, and discrimination index of the tests were calculated. Data
mining with the k‑means clustering approach was applied to classify the tests.
RESULTS: The mean percentage of correct answers across all tests was 69.97 ± 19.16, and the mean
number of questions per test was 34.48 ± 18.75. Between the two semesters, there was no significant
difference in the difficulty of the examinations (P = 0.84). However, there were significant
differences in the discrimination index, skewness, and kurtosis of the tests (P < 0.001). Moreover,
according to the clustering analysis, in the first semester 43% of the tests were very hard, 16%
hard, and 7% moderate; in the second semester, 43% were hard, 16% moderate, and 41% easy.
CONCLUSION: Calculating difficulty and discrimination indices is not sufficient to evaluate test
quality; many other factors can affect it. Furthermore, the experience of the first semester
changed the characteristics of the second‑semester examinations. To enhance the quality of online
tests, establishing appropriate rules for administering examinations and using questions at higher
taxonomy levels are recommended.
Keywords