For quite a while now, students have been enduring something on a regular basis … Exams! The purpose of these exams is to separate the high performers from the rest. Economic theory tells us that exams, and performance in them, are a form of signaling … Good scores in school exams signal to colleges that the student is bright, and good scores in college exams signal to employers that this student will make a good employee. However, and this school of thought is relatively recent, exams/tests/assessments should be used more as learning aids than as performance evaluation.
While the principle has been around for a while, the technology for doing this wasn’t readily available until recently. I remember, as a school student, getting the answer sheets back after the exams, going through them, and trying to analyze which areas of the subject I needed to focus on more, based on where I got answers wrong. Well, in theory at least … Practice, ah … A different thing! I mean, come on … How many schoolboys have you seen doing this!
Now, imagine doing this kind of analysis for an entire class of 30 or 40 kids. The task is humongous. Though this sort of thing would be quite helpful, since it would enable learners rather than simply assess them, it wasn’t feasible when we lacked the means to do it. Today, however, we do have the means, and doing this can actually be quite simple.
First step: develop an assessment taxonomy. Simply put, this is a way of classifying questions and assigning them to specific topics within a subject. For example, on a mathematics test, one question could be tagged to circles, another to linear equations, and so on. This means we can identify which area of the subject each question belongs to. Second step: develop topical tutorials. At first sight, this might seem a humongous task, but it’s not necessarily so. Short, focused tutorials can today be developed relatively easily. Besides, this would in most scenarios be a one-time effort, since this sort of modularity lets us reuse these tutorials wherever they are required, or at least something that wouldn’t change too frequently, depending on the topic.
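The taxonomy idea above can be sketched in a few lines of code. This is only an illustration; the topic names and question identifiers here are hypothetical, not part of any particular system.

```python
# A minimal sketch of an assessment taxonomy: each topic lists the
# questions tagged to it (all names here are hypothetical examples).
taxonomy = {
    "circles": ["Q1", "Q3"],
    "linear_equations": ["Q2", "Q4"],
}

# Invert the mapping so we can look up the topic for any question.
question_topic = {
    question: topic
    for topic, questions in taxonomy.items()
    for question in questions
}

print(question_topic["Q2"])  # → linear_equations
```

With the inverted mapping in hand, grading a paper also tells us, question by question, which topic each mark (or lost mark) belongs to.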
With these foundation blocks in place, it’s a simple step to map the tutorials to the same taxonomy defined for the assessment questions. Here we have a system which, based on learner performance, can get to a granular view of performance and hence a granular view of learner understanding, and which can then recommend learning titles to help the learner work on the areas where they need it most. In this way, we could have an assessment tool which also enables learners, with applications ranging from school education to corporate training, and so on.