There is, for obvious reasons, quite a bit of interest in the subject of subjects, or, rather, in the subject of examinations on those subjects. The newspapers recently reported that the Central Advisory Board of Education has recommended re-introduction of the class X exams. The article also discusses the policy of detaining students based on their exam results.
This brings us to a basic question … what is the purpose of exams? While there is definitely a need within the education system to assess achievement of learning objectives, the problem begins when exams are seen as a mechanism to weed out students who don't meet those objectives. If the intent is to ensure that students learn the things they are supposed to, would studying the same thing again help a student understand it better than the first time? This is akin to repeating something in the hope that, just by repeating it, the other person will understand it. If the student didn't understand it the first time, isn't it more than likely that he won't understand it the next time either?
Instead of having the student go through the entire year again, it would be more helpful to focus on the topics the student had difficulty understanding. A quick look at the answer sheet for the exam will tell you which topics those are. This, though, won't scale without technology to support it, and today we have the technology to move assessment in this direction.
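As a minimal sketch of the idea (the topics and answer data here are purely illustrative), tallying wrong answers by topic is enough to surface where a student is struggling:

```python
from collections import Counter

# Hypothetical answer-sheet data: (topic, answered correctly?) for one student.
answers = [
    ("fractions", True),
    ("fractions", False),
    ("geometry", False),
    ("geometry", False),
    ("algebra", True),
]

# Count wrong answers per topic to find where the student struggles.
wrong_by_topic = Counter(topic for topic, correct in answers if not correct)

# Topics with the most wrong answers come first: these need the most attention.
focus_topics = [topic for topic, _ in wrong_by_topic.most_common()]
print(focus_topics)  # prints ['geometry', 'fractions']
```

With answers captured digitally, the same few lines work just as well for a whole class as for one student.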
Another aspect is what exactly we are testing. Are we testing memory of the subject, or are we testing understanding? If we are trying to assess achievement of learning objectives, we need to focus on understanding. This means the pattern of testing needs to shift from simple recitation of concepts towards their application, and we, as a nation, probably need an examination/assessment policy to complement the education and learning frameworks in the country.
We are told that marks (or grades) and qualifications are signals which tell prospective employers about the worthiness of candidates for jobs … this as per classical economic theory. However, reading this article makes one wonder … what are marks measuring in the contemporary examination system in India?
There are a few possible things one could deduce from this:
- Children graduating from school today are made of different stuff, and are extremely bright.
- The University folks have lost it.
- The exam system is not exactly measuring learning.
Back when we were in school (that was another millennium, remember!) getting 80% in English meant you were really, really good at the subject. Mere mortals managed anything in the low to mid-70s, with some folks managing the 60s. Today, we are seeing a cut-off of 100% for Computer Science courses. If this is based on PCM (Physics, Chemistry, Mathematics), then one can assume that the kids are graduating school with an exceptional understanding of these subjects. However, by the time these kids graduate college, we find that corporates struggle to meet their hiring numbers. On the other hand, scoring in the 90s in English today should mean the kids have an exceptional grasp of the language, but that isn't borne out by observation.
Personally, I believe that the exam system is barking up the wrong tree (for biologists), or climbing the wrong pole (for the rest of us). Marks don't seem to be measuring learning, though I don't know what they are measuring. To get a real understanding, exams need to test the kids not on straight application of formulae, but with questions two or three steps removed from the data. And this isn't particularly difficult to do.
For quite a while now, students have been enduring something on a regular basis … Exams! The purpose of these exams is to separate the high-performers from the rest. Economic theory tells us that performance in these exams is a form of signaling … good scores in school exams signal to colleges that the student is bright, and good scores in college exams signal to employers that the student will make a good employee. However, there is a school of thought, relatively recent, which holds that exams/tests/assessments should be used more as learning aids than as performance evaluation.
While the principle has been around for a while, the technology for doing this wasn't readily available until recently. I remember, as a school student, getting the answer sheets back after the exams, going through them, and trying to analyze which areas of the subject I needed to focus on more, based on where I had got answers wrong. Well, in theory at least … Practice, ah … A different thing! I mean, come on … How many schoolboys have you seen doing this?
Now, imagine doing this kind of analysis for an entire class of 30 or 40 kids. The task is humongous. Though this sort of thing would be quite helpful, since it would enable learners rather than simply assess them, it wasn't feasible when we didn't have the means to do it. Today, however, we do, and doing it can actually be quite simple.
The first step is to develop an assessment taxonomy. Simply put, this is a way of classifying questions and assigning them to specific topics within a subject. For example, on a mathematics test, one question could be tagged to circles, another to linear equations, and so on. This means we can identify which area of the subject each question belongs to. The second step is to develop topical tutorials. At first sight, this might seem a humongous task, but it's not necessarily so. Short, focused tutorials can today be developed relatively easily, and besides, this would in most scenarios be a one-time effort (this sort of modularity would let us re-use the tutorials in whichever scenario they are required), or at least something which wouldn't change too frequently, depending on the topic.
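To make the first step concrete, here is a minimal sketch of such a taxonomy as a simple lookup table (the question IDs and topics are made up for illustration):

```python
# A toy assessment taxonomy: each question ID maps to a topic
# within the subject (IDs and topics are illustrative).
taxonomy = {
    "Q1": "circles",
    "Q2": "linear equations",
    "Q3": "circles",
    "Q4": "quadratic equations",
}

def topic_of(question_id: str) -> str:
    """Identify which area of the subject a question belongs to."""
    return taxonomy[question_id]

print(topic_of("Q3"))  # prints circles
```

In practice the taxonomy could live in a database or a tagging system, but the essence is exactly this: one authoritative mapping from questions to topics.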
With these foundation blocks in place, it's a simple step to map the tutorials to the same taxonomy that is defined for the assessment questions. We then have a system which, based on learner performance, can get to a granular picture of learner understanding and, based on that, recommend learning titles that let the learner work on the areas where they need to most. In this way, we could have an assessment tool which at the same time also enables learners, with applications ranging from school education to training, and so on.
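The whole loop can be sketched in a few lines, assuming questions and tutorials share one taxonomy (all names and data below are illustrative, not a real product's API):

```python
# Questions and tutorials tagged with the same topic taxonomy (toy data).
question_topics = {"Q1": "circles", "Q2": "linear equations", "Q3": "circles"}
tutorials = {
    "circles": "Tutorial: Circles",
    "linear equations": "Tutorial: Linear Equations",
}

def recommend(answers: dict) -> list:
    """Given {question_id: answered_correctly}, suggest tutorials
    for the topics the learner got wrong."""
    weak_topics = {question_topics[q] for q, ok in answers.items() if not ok}
    return sorted(tutorials[t] for t in weak_topics)

# Learner got both circles questions wrong, the linear-equations one right.
print(recommend({"Q1": False, "Q2": True, "Q3": False}))
# prints ['Tutorial: Circles']
```

The design choice worth noting is that assessment and content never reference each other directly; both reference the taxonomy, so tutorials can be added or replaced without touching the question bank.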