In today's L&D landscape, the way businesses determine who should participate in what training is not far removed from a conjuring act. More often than not, the result is a mixed bag, and many of the L&D professionals I speak to tell me that their Level 1 scores (based on the Kirkpatrick model) tend towards the lower end of the spectrum.
There are typically two ways a business determines training participation. One is mandated training (usually related to promotion/growth), while the other is nomination by the business manager. Both of these involve picking from a 'menu' of available programs, and neither really takes into consideration the actual learning needs of the individual.
This is where the idea of predictive learning comes in. The idea here is simple … today, with the technology available to us, especially in the Big Data/Analytics domains, data about what has worked in the past, and in what context, is available to the organization at large scale. This data comes from training, HR, and operations/business systems. This rich data can be leveraged to determine the training solution most likely to work in a particular employee's context. As with Big Data approaches generally, this needn't establish a causal connection between the training and the outcome; it need only surface the linkages as they have been seen in the past.
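To make the idea concrete, here is a minimal sketch. The record fields (role, band, program, outcome score) and all the data are hypothetical assumptions for illustration; the point is simply that the recommendation is a lookup over past linkages, with no causal claim made.

```python
from collections import defaultdict

# Hypothetical historical records: (role, band, program, outcome_score),
# combining training, HR, and operations data as described above.
history = [
    ("developer", "B1", "Advanced Python",     0.82),
    ("developer", "B1", "Design Patterns",     0.64),
    ("developer", "B2", "Architecture Basics", 0.71),
    ("manager",   "B3", "Coaching Skills",     0.88),
    ("developer", "B1", "Advanced Python",     0.78),
]

def recommend(role, band):
    """Recommend the program with the best average past outcome
    for employees in a similar context (correlation, not causation)."""
    scores = defaultdict(list)
    for r, b, program, outcome in history:
        if (r, b) == (role, band):
            scores[program].append(outcome)
    if not scores:
        return None  # no comparable history: fall back to the usual 'menu'
    return max(scores, key=lambda p: sum(scores[p]) / len(scores[p]))

print(recommend("developer", "B1"))  # → Advanced Python
```

A real system would of course use far richer context (project history, assessments, aspirations) and a proper model, but the shape of the logic is the same.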
An important aspect of this picture is that it shifts the focus from training and from L&D to the individual learner, making the entire process people-centric.
One concern with this, though, is that the resulting requirements could be too granular, and too tailored to individual needs, to be viable from a delivery perspective. More about this later …
There is, for obvious reasons, quite a bit of interest in the subject of subjects, or, rather, in the subject of examinations on those subjects. The newspapers recently reported that the Central Advisory Board of Education has recommended re-introduction of the class X exams. Another subject the article talks about is the policy of student detention based on exam results.
This brings us to a basic question … what is the purpose of exams? While there is definitely a need within the education system to assess achievement of learning objectives, the problem begins when exams are seen as a mechanism to weed out students who may not have met those learning objectives. If the intent is to ensure that students learn the things they are supposed to, would studying the same thing again help a student understand better than the first time? This is akin to repeating something in the hope that, just by repeating it, the other person will understand it. If the student didn't understand it the first time, isn't it more than likely that he won't understand it the next time either?
Instead of having the student go through the entire year again, it would be more helpful if the focus were on the topics the student had difficulty understanding. A quick look at the answer sheet for the exam will reveal these. This, though, won't scale without technology to support it, and today we have the technology to move assessment in this direction.
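The "quick look at the answer sheet" can be sketched as a simple aggregation. The data below is a hypothetical answer sheet where each question has been tagged with its topic; the function flags topics where the student's overall score falls below a threshold, which is exactly the remediation focus argued for above.

```python
# Hypothetical answer-sheet data: each question tagged with its topic,
# marks obtained, and maximum marks.
answers = [
    {"topic": "Fractions", "obtained": 2, "max": 10},
    {"topic": "Fractions", "obtained": 3, "max": 10},
    {"topic": "Geometry",  "obtained": 9, "max": 10},
    {"topic": "Algebra",   "obtained": 7, "max": 10},
]

def weak_topics(answers, threshold=0.5):
    """Return topics where the student's aggregate score is below threshold."""
    totals = {}
    for a in answers:
        obtained, maximum = totals.get(a["topic"], (0, 0))
        totals[a["topic"]] = (obtained + a["obtained"], maximum + a["max"])
    return sorted(t for t, (o, m) in totals.items() if o / m < threshold)

print(weak_topics(answers))  # → ['Fractions']
```

Tagging every question with a topic is the part that needs tooling at scale; the analysis itself is trivial once that tagging exists.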
Another aspect is what we are testing. Are we testing memory of the subject, or are we testing understanding? If we are trying to assess achievement of learning objectives, we need to focus on understanding. This means the pattern of testing needs to shift from simple recitation of concepts towards their application, and we, as a nation, probably need an examination/assessment policy to complement the education and learning frameworks in the country.
We are told that marks (or grades) and qualifications are signals which serve to tell prospective employers about the worthiness of candidates for jobs … this, at least, as per classical economic theory. However, reading this article makes one think … what are marks measuring in the contemporary examination system in India?
There are a few possible things one could deduce from here:
- Children graduating schools are made up of different stuff, and are extremely bright.
- The University folks have lost it.
- The exam system is not exactly measuring learning.
Back when we were in school (that was another millennium, remember!), getting 80% in English meant you were really, really good at the subject. Mere mortals managed something in the low to mid-70s, with some folks managing the 60s. Today, we are seeing a cut-off of 100% for Computer Science courses. If this is based on PCM (Physics, Chemistry, Mathematics), then one can assume that the kids are graduating school with exceptional understanding of the subjects. However, by the time these kids graduate college, we find that corporates struggle to meet their hiring numbers. On the other hand, scoring in the 90s in English today should mean the kids have an exceptional grasp of the language, but that isn't borne out by observation.
Personally, I believe that the exam system is barking up the wrong tree (for biologists), or climbing up the wrong pole (for the rest of us). Marks don't seem to be measuring learning, though I don't know what they are measuring. To get a real understanding, exams need to test the kids not on straight application of formulae, but on questions two or three steps removed from the data. And this isn't especially difficult to do.
Whether you are a Talent Management practitioner or a Learning & Development practitioner, you will have faced the question of how the two should align, and of how one can enable the other. To answer this, one must look at what L&D initiatives ultimately serve: higher people performance. If we take this as the premise, then it stands to reason that L&D must be strongly aligned with TM strategy.
People performance is defined by the performance management framework the organization has in place. Broadly, the levels of this framework (in a theoretical scenario; many organizations differ widely from this) can be seen here, along with the levels and ways in which L&D can align with, and enable, this TM strategy.
As you can see, the inputs from L&D initiatives at different levels need to be aligned to the requirements of that level, and the learning objectives which need to be met at that level.
At the level of KCAs, where the need is to build behavioural capability, the training requirement is primarily for soft skills, the details of which are typically based on a combination of the employee's role and level in the hierarchy (commonly called band).
At the employee-goals level, the requirements are framed either in terms of organization needs from the employee, or in terms of employee aspirations, and these are primarily met in the form of technical training, or training designed to meet the needs of succession or progression. From the succession/progression perspective, organizations usually have programs aimed at equipping people for specific roles, either at the same level or at a higher level, and these would typically form part of the training needs at this level of the framework.
At the project/operational level, the training needs are primarily project-focused, building a capability inventory aligned with the requirements of the project or operations; this forms a large part of the training requirements, mostly technical or functional.