In today's L&D landscape, the way businesses determine who should participate in what training isn't far from a conjuring act. More often than not, the result is a mixed bag, and many of the L&D professionals I speak to tell me that their Level 1 scores (based on the Kirkpatrick model) tend towards the lower end of the spectrum.
There are typically two ways a business determines training participation. One is mandated training (usually tied to promotion or growth); the other is nomination by the business manager. Both involve picking from a ‘menu’ of available programs, and neither really takes the actual learning needs of the individual into consideration.
This is where the idea of predictive learning comes in. The idea is simple … with the technology available to us today, especially in the Big Data/analytics domain, data about what has worked in the past, and in what context, is available to the organization at scale, drawn from training, HR, and operations/business systems. This rich data can be leveraged to determine which training solution is most likely to work in a particular employee's context. As with many Big Data applications, this needn't establish a causal connection between training and outcome; it can simply rely on the linkages observed in the past.
An important aspect of this picture is that it shifts the focus from training programs, and from L&D, to the individual learner, making the entire process people-centric.
One concern, though, is that the resulting requirements could be too granular, and too tailored to individual needs, to be viable from a delivery perspective. More about this later …
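To make the matching idea concrete, here is a minimal sketch in Python. All of the employee contexts, program names, and outcome scores below are invented for illustration; it simply recommends the program with the best average past outcome for a similar context, whereas a real system would use far richer features and models.

```python
from collections import defaultdict

# Hypothetical historical records: (role, skill_gap, program, outcome_score).
# Every name and score here is invented for illustration.
history = [
    ("analyst", "sql", "SQL Bootcamp", 0.9),
    ("analyst", "sql", "Data Basics", 0.4),
    ("analyst", "communication", "Presenting 101", 0.8),
    ("manager", "communication", "Presenting 101", 0.5),
    ("manager", "communication", "Exec Storytelling", 0.85),
]

def recommend(role, skill_gap):
    """Pick the program with the highest average past outcome
    for employees in a similar context (same role and skill gap)."""
    scores = defaultdict(list)
    for r, gap, program, outcome in history:
        if r == role and gap == skill_gap:
            scores[program].append(outcome)
    if not scores:
        return None  # no past evidence for this context
    return max(scores, key=lambda p: sum(scores[p]) / len(scores[p]))

print(recommend("analyst", "sql"))  # -> SQL Bootcamp
```

Note that the sketch never asks *why* a program worked; it only surfaces what has worked before in a similar context, which is exactly the correlation-over-causation point above.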
There is, for obvious reasons, quite a bit of interest in the subject of subjects, or rather, in the subject of examinations on those subjects. The newspapers recently reported that the Central Advisory Board of Education has recommended re-introduction of the class X exams. The article also discusses the policy of detaining students based on exam results.
This brings us to a basic question … what is the purpose of exams? While there is definitely a need within the education system to assess achievement of learning objectives, the problem begins when exams are seen as a mechanism to weed out students who fail to meet those objectives. If the intent is to ensure that students learn what they are supposed to, would studying the same material over again help a student understand it better than the first time? This is akin to repeating something in the hope that mere repetition will make the other person understand. If the student didn't understand it the first time, isn't it more than likely that he won't understand it the next time either?
Instead of having the student repeat the entire year, it would be more helpful to focus on the specific topics the student had difficulty understanding. A quick look at the exam answer sheet will reveal them. This won't scale without supporting technology, but today we have the technology to move assessment in this direction.
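As a toy illustration of that kind of diagnostic, the Python sketch below (with invented topics and graded answers) flags the topics where a student's share of correct answers falls below a threshold — the topics a remediation plan should focus on. A real system would work from actual item-level exam data tagged by topic.

```python
from collections import Counter

# Hypothetical graded answer sheet: each entry maps a question's topic
# to whether the student answered it correctly. Invented data.
answers = [
    ("fractions", True), ("fractions", False), ("fractions", False),
    ("geometry", True), ("geometry", True),
    ("algebra", False), ("algebra", False), ("algebra", True),
]

def weak_topics(answers, threshold=0.5):
    """Return topics where the share of correct answers is below
    the threshold, sorted alphabetically."""
    total, correct = Counter(), Counter()
    for topic, ok in answers:
        total[topic] += 1
        if ok:
            correct[topic] += 1
    return sorted(t for t in total if correct[t] / total[t] < threshold)

print(weak_topics(answers))  # -> ['algebra', 'fractions']
```

The student above would revisit algebra and fractions, not repeat geometry along with the rest of the year.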
Another aspect is what exactly we are testing. Are we testing memory of the subject, or are we testing understanding? If we are trying to assess achievement of learning objectives, we need to focus on understanding. This means the pattern of testing needs to shift from simple recitation of concepts towards application of concepts, and we, as a nation, probably need an examination/assessment policy to complement the country's education and learning frameworks.
These days I am reading a book about Big Data, and going through some of the applications of the technology got me thinking about ways Big Data could be applied to people matters. I tried googling the usage of Big Data for Performance Management, and didn't find much (or maybe that's because the search terms return results for application performance management). One area where technology could be applied in HR, I feel, is Performance Management.
Today, appraisals are done in an ostensibly objective manner, with ratings that try to capture achievements and performance. However, as we know, these are something of a force-fit. What does a rating of “Exceeds Expectations” mean? Does it mean that performance is high, or that expectations are low? Somehow, this seems like fitting a square peg in a round hole, or a round peg in a square hole, if you prefer it that way.
An alternative could be to use technologies like Big Data here. To begin with, managers could have the option of writing their observations, along with specific examples or scenarios, as part of the appraisal process. This kind of input gives us rich information about people's performance. Instead of trying to force performance onto a quantitative scale, it has the possibility of giving us qualitative inputs on performance.
Add to this the fact that plenty of business-related data is available from finance, sales, and operations, and we have immense data, both quantitative and qualitative, to work with. Using this data as the starting point, Big Data technologies could be used to build correlations between manager comments and business performance, and to derive employee performance from those correlations. This has the benefit of giving a descriptive picture of performance, one which describes achievements in a more meaningful way and can be used to drive talent processes.
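A minimal sketch of the idea, assuming we can reduce each free-text comment to a crude numeric signal: the Python below (keyword lists, comments, and the per-employee business metric are all invented) scores comments by keyword and correlates the score with the metric using Pearson's r. Real tooling would use proper text analytics rather than keyword counts, but the shape of the exercise is the same.

```python
# Hypothetical keyword lists for a crude comment score; invented data.
POSITIVE = {"exceeded", "led", "delivered", "improved"}
NEGATIVE = {"missed", "delayed", "struggled"}

def comment_score(comment):
    """Crude signal: positive keywords minus negative keywords."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented (manager comment, business metric) pairs per employee.
records = [
    ("delivered the migration early and improved uptime", 120),
    ("missed two deadlines and struggled with handover", 80),
    ("led the client workshop and exceeded targets", 130),
    ("delayed the release", 90),
]
scores = [comment_score(c) for c, _ in records]
metric = [m for _, m in records]
print(round(pearson(scores, metric), 2))  # -> 0.98
```

On this toy data the comment signal tracks the business metric closely; in practice, the interesting cases are exactly where the qualitative and quantitative pictures diverge.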
There's much more that Big Data can be used for, as this post by @josh_bersin describes.