In today's L&D landscape, the way businesses determine who should participate in what training is not far removed from a conjuring act. More often than not, the results are a mixed bag, and many of the L&D professionals I speak to tell me that their Level 1 scores (based on the Kirkpatrick model) tend toward the lower end of the spectrum.
There are typically two ways a business determines training participation. One is mandated training (usually tied to promotion or growth), while the other is nomination by the business manager. Both involve picking from a ‘menu’ of available programs, and neither really takes into consideration the actual learning needs of the individual.
This is where the idea of predictive learning comes in. The idea is simple … with the technology available to us today, especially in the Big Data and analytics domains, data about what has worked in the past, and in what context, is available to the organization at large scale. This data comes from training, HR, and operations/business systems, and it can be leveraged to determine which training solution would most likely work in a particular employee's context. As with Big Data generally, this needn't establish the causal connection between cause and effect; rather, it looks at the linkages as they have been observed in the past.
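To make the idea concrete, here is a minimal sketch of what such a recommendation could look like. The records, roles, program names, and outcome scores are all hypothetical, and a real system would mine far richer signals from training, HR, and operations data — but the core move is the same: look up what worked for similar employee contexts in the past, without modelling why it worked.

```python
from collections import defaultdict

# Hypothetical historical records: (role, skill_gap, program, outcome_score).
# outcome_score stands in for some observed post-training lift in business data.
history = [
    ("analyst", "sql", "SQL Bootcamp", 0.8),
    ("analyst", "sql", "Data Modeling 101", 0.4),
    ("analyst", "sql", "SQL Bootcamp", 0.7),
    ("manager", "coaching", "Leading Teams", 0.9),
]

def recommend(role, skill_gap):
    """Recommend the program with the best average past outcome for
    employees in a similar context (same role and skill gap)."""
    scores = defaultdict(list)
    for r, g, program, outcome in history:
        if (r, g) == (role, skill_gap):
            scores[program].append(outcome)
    if not scores:
        return None  # no precedent for this context
    return max(scores, key=lambda p: sum(scores[p]) / len(scores[p]))

print(recommend("analyst", "sql"))  # -> SQL Bootcamp
```

Note that the recommendation is purely correlational: the function never asks why SQL Bootcamp outperformed the alternative for analysts, only that it did.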
An important aspect of this picture is that it shifts the focus from training and from L&D to the individual learner, making the entire process people-centric.
One concern, though, is that the resulting requirements could be far too granular and too tailored to individual needs to be viable from a delivery perspective. More about this later …
I am currently reading a book about Big Data, and while going through some of the applications of the technology, I started thinking about ways Big Data could be applied to people matters. I tried googling the use of Big Data for Performance Management and didn't find much (or maybe that's because the search terms return results for application performance management). One area where technology could be applied in HR, I feel, is the realm of Performance Management.
Today, appraisals are done in an ostensibly objective manner, with ratings that try to capture achievements and performance. However, as we know, these are something of a force-fit. What does a rating of “Exceeds Expectations” mean? Does it mean that performance is high, or that expectations are low? This seems like fitting a square peg in a round hole, or a round peg in a square hole, if you prefer it that way.
An alternative could be to use technologies like Big Data to handle this. To begin with, managers could have the option of writing their observations, along with specific examples or scenarios, as part of the appraisal process. This kind of input gives us rich information about people's performance. Instead of trying to fit performance onto a quantitative scale, it has the potential to give us qualitative inputs into performance.
Add to this the fact that plenty of business-related data is available from finance, sales, and operations, and we have immense data, both quantitative and qualitative, to work with. Using this data as the starting point, Big Data technologies could be used to build correlations between manager comments and business performance, and to derive employee performance from those correlations. This has the benefit of giving a descriptive picture of performance, one that describes achievements in a more meaningful way and can be used to drive talent processes.
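As a toy illustration of the correlation idea, the sketch below scores manager comments with a crude keyword lexicon and checks how well those scores track a business metric. The comments, sales figures, and word lists are all invented; a real pipeline would use proper text analytics rather than keyword counting, but the shape of the argument — qualitative input turned into a signal and correlated with business data — is the same.

```python
# Hypothetical appraisal comments paired with a business metric
# (say, quarterly sales attainment) for the same employees.
comments = [
    "consistently exceeds targets, great ownership",
    "misses deadlines, needs close supervision",
    "solid delivery, strong collaboration",
    "inconsistent output, weak follow-through",
]
sales_attainment = [1.15, 0.80, 1.05, 0.85]

# A toy lexicon; real systems would use trained text-analytics models.
POSITIVE = {"exceeds", "great", "strong", "solid", "ownership"}
NEGATIVE = {"misses", "weak", "inconsistent", "needs"}

def comment_score(text):
    """Score a comment as (positive word hits) - (negative word hits)."""
    words = set(text.replace(",", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [comment_score(c) for c in comments]
print(round(pearson(scores, sales_attainment), 2))  # strongly positive here
```

In this contrived sample the correlation comes out strongly positive, which is the kind of linkage that would let qualitative comments stand in as a descriptive measure of performance.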
There's much more that Big Data can be used for, as this post by @josh_bersin describes.
This is a topic quite a few of us have been thinking about … what are the implications of cloud for IT service providers? The question gains importance because, with the cloud paradigm, the levers of value for customers are shifting from what they have been. The days of mega implementations, for example, with 500-person teams working for four years to deliver a project, are no longer to be seen. With cloud entering the technology mainstream, project profiles are changing further. Release cycles are much shorter, with a larger number of releases coming out in quick succession. Project lifecycles are much shorter too, as is the scope of development or customization.
One implication is that it is no longer possible for companies to differentiate themselves on the basis of IT, as technology becomes commoditized. The paradox is that when IT was a specialized space, it was almost an afterthought in organizational strategy, while today it is becoming centre-stage in the strategy landscape.
As IT becomes more commoditized and accounts for more and more of the technology components in the organization, there is more reason for organizations to outsource more of their IT functions.
For enterprise apps, for instance, the cloud era seems to be one of short implementation lifecycles, far less customization, agile development, and accelerators. For services organizations this is a whole new paradigm: sales folks are not keen on selling these engagements, since the revenue potential is much lower, and yet organizations are placing more focus on cloud engagements. Services organizations will need to change the engagement model, probably with more shared delivery in implementation projects, and by reducing the distinction between implementation and support engagements from a delivery perspective.