In today's L&D landscape, the way businesses determine who should participate in what training isn't far from a conjuring act. More often than not, the result is a mixed bag, and many of the L&D professionals I speak to tell me that Level 1 scores (based on the Kirkpatrick model) tend towards the lower end of the spectrum.
There are typically two ways a business determines training participation. One is mandated training (usually related to promotion/growth), while the other is nomination by the business manager. Both of these involve picking from a 'menu' of available programs, and neither really takes into consideration the actual learning needs of the individual.
This is where the idea of predictive learning comes in. The idea here is simple … today, with the technology available to us, especially in the Big Data/analytics domain, data about what has worked in the past, and in what context, is available to the organization at large scale. This data comes from training, HR, and operations/business systems. It can be leveraged to determine which training solution would likely work best in a particular employee's context. As with Big Data generally, this needn't establish the causal connection between cause and effect; rather, it can simply look at the linkages as they have been observed in the past.
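To make this a little more concrete, here is a minimal sketch (in Python, with entirely made-up profile attributes, training names, and outcome scores) of what "matching a learner to what has worked before" could look like: recommend the training with the best average past outcome among employees with similar profiles, without modelling why it worked.

```python
from collections import Counter

# Hypothetical historical records: (employee_profile, training, outcome_score).
# Profiles are simplified to (role, tenure_band, skill_gap) tuples.
HISTORY = [
    (("analyst", "junior", "sql"), "SQL Fundamentals", 4.5),
    (("analyst", "junior", "sql"), "Generic Data Course", 2.1),
    (("analyst", "senior", "leadership"), "Leading Teams", 4.2),
    (("engineer", "junior", "sql"), "SQL Fundamentals", 4.0),
]

def similarity(p1, p2):
    """Count matching attributes between two profiles."""
    return sum(a == b for a, b in zip(p1, p2))

def recommend(profile, history=HISTORY, min_sim=2):
    """Recommend the training with the best average outcome among
    records from sufficiently similar past profiles."""
    totals = {}
    counts = Counter()
    for past_profile, training, outcome in history:
        if similarity(profile, past_profile) >= min_sim:
            totals[training] = totals.get(training, 0.0) + outcome
            counts[training] += 1
    if not totals:
        return None  # no similar history -- fall back to the usual 'menu'
    return max(totals, key=lambda t: totals[t] / counts[t])

print(recommend(("engineer", "junior", "sql")))  # best-performing course for similar learners
```

Note that the sketch never asks *why* "SQL Fundamentals" worked for junior analysts with a SQL gap; it only observes that it did, which is exactly the correlation-over-causation trade-off described above.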
An important aspect of this picture is that it shifts the focus from training and learning, and from L&D, to the individual learner, making the entire process people-centric.
One concern, though, could be that the resulting requirements turn out to be far too granular, and too tailored to individual needs, to be viable from a delivery perspective. More about this later …
One of the reasons content needs taxonomy is to enable users to easily search for content relevant to them. Theoretically, this is quite straightforward. After all, Yahoo, Lycos, etc., have been doing this for more than a decade. And Google, and now Bing, have taken it to the next level of complexity when computing search results. So what are we talking about? Search should be a given, not a question for discussion.
There are, though, a few things we need to understand about search, which influence not just user interaction with content, but also the way knowledge managers look at search. To begin with, search is not the best option available to content or knowledge managers when they are trying to highlight content. Let me take an example to explain what I mean. One of the things knowledge managers are trying to do is highlight content which is relevant to a user, so that the content users see on a portal is high impact for each of them (at least that's the idea, theoretically) … something along the lines of personalization, the idea being that knowledge managers should be able to push relevant content to users depending on their profile of work or usage. But if there is a particular document which may be useful to users of a particular profile, how can that be highlighted to those users? Search would, after all, return a set of documents depending on what you search for, some of them relevant, and some not.
Add to this the complexity of taxonomy. This is something which is underestimated. Let me take an example to explain what I mean. I was trying to search for a caller-tune. The song I was searching for is one named Bulleya, by Junoon. One of the parameters which is part of the taxonomy is language, and I searched for the song under Urdu, and under Punjabi, and couldn't find it. The point is, with taxonomy, a user needs to understand the way the content manager thinks, to be able to work out the attribute values the content manager would have assigned to specific content. And since I couldn't, I wasn't able to select the caller-tune. This is where, as we know, folksonomy comes into the picture.
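The caller-tune episode can be sketched in a few lines. The data below is hypothetical (I have no idea what attribute value the actual content manager chose), but it shows the mechanism: a taxonomy search succeeds only if the user guesses the exact attribute value the content manager assigned, while a folksonomy search matches any tag some user has contributed.

```python
# Toy catalog: one song, classified once by a content manager,
# tagged many times by listeners.
CATALOG = [
    {
        "title": "Bulleya",
        "artist": "Junoon",
        "taxonomy": {"language": "Sufi"},           # the content manager's one choice
        "folksonomy": {"urdu", "punjabi", "sufi"},  # tags contributed by listeners
    },
]

def taxonomy_search(items, attribute, value):
    """Exact match on the single attribute value the content manager assigned."""
    return [i["title"] for i in items if i["taxonomy"].get(attribute) == value]

def folksonomy_search(items, tag):
    """Match any user-contributed tag, case-insensitively."""
    return [i["title"] for i in items if tag.lower() in i["folksonomy"]]

print(taxonomy_search(CATALOG, "language", "Urdu"))  # [] -- the user's guess misses
print(folksonomy_search(CATALOG, "Punjabi"))         # ['Bulleya'] -- some listener tagged it
```

The asymmetry is the whole point: the taxonomy admits exactly one value per attribute, while the folksonomy accumulates every perspective users bring to the same content.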
Of course, today, we find that the network can bring up content which may be relevant to you as a user. This depends, of course, on the quality and density of your network (ok, so I don't really have a precise definition of density, but what I am thinking is this: since no one leads a unidimensional life, the network spans a number of dimensions, and the knowledge a network throws up along a particular dimension would depend on how dense the network is along that dimension … there should be some attribute of the network which can describe this behaviour, and density seems as apt a term as any). But as I have written before, the network enables users to more easily discover the content they find relevant, though search would still play an important role. This is because search is about pull. More like DIY.
An aspect which not many folks look at is that, in a lot of contexts, it's not always possible to classify content into the specific, discrete values defined by a content management team. The song search I mentioned is an example. Any document resides at the intersection of specific values across a large number of attributes, and this is made more complicated by the fact that different people may place the same document at different intersections of values.
Add to this the idea that the easier you make it for users, the more complex you will probably make it for anyone trying to contribute a document. All in all, a complex scenario. But if we try to see how users look at content, we can make it just a little simpler for them to search for documents. The thing is, different people have different perspectives on the same content, which means you can only go so far in meeting user requirements for search. While this is where folksonomy comes in, I believe folksonomy cannot totally replace taxonomy, because the structure of taxonomy does add to the usability of the repository.
A rather interesting post by Darcy Lemons over at APQC about the shelf life of knowledge … or, how long should we retain knowledge? Interesting question, and one which frequently comes up whenever there is a discussion about a content management system. People usually ask how old is too old. If you talk to technology firms especially, this question becomes even more pertinent, because a solution which works with a particular release of a software product would, in all probability, not work in a newer release.
My take on this … it's not the age, but the relevance of knowledge which matters. Again, let's take an example … or rather, let's extend the earlier one. While the solution which worked on an earlier version of the software may not work in the newer version, if you are supporting the older version, this knowledge is still important to you. In other words, it may not be useful for implementation projects, but is quite useful for sustenance projects. Hence, it's not about the age, but about relevance … or, shall we say, usage? Because, in a content management system, for example, relevance can be determined by usage. If people are using certain knowledge elements, then those elements are in all probability still relevant, even if they are dated. Let's take the example Darcy has taken … if tomorrow we decide to do away with cars and trucks, and go back to horse-drawn carriages, there are still parts of Delhi where that expertise is alive and kicking. More on that later …
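One way to operationalize "relevance is usage, not age" is a score built from access counts, decayed by how long ago the document was last used. The formula below is purely illustrative, an assumption of mine rather than anything from Darcy's post, but it captures the idea that a dated but actively used document should outrank a recent but ignored one.

```python
def relevance(access_count, last_access_days_ago, half_life_days=180):
    """Usage-based relevance: access count, halved for every
    half_life_days since the document was last accessed.
    Creation date deliberately plays no part in the score."""
    decay = 0.5 ** (last_access_days_ago / half_life_days)
    return access_count * decay

# A five-year-old troubleshooting note still used weekly by the
# sustenance team vs. a three-month-old document nobody opens:
old_but_used = relevance(access_count=120, last_access_days_ago=7)
new_but_idle = relevance(access_count=3, last_access_days_ago=300)

print(old_but_used > new_but_idle)  # True -- usage, not age, drives relevance
```

An archival policy driven by a score like this would retire knowledge only when people stop reaching for it, which is exactly the behaviour the older-software-version example calls for.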
The issue here is that the people who generated the knowledge in the first place may no longer be around. How, then, do we attempt to recreate something which, in all probability, has already been created? This is an area where the entire idea of social computing can be quite useful. Let me illustrate … a lot of people, when writing books about history, or about topics on which not much knowledge exists today, refer to papers, documents, plaques, photographs, archaeological remains, etc., related to the topic they are writing about. For example, I just completed reading a book titled In the Shadow of the Great Game, by Narendra Singh Sarila (though I have no idea how some folks get the idea that this is a partisan book … it's anything but that, but then, that's my opinion … you are free to have your own) … and here, the author has extensively quoted sources, including official archives, personal papers, etc. Personal diaries, for example, are a description of then-current events … as such, they become a valuable source of information about what was happening, as well as the opinions of people (stakeholders?) about it. Somewhat similar to blogs?
From this description, we could go on to the possibility that blogs, discussion fora, and other similar platforms could be a good way of preserving people's thoughts about contemporary events … whether outside, or within, the organization.