In today's L&D landscape, the way businesses determine who should participate in what training isn't far from a conjuring act. More often than not, the result is a mixed bag, and many of the L&D professionals I speak to tell me that Level 1 scores (based on the Kirkpatrick model) tend towards the lower end of the spectrum.
There are typically two ways a business determines training participation. One is mandated training (usually related to promotion/growth), while the other is nomination by the business manager. Both of these involve picking from a ‘menu’ of available programs, and neither really takes into consideration the actual learning needs of the individual.
This is where the idea of predictive learning comes in. The idea here is simple … today, with the technology available to us, especially in the Big Data/analytics domain, data about what has worked in the past, and in what context, is available to the organization at scale. This data comes from training, HR, and operations/business systems. This rich data can be leveraged to determine which training solution is most likely to work in a particular employee's context. As with Big Data generally, this needn't establish the connection between cause and effect; rather, it can simply look at the linkages as they have been seen in the past.
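To make the idea concrete, here is a minimal sketch of that kind of linkage-based matching. All the data here is hypothetical (the contexts, trainings, and outcome scores are invented for illustration); a real system would draw on far richer HR, operations, and training data, but the principle is the same: recommend what has worked before for similar contexts, without modelling why it worked.

```python
# Sketch: recommend trainings that produced good outcomes for similar
# employee contexts in the past. No cause-and-effect modelling, only
# observed linkages. All data below is hypothetical.
from collections import defaultdict

# Historical records: (employee context, training taken, outcome score 0-10).
# Context here is a simple (role, tenure band) pair for illustration.
history = [
    (("developer", "junior"), "Advanced SQL", 8),
    (("developer", "junior"), "Leadership 101", 3),
    (("developer", "senior"), "Architecture Patterns", 9),
    (("manager", "senior"), "Leadership 101", 9),
]

def recommend(context):
    """Return trainings seen for this context, best average outcome first."""
    scores = defaultdict(list)
    for ctx, training, outcome in history:
        if ctx == context:
            scores[training].append(outcome)
    ranked = sorted(scores.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
    return [training for training, _ in ranked]

print(recommend(("developer", "junior")))  # ['Advanced SQL', 'Leadership 101']
```

The point of the sketch is that the ranking comes purely from past observations for that context, which is exactly the "linkages, not causes" idea above.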
An important aspect of this picture is that it shifts the focus from training, and from L&D, to the individual learner, making the entire process people-centric.
One concern with this, though, could be that the resulting requirements could be way too granular, and too tailored to individual needs, to be viable from the delivery perspective. More about this later …
One of the reasons that content needs a taxonomy is to enable users to search easily for content relevant to them. Theoretically, this is quite straightforward. After all, Yahoo, Lycos, etc., have been doing this for more than a decade, and Google, and now Bing, have taken it to the next level of complexity when computing search results. So what are we talking about? Search should be a given, not a question for discussion.
There are, though, a few things we need to understand about search, which influence not just user interaction with content, but also the way knowledge managers look at search. To begin with, search is not the best possible option available to content or knowledge managers when they are trying to highlight content. Let me take an example to explain what I mean. One of the things knowledge managers try to do is highlight content which is relevant to a user, so that the content users see on a portal is high impact for each of them (at least that's the idea, theoretically) … something along the lines of personalization, the idea being that knowledge managers should be able to push relevant content to users depending on their profile of work or usage. But if there is a particular document which may be useful to users of a particular profile, how can it be highlighted to those users? Search would, after all, return a set of documents depending on what you search for, some of them relevant, and some not.
Add to this the complexity of taxonomy. This is something which is underestimated. Let me take an example to explain what I mean. I was trying to search for a caller-tune. The song I was searching for was Bulleya by Junoon. One of the parameters in the taxonomy is language, and I searched for the song under Urdu, and under Punjabi, and couldn't find it. The point is, with taxonomy, a user needs to understand the way the content manager thinks, to be able to work out the attribute values the content manager would have assigned to specific content. And since I couldn't, I wasn't able to select the caller-tune. And this is where, as we know, folksonomy comes into the picture.
Of course, today, we find that the network can bring up content which may be relevant to you as a user. This depends on the quality and density of your network (ok, so I don't really have a precise definition of density, but what I am thinking is that since no one leads a unidimensional life, the network spans a number of dimensions, and the knowledge a network would throw up along a particular dimension would depend on the density of the network along that dimension; there should be some attribute of the network which can define this behaviour, and maybe density is as apt a term as any). But as I have written before, the network would enable users to more easily discover the content they find relevant, though search would still play an important role. This is because search is about pull. More like DIY.
An aspect not many folks look at is that, in a lot of contexts, it's not always possible to classify content into the specific, discrete values defined by a content management team. The song search I mentioned is an example. Any document resides at the intersection of specific values for a large number of attributes, and this is made more complicated by the fact that different people may see the same document as sitting at different intersections of values.
Add to this the idea that the easier you want to make it for users, the more complex you will probably make it for anyone who tries to contribute a document. All in all, a complex scenario. But if we try to see how users look at content, we can make it just a little simpler for them to search for documents. The thing is, different people have different perspectives on the same content, which means you can only go so far in meeting user requirements for search. While this is where folksonomy comes in, I believe folksonomy cannot totally replace taxonomy, because the structure of taxonomy does add to the usability of the repository.
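The taxonomy-versus-folksonomy contrast can be sketched in a few lines. The data here is hypothetical (the single "Sufi" language value the content manager might have picked is invented for illustration); the point is that a single-valued taxonomy attribute fails when the user guesses a different value, while user-supplied tags let the same song be found under every term users actually think of.

```python
# Sketch: single-valued taxonomy attribute vs user-contributed tags,
# using the caller-tune example. Hypothetical data, not any real system.

# Taxonomy: the content manager picks exactly one value per attribute.
taxonomy = {"Bulleya": {"language": "Sufi"}}

# Folksonomy: every user can add the tags *they* would search by.
folksonomy = {"Bulleya": {"Sufi", "Urdu", "Punjabi", "Junoon"}}

def taxonomy_search(language):
    return [song for song, attrs in taxonomy.items()
            if attrs["language"] == language]

def folksonomy_search(tag):
    return [song for song, tags in folksonomy.items() if tag in tags]

print(taxonomy_search("Urdu"))       # [] -- the user guessed a different value
print(folksonomy_search("Urdu"))     # ['Bulleya']
print(folksonomy_search("Punjabi"))  # ['Bulleya']
```

Note that the taxonomy's fixed attribute still has value for browsing and structure; the tags simply widen the vocabulary through which the same item can be found.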
A rather interesting post by Darcy Lemons over at APQC about the shelf-life of knowledge … or, how long should we retain knowledge? Interesting question, and one which frequently comes up whenever there is a discussion about a content management system. People usually ask how old is too old. If you talk to technology firms especially, this question becomes even more pertinent, because a particular solution which works with one release of a piece of software would, in all probability, not work in a newer release.
My take on this … it's not the age, but the relevance of knowledge which matters. Again, let's take an example … or rather, let's extend the earlier one. While the solution which worked on an earlier version of the software may not work in the newer version, if you are supporting the older version, this knowledge is important for you. In other words, this knowledge may not be useful for implementation projects, but is quite useful for sustenance projects. Hence, it's not about age, but about relevance … or, shall we say, usage? Because, in a content management system, for example, relevance can be determined by usage. If people are using some knowledge elements, then those elements are in all probability still relevant, even if they are dated. Let's take the example Darcy has taken … if tomorrow we decide to do away with cars and trucks, and go back to horse-drawn carriages, there are still parts of Delhi where the expertise is alive and kicking. More on that later …
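The "relevance, not age" rule translates into a very simple retention policy sketch. The documents and access counts below are hypothetical; the idea is that an archival decision looks only at recent usage and deliberately ignores the creation date, so an old solution document that a sustenance team still opens every week survives, while an equally old but unused document gets flagged.

```python
# Sketch: decide what to archive by recent usage, not by age.
# All documents and counts are hypothetical, for illustration only.
from datetime import date

docs = [
    # (title, created, accesses in the last quarter)
    ("Fix for release 4.6 upgrade bug",  date(2003, 5, 1), 40),  # old but used
    ("Release 7.0 install guide",        date(2008, 9, 1), 55),
    ("Release 4.6 marketing one-pager",  date(2003, 5, 1), 0),   # old and unused
]

def archive_candidates(docs):
    """Flag only content nobody has used recently; creation date is ignored."""
    return [title for title, _created, accesses in docs if accesses == 0]

print(archive_candidates(docs))  # ['Release 4.6 marketing one-pager']
```

A real policy would of course use a usage window and thresholds tuned to the organization, but the shape of the decision stays the same: usage is the signal.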
The issue here is that the people who generated the knowledge in the first place may no longer be around. How, then, do we attempt to recreate something which, in all probability, has already been created? This is an area where the entire idea of social computing can be quite useful. Let's illustrate this … a lot of people, when writing books about history, or about topics on which not much knowledge exists today, refer to papers, documents, plaques, photographs, archaeological remains, etc. For example, I just completed reading a book titled In the Shadow of the Great Game … by Narendra Singh Sarila (though I have no idea how some folks get the idea that this is a partisan book … it's anything but that; then again, that's my opinion … you are free to have your own) … and here, the author has extensively quoted sources, including official archives, personal papers, etc. Personal diaries, for example, are a description of the then-current events … as such, they become a valuable source of information about things that were happening, as well as the opinions of people (stakeholders?) about them. Somewhat similar to blogs?
From this description, we could go on to the possibility that blogs, discussion fora, and other similar platforms could be a good way of preserving the thoughts of people about contemporary events … whether outside, or within, the organization.
A common repository means common taxonomy…a common way of identifying and locating artifacts. A common repository means a visually rich big picture that tells you about all sorts of possibilities….not one that is carved out of a narrow search term!!
I think she has got it bang on. Today, there are tools which enable you to share documents right off your hard disk. Create a document, put it into a shared folder, and there you are … others can find it right there. Contrast this with a common repository … create a document, save it on your hard disk, and then upload it onto the repository. And you don't even know what half those taxonomy terms mean. This being the key. And this is where a repository plays an important role … that of getting people thinking about how others think, getting people thinking in terms of a common terminology, giving some form of uniformity (it's nice, at times … remember McDonald's … the same taste wherever you go). You got it … how could I not write about food?
There was a lot of discussion some time back about the assertion that KM is dead … Luis wrote about it … and about the assertion that KM is moving from documentation towards conversation. I have added my own words about conversation … But all this conversation about conversation doesn't answer one question … a question I am thinking about.
Let's take an organization which is yet to reach the “KM1.0” stage … they don't have a centralized document repository … they have silos where information is stored, and retrieval of this information is largely a manual activity, because a lot of it sits on team file-servers and the like. The question is, should this organization move straight to a “KM2.0” scenario?
One way to look at this would be to say … sure! This would make sense in theory, given that all knowledge is directly or indirectly tacit. So, the logic here would be that if we can get people together, either into communities or into an internal blogosphere, we can actually get people to share information more seamlessly, even without resorting to a centralized repository.
Having said this, would it work in practice? I don't know, but I tend to believe it wouldn't. To begin with, information which is not in a repository tends to be difficult to identify … much more so than something which can be attached to a somewhat defined taxonomy (whether a regular taxonomy or a folksonomy … I am including both here). Second, and more important, a repository could be an important step towards building a mindset of sharing … where it is considered a nice thing to share documents with others, leading to a more ready acceptance of some of the social tools.
Any thoughts? Please do write in to let me know what you think the approach here should be. Of course, there are pros and cons to both approaches, and I would like to hear what you feel some of each are.
There was a time when your prominence depended on the amount you knew about a variety of subjects. Those were the days when people actually memorized time-tables, knew the schedules of trains, buses, and flights by heart, and remembered so much about so many things. And this was all possible because there was a requirement to remember.
Today, we don't remember as many things as folks used to in, say, Dad's generation. The reason, to my mind, is that the requirement to remember is no longer there (no, it has nothing to do with shrinking brains, though there are folks who lend credence to that theory, too!). This is probably why quizzing was once such a prominent activity on college campuses: the more you knew about different things, the better it was for you, because it meant you were really smart. Today, however, this is not the scene. And probably this is why I don't see so many quiz shows these days.
The other day, I was talking to my nephew, and our man had just been for an interview at a reputed consulting firm. One of the things they asked him (maybe because he is straight out of college …) was the GDP of India in the last financial year. And this got me thinking. Here was a consulting firm (and they do a lot of projects on KM, too …) asking, in an interview, the GDP of India? Do kids need to remember this kind of information nowadays? Isn't it far simpler to just google it? Or, better still, with Wikipedia, you could find the GDP of India here. Which is why it got me thinking … what were they thinking! Between the content and the conversation available in the virtual world, this is not even required.