Defining … Some Thoughts

This seems to be the season for fundamental re-thinks. It began with Dave Snowden’s post about alternatives to the CKO, which delved into the relationship between business units and KM. I had published a poll about the same topic (which is open till 10th October), and blogged about Dave’s thoughts. And there is something i have been thinking about for a few days (the reason i haven’t been able to blog about this earlier is simply laziness) … how could one define KM? And then i came across this post by Dave Snowden, defining KM, which i think is a very good description of what KM should be doing in an organization.
I think the definition Dave gives describes KM quite well:

The purpose of Knowledge Management is to provide support for improved decision-making and innovation throughout the organization. This is achieved through the effective management of human intuition and experience augmented by the provision of information, processes and technology together with training and mentoring program.

Improved decision-making … this is something which was promised by information systems more than a decade back. Though decisions did improve, there is still room for decision-making to improve further. How, one may ask? Till now, the paradigm of decision-making hasn’t considered that decision-making is not a perfectly rational process. In other words, decisions aren’t always made on perfectly rational assumptions, or on the information available, and even if, theoretically, all possible information were available (which it can’t be), there would still be that factor x which is not totally definable, and which cannot be externalized, which influences decision-making. Could we call this tacit knowledge? Probably. Could we call this experience? Maybe. No matter what we call it, this remains the major aspect of Knowledge Management.

Add to this the aspect that it is not usually possible for everyone to have access to all possible information required to make a decision. Not only is this because of systemic constraints, but also because there is usually no single definition about what information is relevant, or required, for making a decision. In some scenarios there is, but not in all. Given this, one aspect of KM is also to get people connected with sources of knowledge, whether repositories, or people, and to get them access to knowledge, whether directly or indirectly, which may be relevant for decision-making. This is the essential value-proposition for tools like social networking.

Another aspect which Dave mentioned is about the positioning of KM in the organization. The essence is that at a centralized level, KM needs to be synchronized with the strategic imperatives of the organization, while implementation should be done at a localized level, within the context of localized business requirements. This has a number of benefits. One, it ensures that while overall KM is aligned with strategic requirements, at the point of implementation KM is aligned with the specifics of business requirements. Two, it creates a level of ownership for KM initiatives among business units. Three, it is easier to measure the impact of KM initiatives in a highly localized context, where it is easy to define the way KM can impact the business, rather than at a generic level.

Conversations, Networks, Measurement

This is a question i have been thinking about for some time. The question of ROI … and how this impacts the way we look at KM. The question is simply this … how does one measure the impact of KM initiatives on the financial health of the organization? This question can be answered depending on how you understand it. Simply put, anything that has a financial impact has an impact either on revenues or on costs, whether directly or indirectly. Suresh Nair posted a comment over at the post, where he refers to the whole discussion from the perspective of need. Do we need something? If yes, and it would have an impact on the performance of the organization, go for it. But then, another aspect to look at is whether it is worth it if you go for it. And that is probably the trickier question to answer. How does the CFO decide that it’s worth investing in a Social Networking tool, for example?

This question can be looked at in two parts. One, content, and the other, collaboration. Let’s look at content first. This is a little simpler to address. To begin with, if you have a document, you could always look at how much effort this document would save. So, for example, if a document reduces rework, or reduces cycle time by, say, 10%, that’s a 10% reduction in cost for that particular process. It’s a different matter that this kind of determination is by itself not completely accurate, but if you get the opinion of enough people, you could come up with a number which is reasonably accurate, at least in theory.
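
The back-of-the-envelope arithmetic above can be sketched in code; the figures, and the idea of simply averaging several people's opinions, are illustrative assumptions, not real data.

```python
# Rough sketch of estimating cost savings from a reusable document.
# All figures here are illustrative assumptions, not real data.

def estimated_savings(process_cost, cycle_time_reductions):
    """Average several people's opinions of the cycle-time reduction
    and translate that into a cost saving for the process."""
    avg_reduction = sum(cycle_time_reductions) / len(cycle_time_reductions)
    return process_cost * avg_reduction

# Five people independently estimate the reduction at around 10%.
opinions = [0.08, 0.10, 0.12, 0.09, 0.11]
saving = estimated_savings(100_000, opinions)  # process costs 100,000 a year
print(round(saving))  # → 10000
```

The averaging is what the paragraph above gestures at: no single opinion is accurate, but enough of them give a workable number, at least in theory.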

This brings us to the second point, about conversations. And this is where it gets tricky. How do you measure the impact of conversations? Here, let’s look at it in two parts. One, when taking a decision to invest in a tool to enable conversation, how does the organization even know how, and in what form, conversations are actually going to happen? There is probably no way to determine this, given that one of the key factors is the adoption rate, and even once you move past that, the nature of the conversation, according to the basic paradigm, is something you cannot regulate. The other aspect, if you already have such a platform, is how to actually determine what value conversations are adding. This is where it seems the paradigm of ROI faces some resistance.

There was a recent post by Jon Husband about assessing productivity, where he describes some of the aspects of networks, and hence conversations, which make measuring them tricky. To quote:

• They multiply rapidly because the value of a network increases exponentially with each additional connection.
• They become faster and faster because the denser the interconnections, the faster the cycle time.
• They subvert (unnecessary) hierarchy because previously scarce resources such as information are available to all.
• Network interactions yield volatile results because echo effects amplify signals.
• Networks connect with other networks to form complex adaptive systems whose outcomes are inherently unpredictable.
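
The first bullet's claim, that network value outpaces network size, is often formalized (loosely, and debatably) as Metcalfe's law: value grows with the number of possible pairwise connections, n(n−1)/2. A toy sketch, with the numbers purely illustrative:

```python
# Toy illustration of how a network's value can outpace its size.
# Metcalfe's law counts the possible pairwise connections, n*(n-1)/2 —
# one common (and contested) formalization of the claim quoted above.

def possible_connections(n):
    return n * (n - 1) // 2

for n in [10, 20, 40]:
    print(n, possible_connections(n))
# Doubling members from 10 to 20 more than quadruples connections
# (45 -> 190), and 40 members give 780 — value, on this proxy,
# grows much faster than headcount.
```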

The interesting thing to see from these is that networks can open up new ways of working which we haven’t yet seen in organizations. For example, the idea of bypassing hierarchies. This is something which is enabled by the network. Does this lead to quicker decision-making? Probably, it does. What is the financial impact of quicker decision-making? We don’t know. Can this be measured in the context of specific decisions? I think so. An organization i was interacting with a few years ago had a servicing scenario where the service engineer, if facing a problem he could not solve, would travel back to the office and consult his manager, who would tell him the solution. He would then go back to the customer site, check whether the spares required for solving the problem were there, and if not, would come back to the office, order the spares, and repair the product once they became available. With handheld devices, the engineer could interact directly with his counterparts across the country and quickly get a solution, either through a Knowledge Base or through interactions with other service engineers, and reduce the time taken to repair. At the same time, if the spares weren’t available, the engineer could broadcast a request for spares to other engineers, who could provide them if they had stock they didn’t require. There is value in these conversations.

But these are specific examples, and on the whole, it is not so simple to determine this kind of value from conversations, or from networks. But can we at least define scenarios in which conversations can create value, in a specific context? As i have written before, measurements of possible improvements make more sense in a specific context, rather than being broad-based. And if you can take this to the context of the business process, you can at least begin to understand the applicability. Within this context, it is rather easy to identify how conversations could create value. This exercise may not always be feasible, especially if you try to do it across the organization, but it at least illustrates the value of conversations in the context of the organization.

Another thing that Jon mentions:

Continuous flows of information are the raw material of an organization’s value creation and overall performance.

This is the idea on which the concept of ERP was based, too: making the relevant information available to the relevant people, so they could take effective decisions and processes could be streamlined. The only thing is, ERPs focus on transaction processing, where data is made available across organizational silos or departments, while we are talking about ideas and experiences being made available in a similar manner. It is a little easier to quantify the impact of data sharing (production planning cycle time reduced by 20%, say).

To take an example, i am having a conversation with Nirmala about something which, we realized, we were both thinking about. The whole interaction was sparked by a comment on twitter (and also on facebook), and brought out a conversation which could lead to ideas. These ideas could have some form of value. Even so, it would be impossible to quantify this to begin with. And even then, if it is a new idea, it is easier to quantify the value, while if it is just a sharing of ideas, making people more effective in their work, then it is tricky to measure, too. Add to this that, even if nothing comes out of the idea, i would have at least learnt something, which again is very difficult to quantify.

Any thoughts, please feel free to comment.

A Conversation …

You must all be following the news, and all the goings-on at Satyam. Again, i am not a management expert, so it is better for all that i don’t comment on that and expose my ignorance. Having said that, i would say that equating this to large-scale corruption in Indian industry would be quite similar to speculating about large-scale corruption in American industry in the wake of Enron. Which, to my mind, is not reasonable.

Coming now to the point of this post … a few days back, the Satyam stock was seeing quite a bit of fluctuation in trades on the stock exchange. I was having a talk with my friend, Arvind Dixit, and was asking him who the people buying Satyam at this stage are. I won’t write here about his hypothesis, but he mentioned an interesting thing … something to the effect that if it was a brick-and-mortar company, it would have been easy to figure out what the fair value of the Satyam stock should be, but because the nature of Satyam’s business is knowledge-based, it is very difficult to determine the fair value of the stock.

I would see this based on two considerations:

1. It is inherently very difficult to determine fair value based on intangibles.
2. Since knowledge is the main ingredient of the business, and this knowledge is inherently carried by people, it becomes all the more difficult to determine the fair value, because as people leave, they take away a large part of the resources of the organization with them.

And this is what i wanted to write about … the fact that it is very difficult to determine the value of knowledge to the organization, or to society at large. However, it would not be correct to assume from this that there is no value (nobody would agree even if you said that), but the fact is, while there are exhaustive mechanisms for valuing tangibles, there is still little to value intangibles with. One could estimate value based on the projected revenues which could be generated from this knowledge, but this is at most a proxy measure.
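
The proxy measure mentioned above, valuing knowledge by the revenues it might generate, can be sketched as a simple discounted projection. The cash flows and the discount rate here are purely hypothetical numbers, and the projection itself is exactly the kind of guess that makes this a proxy rather than a valuation.

```python
# Proxy valuation of a knowledge asset: discount the revenues it is
# projected to generate. Cash flows and rate are hypothetical numbers.

def present_value(projected_revenues, discount_rate):
    return sum(
        revenue / (1 + discount_rate) ** year
        for year, revenue in enumerate(projected_revenues, start=1)
    )

# Say the knowledge is projected to generate 100 a year for three years.
value = present_value([100, 100, 100], discount_rate=0.10)
print(round(value, 2))  # → 248.69
```

Note how sensitive this is to the projections: the same mechanism gives a very different number if people leave and the projected revenues collapse, which is precisely the second consideration above.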

Of Measurements … Again

A lot of people have written a lot about the value of measurements. Most of us know the dictum that whatever can’t be measured can’t be managed. And this interesting post by Moria Levy about Measurement also starts with this dictum. But that’s where she moves away from what a lot of folks are saying.

A lot has been written about the utility, or futility, of measurements, especially when it comes to intangibles. This is because of the basic nature of something intangible, which the dictionary defines as …

existing only in connection with something else, as the goodwill of a business.

Now, if something exists only in connection with something else, how do we measure it? And is it really important to measure it? Maybe it isn’t.

Take the example of knowledge … Moria has forcefully described how and why measurement may not be the best thing to have happened to humanity since … (fill in with whatever you like!). Apart from the usual issue that most measurements we do today are about where we have been, rather than where we are headed, an important thing to point out is that in most scenarios, it is not possible to identify cause-and-effect relationships between things. It is easy if we can keep all other variables constant, but that is easier said than done. As i have written before, KM is possibly just one of many initiatives being run in the organization, and as such, it is really difficult to identify cause-and-effect relationships which can define what led to which operational improvement. Like the swimmer’s dilemma i have written about earlier (although in the context of training, it is equally applicable here).

Another aspect to this definition is that if intangibles exist only in connection with something else, the only way to measure them is by measuring those something elses. This is why i have been talking about the whole idea of proxy measures, which means that we cannot, and maybe should not, have a universal definition for the measurement of KM, but rather should derive these definitions based on the context in which they are applicable.

Measurement And E2.0 …

Back after a week … and, Diwali! Here’s wishing all of you a Happy Diwali and a Prosperous New Year. The Muhurat trading session yesterday had most stocks going up on the BSE, so that’s a nice start.

Andrew McAfee has a rather interesting conversation going … about a topic which tends to have about the most divergent views when it comes to social computing … yes, you got it … measurement. Andrew has written a rather interesting post about the whole idea of rating knowledge workers, encapsulating a large range of divergent views on the subject.

What i believe comes out of the entire discussion is that while the whole idea of putting a rating on someone’s contribution to a social computing platform goes quite against the entire idea of social computing, there has to be a way this can be addressed. After all, when we look at anything from the organizational perspective, there has to be a way of finding out whether we are on the right track, and whether there need to be changes to the way things are being done.

There could be two ways of looking at this … one could be in terms of a performance-appraisal type of rating on contributions and knowledge-sharing efforts, and the other in terms of community feedback on these. While the first could end up stifling the entire effort (because it would look at things more quantitatively than qualitatively … how many blog posts could your boss go through to give you a rating?), the second option is actually quite in line with the overall idea of social computing.

Let’s take an example … when someone from your network posts something on their profile, say, on facebook, you, and lots of others, have the means to comment on this. These comments are essentially feedback, and could work as a form of ranking on this contribution. Take this one step further, into the organizational context … if people had the possibility of giving you stars (ya, this is something i picked up from my son … they get stars for doing well at school), they could show their appreciation of whatever you have contributed. The nice part is that there is no limit to the supply of these stars … so you don’t necessarily rank someone to the exclusion of someone else, and, considered over the larger audience, this could be a reasonable way for people to show their appreciation of your work, and at the same time work well in terms of recommending things to others.

In addition to this, different people look at the same contribution from different perspectives. An expert looks at it trying to understand how well it could communicate a concept to a larger audience, a novice could look at it to learn something new, while someone who is simply trying to solve a problem would look at it from the perspective of relevance. Aggregating feedback from such diverse viewpoints would, i think, give an overall qualitative perspective.
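
The star idea from the previous two paragraphs can be sketched very simply: stars are unlimited in supply, and each one is tagged with the reader's perspective, so the aggregate stays qualitative as well as quantitative. The roles and data structures here are purely illustrative assumptions, not a description of any real system.

```python
# Sketch of unlimited, role-tagged "stars" on a single contribution.
# Roles and data are illustrative assumptions, not a real platform.
from collections import Counter

stars = []  # every reader may add stars; there is no fixed supply

def give_star(reader_role):
    stars.append(reader_role)

give_star("expert")          # liked how it communicates the concept
give_star("novice")          # learnt something new from it
give_star("novice")
give_star("problem-solver")  # found it directly relevant

by_role = Counter(stars)
print(len(stars), dict(by_role))
# The total star count gives a rough quantitative signal; the per-role
# breakdown shows *why* people valued the contribution.
```

Because nobody's star comes at the expense of anybody else's, this avoids the zero-sum ranking problem of the appraisal-style option above.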

In other words, if we take a scenario where feedback could be gathered from the larger community, this could be a reasonably nice way of understanding how the entire idea of social computing is working in the organization.

Leadership and Social Computing …

A rather interesting post by Rachel Happe … the distinction between the wisdom of crowds and mob rule … interesting reading … more so because it brings some sobriety to the euphoria around social computing. Having said that, however, the key point i think Rachel brings out is the idea of leadership. And this is something i have experienced in my interactions with different organizations.

Especially within the context of the organization, leadership plays a critical role. As i have written before, the difference between successful adoption of, and hence deriving benefits from, social computing and Knowledge Management initiatives, and the other way round, comes to a large extent from the leadership and its attitude towards these initiatives. Now, leadership is not the only parameter here, but it is definitely one of the most important parameters in determining how an organization is going to take to the larger social computing picture.

If we have an organization where leaders look askance at blogs (there are quite a few organizations where senior management, and i am equating them with leadership, look at blogging as a waste of time), then the probability of the organization adopting blogging on a large scale is quite low. Similarly for communities … one of the paradoxes about communities is that while they are supposed to be self-forming and self-governing, they really cannot sustain themselves without some amount of stimulus provided by the organization itself, and when i say organization here, i am really talking about leadership.

Which brings us to the question … how do we get the leadership to buy into these initiatives? A lot has been written about this, but more and more, the ROI concept comes in. Managers need to see the benefit the organization gets from investing time and effort into an initiative like adopting web 2.0 technologies, in order to justify the investment of resources into this rather than into other initiatives competing for the same funding. Having said this, ROI is not a concept which lends itself easily to calculation when it comes to knowledge, for reasons i have written about before. This is not to say that we can do without something which is as basic as this in the minds of the decision-makers. Now, i am not writing about a score-card here, but the relation between measures of performance (which are usually already in place) and KM initiatives is something which needs to be developed. And this, to my mind, can be developed only within the context of a specific scenario, rather than being generalized.

ROI And Training … Again

A very interesting post by Jay Cross about ROI … it got me thinking. A question which has been coming up time and again in discussions I have been having with friends is the extent to which the way we measure ROI has been responsible for the crisis the markets are facing. Or is it, at all? Hey … I am not a management guru, and hence, I don’t even claim to know whether it is or not.

There is, however, something which I have been thinking about, and this post actually brought this out quite well. Especially the part where he says …

Making strategic decisions is fundamentally different from making operating decisions. Senior leadership uses gut feel, informed judgment, and vision to set direction. Managers at lower levels decide what projects to fund by describing the logic of how they will help carry out the strategy; this is where running the numbers is useful. ROI hurdles help identify the projects with the greatest potential return. They don’t address the big picture.

This is an interesting thought, if we take it forward. When we talk about vision, we are not talking about this quarter, or the next. We are, instead, talking about a process of reaching from point A to point B, whatever these points may be. The question is, if, in this process, some of the measures take a hit for a quarter or two, giving up some short-term gains for larger long-term gains, do these trade-offs actually come onto the radar, or the intelligence dashboards, of business leaders?

Consider this … there are a number of construction projects going on in Delhi these days, in preparation for the Commonwealth Games, 2010. Now, these project sites are not a pretty sight as of now, but by the time they are completed, it is going to be a different picture altogether. Should one give up on a not-so-pretty near-term picture in order to attain a nicer picture in the long term?

In this context, let’s look at training. Let’s remember … training is usually work in progress. When people come out of a training, they have learnt some things, and they are yet to learn some more, which is where the experience of applying the concepts they have learnt on the job comes into the picture. The first question, hence, is: at what point should we measure the ROI of training? The traditional means are feedback forms which participants fill out at the end of the training, when they have no idea how relevant the training has been, and how well it has equipped them to deliver on the job. So does this mean that effectiveness should be measured at a later point? Here, the question that comes up is, to what extent can operating improvements be attributed to training, and to what extent can they be attributed to experience, on-the-job learning, or collaboration?

Let’s look at it this way … you could train someone to swim … or they could learn to swim by themselves once pushed into the deep end of the pool (with the lifeguard around, of course …). The person who was trained to swim wouldn’t be able to appreciate the effectiveness of the training, because he never experienced the effort required in learning to swim on his own, while the other person never really got trained, so again, he is not the right person to ask either.

Knowledge Scorecard …

No, i am not coming up with a new knowledge scorecard. Rather, this is about some of the things i have been reading … about measuring knowledge. Rather interesting reading, though i would think they are based on assumptions which we might want to question.

The first assumption of measuring the knowledge inventory of the organization is that the knowledge, and the person who holds the knowledge, are two separate, independent things. Not only does this treat knowledge as a thing, it also makes the assumption that you can have knowledge even if you abstract the knower from the scene. This assumption may not be quite valid. Of course, when we talk about explicit knowledge, we assume that it holds, but once we believe that all knowledge is directly or indirectly tacit, the assumption breaks down. The question that then comes up is how one measures something which doesn’t exist on its own.

Another assumption is that knowledge is a “thing” which can be measured. This assumes that knowledge is an object which can exist by itself, which, as we have seen, is not necessarily correct. Add to this the idea that what you cannot measure, you cannot manage, and the mix becomes heady … but then, the question to ask here would be … is the term management apt when it comes to KM?

The answer to this measurement dilemma, though, can be simple … we can measure something based on its manifestation. What is the manifestation of knowledge? Improvements in the way things are done. Great … this is a nice, indirect way to measure … after all, if there is no mechanism to directly measure something, then we use something indirect to measure it … think dark matter! The only thing is, this indirect measurement must change in different scenarios. In other words, it must be something which is relevant to the context in which we are measuring, as i have written before!

Value of KM

Admittedly, there’s plenty written about the subject. And we are still nowhere close to what could be a framework for measuring the value of KM. So why am i writing about this? I came across an interesting blog by Jenna Sweeney about the idea of the measurement of Training … and, looked at closely, Training and Knowledge Management are related, or so i have thought for a long time.

The basic point that Jenna is making here is that measurement must be done in the context of whatever you are measuring. And this is quite valid for the entire question of the value of KM. First of all, KM means different things to different people … and if this is so, it is quite difficult to come up with adequate measurement norms. Leave aside the fact that even if one were able to come up with these norms, it would still be very difficult to measure, because of the basic structure of knowledge. And this is something i have written about before … that when we are measuring something as nebulous as knowledge, it is a nice idea not to abstract it from its context and try to build up something generic, but instead to stick to things which are specific to the context of the measurement.

Art Fry and Social Computing

I was reminded of the story of how Post-Its were invented. Though this post is not about Post-Its, you might find it an interesting read if you look closely at the story of the Post-Its. From what i read …

The marketing people did some surveys with potential customers, who said they didn’t see the need for paper with a weak adhesive. Fry said, “Even though i felt that there would be demand for this product, i didn’t know how to explain this in words. Even if i found the words to explain, no one would understand …” Instead, Fry distributed samples within 3M and asked people to try them out. The rest was history.

The part about not being able to explain in words, and even if one found the words to explain, no one would understand, reminds me of social computing. Strange how one thing can lead to another? This, to my mind, is the beauty of human thought. One doesn’t know what thought might lead where. The interesting part here is that, like with Post-Its, senior management usually doesn’t see the need for sticky web-pages where people can scribble their thoughts. However, just give these pages to them, and they could come up with quite interesting uses. And the interesting thing is, it may not be restricted to just the usual things.

Why should a wiki be used only for maintaining project plans and communications, or for preparing presentations? Why can’t a systems administrator create a wiki for maintaining help and FAQs for the new system? Or a sales guy create a blog to keep track of the orders he has closed this quarter, so that, for reporting, he doesn’t have to go back asking for reports, but rather just goes to his blog and gets the numbers from there? Or why can’t just about anybody write down their objectives or targets for the year on a wiki page, and track their achievements against those targets in the wiki, so that, come appraisal time, one could just send the link of the wiki to their boss (if one is feeling adventurous, that is … otherwise, copy-paste and send it in an email!)?

The point i am trying to make is that, given the chance, people could come up with uses of social computing technology which were probably not even thought of. There are, of course, the usual, well-defined ways of using them, but these may be just a few among many.

Of course, if usage cannot be completely predicted, the next question that arises is whether anything like ROI can be predicted with any reasonable level of confidence. I don’t think so. Of course, the question still remains whether one could tag ROI to something as intangible as social computing at all (simply because there is usually no direct causal relationship between the tools and the outcome … the tools are the software, and the outcome occurs in the heads of people!). Though, of course, something which keeps coming back to me is that if a senior manager is to make an investment, surely they would need to make sure that it is worth it. And this, to my mind, is where the catch lies. This ROI is to be experienced, not necessarily calculated, to begin with!