Few will argue that knowledge fuels businesses and circular economies. More precisely, not all knowledge does - only know-how that rises above its public content. Unlike public knowledge, know-how adds explicit value: it drives decision-making in business and commerce.
This value decays over time as know-how mutates into public form readily available on the internet. (Knowledge itself is never destroyed, consumed, or rendered irrelevant; it has no shelf life.) By analogy with entropy, this natural and chaotic process must somehow be counteracted by the deliberate creation of new know-how to keep businesses moving.
This law of know-how conservation immediately raises three simple questions - the starting point of a modern knowledge management system (KMS).
- Where does the boundary between know-how and public domain lie?
- Can know-how decay be controlled?
- What are the conditions for know-how creation?
I deliberately exclude the task of identifying and securing know-how. One cannot expect a KMS to sense a corporate knowledge gap or the need for new knowledge; a different, risk-based approach is needed, which I discussed in this article.
The open KMS described above focuses on the public-private interface of knowledge, while a conventional KMS focuses on internal corporate activities and self-gained experience (a product-centric approach).
In 2015 I spoke with a top engineering executive of a leading EPC contractor that had lost a desalination mega-project in Chile. He authoritatively told me that their high price was the reason. He was genuinely shocked when I showed him the true price, published on the winner's website together with details of the project he had considered know-how. That price was considerably higher. Open KMS is what transforms a cheap-offer business into a quality-offer one.
Inevitable staff turnover, knowledge sharing, and the shedding of non-core product businesses - an "anti-diversification" chemotherapy for doomed companies - are the major accelerators of know-how decay.
All know-how owned by corporations is authored by individuals, who keep to themselves the implicit prerequisites for its creation - the know-what and know-why that form so-called tacit knowledge. Know-why is an understanding of the principles underlying phenomena; know-what helps us judge which phenomena are worth pursuing.
When such a person leaves the company, know-how reproduction silently dies. Statistics show that the average tenure of an employee at companies like Facebook, Google, Amazon, and Oracle is between 1.5 and 2.5 years. In heavy industries the figures are much higher - between 3 and 5 years - but trending towards the first group.
A prime example of massive knowledge sharing is infrastructure mega-projects: the number of engineering services providers doubles after each executed project. Just compare the resumes of subcontractors before and after such projects as the Carlsbad (USA, 2015) and Ashdod (Israel, 2016) desalination plants.
I think GE is today's best illustration of the unexpected consequences of anti-diversification. To accelerate its conversion into a software company, GE sold off its water business - a viewing glass into the utterly fragmented and stagnant water industry, a potential customer of its software crying out for uberization.
Know-how does not reliably and economically produce new know-how - public knowledge does. This contrarian conclusion is backed by Xylem's recent decision to move from in-house R&D and startup accelerators to worldwide startup tracking and acquisition. Many companies have already followed suit. In other words,
a KMS should follow disruption (which has moved from discrete product and service technologies to e-ecosystems) and track the public domain first, not corporate knowledge.
How to start tracking public knowledge?
Before answering this question, we need to understand how this knowledge is consumed in a specific business. In infrastructure projects, for example, EPC contractors are the biggest consumers. In my experience, over 50% of a project's folder storage is allocated to data downloaded from the internet - a working alternative to the limited bookmark storage and navigation capabilities of web browsers.
How large is this data stored in plain file directories? For a mid-size desalination project it exceeds 50 GB, and it is hardly ever reused in other projects.
The CRENMARKS application and Chrome extension offered by Crenger.com are the ultimate solution to this problem. They prioritize, categorize, validate, navigate, and store bookmarks (web links and images) in the cloud, making them shareable across projects and companies. Users may comment on, rank, or edit a bookmark online.
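The feature list above implies a simple underlying data model: each bookmark carries a category, a priority, and user feedback, and validation can be as basic as checking that the link is well-formed. A minimal Python sketch of such a model (all names are hypothetical illustrations, not the actual CRENMARKS schema):

```python
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class Bookmark:
    """A shareable project bookmark (hypothetical model, not CRENMARKS's schema)."""
    url: str
    category: str = "uncategorized"
    priority: int = 0                      # higher = more relevant to the project
    comments: list = field(default_factory=list)
    ranks: list = field(default_factory=list)

    def is_valid(self) -> bool:
        # A link counts as "valid" here if it parses as an absolute http(s) URL.
        parts = urlparse(self.url)
        return parts.scheme in ("http", "https") and bool(parts.netloc)

    def average_rank(self) -> float:
        return sum(self.ranks) / len(self.ranks) if self.ranks else 0.0

def categorize(bookmarks):
    """Group bookmarks by category, highest priority first within each group."""
    store = {}
    for b in bookmarks:
        store.setdefault(b.category, []).append(b)
    for group in store.values():
        group.sort(key=lambda b: b.priority, reverse=True)
    return store

marks = [
    Bookmark("https://example.com/membranes", category="RO membranes", priority=2),
    Bookmark("ftp://legacy.example.com/spec", category="RO membranes"),
]
by_category = categorize(marks)
```

Once records like these live in a shared cloud store rather than in per-project file directories, the same categorized, ranked links become reusable across projects instead of being downloaded again for each one.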