4.20 Technologies and explicit knowledge continued
In the future we will see the fusion of statistical analyses of documents, agents, ontologies, metadata and informal annotation/discussion. Ontological tagging with metadata would allow authors to express their own deep understanding of the domain, drawing on knowledge that is not in the text of documents. This would allow experts to set a document in context in the light of developments since the document was written, or to encode relationships between documents that reveal important connections. An organisational taxonomy can also be used to map relationships between documents and specific problems/processes of concern to the organisation (for example, ‘this report describes a solution to problem X’). Vendors already market text-mining software agents with the claimed ability to extract the key concepts from any document on the web or an intranet.
Box 4.16 Technology update: sonification and audio spaces
We are constantly absorbing and processing multiple sources of auditory information, both in everyday and work contexts, without even thinking about it. So why not use sound, as well as visual techniques, to communicate complex information?
In the workplace we can use the sound of people gathering to determine when to join a meeting, or the particular whirr of a computer's disk drive to tell if a program is launching properly. We can process this information in parallel because of the modality difference: the types of information which are suitable for aural presentation are quite different from those presented visually. Sound is temporal and three-dimensional in nature, and this opens up the possibility of presenting multiple streams of time-varying information to the user simultaneously without cluttering the visual channel of communication.
Researchers are now exploring ways to integrate the two or replace visual cues with audio cues, particularly for visually impaired users. However, everyone should benefit from this research. As multimedia PCs become the standard, auditory cues are beginning to appear on our desktops to provide feedback on background processes that we do not want to visually monitor (for example, printing, copying or the arrival of email).
In the future we may see developments such as the following:
three-dimensional audio spaces which simulate (through speakers or headphones) sounds coming from different positions, making it possible to process more complex information
analysis of the performance of software by listening to ‘sonic signatures’ that can highlight errors (simulating digitally a common practice of the programmers of early valve computers!)
presenting complex, multidimensional data sets in sound, enabling patterns and trends in the data to be heard in a holistic fashion
the use of high-quality synthesised speech and non-speech sounds to enhance navigation around graphical user interfaces, for both sighted and visually impaired users
new paradigms for visually impaired computer users which replace the paper-based metaphor with metaphors more appropriate to the audio dimension
audio-based web browsers.
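The idea of presenting a data set in sound can be made concrete with a small sketch. The following is a minimal, illustrative example (not any particular product's method), assuming a simple linear mapping of data values onto pitch; it writes a tone sequence to a WAV file using only the Python standard library, so a rising trend in the data is heard as a rising melody.

```python
import math
import struct
import wave

def sonify(values, path="series.wav", rate=44100, note_s=0.25,
           f_lo=220.0, f_hi=880.0):
    """Map each data value linearly onto a pitch between f_lo and f_hi Hz
    and write the resulting tone sequence as a mono 16-bit WAV file."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat series
    frames = bytearray()
    for v in values:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        for i in range(int(rate * note_s)):
            # 16-bit signed sample at half amplitude
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# A rising trend becomes a rising melody; the dip at 2 is audible as a drop.
sonify([1, 3, 2, 5, 8, 13])
```

Real sonification systems add timbre, stereo position and rhythm as further channels, but even this one-dimensional pitch mapping shows how a trend can be perceived holistically without looking at a chart.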
Having now completed this unit, read the article ‘Rapid knowledge construction: a case study in corporate planning using collaborative hypermedia’ by Albert Selvin and Simon Buckingham Shum.
As you read, think about the following questions:
How does the ‘Rapid Knowledge Construction’ (RKC) approach position itself with respect to a key challenge highlighted in this unit: namely, the process of ‘formalising knowledge’?
How does the approach seek to address the needs of different communities of practice?
If asked to give an assessment of Compendium's potential for your situation, what would you say?
Compendium, as an example of RKC, is a semiformal approach to structuring knowledge. It is more formal than simply writing down ideas on a flipchart or slide, because it uses a notation (Questions, Ideas, Arguments) whose elements are mapped as iconic nodes and linked to create trees and networks. But it is less formal than an expert system knowledge base designed for fully automated processing, since the labels and contents of nodes in the map can be anything. This ‘relaxes’ the constraints on what can be mapped, which is critical for a tool designed to be used during meetings and, especially, to capture the perspectives of, and arguments between, diverse stakeholders. Consequently, the strategy for tackling the ‘knowledge capture problem’ is to add a degree of structure which benefits both the people (clearer structure to ideas and meetings; immediate validation of what is going into the group memory) and the computer (which can process the structures to a degree without understanding anything about the content of nodes).
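The point that the computer can process the structure without understanding the content can be illustrated with a short sketch. This is a hypothetical, simplified model (not Compendium's actual data format): node types are drawn from the fixed notation, but labels are free text the software never interprets, and purely structural queries are still possible.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a Compendium-style semiformal map (illustrative only)."""
    kind: str    # fixed vocabulary: "Question", "Idea" or "Argument"
    label: str   # free text -- anything the group wants to say
    links: list = field(default_factory=list)  # linked child nodes

def open_questions(node):
    """Walk the map and return Questions with no linked Idea yet --
    structural processing that needs no understanding of the labels."""
    found = []
    if node.kind == "Question" and not any(c.kind == "Idea" for c in node.links):
        found.append(node.label)
    for child in node.links:
        found.extend(open_questions(child))
    return found

# A fragment of a meeting map: one answered Question, one still open.
root = Node("Question", "How do we capture meeting knowledge?",
            [Node("Idea", "Use dialogue mapping",
                  [Node("Argument", "Validated live by participants")]),
             Node("Question", "Who maintains the maps afterwards?")])

print(open_questions(root))  # -> ['Who maintains the maps afterwards?']
```

Because the query inspects only node kinds and links, the group retains complete freedom over wording while the tool can still transform the regular structure, for example into an agenda of unresolved issues.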
Different communities of practice are supported (a) by making it possible for a group to meet together and work on constructing an agreed definition of the key issues, options and trade-offs, using a simple notation which all can understand; and (b) because in this case study, ‘Question’ templates were used to drive the discussions, and the software was able to process these regular structures and transform them into the required organisational documents. Since Compendium uses a fundamentally dialogic approach, it scaffolds the process of making and taking perspectives, and offers visual boundary objects for groups to agree on.
Although not discussed at length in the article (but exemplified in the NASA collaboration tools case study – Box 4.4), an additional way in which Compendium recognises the needs of different communities of practice is that each community has its own specialist tools, and, indeed, not all can be expected simply to start using Compendium. A key requirement of a tool to support sensemaking is that it can work smoothly with an organisation's existing ICT infrastructure, providing conceptual and technical ‘glue’ between otherwise disconnected ideas, documents and data. Compendium has been engineered to be technically ‘open’, so that it can import and export data to and from other systems.