2.5 Design implications
The difficulties just described have very practical implications for the design of technologies. Consider the following quotations:
in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world …
a knowledge representation is a set of ontological commitments. It is unavoidably so because of the inevitable imperfections of representations. It is usefully so because judicious selection of commitments provides the opportunity to focus attention on aspects of the world we believe to be relevant.
… In telling us what and how to see, they allow us to cope with what would otherwise be untenable complexity and detail. Hence the ontological commitment made by a representation can be one of the most important contributions it offers.
Classification systems provide both a warrant and a tool for forgetting.
The classification system tells you what to forget and how to forget it.
The argument comes down to asking not only what gets coded in but what gets read out of a given scheme.
The first quotation is from a group of knowledge engineers; the following three are from anthropologists studying the impact of ‘professionalisation’ and information technology in organisations. All draw attention to the ontological commitments that we make in choosing a representation: it acts as a filter on the inevitably messy complexity of the world we wish to describe. In the process of simplifying a problem in order to codify it systematically, whether for human or computer analysis, we may also be systematically filtering out critical, tacit, situated knowledge, simply because it is hard to systematise and formalise. It is important, therefore, not to generalise before understanding the particular. The art of representation raises two fundamental questions:
What am I going to represent? Do we understand the world we are trying to describe in enough depth to know what detail can be safely ignored?
How will this representation scheme be used, by whom, with what training? How can we assist interpretation through training (that is, changing the people), and/or by the careful design of representations (that is, changing the computer)?
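The ontological commitment the quotations describe surfaces in something as mundane as a record schema. The sketch below, using entirely hypothetical field names for a helpdesk incident log, shows how the act of choosing fields decides in advance what gets coded in and what gets read out:

```python
from dataclasses import dataclass

# Hypothetical schema for logging helpdesk incidents. Choosing these
# fields is an ontological commitment: the scheme can "see" categories,
# timestamps and owners, but the workaround a technician improvised on
# the spot, or the caller's frustration, has nowhere to go and is
# systematically forgotten.
@dataclass
class Incident:
    category: str    # must fit a predefined vocabulary
    opened_at: str   # ISO 8601 timestamp
    owner: str       # who is accountable
    resolution: str  # free text: the only refuge for situated detail

ticket = Incident(
    category="network",
    opened_at="2024-03-01T09:30:00",
    owner="j.smith",
    resolution="Rebooted switch; real fix was re-seating a loose cable",
)
# What gets coded in: one more countable "network" incident.
# What gets read out in reports: category frequencies, never the cable.
```

The point is not that the schema is badly designed; any schema makes such commitments. The design question is whether the commitments were made judiciously, with an eye on what the eventual readers of the data will need.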
In the light of our discussion, Box 2.2 re-expresses our conception of how information, knowledge and representations interrelate, together with some key design criteria.
Box 2.2 Criteria for knowledge representation
Human knowledge begins as tacit, uncodified and situated understanding, and evolves through interaction with the world and symbolic representations, which are subject to continuous, active interpretation. How can computers support the process of making human knowledge more explicit – and hence shareable – without in the process:
freezing the knowledge in an inert state which cannot keep up with the changing world it claims to describe?
distorting the knowledge because the representations used to codify it are not rich enough to express important aspects of the world?
disrupting the work that people have to do because of the difficulty of encoding knowledge in computer-readable form?
Essentially, we see knowledge as arising from the interaction between people and information, mediated via representations. Since individuals can read different interpretations into the same representation, we cannot talk about stored knowledge whose meaning is fixed and unambiguous. Meaning is the understanding that emerges as the result of an interpretative process.
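The claim that different individuals can read different interpretations into the same representation can be made concrete with a small, hypothetical example: a stored priority code whose meaning depends entirely on the interpretive convention of the community reading it.

```python
# Hypothetical example: one stored representation, two communities of
# practice. Nothing in the stored value fixes its meaning; the meaning
# emerges only in interpretation.
record = {"ticket": 101, "priority": 1}

def support_team_view(rec):
    # Convention A: a rank-style scale where 1 = most urgent
    return "urgent" if rec["priority"] == 1 else "routine"

def survey_team_view(rec):
    # Convention B: a rating-style scale where 1 = least important
    return "routine" if rec["priority"] == 1 else "urgent"

print(support_team_view(record))  # -> urgent
print(survey_team_view(record))   # -> routine
```

The same bits yield opposite readings, which is why we cannot speak of stored knowledge whose meaning is fixed and unambiguous: the representation travels between communities, but the interpretive conventions do not travel with it.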
An approach focused on representations and situated interpretation within communities of practice leads us to questions rarely raised by a technocentric perspective:
What communities of practice need to be considered? These are the generators and consumers of knowledge within the organisation.
What representations will help bridge the boundaries between communities?
What expertise is required to interpret a given information source appropriately?
Who gets to design the representations which will be embedded in the system?