The traditional way of thinking about learning and development in organisations is to consider training needs in terms of the gap between the organisation’s current capabilities and those it wants to develop. Arguably, the most basic of all toolkits for human resource development (HRD) professionals is a step-by-step approach that starts with the training needs analysis, or TNA (sometimes called a training needs assessment), followed by training design and delivery and ending in training evaluation. This process is illustrated in Figure 1.
Figure 1: A traditional training approach
For decades now, HRD theory has emphasised the value of this sort of systematic approach to training, which seeks to put human development activities into similar sorts of methodological frameworks as those used for business planning or IT systems design. Successful provision of training and development thereby sits alongside other key functions in business planning, and stakeholders are encouraged to focus on the following key issues:
Most commentators agree that the first step, the TNA, is the most important part of the training lifecycle. This is where the gaps between current and desired capabilities are assessed – that is, where the scale as well as the nature of the training requirement begins to become clear. A classic TNA will usually examine these needs at three levels – organisation, job-task and individual.
Organisational analysis: This is where the TNA links to corporate strategy (or equivalent for the non-corporate sector) and the HRD strategy. Here you will consider how well the organisation as a whole is equipped to deal not only with current challenges, but also with future skills needs, to the extent that these can be predicted based on developments in strategy, or, for example, the introduction of new technologies.
You may use data from your workforce planning activities to assess the impact on your organisation of a variety of issues such as employees reaching retirement age, getting promoted and therefore needing to be back-filled, or managing short- and long-term sick leave.
A key consideration at this level is to get input from leaders and other key stakeholders on the assumptions you are making about the future direction of the organisation, and the skills the organisation will therefore need to build, recruit, retain and potentially phase out.
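To make the workforce planning point above a little more concrete, the sketch below shows one very simple way of flagging employees approaching retirement age. It is purely illustrative: the names, ages, roles, retirement age and three-year horizon are all hypothetical assumptions, not part of any standard TNA toolkit.

```python
# A minimal, illustrative sketch of the sort of workforce-planning check
# described above. Names, ages, roles and the retirement age are hypothetical.
RETIREMENT_AGE = 66

workforce = [
    {"name": "A. Employee", "age": 64, "role": "Engineer"},
    {"name": "B. Employee", "age": 41, "role": "Analyst"},
    {"name": "C. Employee", "age": 63, "role": "Team leader"},
]

def nearing_retirement(staff, within_years=3):
    """Flag staff who reach retirement age within the given horizon."""
    return [p for p in staff if RETIREMENT_AGE - p["age"] <= within_years]

for person in nearing_retirement(workforce):
    print(f"{person['name']} ({person['role']}) is within 3 years of retirement")
```

The same approach could be extended to cover the other issues mentioned, such as promotions requiring back-filling, by adding further fields and checks.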
Job-task analysis: This is where the analysis moves to individual jobs and roles to assess the gap between current and desired skills and capabilities. Examining job descriptions and specifications provides the basis of decisions about any gaps in capability levels.
There is an important link between this analysis and any business process reengineering (BPR) work that the organisation is undertaking. BPR often results in a significant demand for the development of new skills and/or the refinement of existing skills to adapt to new technologies and/or processes.
One further term you may hear in this context is ‘job family’. Job families are groups of jobs that involve the same or similar kinds of work, and which therefore require the same or similar skills, attitudes and behaviours. Clustering jobs into families can make training planning and delivery more efficient, as well as being useful for other HR activities, such as remuneration, reward and career progression.
Individual analysis: This is where the link is made between each individual’s training needs and their overall performance management and appraisal. If an employee’s appraisal reveals problems with performance, then often the most obvious step is to recommend training to fill the gap and help the employee to meet the desired performance standard.
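To illustrate the core idea running through all three levels – comparing current with desired capabilities – the sketch below shows one very simple way the gap calculation could be expressed in code. It is a minimal example, not a standard TNA tool; the skill names and the 1–5 proficiency scale are invented for illustration.

```python
# A minimal, illustrative skills-gap calculation.
# The 1-5 proficiency scale, skill names and ratings are hypothetical.

current = {"data analysis": 2, "negotiation": 4, "project management": 3}
desired = {"data analysis": 4, "negotiation": 4, "project management": 5}

def skill_gaps(current, desired):
    """Return the skills where the desired level exceeds the current level."""
    return {
        skill: desired[skill] - current.get(skill, 0)
        for skill in desired
        if desired[skill] > current.get(skill, 0)
    }

for skill, gap in sorted(skill_gaps(current, desired).items()):
    print(f"{skill}: gap of {gap} level(s)")
```

The same comparison could be run at organisational, job-task or individual level, simply by changing what the ‘current’ and ‘desired’ profiles describe.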
A competency-based approach
TNAs tend to reflect a ‘competency’ approach to learning and development. There are many different kinds of competency models, but the fundamental idea is that ‘competency’ is an umbrella term which encompasses different sorts of training needs, often categorised into three areas: skills, attitudes and behaviours. These categories are intended to reflect the different aspects of workplace performance – that is, both what people do and how they do it. Competency approaches therefore attempt to reflect both ‘hard’ and ‘soft’ abilities and aptitudes required by the organisation.
In the past, HRD professionals drew a distinction between ‘competence’ and ‘competency’. The term ‘competence’ (plural competences) was used to describe what people need to do to perform a job, and was concerned with effect and output, rather than effort and input. ‘Competency’ (plural competencies) described what lies behind competent performance, such as critical thinking, analytical skills or interpersonal qualities. These days, however, there is growing awareness that job performance requires a mix of skills, attitudes and behaviours. The terms ‘competence’ and ‘competency’ are now used interchangeably to reflect this mix.
The main tools you can use to gather data for TNA work include the following:
Figure 2 shows how some of these different sources of data can be used to inform each of the levels of TNA work.
Figure 2: Data sources for different levels of TNA work
Once you have collected your data on current versus desired capabilities, you can begin to sketch out an overall design for the training programme. This will typically include:
At this stage, the programme design is provisional – a ‘sketch’. It reflects your first ideas and working assumptions about the training provision you think is required. At the TNA stage, you need to document these elements so that you can get feedback and endorsement for your approach from key organisational stakeholders. As you move into the subsequent training design, development and delivery phases, you may well need to revisit some of these ideas and assumptions.
An important step in TNA work is the selection of delivery method(s) for the training needs that you have identified. The criteria for selection of methods are likely to include:
Options for training delivery typically include:
One of the most vibrant debates among instructional designers concerns the use of technology for training. As a student on this course, you will already have a sense of some of the advantages and disadvantages of technology-based training, such as distance learning; if you are or have been an OU student, you are likely to have come across computer simulations, webinars and certainly online discussion forums (synchronous and asynchronous). It used to be assumed that technology-based training would be cheaper than face-to-face methods. However, both empirical and anecdotal evidence suggest that any savings associated with reduced travel costs and facilitator time are normally offset by increased spending on IT equipment and support (Kraiger, 2003).
One technology-based approach that is attracting a lot of attention among theorists, practitioners and in the media is MOOCs (massive open online courses). MOOCs are designed for unlimited participation and open access via the web, and are built around the principle of sharing knowledge and resources.
The final stage in the standard approach to training (as shown in Figure 1) is evaluation. Although evaluation typically takes place at the end of the training cycle, deciding on the approach to evaluation is something which should normally be part of TNA work: evaluation criteria should be built into a training programme from the outset, and not as an afterthought!
A great deal of work in this area is based on the Kirkpatrick (1979) model, which has become a classic in the field of instructional design. It is easy to understand, well tested and forms something of a common currency among training evaluators and HRD professionals. The model proposes four levels of evaluation:

- Level 1: Reactions
- Level 2: Learning
- Level 3: Behaviour
- Level 4: Results
Reactions are usually captured using attitude questionnaires or surveys administered at the end of a course. These ask students what they thought of the programme, whether the setting was conducive to learning, which parts they particularly liked, and whether there were any aspects they did not like. Questionnaires measure subjective perceptions of training, not whether it will have any impact on behaviour or performance. This subjectivity is both a strength and a limitation. On the one hand, such surveys can capture rich, often qualitative, data on the student experience, sometimes revealing aspects of the training that course designers and facilitators may not have been aware of. On the other hand, by focusing on students’ likes and dislikes, such surveys may distort an instructional design towards what will be popular and/or enjoyable, rather than what will be most effective or informative.
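As a simple illustration of how level 1 reaction data might be summarised, the sketch below averages hypothetical Likert-scale (1–5) responses per question. The questions, ratings and participant numbers are all invented for the example.

```python
from statistics import mean

# Hypothetical level 1 (reaction) survey responses on a 1-5 Likert scale,
# keyed by question; each list holds one rating per participant.
responses = {
    "The content was relevant to my job": [4, 5, 3, 4, 4],
    "The setting was conducive to learning": [3, 4, 4, 2, 3],
    "I would recommend this course": [5, 4, 4, 5, 3],
}

for question, ratings in responses.items():
    print(f"{question}: mean {mean(ratings):.1f} (n={len(ratings)})")
```

Averages like these capture only the quantitative side of reactions; the richer, qualitative comments mentioned above would still need separate analysis.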
Figure 3: Popularity or effectiveness?
Learning relates to the absorption of new knowledge and content. Evaluation at this level is usually undertaken using pre-test/post-test comparison – that is, a measurement of the changes in skills and/or knowledge that can be directly attributed to the training intervention. Formal assessments, qualifications and exams are all examples of measuring achievement at this level of evaluation.
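A level 2 pre-test/post-test comparison might be summarised along the lines of the sketch below. The scores are invented, and a real evaluation would also need to consider test reliability and, ideally, a control group before attributing the gain to the training.

```python
from statistics import mean

# Hypothetical pre- and post-training test scores (percent) for the
# same participants, listed in the same order.
pre = [55, 60, 48, 70, 62]
post = [72, 68, 61, 75, 70]

gains = [after - before for before, after in zip(pre, post)]
print(f"Mean pre-test score:  {mean(pre):.1f}")
print(f"Mean post-test score: {mean(post):.1f}")
print(f"Mean gain:            {mean(gains):.1f} points")
```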
Behaviour refers to the successful application of learning – that is, the transition from the classroom to the workplace. Level 3 evaluations can be performed using formal assessment or through more informal approaches, such as observation. This sort of evaluation normally needs to be conducted by someone with in-depth understanding of the job in question and the degree of performance improvement that can realistically be expected from the training. This type of training evaluation should be aligned with performance management reviews for the individual trainee.
Results refer to the link between impact on the job and impact on the organisation. If training has been well designed and has met its objectives in terms of individual performance (level 3), there should be a feed-through to enhanced business performance. It is at this level that training can start to be evaluated in terms of its return on investment (ROI). Thus, HRD strategy often involves gauging the rate of return for an organisation’s investment in its people. Training and development often make up substantial proportions of this investment, so level 4 evaluation is considered a crucial competency for HRD professionals involved in corporate and business strategy.
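One widely used formulation of training ROI, often attributed to Jack Phillips as an extension of Kirkpatrick’s model, expresses net programme benefits as a percentage of programme costs. The sketch below shows only the arithmetic; the monetised benefit and cost figures are invented, and putting credible numbers on the benefits is, in practice, the hard part.

```python
def training_roi(benefits, costs):
    """Return ROI as a percentage: net benefits divided by costs."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: monetised benefits of 60,000 against programme
# costs of 40,000 gives a 50% return on the training investment.
print(f"ROI: {training_roi(60_000, 40_000):.0f}%")
```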
Instructional designers often try to work all four levels into their evaluation strategy for a programme. By progressing through each level, they can build a kind of ‘chain of evidence’ which can connect individual participant reactions with organisational performance. Having the right conditions for learning (level 1) enables the acquisition of new knowledge and skills (level 2). This lays the foundation for learning to be applied back in the workplace (level 3), which in turn should have an impact on organisational or business performance (level 4).
Although very basic (Holton, 1996), the Kirkpatrick model continues to form the basis of many decisions about evaluation. Other models have been developed more recently, and it is useful to view these as extensions or modifications of the classic Kirkpatrick approach. For instance, the CIPD recommends the ‘RAM’ approach (Bee and Bee, 2007), which focuses on the need for:
Contemporary discussions also highlight the crucial importance of the human skills of insight and intuition in HRD (Sadler-Smith, 2008). If we can supplement the formal criteria of the Kirkpatrick model and its successors with ‘gut feel’ about what will or will not work, we can move towards a more holistic approach to evaluation. After all, theories about the way we think have evolved to incorporate both our rational and our instinctive capabilities. You may have heard of, perhaps even read, Daniel Kahneman’s best-seller Thinking, Fast and Slow (Kahneman, 2011). It suggests that intuition is ‘fast’ thinking: not the result of systematic, logical evaluation, but nonetheless a vital aspect of how we operate as human beings – a different kind of ‘expertise’. As Kahneman puts it:
[E]ach of us performs feats of intuitive expertise many times each day. Most of us are pitch perfect in detecting anger in the first word of a telephone call, recognise as we enter a room that we were the subject of the conversation, and quickly react to subtle signs that the driver of the car in the next lane is dangerous.
(Kahneman, 2011, p. 11)
Kahneman’s words remind us that the skills of any activity of evaluation involve this kind of intuitive fast thinking, as well as the more systematic, slow kind.
Recent developments in HRD thinking have highlighted the crucial significance of the ‘real world’ context of TNA activities. This involves acknowledging the social networks and power relations of organisational life, including the influence of self-interest, self-promotion and/or self-preservation among the organisational members whose opinions you are seeking in your TNA research. For instance, when employees are asked to describe the constitutive components of their jobs, they may be motivated to describe an ideal job performance rather than a realistic one, or to over-emphasise the complexity of the job in order to boost their own profile in the organisation. Questions such as whether a formal qualification is essential for a job’s performance may well be a matter of opinion, rather than unchallengeable fact, perhaps revealing organisational members’ personal prejudices.
Clarke (2003) presents a useful list of questions for HRD professionals, highlighting some of the key issues associated with the politics of TNAs. You may find some of these questions useful when completing Activity 2, which follows.
Self-interest
Organisational conflict
Allow around 60 minutes for this activity
Think about your current role, or one you may have in the future, or one with which you are familiar from your previous experience. Using the content of this course so far, note down your answers to the following questions:
Keep hold of your notes, because you will need to refer to them in the next activity.