Week 9: 26th June – 2nd July (Activities: 4.1 and 4.2)
The traditional way of thinking about learning and development in organisations is to consider training needs in terms of the gap between the organisation’s current capabilities and those it needs to develop. Arguably the most basic of all toolkits for HRD professionals is a step-by-step approach to identifying and meeting these needs, such as the one shown in Figure 4.1.
For decades now, HRD theory has emphasised the value of this sort of systematic approach to training, which seeks to put human development activities into similar sorts of methodological frameworks as those used for business planning or IT systems design. Successful provision of training and development thereby sits alongside other key functions in business planning, and stakeholders are encouraged to focus on the following key issues:
- the importance of articulating the desired outcomes from the training, i.e. how will you know that the training has been a success?
- the need for congruence between individual and organisational goals
- the importance of practical issues of scheduling and costing time for participants and facilitators
- the need for stakeholder engagement, sponsorship and support for the training programme, and for stakeholders’ understanding of the impact it will have on the organisation, both short- and longer-term.
Training needs analysis (TNA)
Most commentators agree that the first step, the training needs analysis or TNA (sometimes called a training needs assessment), is the most important part of the training life cycle. This is where the gaps between current and desired capabilities are assessed; that is, where the scale as well as the nature of the training requirement begins to become clear. A classic TNA will usually examine these needs at three levels – organisational, job-task and individual.
Organisational analysis: This is where the TNA links to corporate strategy (or equivalent for the non-corporate sector) and the HRD strategy work that you covered in Unit 3. Here you will consider how well the organisation as a whole is equipped to deal not only with current challenges, but also with future skills needs, to the extent that these can be predicted based on developments in strategy, the introduction of new technologies, etc.
You may use data from your workforce planning activities to assess the impact on your organisation of employees reaching retirement age, employees being promoted and therefore needing to be back-filled, anticipated levels of employee turnover, short- and long-term sick leave, etc. A key consideration at this level is to get input from leaders and other key stakeholders on the assumptions you are making about the future direction of the organisation, and the skills the organisation will therefore need to build, recruit, retain and potentially phase out.
Job-task analysis: This is where the analysis moves to individual jobs and roles to assess the gap between current and desired skills and capabilities. Examining job descriptions and specifications provides the basis of decisions about any gaps in capability levels.
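This kind of gap assessment can be sketched in code. The following Python snippet is purely illustrative: the skills and the 1–5 proficiency ratings are hypothetical, not drawn from any real job specification, and real TNA work would of course involve richer data than a single rating per skill.

```python
# Hypothetical example: desired vs current proficiency ratings (scale 1-5)
# for a single role. A gap exists wherever the current rating falls short
# of the desired one.

desired = {"data analysis": 4, "report writing": 3, "stakeholder management": 4}
current = {"data analysis": 2, "report writing": 3, "stakeholder management": 3}

gaps = {
    skill: desired[skill] - current.get(skill, 0)
    for skill in desired
    if desired[skill] > current.get(skill, 0)
}

print(gaps)  # {'data analysis': 2, 'stakeholder management': 1}
```

Repeating this comparison across all the roles in a job family would give a first, rough indication of where training effort should be concentrated.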
There is an important link between this analysis and any business process reengineering (BPR) work that the organisation is undertaking. BPR often results in a significant demand for the development of new skills and/or the refinement of existing skills to adapt to new technologies and/or processes.
One further term you may hear in this context is ‘job family’. Job families are groups of jobs that involve the same or similar kinds of work, and which therefore require the same or similar skills, attitudes and behaviours. Clustering jobs into families can make training planning and delivery more efficient, as well as being useful for other HRD activities such as remuneration, reward and career progression.
Individual analysis: This is where the link is made between each individual’s training needs and their overall performance management and appraisal. If an employee’s appraisal reveals problems with performance, then often the most obvious step is to recommend training to fill the gap and help the employee to meet the desired performance standard.
A competency-based approach
TNAs tend to reflect a ‘competency’ approach to learning and development. There are many different kinds of competency models, but the fundamental idea is that ‘competency’ is an umbrella term which encompasses different sorts of training needs, often categorised into the three areas of skills, attitudes and behaviours. These categories are intended to reflect the different aspects of workplace performance, that is, both what people do and how they do it. Competency approaches therefore attempt to reflect both ‘hard’ and ‘soft’ abilities and aptitudes required by the organisation.
Box 4.1: Competence or competency?
In the past, HRD professionals drew a distinction between ‘competence’ and ‘competency’. The term ‘competence’ (plural competences) was used to describe what people need to do to perform a job and was concerned with effect and output rather than effort and input. ‘Competency’ (plural competencies) described what lies behind competent performance, such as critical thinking, analytical skills or interpersonal qualities. These days, however, there is growing awareness that job performance requires a mix of skills, attitudes and behaviours. The terms ‘competence’ and ‘competency’ are now used interchangeably to reflect this mix.

Source: Young, J. and Chapman, E. (2010) Generic Competency Frameworks (Table 1)
Tools for TNA
The main tools you can use to gather data for TNA work include the following:
- Surveys/questionnaires: These may be specifically designed for TNA work, or they may be surveys that are being administered for other purposes (e.g. to gauge employee engagement or staff satisfaction) where the data can also be used to identify training needs.
- Interviews: Instructional designers often decide to interview job-holders in order to build a richer picture of what a job entails than the one available in formal job descriptions. Interviewing job-holders can help to elicit the hidden and implicit aspects of the job, as well as the more obvious and explicit aspects.
- Assessment centres: These are often a good way of building a picture of employees’ development needs across a range of functions and activities. If used for TNA purposes, they need to be aligned with the overall performance management strategy.
- Observations: You might decide to collect data on what skills are deployed in a more naturalistic setting than that offered by interviews and assessment centres, that is, when people are engaged in their normal day-to-day activities.
- Document reviews: TNA work frequently involves examining key documents such as business plans and articulations of corporate strategy and values. This is especially useful for the organisational analysis, and for predicting future needs rather than just documenting current ones.
Figure 4.2 shows how some of these different sources of data can be used to inform each of the levels of TNA work.
Sketching out the training programme
Once you have collected your data on current versus desired capabilities, you can begin to sketch out an overall design for the training programme. This will typically include:
- expected learning outcomes
- assumptions about prior learning and current skill base
- key success factors and analysis of risks
- suggestions for delivery method
- ideas for training evaluation
- indicative timescales
- indicative costs
- stakeholder engagement strategy.
At this stage, the programme design is provisional – a ‘sketch’. It reflects your first ideas and working assumptions about the training provision you think is required. At the TNA stage, you need to document these elements so that you can get feedback and endorsement for your approach from key organisational stakeholders. As you move into the subsequent training design, development and delivery phases, you may well need to revisit some of these ideas and assumptions.
What kind of training?
An important step in TNA work is the selection of delivery method(s) for the training needs that you have identified. The criteria for selection of methods are likely to include:
- the priority and/or urgency of the learning and talent development needs
- the organisational culture and its attitude towards learning and talent development
- the type of occupation, level of seniority and qualifications/educational background of learners
- any data you have about learner styles and preferences
- any evaluation data you have of the effectiveness of previous learning and talent development interventions
- costs and budgets available.
Options for training delivery typically include:
- courses and classroom training
- in-house development programmes
- external courses and programmes, including formal qualifications
- on-the-job training, including shadowing and observation
- coaching and mentoring (which will be covered in Units 6 and 7)
- action inquiry (more of which later in this unit).
Using technology for training
One of the most vibrant debates among instructional designers concerns the use of technology for training. As participants in this programme, you will already have a sense of some of the advantages and disadvantages of technology-based training, such as distance learning (DL), computer-based training (CBT), computer simulations, webinars, online discussion forums, etc. It used to be assumed that technology-based training would be cheaper than face-to-face methods. However, both empirical and anecdotal evidence suggest that any savings associated with reduced travel costs and facilitator time are normally offset by increased spending on IT equipment and support (Kraiger, 2003).
One technology-based approach that is attracting a lot of attention among theorists, practitioners and in the media is MOOCs (massive open online courses). MOOCs are designed for unlimited participation and open access via the web, and are built around the principle of sharing knowledge and resources.
Activity 4.1: What is a MOOC? (15 minutes)
Training evaluation
The final stage in the standard approach to training (as shown in Figure 4.1) is evaluation. Although evaluation typically takes place at the end of the training cycle, deciding on the approach to evaluation is something which should normally be part of TNA work: evaluation criteria should be built into a training programme from the outset, not added as an afterthought!
A great deal of work in this area is based on the Kirkpatrick (1979) model, which has become a classic in the field of instructional design. It is easy to understand, well tested and forms something of a common currency among training evaluators and HRD professionals. The model proposes four levels of evaluation:
- Reactions
- Learning
- Behaviour (job impact)
- Results (business impact).
Level 1 – Reactions
Reactions are usually captured using attitude questionnaires or surveys administered at the end of a course. These ask students what they thought of the programme, whether the setting was conducive to learning, which parts they particularly liked, and whether there were any aspects they did not like. Questionnaires measure subjective perceptions of training, not whether it will have any impact on behaviour or performance. This subjectivity is both a strength and a limitation. On the one hand, such surveys can capture rich, often qualitative data on the student experience, sometimes revealing aspects of the training that course designers and facilitators may not have been aware of. On the other hand, by focusing on students’ likes and dislikes, such surveys may distort an instructional design towards what will be popular and/or enjoyable, rather than what will be most effective or informative. As shown in Figure 4.3, a session being good fun is not necessarily the same thing as it being useful!
Level 2 – Learning
Learning relates to the absorption of new knowledge and content. Evaluation at this level is usually undertaken using pre-test/post-test comparison; that is, a measurement of the changes in skills and/or knowledge that can be directly attributed to the training intervention. Formal assessments, qualifications and exams are all examples of measuring achievement at this level of evaluation.
Level 3 – Behaviour
Behaviour refers to the successful application of learning – that is, the transition from the classroom to the workplace. Level 3 evaluations can be performed using formal assessment or through more informal approaches, such as observation. This sort of evaluation normally needs to be conducted by someone with in-depth understanding of the job in question and the degree of performance improvement that can realistically be expected from the training. This sort of training evaluation should be aligned with performance management reviews for the individual trainee.
Level 4 – Results
Results refer to the link between impact on the job and impact on the organisation. If training has been well designed and has met its objectives in terms of individual performance (level 3), there should be a feed-through to enhanced business performance. It is at this level that training can start to be evaluated in terms of its return on investment (ROI). You will recall from Unit 3 that HRD strategy often involves gauging the rate of return for an organisation’s investment in its people. Training and development often make up substantial proportions of this investment, so level 4 evaluation is considered a crucial competency for HRD professionals in corporate and business strategy.
Current practices of evaluation
Instructional designers often try to work all four levels into their evaluation strategy for a programme. By progressing through each level, they can build a kind of ‘chain of evidence’ which can connect individual participant reactions with organisational performance. Having the right conditions for learning (level 1) enables the acquisition of new knowledge and skills (level 2). This lays the foundation for learning to be applied back in the workplace (level 3), which in turn should have an impact on organisational or business performance (level 4).
Although very basic (Holton, 1996), the Kirkpatrick model continues to form the basis of many decisions about evaluation. Other models have been developed more recently, and it is useful to view these as extensions or modifications of the classic Kirkpatrick approach. For instance, the CIPD recommends the ‘RAM’ approach (Bee and Bee, 2007), which focuses on the need for:
- Relevance: how training provision will meet the actual needs of the organisation, both now and in the future
- Alignment: how training is linked to other key HRD activities, such as performance management and reward and employee engagement, and to other functional areas, such as finance and strategy
- Measurement: how training metrics can be linked to other performance metrics and key performance indicators (KPIs).
Contemporary discussions also highlight the crucial importance of the human skills of insight and intuition in HRD (Sadler-Smith, 2008). If we can supplement the formal criteria of the Kirkpatrick model and its successors with ‘gut feel’ about what will or will not work, we can move towards a more holistic approach to evaluation. After all, theories about the way we think have evolved to incorporate both our rational and our instinctive capabilities. You may have heard of, perhaps even read, Daniel Kahneman’s best-seller Thinking, Fast and Slow (Kahneman, 2011). It suggests that intuition is fast – in other words, it is not the result of systematic, logical evaluation – but that it is nevertheless a vital aspect of how we operate as human beings: a different kind of ‘expertise’. As Kahneman puts it:
each of us performs feats of intuitive expertise many times each day. Most of us are pitch perfect in detecting anger in the first word of a telephone call, recognise as we enter a room that we were the subject of the conversation, and quickly react to subtle signs that the driver of the car in the next lane is dangerous.
Kahneman’s words remind us that the skills of any activity of evaluation involve this kind of intuitive fast thinking as well as the more systematic slow kind.
Critical thinking about TNAs
Recent developments in HRD thinking have highlighted the crucial significance of the ‘real world’ context of TNA activities. This involves acknowledging the social networks and power relations of organisational life, including the influence of self-interest, self-promotion and/or self-preservation among the organisational members whose opinions you are seeking in your TNA research. For instance, when employees are asked to describe the constitutive components of their jobs, they may be motivated to describe an ideal job performance rather than a realistic one, or to over-emphasise the complexity of the job in order to boost their own profile in the organisation. Questions such as whether a formal qualification is essential for a job’s performance may well be a matter of opinion rather than unchallengeable fact, perhaps revealing organisational members’ personal prejudices.
Clarke (2003) presents a useful list of questions for HRD professionals, highlighting some of the key issues associated with the politics of TNAs. You may find some of these questions useful when completing Activity 4.2, which follows.
Self-interest
- Who are the key stakeholders in the TNA and what are their sources of influence?
- How might the TNA and its conclusions influence the current balance(s) of power?
- What are the expressed motives for the TNA?
- Can the nature of any undisclosed motives be identified?
- How might the TNA affect job security or career prospects?
Organisational conflict
- What is the degree of conflict between organisational members with a stake in the TNA?
- What is the nature of this conflict?
- Is there a climate of openness and trust?
- How do different stakeholder groups view each other?
- Are the goals of the TNA shared?