I am in charge of a relatively big training effort for a project (approximately 45 live training sessions in 10 weeks, as well as online training opportunities) to assist with the deployment of a new piece of software. The live training alone will involve over 450 people and will be quite in-depth and hands-on. Training often plays a critical role in the successful adoption of a new product. Without the proper knowledge of how to use a system effectively, clients can become frustrated with, ambivalent about, or hostile to the change, which can ultimately undermine its success.
As a result, effective training methods are essential to the ultimate success of the product and the project that produced it. But how can we determine whether the training was useful, relevant, and appropriate for the given audience? Like any other part of a project or initiative, we'd like some accountability for the quality of the work product, so it is important to come up with ways of measuring how successful your training activities are.
While surveys of trainees and the like can help measure the perceived quality of the training itself, such information on its own does not provide a true measure of the success of the training process. You need to map the output of the training to its end goals and, from there, develop measures that provide a deeper understanding of the quality of the training.
Establish Your Outcomes
First, you need to establish your target outcomes: the end results that you foresee coming out of the training activities. Typically I start with qualitative outcome descriptions. For my project, we want to ensure that clients understand how to use the software well and can operate it independently (i.e. without needing to ask for help). We also want to ensure that clients realize how they can use the information available to them through the system to resolve data conflicts with each other rather than having to go through a third party. Lastly, we want to reduce the need for a third-party monitoring group to contact clients and enforce data resolution and processing activities.
Define Your Measures
Once you have your outcomes, you need to develop ways to measure them. These must be quantifiable results that you can use to demonstrate whether an outcome was met. For example, we could use the number of calls and e-mails to the help desk about the product as a measure of how well clients understand the software once the training has been delivered. We can also measure the number of times the third party is contacted to resolve a data issue between clients who work in different organizations. Lastly, we can count the number of times the third party must contact clients in order for data conflicts to be resolved.
If the training is related to an existing set of processes, you would ideally use measures for which you already have data (or can gather it) prior to the implementation of the product/solution. This way you have data from both before and after implementation and can analyze the impact of the product and the training.
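To make this concrete, here is a minimal sketch (in Python, using entirely hypothetical ticket data, category names, and a made-up go-live date) of how you might tally one such measure from a help desk log, split into pre- and post-implementation windows:

```python
from datetime import date

# Hypothetical help desk tickets: (date, category) pairs.
# In practice this would come from your help desk system's export.
tickets = [
    (date(2011, 3, 14), "product_usage"),
    (date(2011, 3, 20), "password_reset"),
    (date(2011, 5, 2), "product_usage"),
    (date(2011, 5, 9), "data_conflict"),
]

GO_LIVE = date(2011, 4, 1)  # assumed deployment date

def count_measure(tickets, category, start=None, end=None):
    """Count tickets in a category, optionally limited to a date window."""
    return sum(
        1
        for day, cat in tickets
        if cat == category
        and (start is None or day >= start)
        and (end is None or day < end)
    )

pre = count_measure(tickets, "product_usage", end=GO_LIVE)
post = count_measure(tickets, "product_usage", start=GO_LIVE)
print(f"Product usage tickets: {pre} before go-live, {post} after")
```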
Create Goals for Each Measure
Once your measures are in place, come up with target goals for each measure. If you have pre-deployment data, your goals can be expressed as trends rather than absolute figures (for instance, you could set a goal of a 20% reduction in help desk calls regarding process X instead of saying your goal is to have fewer than 200 calls regarding process X).
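As a rough illustration (the call counts below are hypothetical), a trend-based goal check boils down to comparing the relative change against the target:

```python
def percent_change(before, after):
    """Relative change from the pre-implementation baseline (negative = reduction)."""
    return (after - before) / before * 100

def goal_met(before, after, target_reduction_pct):
    """True if the measure dropped by at least the target percentage."""
    return percent_change(before, after) <= -target_reduction_pct

# Hypothetical numbers: 240 help desk calls about process X before go-live,
# 180 in a comparable period afterwards, against a 20% reduction goal.
before, after = 240, 180
print(f"Change: {percent_change(before, after):.1f}%")
print("20% reduction goal met:", goal_met(before, after, 20))
```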
These goals should be reviewed by the relevant stakeholders, approved by the project sponsor, and made part of the project's success criteria.
Some Considerations
If you're implementing a product that is designed to improve an existing process, you probably already have measures and targets for the productivity of the process itself once the product is deployed (e.g. increase order-taking throughput per hour by 50%). For these measures, training is only one of several factors contributing to the end result and cannot easily be isolated to assess the success of training on its own.
However, you could perform some basic 'A/B' tests with a limited subset of clients if you wish to determine how much of an impact training has on the product's overall success. Train one group of clients as usual, and then train another group with a different set of training materials, a different style, etc., or not at all. Assuming you control for other relevant factors that may affect the end result (e.g. clients' level of experience, demographics, etc.), this can be an extremely useful way to determine how important training was to the overall success of the project.
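As a rough sketch of how such a comparison could be analyzed (the contact counts are made up, and a simple permutation test stands in for whatever analysis your team prefers; it does not replace controlling for those other factors), consider:

```python
import random

# Hypothetical post-deployment data: help desk contacts per client in the
# month after go-live, for two groups that received different training.
group_a = [3, 1, 4, 2, 5, 0, 3, 2, 4, 1]   # trained as usual
group_b = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]   # alternative training materials

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)

# Permutation test: how often does a random regrouping of the same clients
# produce a difference at least as large as the one we observed?
random.seed(0)
pooled = group_a + group_b
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):])
    if diff >= observed:
        count += 1

print(f"Observed difference in mean contacts: {observed:.2f}")
print(f"Approximate p-value: {count / trials:.3f}")
```

A small, low-risk comparison like this is usually enough to show whether the training approach itself moved the needle, before you commit to rolling one version out to everyone.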
If not all of your clients have access to the same type or level of training, try to segregate their data so you don't draw erroneous conclusions about the success of your training activities.
Deploying a new product, process, or system into an organization is only as effective as the individuals who put the change into practice. Making sure that training aligns with the end objectives, and can be measured to determine its utility, is an important part of achieving the overall project's outcomes.
Jarett Hailes
Larimar Consulting Inc.
http://www.larimarconsulting.com