False Proxies

Seth Godin coined this term last week, describing situations where we create a proxy measure for success because the true goal is more difficult to measure. It struck a chord with me, because it seemed very, very similar to things I’ve said previously about working in tech support. Namely, that if you measure “success” as the number of help desk tickets closed, you won’t get better support, you’ll get more tickets closed.

I can think of quite a few examples where we do this on a consistent basis in the business world, and the majority of them exist because we are trying to measure something that is by its nature difficult to measure. How do you measure whether someone’s service is good, or could be better? You base it on things like customer surveys, but most customers don’t fill out surveys, and generally speaking, those who do are the ones more likely to have something to complain about. The 99 people out of 100 who were relatively happy don’t have as much incentive to communicate back to you as the one who was really angry, so is their feedback a good measure of the service all of your customers are receiving? Not especially.

Now that I’m training for a living, I see yet another example of false proxies. How do you measure the effectiveness of a training class? Truth be told, the effectiveness takes a long time to show itself. It shows in the everyday work habits of the people who attended the training. If you’re training inside an organization, there may be some reward in being able to identify students who have shown improvement in their skills after taking a class, although that is still open to interpretation. (Was it effective training, or some other factor that led to the improvement?)

For outside trainers like myself, this gets even more complicated. How do we measure the success of a training class? Typically, we look for feedback from the students. We do this through traditional methods, like surveys or evaluation forms. We also do it through non-traditional methods, by paying careful attention to the students as they participate, or don’t participate, in the class. However, do the evaluation forms tell us what we really want to know? I like to joke that the easiest way to get good evaluations is to finish class early. If that’s true, then I’m not measuring success, I’m measuring my ability to let the class go early. Not exactly the goal of training.

If the surveys are a false proxy, how do we measure the success of training? Do we measure repeat customers? After all, if they have us back, that means they valued the first training class we did, right? Of course, if they have us back to train the same people, maybe that’s not such a good sign? 😉

I do measure my own success on the immediate feedback I see in a class more than on what comes back in surveys. From the front of the room, it’s easy to tell whether people are checking out on me or whether they are getting it. That interaction is an important part of understanding how to tweak the material to better fit their needs and their workflow, so when I see students giving me that interaction, I know they want to learn and are applying what I’m showing them. Of course, this makes it very difficult to measure the success of online training, where I don’t see the students and cannot read the non-verbal feedback they are giving me. So that’s not a perfect solution either. It also doesn’t help that the people I report to aren’t in the class, so they need something else to measure success by, don’t they?

Lastly, one other really important aspect of measuring the success of training is actually outside the control of a trainer. If you send me a bunch of people who really, truly do not want to learn, it doesn’t matter how good a trainer I am; the class will not give you the effect you are looking for.

Given all of the variables, how would you measure the success of a training class?
