Assessing the Efficacy of Your Training Program

Across all industries and specialties, training programs require a significant investment of both time and money. Yet after the instructors have come and gone, it’s often difficult for companies to determine whether the sessions were worth the resources they consumed. Even with an elaborate regimen of surveys and evaluations, collecting objective data about a training program’s effectiveness can be problematic.

Many companies use exit surveys to gauge employee reactions to training programs. These polls often ask whether employees thought the training was useful, how they expect to apply what they learned and how much they think their performance will improve. While this type of employee feedback is helpful, it is merely speculative and doesn’t provide companies with the reliable numbers they need to ensure success.

To address this issue, leaders at Sprint Nextel’s training organization, Sprint University, do two things.

First, they survey employees twice, once immediately after the training and then again 60 days later.

Second, they adjust the survey results to account for bias and overconfidence. Using the principles of estimation, isolation and adjustment espoused by Dr. Jack Phillips, they can gauge more accurately how the training helped their employees.

“We ask them to estimate their percent of improvement due to training, and then we have them isolate it by asking how critical this is to their jobs,” said Daniel Brown, Sprint Nextel program manager for the learning analytics measurement and reporting team. “Then, the third principle is adjustment, and that’s taking off 35 percent of the previous quantifiable numbers that they gave us. That bottom-line number is the API (adjusted percent of improvement due to training).”
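Taken literally, the three steps Brown describes reduce to simple arithmetic. The Python sketch below is one plausible reading of them rather than Sprint University’s actual formula: the function and parameter names are illustrative, and the assumption that the isolation question scales the raw estimate by job criticality is an inference, while the flat 35 percent discount comes directly from the quote above.

```python
# A rough sketch of the estimate-isolate-adjust arithmetic described above.
# Sprint University's exact formula isn't published; this assumes the
# isolation answer scales the raw estimate and that the adjustment removes
# a flat 35 percent, as Brown describes.

def adjusted_improvement(estimated_improvement_pct: float,
                         criticality_pct: float,
                         adjustment_rate: float = 0.35) -> float:
    """Return a hypothetical adjusted percent of improvement (API) for one respondent.

    estimated_improvement_pct: self-reported improvement due to training, e.g. 40.0
    criticality_pct: how critical the trained skill is to the job, 0-100 (isolation)
    adjustment_rate: fraction removed to correct for overconfidence (0.35 per the article)
    """
    isolated = estimated_improvement_pct * (criticality_pct / 100.0)
    return isolated * (1.0 - adjustment_rate)


# Example: an employee reports 40 percent improvement on a skill rated 80 percent
# critical. Isolation gives 32 percent; removing 35 percent of that leaves about 20.8.
print(adjusted_improvement(40.0, 80.0))  # 20.8
```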

Brown said it’s important to adjust the numbers that employees report to account for the natural learning curve. Because that curve generally peaks immediately after training, the program assessment has to account for the initial overconfidence of trainees, as well as the inevitable downturn, he said.

“The standard learning curve in academia shows that right after someone takes a course or goes through a program, their perception is, ‘I’m going to sell the world,’” he said. “They’ve got all this knowledge right at the front of their head, and they’re thinking, ‘OK, I can do all of this.’”

The API also helps Sprint Nextel interpret assessment data before it’s outdated. Because the products and technologies the company uses are updated several times each year, its training programs are too.

Therefore, learning leaders need to quickly gather information from all 62,000 employees and make improvements before the programs change, Brown said.

“In today’s world, things are moving at the speed of light — programs are changing, and more and more money is being spent,” he said. “A typical ROI analysis could take upward of six to eight months or more, and we need something that’s faster than that.”

Most importantly, the API shows Sprint Nextel’s business leaders that their money is being spent wisely — instead of offering approval ratings, the heads of Sprint University can show their executives real results.

“This API is kind of our golden key that shows that we are making a difference and that people are applying what they’re learning,” he said.
