Learning Campaigns Part 2: Improving Learning Outcomes
In Part 1 of our Learning Campaigns blog series, we highlighted data from a number of clients using Fort Hill’s 70-20 learning activation platform to support their learning initiatives. Some initiatives were designed as extended learning campaigns; others ran as standalone challenges.
In both cases, 70-20 was used to support a specific development program, topic or project. Across the board, the level of engagement in the learning campaigns was significantly higher than in the standalone challenges.
Part 2 – Use Learning Campaigns to Improve Performance Outcomes
In Part 2, we compare the data on performance outcomes to see whether participants in learning campaigns applied their learning and reported specific achievements more consistently than those working on single challenges.
The table below shows the results from the five learning campaigns discussed in Part 1 of this blog series:
Importance of Validation
One important observation from our five learning campaign examples is the markedly stronger participant engagement and performance results on the three campaigns that required manager or coach validation of completed challenges (i.e., the participant demonstrated improved on-the-job performance related to the skill or task being practiced).
The percentage of completed challenges on the campaigns with validation was significantly higher than on the two campaigns that did not involve verified outcomes: an average of 64% vs. an average of 17%, respectively. This speaks to the value of establishing accountability for the practice and application of new skills, and of involving the individual’s manager or coach in the developmental support and review process.
When participants complete their challenges in 70-20, they are asked to identify areas in which their achievement had a positive outcome on the job (selecting up to five items from a list).
Looking at the table above, we can see that Improved Performance (the Participant’s, a Direct Report’s, or the Team’s Effectiveness) and Organizational Results were the top selected outcomes across the five campaigns. Improved Efficiency also appeared in three of the five campaigns.
The right-hand column of the table provides examples of achievement outcomes for specific challenge topics, so we can see more targeted outcome areas, such as increased customer satisfaction and service quality in the Mid-Career Leadership Academy Client Mastery challenge. This is excellent qualitative data to provide to business stakeholders, in addition to the verified performance outcomes.
Learning Campaigns versus Standalone Challenges: Setting the Context for Performance
In comparison to the campaigns, clients who used 70-20 for standalone challenges following individual programs or team initiatives saw lower performance outcomes and less consistent engagement, as follows:
~ The Biotech client’s global L&D team had high engagement in the challenges, but most people self-evaluated their progress at the 50-point LOI level (actively applying the skill on the job) or the 75-point LOI level (change is visible to others at work). Their standalone challenges focused more on generating ideas and examples of different approaches on projects that could be shared with the team through the social elements of 70-20. None of their challenges required validation upon completion, so there was little direct manager involvement or coaching.
~ Similarly, the Financial Services client’s challenges were oriented toward team initiatives that encouraged idea generation and creative input rather than specific performance improvement on skills and behaviors. If we evaluate the outcomes of a team’s social challenge using different criteria, such as the number of ideas posted and shared, the number of comments among people in the cohort, and the creative new approaches that resulted from the group’s collaboration, then these challenges can be seen as successful. The performance metrics related to the Learning Outcomes Index are less applicable here; the social metrics are far more indicative of success on the job.
Conclusions on Performance Levels
Just as we observed higher engagement levels among participants taking action on challenges in campaigns vs. standalone challenges (see Part 1 of this blog series), we have also seen higher levels of performance outcomes in the campaigns.
What is especially exciting is that among the 64% of participants who completed challenges involving manager validation of results, we see strong examples of performance improvement at the individual, team, and organizational levels.
Ideas for Your Action
1. Consider your business objectives for learning initiatives and be intentional about communicating expected outcomes. With a focus on performance improvement, and in keeping with the 70:20:10 framework, learning campaigns are an effective way to blend formal learning (the 10) with experiential learning (the 70) and social learning (the 20), all directed toward building skills and capabilities on the job.
2. Adding a validation step, involving an individual’s manager or someone at work who can observe and verify behavior change and improved results, will greatly increase learner accountability for practice and application.
In Part 3 of this blog series, we will take a look at the social aspects of learning in campaigns vs. standalone challenges, including the frequency of peer coaching and responses to the 70-20 Sentiment Index™.