Tag: evaluation

  • How do we measure the impact of informal and incidental learning on organizational performance?

    Evidence from learning science clearly identifies how to strengthen learning culture in ways that drive performance. However, in a recent study conducted by Learning Strategies International (LSi), we quickly found limitations and gaps in the data available from the organization examined, despite the best efforts of the organization’s staff to answer our questions and requests.

    We found two gaps that needed to be addressed before the most effective approaches to developing capabilities could be applied usefully – and their impact measured:

    1. The gap between a commitment in principle to learning and skepticism about its actual value. (This gap surprised us.)
    2. Gaps in data and reporting needed to measure internal learning (and how to improve it).

    We believe that the first gap (skepticism about the value of learning) is the direct result of the second (lack of measurement).

    Without a measure of its impact on performance, internal (staff) learning is likely to be seen as a “nice-to-have” rather than a strategic priority.

    Measurement is needed to demonstrate the correlation between internal learning and performance.

    Measurement in learning is notoriously difficult. We recognize that although internal learning is critically important, many other variables determine organizational performance.

    It would be wonderful if it were possible to draw a straight line from internal learning to specific business outcomes, but it is not.
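
    One standard way to make progress despite this is a regression that estimates the association between a learning-culture score and a performance metric while controlling for those other variables. A minimal sketch, with entirely hypothetical column names and illustrative figures:

    ```python
    # Sketch only: regress a performance metric on a learning-culture score
    # while controlling for other variables. Every column name and number
    # here is hypothetical; real data would come from surveys and reporting.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "performance":    [72, 65, 80, 58, 90, 77, 69, 84],
        "learning_score": [3.1, 2.4, 3.8, 2.0, 4.2, 3.5, 2.9, 4.0],
        "org_size":       [120, 45, 300, 60, 450, 200, 90, 350],     # headcount
        "funding":        [1.2, 0.8, 2.5, 0.6, 3.0, 1.8, 1.0, 2.7],  # USD millions
    })

    # Controlling for size and funding isolates the association between
    # learning culture and performance (an association, not a causal claim).
    model = smf.ols("performance ~ learning_score + org_size + funding", data=df).fit()
    print(model.summary())
    ```

    The coefficient on learning_score is the quantity of interest; the controls absorb variance that would otherwise obscure the relationship.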

    Recognizing the value of informal learning further complicates measurement: self-directed learning, coaching, mentoring, and other informal learning strategies have an embedded capacity to let us learn far more than we intended or expected at the outset.

    This makes such learning more difficult to measure, but far more valuable to the participant, team, and organization. This is why we recommended:

    • the use of knowledge, mission, and financial performance of an organization or network as key metrics to correlate with learning culture; and
    • an evidence-based approach (already deployed in over 8,000 organizations and adapted by LSi for global, complex humanitarian networks) to measure these three performance variables and correlate them with the dimensions of learning culture, as sketched below.
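
    A minimal sketch of that correlation step, assuming survey-derived scores. The learning-culture dimension names and performance columns below are hypothetical stand-ins, not the instrument’s actual fields:

    ```python
    # Sketch only: correlate learning-culture dimensions with the three
    # performance variables. All names and values are illustrative.
    import pandas as pd

    survey = pd.DataFrame({
        # Learning-culture dimensions (e.g., averaged 1-5 survey ratings)
        "continuous_learning": [3.2, 2.8, 4.1, 3.5, 2.5, 3.9],
        "dialogue_inquiry":    [3.0, 2.6, 4.3, 3.3, 2.2, 3.7],
        "team_learning":       [3.5, 2.9, 4.0, 3.6, 2.4, 4.1],
        # The three performance variables named above
        "knowledge_perf":      [60, 52, 78, 66, 45, 74],
        "mission_perf":        [70, 58, 85, 72, 50, 80],
        "financial_perf":      [65, 55, 82, 70, 48, 79],
    })

    dimensions = ["continuous_learning", "dialogue_inquiry", "team_learning"]
    metrics    = ["knowledge_perf", "mission_perf", "financial_perf"]

    # Pairwise Pearson correlations: one row per culture dimension,
    # one column per performance variable.
    print(survey.corr().loc[dimensions, metrics])
    ```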

    Featured image: Submarine control panel. Bowfin Submarine Museum, Pearl Harbor. Personal collection.

  • 7 key questions when designing a learning system

    In the design of a learning system for humanitarians, the following questions should be given careful consideration:

    1. Does each component of the system foster cross-cutting analysis and critical thinking competencies that are key to humanitarian leadership?
    2. Is the curriculum standardized across all components, with shared learning objectives and a common competency framework?
    3. Is the curriculum modular so that components may be tailored to focus on context-specific performance gaps?
    4. Does the system provide experiential learning (through scenario-based simulations) and foster collaboration (through social, peer-to-peer knowledge co-construction) in addition to knowledge transmission (instruction)?
    5. How are learning and performance outcomes evaluated?
    6. Are synergies between components of the learning system leveraged to minimize costs?
    7. Have costs over time been calculated correctly by estimating both development and delivery costs? (See the cost sketch below.)
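
    Question 7 is where hidden costs often surface: an option that is cheap to develop can be expensive to deliver, and vice versa. A minimal lifetime-cost sketch, assuming a simple one-time-development-plus-recurring-delivery structure with illustrative figures:

    ```python
    # Sketch only: compare lifetime costs of two hypothetical components.
    # The cost structure and all figures are illustrative assumptions.

    def total_cost(development: float, delivery_per_run: float,
                   runs_per_year: int, years: int) -> float:
        """One-time development cost plus recurring delivery costs."""
        return development + delivery_per_run * runs_per_year * years

    # An e-learning module is costly to develop but cheap to deliver;
    # a face-to-face workshop is the reverse.
    elearning = total_cost(development=80_000, delivery_per_run=500,
                           runs_per_year=10, years=5)    # 105,000
    workshop  = total_cost(development=10_000, delivery_per_run=6_000,
                           runs_per_year=10, years=5)    # 310,000
    print(f"e-learning over 5 years: ${elearning:,.0f}")
    print(f"workshop over 5 years:  ${workshop:,.0f}")
    ```

    Comparing lifetime costs rather than development budgets alone can reverse which option looks cheaper.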

    These questions emerged from the development of a learning system for market assessment last year, as we thought through how to use learning innovation to achieve efficiency and effectiveness despite limited resources.

    Photo: The Infinity Room (The House on the Rock) (Justin Kern/Flickr)