Category: Theory

  • What is the pedagogy of Teach to Reach?

    What is the pedagogy of Teach to Reach?

    In a rural health center in Kenya, a community health worker develops an innovative approach to reaching families who have been hesitant about vaccination.

Meanwhile, in a Brazilian city, a nurse has brought everyone – including families and communities – on board to integrate information about HPV vaccination into cervical cancer screening.

    These valuable insights might once have remained isolated, their potential impact limited to their immediate contexts.

    But through Teach to Reach – a peer learning platform, network, and community hosted by The Geneva Learning Foundation – these experiences become part of a larger tapestry of knowledge that transforms how health workers learn and adapt their practices worldwide.

Since January 2021, the event series has grown steadily, reaching its tenth edition in June 2024 with 21,398 participants from more than 70 countries.

Scale matters, but this level of engagement raises the question: how and why does it work?

    The challenge in global health is not just about what people need to learn – it is about reimagining how learning happens and gets applied in complex, rapidly-changing environments to improve performance, improve health outcomes, and prepare the next generation of leaders.

    Traditional approaches to professional development, built around expert-led training and top-down knowledge transfer, often fail to create lasting change.

    They tend to ignore the rich knowledge that exists in practice – what we know when we are there every day, side-by-side with the community we serve – and the complex ways that learning actually occurs in professional networks and communities.

    Teach to Reach is one component in The Geneva Learning Foundation’s emergent model for learning and change.

    This article describes the pedagogical patterns that Teach to Reach brings to life.

    A new vision for digital-first, networked professional learning

    Teach to Reach represents a shift in how we think about professional learning in global health.

    Its pedagogical pattern draws from three complementary theoretical frameworks that together create a more complete understanding of how professionals learn and how that learning translates into improved practice.

    At its foundation lies Bill Cope’s and Mary Kalantzis’s New Learning framework, which recognizes that knowledge creation in the digital age requires new approaches to learning and assessment.

Teach to Reach then integrates insights from Watkins and Marsick’s research on the strong relationship between learning culture (a measure of the capacity for change) and performance, and from George Siemens’s learning theory of connectivism, to create something syncretic: a learning approach that simultaneously builds individual capability, organizational capacity, and network strength.

    Active knowledge making

    The prevailing model of professional development often treats learners as empty vessels to be filled with expert knowledge.

Teach to Reach, drawing from constructivist learning theory, positions health workers as knowledge creators rather than passive recipients.

    When a community health worker in Kenya shares how they’ve adapted vaccination strategies for remote communities, they are not just describing their work – they’re creating valuable knowledge that others can learn from and adapt.

The role of experts is even more significant in this model: experts become “guides on the side”, listening to challenges and their contexts to identify which expert knowledge is most likely to be useful for a specific challenge in a specific context.

    (This is the oft-neglected “downstream” to the “upstream” work that goes into the creation of global guidelines.)

    This principle manifests in how questions are framed.

    Instead of asking “What should you do when faced with vaccine hesitancy?” Teach to Reach asks “Tell us about a time when you successfully addressed vaccine hesitancy in your community.” This subtle shift transforms the learning dynamic from theoretical to practical, from passive to active.

    Collaborative intelligence

    The concept of collaborative intelligence, inspired by social learning theory, recognizes that knowledge in complex fields like global health is distributed across many individuals and contexts.

    No single expert or institution holds all the answers.

    By creating structures for health workers to share and learn from each other’s experiences, Teach to Reach taps into what cognitive scientists call “distributed cognition” – the idea that knowledge and understanding emerge from networks of people rather than individual minds.

    This plays out practically in how experiences are shared and synthesized.

    When a nurse in Brazil shares their approach to integrating COVID-19 vaccination with routine immunization, their experience becomes part of a larger tapestry of knowledge that includes perspectives from diverse contexts and roles.

    Metacognitive reflection

    Metacognition – thinking about thinking – is crucial for professional development, yet it is often overlooked in traditional training.

    Teach to Reach deliberately builds in opportunities for metacognitive reflection through its question design and response framework.

    When participants share experiences, they are prompted not just to describe what happened, but to analyze why they made certain decisions and what they learned from the experience.
    This reflective practice helps health workers develop deeper understanding of their own practice and decision-making processes.

    It transforms individual experiences into learning opportunities that benefit both the sharer and the wider community.

    Recursive feedback

    Learning is not linear – it is a cyclical process of sharing, reflecting, applying, and refining.

    Teach to Reach’s model of recursive feedback, inspired by systems thinking, creates multiple opportunities for participants to engage with and build upon each other’s experiences.

    This goes beyond communities of practice, because the community component is part of a broader, dynamic and ongoing process.

    Executing a complex pedagogical pattern

The pedagogical patterns of Teach to Reach come to life through a carefully designed implementation framework over a six-month period, before, during, and after the live event.

    This extended timeframe is not arbitrary – it is based on research showing that sustained engagement over time leads to deeper learning and more lasting change than one-off learning events.
    The core of the learning process is the Teach to Reach Questions – weekly prompts that guide participants through progressively more complex reflection and sharing.

    These questions are crafted to elicit not just information, but insight and understanding.

    They follow a deliberate sequence that moves from description to analysis to reflection to application, mirroring the natural cycle of experiential learning.

    Communication as pedagogy

    In Teach to Reach, communication is not just about delivering information – it is an integral part of the learning process.

    The model uses what scholars call “pedagogical communication” – communication designed specifically to facilitate learning.

    This manifests in several ways:

    • Personal and warm tone that creates psychological safety for sharing
    • Clear calls to action that guide participants through the learning process
    • Multiple touchpoints that reinforce learning and maintain engagement
    • Progressive engagement that builds complexity gradually

    Learning culture and performance

    Watkins and Marsick’s work helps us understand why Teach to Reach’s approach is so effective.

    Learning culture – the set of organizational values, practices, and systems that support continuous learning – is crucial for translating individual insights into improved organizational performance.

    Teach to Reach deliberately builds elements of strong learning cultures into its design.

Furthermore, The Geneva Learning Foundation’s research found that continuous learning is the weakest dimension of learning culture in immunization – and probably in global health more broadly.

Hence, Teach to Reach itself provides a mechanism to strengthen this specific dimension.

    Take the simple act of asking questions about real work experiences.

    This is not just about gathering information – it’s about creating what Watkins and Marsick call “inquiry and dialogue,” a fundamental dimension of learning organizations.

    When health workers share their experiences, they are not just describing what happened.

    They are engaging in a form of collaborative inquiry that helps everyone involved develop deeper understanding.

    Networks of knowledge

    George Siemens’s connectivism theory provides another crucial lens for understanding Teach to Reach’s effectiveness.

    In today’s world, knowledge is not just what is in our heads – it is distributed across networks of people and resources.

    Teach to Reach creates and strengthens these networks through its unique approach to asynchronous peer learning.

    The process begins with carefully designed questions that prompt health workers to share specific experiences.

    But it does not stop there.

    These experiences become nodes in a growing network of knowledge, connected through themes, challenges, and solutions.

    When a health worker in India reads about how a colleague in Nigeria addressed a particular challenge, they are not just learning about one solution – they are becoming part of a network that makes everyone’s practice stronger.

    From theory to practice

    What makes Teach to Reach particularly powerful is how it fuses multiple theories of learning into a practical model that works in real-world conditions.

    The model recognizes that learning must be accessible to health workers dealing with limited connectivity, heavy workloads, and diverse linguistic and cultural contexts.

New Learning’s emphasis on multimodal meaning-making supports the use of multiple communication channels, ensuring accessibility.

    Learning culture principles guide the creation of supportive structures that make continuous learning possible even in challenging conditions.

    Connectivist insights inform how knowledge is shared and distributed across the network.

    Creating sustainable change

    The real test of any learning approach is whether it creates sustainable change in practice.

By simultaneously building individual capability, organizational capacity, and network strength, Teach to Reach creates the conditions for continuous improvement and adaptation.

    Health workers do not just learn new approaches – they develop the capacity to learn continuously from their own experience and the experiences of others.

    Organizations do not just gain new knowledge – they develop stronger learning cultures that support ongoing innovation.

    And the broader health system gains not just a collection of good practices, but a living network of practitioners who continue to learn and adapt together.

    Looking forward

As global health challenges become more complex, the need for more effective approaches to professional learning becomes more urgent.

    Teach to Reach’s pedagogical model, grounded in complementary theoretical frameworks and proven in practice, offers valuable insights for anyone interested in creating impactful professional learning experiences.

    The model suggests that effective professional learning in complex fields like global health requires more than just good content or engaging delivery.

    It requires careful attention to how learning cultures are built, how networks are strengthened, and how individual learning connects to organizational and system performance.

    Most importantly, it reminds us that the most powerful learning often happens not through traditional training but through thoughtfully structured opportunities for professionals to learn from and with each other.

    In this way, Teach to Reach is a demonstration of what becomes possible when we reimagine how professional learning happens in service of better health outcomes worldwide.

    Image: The Geneva Learning Foundation Collection © 2024

  • Taking the pulse: why and how we change everything in response to learner signals

    Taking the pulse: why and how we change everything in response to learner signals

    The ability to analyze and respond to learner behavior as it happens is crucial for educators.

    In complex learning that takes place in digital spaces, task separation between the design of instruction and its delivery does not make sense.

    Here is the practical approach we use in The Geneva Learning Foundation’s learning-to-action model to implement responsive learning environments by listening to learner signals and adapting design, activities, and feedback accordingly.

    Listening for and interpreting learner signals

    Educators must pay close attention to various signals that learners emit throughout their learning journey. These signals appear in several key ways:

    1. Engagement levels: This includes participation rates, the quality of contributions in discussions, how learners interact with each other, and knowledge artefacts they produce.
    2. Emotional responses: The tone and content of learner feedback can indicate enthusiasm, frustration, or confusion.
    3. Performance patterns: Trends in speed and volume of responses tend to strongly correlate with more significant learning outcome indicators.
    4. Interaction dynamics: Learners can feel a facilitator’s conviction (or lack thereof) in the learning process. Observing the interaction should focus first on the facilitator’s own behavior: what are they modeling for learners?
    5. Technical interactions: The way learners navigate the learning platform, which resources they access most, and any technical challenges they face are important indicators.

    Making sense of learner signals

    Once these signals are identified, a nuanced approach to analysis is necessary:

    1. Contextual consideration: Understanding the broader context of learners’ experiences is vital. For example, differences between language cohorts might reflect varying levels of real-world experience and cultural contexts.
    2. Holistic view: Look beyond immediate learning objectives to understand all aspects of learners’ experiences, including factors outside the course that may affect their engagement.
    3. Temporal analysis: Track changes in learner behavior over time to reveal important trends and patterns as the course progresses.
    4. Comparative assessment: Compare behavior across different cohorts, language groups, or demographic segments to identify unique needs and preferences.
    5. Feedback loop analysis: Examine how learners respond to different types of feedback and instructional interventions to provide valuable insights.
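
As a minimal illustration of how the temporal and comparative analyses above might be operationalized, the sketch below flags cohorts whose latest participation has dropped well below their running average. The cohort names, rates, and threshold are hypothetical assumptions, not TGLF data.

```python
from statistics import mean

# Hypothetical weekly participation rates (0-1) per cohort, oldest week first.
cohorts = {
    "English": [0.82, 0.78, 0.71, 0.60],
    "French": [0.75, 0.74, 0.76, 0.73],
}

def engagement_is_declining(weekly_rates, drop_threshold=0.10):
    """Temporal analysis: flag a cohort whose latest participation rate has
    fallen more than `drop_threshold` below its average over earlier weeks."""
    *history, latest = weekly_rates
    return mean(history) - latest > drop_threshold

# Comparative assessment: the same signal, read across cohorts.
for name, rates in cohorts.items():
    if engagement_is_declining(rates):
        print(f"{name}: engagement declining - consider adapting pacing or support")
    else:
        print(f"{name}: engagement stable")
```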

    Adapting learning design in situ

    What can we change in response to learner behavior, signals, and patterns?

    1. Customized content: Tailor case studies, examples, and scenarios to match the real-world experiences and cultural contexts of different learner groups.
    2. Flexible pacing: Adjust the rhythm of content delivery and activities based on observed engagement patterns and feedback.
    3. Varied support mechanisms: Implement a range of support options, from technical assistance to emotional support, based on identified learner needs.
    4. Dynamic group formations: Adapt group activities and peer learning opportunities based on observed interaction dynamics and skill levels.
    5. Multimodal delivery: Offer content and activities in various formats to cater to different learning preferences and technical capabilities.

    Responding to learner signals

    Feedback plays a crucial role in the learning process:

    1. Comprehensive acknowledgment: Feedback mechanisms should demonstrate to learners that their input is valued and considered. This might involve creating, at least once, detailed summaries of learner feedback to show that every voice has been heard.
    2. Timely interventions: Using real-time feedback to address emerging issues or confusion quickly can prevent small challenges from becoming major obstacles.
    3. Personalized guidance: Tailor feedback to individual learners based on their unique progress, challenges, and goals.
    4. Peer feedback facilitation: Create opportunities for learners to provide feedback to each other to foster a collaborative learning environment.
    5. Metacognitive prompts: Incorporate feedback that encourages learners to reflect on their learning process to promote self-awareness and self-directed learning.

    Balancing act

    When combined, these analyses provide clues to inform decisions.

    Nothing should be set in stone.

    Decisions need to be pragmatic and rapid.

    In order to respond to the pattern formed by signals, what are the trade-offs?

    The digital economy of effort makes rapid changes possible.

    Nevertheless, we consider the cost of each change versus its benefit.

    This adaptive approach involves careful balancing of various factors:

    1. Depth versus speed: Navigate the tension between providing comprehensive feedback and maintaining a timely pace of instruction.
    2. Structure versus flexibility: Maintain a coherent course structure while allowing for adaptations based on learner needs.
    3. Individual versus group needs: Balance addressing individual learner challenges with maintaining the momentum of the entire cohort.
    4. Emotional support versus learning structure: Provide necessary emotional support, especially in challenging contexts, while maintaining focus on learning objectives.

    Learning is research

    Each learning experience should be treated as a research opportunity:

    1. Data collection: Systematically collect data on learner behavior, feedback, and outcomes.
    2. Team reflection: Conduct regular debriefs with the instructional team to share insights and adjust strategies.
    3. Iterative design: Use insights gained from each cohort to refine the learning design for future iterations.
    4. Cross-cohort learning: Apply lessons learned from one language or cultural group to enhance the experience of others, while respecting unique contextual differences.

    Image: The Geneva Learning Foundation Collection © 2024

  • Why asking learners what they want is a recipe for confusion

    Why asking learners what they want is a recipe for confusion

    A survey of learners on a large, authoritative global health learning platform has me pondering once again the perils of relying too heavily on learner preferences when designing educational experiences.

    One survey question intended to ask learners for their preferred learning method.

    The list of options provided includes a range of items.

    (Some would make the point that the list conflates learning resources and learning methods, but let us leave that aside for now.)

    Respondents’ top choices (source) were videos, slides, and downloadable documents.

    At first glance, this seems perfectly reasonable.

    After all, should we not give learners what they want?

    As it happens, the main resources offered by this platform are videos, slides, and other downloadable documents.

    (If we asked learners who participate in our peer learning programmes for their preference, they would likely say that they prefer… peer learning.)

    Beyond this availability bias, there is a more significant problem with this approach: learner preferences often have little correlation with actual learning outcomes.

    And learners are especially bad at self-evaluating what learning methods and resources are most conducive to effective learning.

    The scientific literature is quite clear on this point.

Bjork’s 2013 article on self-regulated learning states emphatically: “learners are often prone to illusions of competence during learning, and these illusions can be remarkably compelling.”

    The study by Deslauriers et al. (2019) provides a compelling demonstration that while students express a strong preference for traditional lectures over active learning methods, they actually learn significantly more from the active approaches they claim to dislike.

    This disconnect between preference and efficacy is not surprising when we consider how learning actually works.

    Effective learning requires effort, struggle, and sometimes discomfort as we grapple with new ideas and challenge our existing mental models.

    It is not always an enjoyable process in the moment, even if the long-term results are deeply rewarding.

    Furthermore, learners (like all of us) are subject to various cognitive biases that can lead them astray when evaluating their own learning.

    The illusion of explanatory depth, for example, can cause us to overestimate how well we understand a topic after passively consuming information about it.

    None of this is to say we should ignore learner perspectives entirely.

    Motivation and engagement do matter for learning.

    But we need to be thoughtful about how we solicit and interpret learner feedback.

    Asking about preferences for specific content formats (videos, slides, etc.) tells us very little about the actual learning activities and cognitive processes involved.

    A more productive approach might be to focus on understanding learners’ goals, challenges, and contexts.

    What are they trying to achieve?

    What obstacles do they face?

    What constraints shape their learning environment?

    With this information, we can design evidence-based learning experiences that truly meet their needs – even if they don’t always match their stated preferences.

    As learning professionals, our job is not to give learners what they think they want.

    It is to create the conditions for transformative learning experiences that expand their capabilities and perspectives.

    This often means pushing learners out of their comfort zones and challenging their assumptions about how learning should look and feel.

    References

    Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444. https://doi.org/10.1146/annurev-psych-113011-143823

Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251–19257. https://doi.org/10.1073/pnas.1821936116

  • Learn health, but beware of the behaviorist trap

    Learn health, but beware of the behaviorist trap

    The global health community has long grappled with the challenge of providing effective, scalable training to health workers, particularly in resource-constrained settings.

    In recent years, digital learning platforms have emerged as a potential solution, promising to deliver accessible, engaging, and impactful training at scale.

    Imagine a digital platform intended to train health workers at scale.

Its theory of change rests on a few key assumptions:

    1. Offering simplified, mobile-friendly courses will make training more accessible to health workers.
    2. Incorporating videos and case studies will keep learners engaged.
    3. Quizzes and knowledge checks will ensure learning happens.
    4. Certificates, continuing education credits, and small incentives will motivate course completion.
    5. Growing the user base through marketing and partnerships is the path to impact.

    On the surface, this seems sensible.

    Mobile optimization recognizes health workers’ technological realities.

    Multimedia content seems more engaging than pure text.

    Assessments appear to verify learning.

    Incentives promise to drive uptake.

    Scale feels synonymous with success.

    While well-intentioned, such a platform risks falling into the trap of a behaviorist learning agenda.

    This is an approach that, despite its prevalence, is a pedagogical dead-end with limited potential for driving meaningful, sustained improvements in health worker performance and health outcomes.

    It is a paradigm that views learners as passive recipients of information, where exposure equals knowledge acquisition.

    It is a model that privileges standardization over personalization, content consumption over knowledge creation, and extrinsic rewards over intrinsic motivation.

    It fails to account for the rich diversity of prior experiences, contexts, and challenges that health workers bring to their learning.

    Most critically, it neglects the higher-order skills – the critical thinking, the adaptive expertise, the self-directed learning capacity – that are most predictive of real-world performance.

    Clicking through screens of information about neonatal care, for example, is not the same as developing the situational judgment to adapt guidelines to a complex clinical scenario, nor the reflective practice to continuously improve.

    Moreover, the metrics typically prioritized by behaviorist platforms – user registrations, course completions, assessment scores – are often vanity metrics.

    They create an illusion of progress while obscuring the metrics that truly matter: behavior change, performance improvement, and health outcomes.

A health worker may complete a generic course, but a completion certificate reveals nothing about whether their practice has changed.

    The behaviorist paradigm’s emphasis on information transmission and standardized content may stem from an implicit assumption that health workers at the community level do not require higher-order critical thinking skills – that they simply need a predetermined set of knowledge and procedures.

    This view is not only paternalistic and insulting, but it is also fundamentally misguided.

    A robust body of scientific evidence on learning culture and performance demonstrates that the most effective organizations are those that foster continuous learning, critical reflection, and adaptive problem-solving at all levels.

    Health workers at the frontlines face complex, unpredictable challenges that demand situational judgment, creative thinking, and the ability to learn from experience.

    Failing to cultivate these capacities not only underestimates the potential of these health workers, but it also constrains the performance and resilience of health systems as a whole.

    Even if such a platform achieves its growth targets, it is unlikely to realize its impact goals.

    Health workers may dutifully click through courses, but genuine transformative learning remains elusive.

The alternative lies in a learning agenda grounded in the advances of the last three decades of learning science.

    These advances remain largely unknown or ignored in global health.

    This approach positions health workers as active, knowledgeable agents, rich in experience and expertise.

    It designs learning experiences not merely to transmit information, but to foster critical reflection, dialogue, and problem-solving.

    It replaces generic content with authentic, context-specific challenges, and isolated study with collaborative sense-making in peer networks.

    It recognizes intrinsic motivation – the desire to grow, to serve, to make a difference – as the most potent driver of learning.

    Here, success is measured not in superficial metrics, but in meaningful outcomes: capacity to lead change in facilities and communities that leads to tangible improvements in the quality of care.

Global health leaders face a choice: to settle for the illusion of progress, or to invest in the deep, difficult work of authentic learning and systemic change, commensurate with the complexity and urgency of the task at hand.

    Image: The Geneva Learning Foundation Collection © 2024

  • Self-regulated learning: 8 things we know about learning across the lifespan in a complex world

    Self-regulated learning: 8 things we know about learning across the lifespan in a complex world

The work by Robert A. Bjork and his colleagues is very helpful for making sense of the limitations of learners’ perceptions. Here are 8 summary points from their paper about self-regulated learning.

    1. Our complex and rapidly changing world increasingly requires self-initiated, self-managed, and self-regulated learning, not simply during the years associated with formal schooling, but across the lifespan.
    2. Learning how to learn is, therefore, a critical survival tool, but research on learning, memory, and metacognitive processes has demonstrated that learners are prone to intuitions and beliefs about learning that can impair, rather than enhance, their effectiveness as learners.
    3. Becoming sophisticated as a learner requires not only acquiring a basic understanding of the encoding and retrieval processes that characterize the storage and subsequent access to the to-be-learned knowledge and procedures, but also knowing what self-regulated learning activities and techniques support long-term retention and transfer.
    4. Managing one’s ongoing learning effectively requires accurate monitoring of the degree to which learning has been achieved, coupled with appropriate selection and control of one’s learning activities in response to that monitoring.
    5. Assessing whether learning has been achieved is difficult because conditions that enhance performance during learning can fail to support long-term retention and transfer, whereas other conditions that appear to create difficulties and slow the acquisition process can enhance long-term retention and transfer.
    6. Learners’ judgments of their own degree of learning are also influenced by subjective indices, such as the sense of fluency in perceiving or recalling to-be-learned information, but such fluency can be a product of low-level priming and other factors that are unrelated to whether learning has been achieved.
    7. Becoming maximally effective as a learner requires interpreting errors and mistakes as an essential component of effective learning rather than as a reflection of one’s inadequacies as a learner.
    8. To be maximally effective also requires an appreciation of the incredible capacity humans have to learn and avoiding the mindset that one’s learning abilities are fixed.

    Reference:

    Bjork, R.A., Dunlosky, J., Kornell, N., 2013. Self-Regulated Learning: Beliefs, Techniques, and Illusions. Annu. Rev. Psychol. 64, 417–444. https://doi.org/10.1146/annurev-psych-113011-143823

  • Why lack of continuous learning is the Achilles heel of immunization 

    Why lack of continuous learning is the Achilles heel of immunization 

Continuous learning is lacking in immunization’s learning culture, a measure of the capacity for change.

This lack may be an underestimated barrier to the “Big Catch-Up” and to reaching zero-dose children.

    This was a key finding presented at Gavi’s Zero-Dose Learning Hub (ZDLH) webinar “Equity in Action: Local Strategies for Reaching Zero-Dose Children and Communities” on 24 January 2024.

The finding is based on analysis of large-scale learning culture measurements conducted by The Geneva Learning Foundation in 2020 and 2022, with more than 10,000 immunization staff from all levels of the health system, across job categories and contexts, responding from over 90 countries.

| Year | n | Continuous learning | Dialogue & Inquiry | Team learning | Embedded Systems | Empowered People | System Connection | Strategic Leadership |
|------|------|------|------|------|------|------|------|------|
| 2020 | 3,830 | 3.61 | 4.68 | 4.81 | 4.68 | — | 5.10 | 4.83 |
| 2022 | 6,185 | 3.76 | 4.71 | 4.86 | 4.93 | 4.72 | 5.23 | 4.93 |

TGLF global measurements (2020 and 2022) of learning culture in immunization, using the Dimensions of the Learning Organization Questionnaire (DLOQ)

    What does this finding about continuous learning actually mean?

    In immunization, the following gaps in continuous learning are likely to be hindering performance.

    1. Relatively few learning opportunities for immunization staff
    2. Limitations on the ability for staff to experiment and take risks 
    3. Low tolerance for failure when trying something new
    4. A focus on completing immunization tasks rather than developing skills and future capacity
    5. Lack of encouragement for on-the-job learning 

    This gap hurts more than ever when adapting strategies to reach “zero-dose” children.

    These are children who have not been reached when immunization staff carry out what they usually do.

    The traditional learning model is one in which knowledge is codified into lengthy guidelines that are then expected to trickle down from the national team to the local levels, with local staff competencies focused on following instructions, not learning, experimenting, or preparing for the future.

    For many immunization staff, this is the reference model that has helped eradicate polio, for example, and to achieve impressive gains that have saved millions of children’s lives.

    It can therefore be difficult to understand why closing persistent equity gaps and getting life-saving vaccines to every child would now require transforming this model.

    Yet, there is growing evidence that peer learning and experience sharing between health workers does help surface creative, context-specific solutions tailored to the barriers faced by under-immunized communities. 

    Such learning can be embedded into work, unlike formal training that requires staff to stop work (reducing performance to zero) in order to learn.

    Yet the predominant culture does little to motivate or empower these workers to recognize or reward such work-based learning.

    Furthermore, without opportunities to develop skills, try new approaches, and learn from both successes and failures, staff may become demotivated and ineffective. 

    This is not an argument to invest in formal training.

    Investment in formal training has failed to measurably translate into improved immunization performance.

    Worse, the per diem economy of extrinsic incentives for formal training has, in some places, led to absurdity: some health workers may earn more by sitting in classrooms than from doing their work.

    With a weak culture of learning, the system likely misses out on practices that make a difference.

    This is the “how” that bridges the gap between best practice and what it takes to apply it in a specific context.

The same evidence also demonstrates a consistently strong correlation between strengthened continuous learning and performance.

Investment in continuous learning is simple and costs surprisingly little given its scalability and effectiveness.

Learn more: Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

Learn more: How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

    That means investment in continuous learning is already proven to result in improved performance.

    We call this “learning-based work”.

    References

    Watkins, K.E. and Marsick, V.J., 2023. Chapter 4. Learning informally at work: Reframing learning and development. In Rethinking Workplace Learning and Development. Edward Elgar Publishing. Excerpt: https://stories.learning.foundation/2023/11/04/how-we-reframed-learning-and-development-learning-based-complex-work/

    The Geneva Learning Foundation. From exchange to action: Summary report of Gavi Zero-Dose Learning Hub inter-country exchanges. Geneva: The Geneva Learning Foundation, 2023. https://doi.org/10.5281/zenodo.10132961

    The Geneva Learning Foundation. Motivation, Learning Culture and Immunization Programme Performance: Practitioner Perspectives (IA2030 Case Study 7) (1.0); Geneva: The Geneva Learning Foundation, 2022. https://doi.org/10.5281/zenodo.7004304

    Image: The Geneva Learning Foundation Collection © 2024

  • What is the relationship between leadership and performance?

    What is the relationship between leadership and performance?

    In their article “What Have We Learned That Is Critical in Understanding Leadership Perceptions and Leader-Performance Relations?”, Robert G. Lord and Jessica E. Dinh review research on leadership perceptions and performance, and provide research-based principles that can provide new directions for future leadership theory and research.

    What is leadership? 

    Leadership is tricky to define. The authors state: “Leadership is an art that has significant impact on individuals, groups, organizations, and societies”.

    It is not just about one person telling everyone else what to do. Leadership happens in the connections between people – it is something that grows between a leader and followers, almost like a partnership. And it usually does not involve just one leader either. There can be leadership shared across a whole team or organization.

    The big question is: how does all this connecting and partnering actually get a team to perform well? That is what researchers are still trying to understand.

    What we do know about leadership

    Researchers have learned a lot about what makes a leader “seem” effective to the people around them. Certain personality traits, behaviors, speaking styles and even body language can make people think “oh, that person is a good leader.” 

    But figuring out how those leaders actually influence performance over months and years is tougher. It is hard for scientists to measure stuff that happens slowly over time. More research is still needed to connect the dots between leaders’ actions today and results years later.

    How people think about leadership matters 

    Learning science shows that how people process information shapes their perceptions, emotions and behaviors. So to understand leadership, researchers are now looking into things like:

    • How do the automatic, gut-level parts of people’s brains affect leadership moments? (This means how emotions and instincts influence leadership)
    • How do leaders’ and followers’ thinking interact?  
    • How do emotions and body language play a role?

    This research might help explain why leadership works or does not work in real teams.  

    Some pitfalls to avoid 

    There are a few assumptions that could mislead leadership research:   

    1. Surveys might not catch real leadership behavior, because people’s memories are messy. Their responses involve lots of other stuff beyond just the facts.  
    2. What worked well for leaders in the past might not keep working in a fast-changing world. They cannot just keep doing the same thing.
    3. Leaders actually have less control than we think. Their organization’s success depends on unpredictable factors way beyond what they do.

    The future of leadership research has to focus more on the complex thinking and system-wide stuff that is hard to see but really important. The human brain and human groups are just too complicated for simple explanations.

    Reference: Lord, R.G., Dinh, J.E., 2014. What Have We Learned That Is Critical in Understanding Leadership Perceptions and Leader-Performance Relations? Industrial and Organizational Psychology 7, 158–177.

  • How to overcome limitations of expert-led fellowships for global health

    How to overcome limitations of expert-led fellowships for global health

Coaching and mentoring programs, sometimes called ‘fellowships’, have been upheld as the gold standard for developing leaders in global health.

    For example, a fellowship in the field of immunization was recently advertised in the following manner.

    • Develop your skills and become an advocate and leader: The fellowship will begin with two months of weekly mandatory live engagements led by [global] staff and immunization experts around topics relating to rebuilding routine immunization, including catch-up vaccination, integration and life course immunization. […]
    • Craft an implementation plan: Throughout the live engagement series, fellows will develop, revise and submit a COVID-19 recovery strategic plan.
    • Receive individualized mentoring: Participants with strong plans will be considered for a mentorship program to work 1:1 with experts in the field to further develop and implement their strategies and potentially publish their case studies.

    We will not dwell here on the ‘live engagements’, which are expert-led presentations of technical knowledge. We already know that such ‘webinars’ have very limited learning efficacy, and unlikely impact on outcomes. (This may seem like a harsh statement to global health practitioners who have grown comfortable with webinars, but it is substantiated by decades of evidence from learning science research.)

    On the surface, the rest of the model sounds highly effective, promising personalized attention and expert guidance.

    The use of a project-based learning approach is promising, but it is unclear what support is provided once the implementation plan has been crafted.

    It is when you consider the logistical aspects that the cracks begin to show.

    The essence of traditional coaching lies in the quality of the one-to-one interaction, making it an inherently limited resource.

    Take, for example, a fellowship programme where interest outstrips availability—say, 1,600 aspiring global health leaders are interested, but only 30 will be selected for one-on-one mentoring.

    Tailored, one-on-one coaching can be incredibly effective in small, controlled environments.

    While these 30 may receive an invaluable experience, what happens to those left behind?

    There is an ‘elitist spiral’.

    Coaching and mentoring, while intensive, remain exclusive by design, limited to the select few.

    This not only restricts scale but also concentrates knowledge among the selected group, perpetuating hierarchies.

    Those chosen gain invaluable support.

    The majority left out are denied access and implicitly viewed as passive recipients rather than partners in a collective solution.

    Doubling the number of ‘fellows’ only marginally improves this situation.

    Even if the mentor pool were to grow exponentially, the personalized nature of the engagement limits the rate of diffusion.

    When we step back and look at the big picture, we realize there is a problem: these programs are expensive and difficult to scale.

    And, in global health, if it does not scale, it is not solving the problem.

Learn more: How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

Learn more: Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

    So while these programs can make a real difference for a small group of people, they are unlikely to move the needle on a global scale.

    That is like trying to fill a swimming pool with a teaspoon—you might make some progress, but you will never get the job done.

    The model creates a paradox: the attributes making it effective for individuals intrinsically limit systemic impact.

    There is another paradox related to complexity.

    Global health issues are inextricably tied to cultural, political and economic factors unique to each country and community.

    Complex problems require nuanced solutions.

    Yet coaching promotes generalized expertise from a few global, centralized institutions rather than fostering context-specific knowledge.

    Even the most brilliant, experienced coach or mentor cannot single-handedly impart the multifaceted understanding needed to drive impact across diverse settings.

    A ‘fellowship’ structure also subtly perpetuates the existing hierarchies within global health.

    It operates on the tacit assumption that the necessary knowledge and expertise reside in certain centralized locations and among a select cadre of experts.

    This sends an implicit message that knowledge flows unidirectionally—from the seasoned experts to the less-experienced practitioners who are perceived as needing to be “coached.”

    Learn more: How does peer learning compare to expert-led coaching ‘fellowships’?

    Peer learning: Collective wisdom, collective progress

    In global health, no one individual or institution can be expected to possess solutions for all settings.

    Sustainable change requires mobilizing collective intelligence, not just centralized expertise.

    Learn more: The COVID-19 Peer Hub as an example of Collective Intelligence (CI) in practice

    This means transitioning from hierarchical, top-down development models to flexible platforms amplifying practitioners’ contextual insights.

    The gap between need and availability of quality training in global health is too vast for conventional approaches to ever bridge alone.

    Instead of desperately chasing an asymptote of expanding elite access, we stand to gain more by embracing approaches that democratize development.

    Complex challenges demand platforms unleashing collective wisdom through collaboration. The technologies exist.

    In the “fellowship” example, less than five percent of participants were selected to receive feedback from global experts.

    A peer learning platform can provide high-quality peer feedback for everyone.

    • Such a platform democratizes access to knowledge and disrupts traditional hierarchies.
    • It also moves away from the outdated notion that expertise is concentrated in specific geographical or institutional locations.

    What learning science underpins peer learning for global health? Watch this 14-minute presentation at the 2023 annual meeting of the American Society for Tropical Medicine and Hygiene (ASTMH).

    What about the perceived trade-off between quality and scale?

    Effective digital peer learning platforms negate this zero-sum game.

Research on MOOCs (massive open online courses) has demonstrated that giving and receiving feedback to and from three peers through structured, rubric-based peer review achieves, when properly supported, reliability comparable to that of expert feedback alone.
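
How might such a structure guarantee that every participant both gives and receives exactly three reviews? Here is a minimal sketch using a circular-shift assignment; the scheme and names are hypothetical illustrations, not a description of any specific platform’s algorithm.

```python
def assign_peer_reviews(learners, k=3):
    """Assign each learner k peers to review using circular shifts, so that
    every learner gives exactly k reviews and receives exactly k reviews."""
    n = len(learners)
    if n <= k:
        raise ValueError("need more learners than reviews per learner")
    return {
        learners[i]: [learners[(i + shift) % n] for shift in range(1, k + 1)]
        for i in range(n)
    }

# Example with five (hypothetical) participants.
for reviewer, reviewees in assign_peer_reviews(["Amina", "Bruno", "Chen", "Dora", "Esi"]).items():
    print(reviewer, "reviews:", ", ".join(reviewees))
```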

    If we are going to make a dent in the global health crises we face, we have to shift from a model that relies on the expertise of the few to one that harnesses the collective wisdom of the many.

    • Peer learning isn’t a Band-Aid. It is an innovative leap forward that disrupts the status quo, and it’s exactly what the global health sector needs.
    • Peer learning is not just an incremental improvement. It is a seismic shift in the way we think about learning and capacity-building in global health.
    • Peer learning is not a compromise. It is an upgrade. We move from a model of scarcity, bound by the limits of individual expertise, to one of collective wisdom.
    • Peer learning is more than just a useful tool. It is a challenge to the traditional epistemology of global health education.

    Read about a practical example: Movement for Immunization Agenda 2030 (IA2030): grounding action in local realities to reach the unreached

    As we grapple with urgent issues in global health—from pandemic recovery to routine immunization—it is clear that we need collective intelligence and resource sharing on a massive scale.

    And for that, we need to move beyond the selective, top-down models of the past.

    The collective challenges we face in global health require collective solutions.

    And collective solutions require us to question established norms, particularly when those norms serve to maintain existing hierarchies and power imbalances.

    Now it is up to us to seize this opportunity and move beyond outmoded, hierarchical models.

    There is a path – now, not tomorrow – to truly democratize knowledge, make meaningful progress, and tackle the global health challenges that confront us all.

  • How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

    How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

By connecting practitioners to learn from each other, peer learning facilitates collaborative development. How does it compare to the expert-led coaching and mentoring “fellowships” that are seen as the ‘gold standard’ for professional development in global health?

    Scalability in global health matters. (See this article for a comparison of other aspects.)

Simplified mathematical modeling can compare the scalability of expert coaching (“fellowships”) and peer learning.

    Let N be the total number of learners and M be the number of experts available. Assuming that each expert can coach K learners effectively:

$$\text{Total Number of Coached Learners} = M \times K$$

For $N \gg M \times K$, it is evident that expert coaching is costly and difficult to scale.

    Expert coaching “fellowships” require the availability of experts, which is often optimistic in highly specialized fields.

    The number of learners (N) greatly exceeds the product of the number of experts (M) and the capacity per expert (K).

    Scalability of one-to-one peer learning

    By comparison, peer learning turns the conventional model on its head by transforming each learner into a potential coach who can provide peer feedback.

    This has significant advantages in scalability.

    Let N be the total number of learners. Assuming a peer-to-peer model, where each learner can learn from any other learner:

$$\text{Total Number of Learning Interactions} = \frac{N(N-1)}{2}$$

The number of learning interactions therefore scales as $O(N^2)$.

    In this context, the number of learning interactions scales quadratically with the number of learners. This means that if the number of learners doubles, the total number of learning interactions increases by a factor of four. This quadratic relationship highlights the significant increase in interactions (and potential scalability challenges) as more learners participate in the model.

    However, this one-to-one model is difficult to implement: not every learner is going to interact with every other learner in meaningful ways.

    A more practical ‘triangular’ peer learning model with no upper limit to scalability

    In The Geneva Learning Foundation’s peer learning model, learners give feedback to three peers, and receive feedback from three peers. This is a structured, time-bound process of peer review, guided by an expert-designed rubric.

    When each learner gives feedback to 3 different learners and receives feedback from 3 different learners, the model changes significantly from the one-to-one model where every learner could potentially interact with every other learner. In this specific configuration, the total number of interactions can be calculated based on the number of learners N, with each learner being involved in 6 interactions (3 given + 3 received).

Although each learner is involved in six interactions, each interaction involves two learners (a giver and a receiver), so summing six per learner counts every interaction twice. The system total is therefore $6N / 2 = 3N$.

    Therefore, the total number of learning interactions in the system can be represented as:

$$\text{Total Number of Learning Interactions} = N \times 3$$

    Given this setup, the complexity or scalability of the system in terms of learning interactions relative to the number of participants N is linear. This is because the total number of interactions increases directly in proportion to the number of learners. Thus, the Big O notation would be:

$$O(N)$$

    This indicates that the total number of learning interactions scales linearly with the number of learners. In this configuration, as the number of learners increases, the total number of interactions increases at a linear rate, which is more scalable and manageable than the quadratic rate seen in the peer-to-peer model where every learner interacts with every other learner. Learn more: There is no scale.
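
To see these regimes side by side, here is a minimal sketch in Python. The number of experts ($M = 10$) and the capacity per expert ($K = 6$) are illustrative assumptions, not programme data.

```python
def expert_coaching_capacity(m_experts, k_per_expert):
    """Learners reached by expert coaching: M x K."""
    return m_experts * k_per_expert

def all_pairs_interactions(n_learners):
    """One-to-one peer model: N(N-1)/2 interactions, i.e. O(N^2)."""
    return n_learners * (n_learners - 1) // 2

def triangular_interactions(n_learners, k=3):
    """Structured model (give k, receive k reviews): k x N interactions, i.e. O(N)."""
    return k * n_learners

for n in (100, 1_000, 10_000):
    print(
        f"N={n:>6,}: coached with M=10, K=6: {expert_coaching_capacity(10, 6)}, "
        f"all-pairs interactions: {all_pairs_interactions(n):,}, "
        f"triangular interactions: {triangular_interactions(n):,}"
    )
```

The output makes the contrast concrete: at $N = 10{,}000$, expert coaching still reaches only 60 learners, the all-pairs model implies nearly 50 million interactions, while the triangular model requires only 30,000.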

    Illustration: The Geneva Learning Foundation © 2024

  • Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

    Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

A formula for calculating learning efficacy, E, weighting each criterion by its importance, is:

$$E = \frac{S \cdot w_S + I \cdot w_I + C \cdot w_C + F \cdot w_F + U \cdot w_U}{w_S + w_I + w_C + w_F + w_U}$$

    This abstract formula provides a way to quantify learning efficacy, considering various educational criteria and their relative importance (weights) for effective learning.

| Variable | Definition | Description |
|----------|------------|-------------|
| S | Scalability | Ability to accommodate a large number of learners |
| I | Information fidelity | Quality and reliability of information |
| C | Cost effectiveness | Financial efficiency of the learning method |
| F | Feedback quality | Quality of feedback received |
| U | Uniformity | Consistency of learning experience |

Summary of the five variables that contribute to learning efficacy

Weights for each variable are derived from empirical data and expert consensus.

    All values are on a scale of 0-4, with a “4” representing the highest level.

| Scalability ($w_S$) | Information fidelity ($w_I$) | Cost effectiveness ($w_C$) | Feedback quality ($w_F$) | Uniformity ($w_U$) |
|------|------|------|------|------|
| 4.00 | 3.00 | 4.00 | 3.00 | 1.00 |

Assigned weights

    Here is a summary table including all values for each criterion, learning efficacy calculated with weights, and Efficacy-Scale Score (ESS) for peer learning, cascade training, and expert coaching.

    The Efficacy-Scale Score (ESS) can be calculated by multiplying the efficacy (E) of a learning method by the number of learners (N).

$$\text{ESS} = E \times N$$

    This table provides a detailed comparison of the values for each criterion across the different learning methods, the calculated learning efficacy values considering the specified weights, and the Efficacy-Scale Score (ESS) for each method.

| Type of learning | Scalability | Information fidelity | Cost effectiveness | Feedback quality | Uniformity | Learning efficacy | # of learners | Efficacy-Scale Score |
|---|---|---|---|---|---|---|---|---|
| Peer learning | 4.00 | 2.50 | 4.00 | 2.50 | 1.00 | 3.20 | 1,000 | 3,200 |
| Cascade training | 2.00 | 1.00 | 2.00 | 0.50 | 0.50 | 1.40 | 500 | 700 |
| Expert coaching | 0.50 | 4.00 | 1.00 | 4.00 | 3.00 | 2.20 | 60 | 132 |

Of course, many nuances of individual programmes could affect real-world effectiveness. The model is deliberately simple: grounded in empirical data, it uses statistical weighting to highlight the core determinants of learning efficacy, and its abstraction means it cannot capture every nuance of individual learning scenarios.
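
The calculations in the table can be reproduced with a short sketch. The ratings, weights, and learner counts are those given above; the code itself is an illustration, not part of the published model.

```python
# Weights and ratings on a 0-4 scale, as in the tables above.
WEIGHTS = {"S": 4.0, "I": 3.0, "C": 4.0, "F": 3.0, "U": 1.0}

METHODS = {
    "Peer learning":    ({"S": 4.0, "I": 2.5, "C": 4.0, "F": 2.5, "U": 1.0}, 1000),
    "Cascade training": ({"S": 2.0, "I": 1.0, "C": 2.0, "F": 0.5, "U": 0.5}, 500),
    "Expert coaching":  ({"S": 0.5, "I": 4.0, "C": 1.0, "F": 4.0, "U": 3.0}, 60),
}

def efficacy(ratings, weights):
    """Weighted average of criterion ratings: the efficacy formula above."""
    return sum(ratings[c] * weights[c] for c in weights) / sum(weights.values())

for name, (ratings, n_learners) in METHODS.items():
    e = efficacy(ratings, WEIGHTS)
    print(f"{name}: E = {e:.2f}, ESS = {e * n_learners:.0f}")

# Expected output matches the table:
#   Peer learning: E = 3.20, ESS = 3200
#   Cascade training: E = 1.40, ESS = 700
#   Expert coaching: E = 2.20, ESS = 132
```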

    Peer learning

The calculated learning efficacy for peer learning, $E_{\text{peer}}$, is 3.20. This value reflects the weighted assessment of peer learning’s strengths and characteristics according to the provided criteria and their importance.

By virtue of its scalability, the ESS for peer learning is 24 times higher than that of expert coaching.

    Cascade training

For cascade training, the calculated learning efficacy, $E_{\text{cascade}}$, is 1.40. This reflects the weighted assessment based on the provided criteria and their importance, indicating lower efficacy compared to peer learning.

    Cascade training has a higher ESS than expert coaching, due to its ability to achieve scale.

    Learn more: Why does cascade training fail?

    Expert coaching

For expert coaching, the calculated learning efficacy, $E_{\text{expert}}$, is 2.20. This value indicates higher efficacy than cascade training but lower than peer learning.

    However, the ESS is the lowest of the three methods, primarily due to its inability to scale. Read this article for a scalability comparison between expert coaching and peer learning.

    Image: The Geneva Learning Foundation Collection © 2024