Tag: peer learning

  • What is a “rubric” and why use rubrics in global health education?

    Rubrics are well-established, evidence-based tools in education, but largely unknown in global health.

    At the Geneva Learning Foundation (TGLF), the rubric is a key tool that we use – as part of a comprehensive package of interventions – to transform high-cost, low-volume training dependent on the limited availability of global experts into scalable peer learning that improves access, quality, and outcomes.

    The more prosaic definition of the rubric – stripped of any pedagogical framing – is “a type of scoring guide that assesses and articulates specific components and expectations for an assignment” (Source).

    The rubric is a practical solution to a number of complex issues that prevent effective teaching and learning in global health.

    Developing a rubric provides a practical method for turning complex content and expertise into a learning process in which learners will learn primarily from each other.

    Hence, making sense of a rubric requires recognizing and appreciating the value of peer learning.

    This may be difficult to understand for those working in global health, due to a legacy of scientifically and morally wrong norms for learning and teaching primarily through face-to-face training.

    The first norm is that global experts teach staff in countries who are presumed to not know.

    The second is that the expert who knows (their subject) also necessarily knows how to teach, discounting or dismissing the science of pedagogy.

    Experts consistently believe that they can just “wing it” because they have the requisite technical knowledge.

    This ingrained belief also rests on the third mistaken assumption: that teaching is the job of transmitting information to those who lack it.

    (Paradoxically, the proliferation of online information modules and webinars has strengthened this norm, rather than weakened it).

    Indeed, although almost everyone agrees in principle that peer learning is “great”, there remains deep skepticism about its value.

    Unfortunately, learner preferences do not correlate with outcomes.

    Given the choice, learners prefer sitting passively to listen to a great lecture from a globally-renowned figure, rather than the drudgery of working in a group of peers whose level of expertise is unknown and who may or may not be engaged in the activities.

    (Yet, when assessed formally, the group that works together will out-perform the group that was lectured.)

    For subject matter experts, there can even be an existential question: if peers can learn without me, the expert, am I still needed? What is my value to learners? What is my role?

    Developing a rubric provides a way to resolve such tensions and augment rather than diminish the significance of expertise.

    This requires, for the subject matter expert, a willingness to rethink and reframe their role from “sage on the stage” to “guide on the side”.

    Rubric development requires:

    1. expert input and review to think through what set of instructions and considerations will guide learners in developing useful knowledge they can apply; and
    2. expertise to select the specific resources (such as guidance documents, case studies, etc.) that will help the learner as they develop this new knowledge.

    In this approach, an information module, a webinar, a guidance document, or any other piece of knowledge becomes a potential resource for learning that can be referenced in a rubric, with specific indications of when and how it may be used to support learning.

    In a peer learning context, a rubric is also a tool for reflection, stirring metacognition (thinking about thinking) that helps build critical thinking “muscles”.

    Our rubrics combine didactic instructions (“do this, do that”), reflective and exploratory questions, and as many considerations as necessary to guide the development of high-quality knowledge.

    These instructions are organized into versatile, specific criteria that can be as simple as “Calculate sample size” (where there is only one correct answer), focus on practicalities (“Formulate your three top recommendations to your national manager”), or allow for exploration (“Reflect on the strategic value of your vaccination coverage survey for your country’s national immunization programme”).

    Yes, we use a scoring guide on a 0-4 scale, where a 4 out of 4 on each criterion summarizes what excellent work looks like.
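    The structure described above – named criteria, each paired with descriptors on a 0-4 scale – can be sketched as a simple data structure. This is purely an illustration of the concept; the class and field names are invented for this sketch and are not part of any TGLF tool.

```python
# Illustrative sketch only: names and fields are hypothetical,
# not TGLF's actual rubric tooling.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    # The instruction or question put to the learner,
    # e.g. "Calculate sample size".
    prompt: str
    # Descriptors keyed by rating on the 0-4 scale; level 4
    # summarizes what excellent work looks like.
    levels: dict = field(default_factory=dict)

    def describe(self, rating: int) -> str:
        """Return the descriptor for a 0-4 rating."""
        if rating not in range(5):
            raise ValueError("Ratings use a 0-4 scale")
        return self.levels.get(rating, "No descriptor for this level")


rubric = [
    Criterion(
        prompt="Formulate your three top recommendations to your national manager",
        levels={
            4: "Three specific, feasible recommendations grounded in the data",
            0: "No recommendations provided",
        },
    ),
]
```

    A reviewer applying such a rubric would rate each criterion against the shared descriptors and, more importantly, explain the rating with constructive suggestions.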

    This often initially confuses both learners and subject matter experts, who assume that peers (whose prior expertise has not been evaluated) are being asked to grade each other.

    It turns out that, with a well-designed rubric, a neophyte can provide useful, constructive feedback to a seasoned expert – and vice versa.

    Both are using the same quality standard, so they are not sharing their personal opinion but applying that standard by using their critical thinking capabilities to do so.

    Before using the rubric to review the work of peers, each learner has had to use it to develop their own work.

    This ensures a kind of parity between peers: whatever the differences in experience and expertise, countries, or specializations, everyone has first practiced using the rubric for their own needs.

    In such a context, the key is not the rating but the explanation that the peer reviewer provides for it, with the requirement that they offer constructive, practical suggestions for how the author can improve their work.

    In some cases, learners are surprised to receive contradictory feedback: two reviewers give opposite ratings – one very high, and the other very low – together with conflicting explanations for these ratings.

    Such cases are an opportunity for learners to review the rubric again while critically examining the feedback received, in order to adjudicate between the conflicting reviews.
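    One way to picture this situation is a small helper that flags criteria where peer ratings diverge widely, signalling that the author should revisit the rubric and adjudicate. This is a hypothetical sketch – the function name and threshold are invented for illustration, not drawn from TGLF’s platform.

```python
# Hypothetical helper, invented for illustration: flag criteria whose
# 0-4 peer ratings diverge by more than a chosen threshold.
def flag_divergent(ratings_by_criterion: dict, threshold: int = 2) -> list:
    """Return the criteria whose peer ratings differ by more than `threshold`.

    `ratings_by_criterion` maps a criterion name to the list of 0-4
    ratings it received from peer reviewers.
    """
    return [
        criterion
        for criterion, ratings in ratings_by_criterion.items()
        if len(ratings) >= 2 and max(ratings) - min(ratings) > threshold
    ]
```

    A criterion rated 4 by one reviewer and 1 by another would be flagged, prompting the author to re-read both explanations against the rubric rather than simply average them.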

    Ultimately, rubric-based feedback allows for significantly more learner agency in determining what to do with the feedback received, as the next task is to translate it into practical revisions that improve the work.

    This is, in and of itself, conducive to significant learning.

    Learn more about rubrics as part of effective teaching and learning from Bill Cope and Mary Kalantzis, two education pioneers who taught me to use them.

    Image: Mondrian’s classroom. The Geneva Learning Foundation Collection © 2024

  • What is the Movement for Immunization Agenda 2030 (IA2030)?

    The Immunization Agenda 2030 (IA2030) and the Movement for Immunization Agenda 2030 represent two interconnected but distinct aspects of a global effort to enhance immunization coverage and impact.

    What is Immunization Agenda 2030?

    Immunization Agenda 2030 or “IA2030” is a global strategy endorsed by the World Health Assembly, aiming to maximize the lifesaving impact of vaccines over the decade from 2021 to 2030.

    • It sets an ambitious vision for a world where everyone, everywhere, at every age, fully benefits from vaccines for good health and well-being.
    • The strategy was designed before the COVID-19 pandemic, with the goal of saving 50 million lives through increased vaccine coverage. It addresses several strategic priorities, including making immunization services accessible as part of primary care, ensuring everyone is protected by immunization regardless of location or socioeconomic status, and preparing for disease outbreaks.
    • IA2030 emphasizes country ownership, broad partnerships, and data-driven approaches. It seeks to integrate immunization with other essential health services, ensuring a reliable supply of vaccines and promoting innovation in immunization programs.

    Watch the Immunization Agenda 2030 (IA2030) inaugural lecture by Anne Lindstrand (WHO) and Robin Nandy (UNICEF)

    What is the Movement for Immunization Agenda 2030?

    The Movement for Immunization Agenda 2030, on the other hand, is a collaborative, community-driven effort to operationalize the goals of IA2030 at the local and national levels – and to foster double-loop learning for international partners.

    It emerged in response to the Director-General’s call for a “groundswell of support” for immunization and combines a network, platform, and community of action.

    The Movement focuses on turning the commitment to IA2030 into locally-led, context-specific actions, encouraging peer exchange, and sharing progress and results to foster a sense of ownership among immunization practitioners and the communities they serve. The Movement:

    • has demonstrated a scalable model for facilitating peer exchange among thousands of motivated immunization practitioners.
    • emphasizes locally-developed solutions, connecting local innovation to global knowledge, and is instrumental in resuscitating progress towards more equitable immunization coverage.
    • operates as a platform for learning, sharing, and collaboration, aiming to ground action in local realities to reach the unreached and accelerate progress towards the IA2030 goals.

    In April 2021, over 5,000 immunization professionals came together during World Immunization Week to listen to and learn from the challenges faced by immunization colleagues from all over the world. Watch the Special Event to hear practitioners share these challenges. Learn more

    What is the difference between the Agenda for IA2030 and the Movement for IA2030?

    • Scope and Nature: IA2030 is a strategic framework with a global vision for immunization over the decade, while the Movement for IA2030 is a dynamic, community-driven effort to implement that vision through local action and global collaboration.
    • Operational Focus: IA2030 outlines the strategic priorities and goals for immunization efforts by global funders and agencies, whereas the Movement focuses on mobilizing support, facilitating peer learning, and sharing innovative practices to achieve those goals.
    • Engagement and Collaboration: While IA2030 is a product of global consensus and sets the agenda for immunization, the Movement actively engages immunization professionals, stakeholders, and communities in a bottom-up approach to foster ownership and tailor strategies to local contexts.

    What is the role of The Geneva Learning Foundation (TGLF)?

    The Geneva Learning Foundation (TGLF) plays a pivotal role in facilitating the Movement for Immunization Agenda 2030 (IA2030). A Swiss non-profit organization with the mission to research and develop new ways to learn and lead, TGLF is instrumental in implementing large-scale, collaborative efforts to support the goals of IA2030. Here are the key roles TGLF fulfills within the Movement:

    1. Facilitation and leadership: TGLF leads the facilitation of the Movement for IA2030, providing a platform for immunization professionals to collaborate, share knowledge, and drive action towards the IA2030 goals.
    2. Learning-to-action approach: TGLF contributes to transforming technical assistance (TA) to strengthen immunization programs. This involves challenging traditional power dynamics and empowering immunization professionals to apply local knowledge to solve problems, support peers in doing the same, and contribute to global knowledge.
    3. Peer learning scaffolding and facilitation: TGLF has demonstrated the feasibility of establishing a global peer learning platform for immunization practitioners. This platform enables health professionals to contribute knowledge, share experiences, and learn from each other, thereby fostering a community of practice that spans across borders.
    4. Advocacy and mobilization: TGLF calls on immunization professionals to join the Movement for IA2030, aiming to mobilize a global community to share experiences and work collaboratively towards the IA2030 objectives. This includes engaging over 60,000 immunization professionals from 99 countries.
    5. Governance, code of conduct, and ethical standards: Participants in TGLF’s programs are required to adhere to a strict Code of Conduct that emphasizes integrity, honesty, and the highest ethical, scientific, and intellectual standards. This includes accurate attribution of sources and appropriate collection and use of data. Movement Members are also expected to respect and abide by any restrictions, requirements, and regulations of their employer and government.
    6. Research and evaluation: TGLF may facilitate connections between peers, for example to help them give and receive feedback on their local projects and other knowledge produced by learners. Insights and evidence from local action may also contribute to communication, advocacy, and training efforts. TGLF also invites learners to participate in research and evaluation to further the understanding of effective learning and performance management approaches for frontline health workers.
  • General Assembly of the Movement for Immunization Agenda 2030 on 14 March 2022

    Summary of highlights from the Full Learning Cycle, Monday 14 March

    1. # of participants: By Monday, 6,319 immunization professionals had accepted to join the Full Learning Cycle, including 3,592 Anglophones and 2,727 Francophones.
    2. Participation: Scholars are participating with high motivation and bringing an incredible energy to build the IA2030 Movement. You can read their first-person perspectives on why they are participating in the Full Learning Cycle on slides 81-99. These slides show only a selected few quotes from more than 2,000 Scholars’ feedback to our “barometer”, a tool for them to share how they are doing in the Full Learning Cycle, which helps us to get the “pulse” of the whole group and adapt support.
    3. By Monday, 313 ideas and practices had been submitted over the course of one week in the Ideas Engine. This number has now gone up to 559. You can see a breakdown of these ideas by country and by SP on slides 32-80.
    4. Scholars are sharing with peers their immunization experiences in short 30-minute sessions with François Gasse and Charlotte. You can see slides 102-105 for a summary of experiences shared last week.

    Resources

    Anglophones: link to slidedeck | link to recording

    Francophones: link to slidedeck | link to recording

  • Missed opportunities (2): How one selfish learner can undermine peer learning

    The idea that adult learners have much to learn from each other is fairly consensual. The practice of peer learning, however, requires un-learning much of what has been ingrained over years of schooling. We have internalized the conviction that significant learning requires expert feedback.

    In a recent course organized by the Geneva Learning Foundation in partnership with an international NGO, members of the group initially showed little or no interest in learning from each other. Even the remote coffee, an activity in which we randomly twin participants who then connect informally, generated only moderate enthusiasm… where in other courses, we have to remind folks to stop socializing and focus on the course work. One participant told us that “peer support was quite unexpected”, adding that “it is the first time I see it in a course.” When we reached out to participants to help those among them who had not completed the first week’s community assignment, another wrote in to explain she was “really uncomfortable with this request”…

    That participant turned out to be the same one demanding validation from an expert, speaking not just for herself but in the name of the group to declare: “We do not feel we are really learning, because we do not know if what we are producing is of any quality”.

    Yet, by the third week, other participants had begun to recognize the value of peer feedback as they experienced it. One explained: “I found reviewing other people’s work was particularly interesting this week because we all took the same data and presented it in so many different ways – in terms of what we emphasised, what we left out and the assertions we made.” Another reported: “I am still learning a lot from doing the assignments and reading what others have done [emphasis mine].”

    Here is how one learner summed up her experience: “Fast and elaborative response to the queries. […] The peer system is really great arrangement [emphasis mine]. The course is live where you can also learn from the comments and inputs from course participants. I feel like I am taking this course in a class room with actual physical presence with the rest.” (She also acknowledged the “follow-up from the organizers and course leaders in case of any lag”.)

    This is about more than Daphne Koller’s 2012 TED Talk assertion (quoted in Glance et al.’s 2013 article on the pedagogical foundations of MOOCs) that “more often than not students were responding to each other’s posts before a moderator was able to”, which addresses the concern that peers may not be able to find the one correct answer (when there is one). It is not only about peers learning from each other, but also about the relevance of artefact creation for learning.

    Week after week, I observed participation grow. Discussion threads grew organically from this shared solidarity in learning, leading to self-directed exploration and, in a few instances, serendipitous discovery. This helped above and beyond my own expectations: “The more we work with peers and get validation, [the more] confidence grows.” After having peer reviewed three projects, one participant wrote: “This is a great experience. Every time I comment to a peer, I actually feel that I am telling the same thing to myself.”

    And, yet, that one lone wolf who displayed negative attitudes stuck to her guns, reiterating her demands: “I would really like to get more feedback on the assignments. I know individual feedback might not be feasible but it would be great to see a good example to see what we could have done better. I would like to learn how I could improve.” Furthermore, she then ascribed her negative attitudes to the entire group… while completely ignoring, denying, or dismissing the group’s experience. (A request for expert feedback is entirely legitimate, but this does not require disparaging the value of peer feedback.)

    Admittedly, for various logistical reasons, the course’s subject matter experts were not as present as we had intended in the first three weeks of the course. This, combined with aggressive, negative clamoring for expert feedback, put the course team on the defensive.

    That led to a week in which subject matter experts impressively scrambled to prepare, compile, and share a ton of expert feedback. That they were able to do so, above and beyond expectations, is to their credit. As for me, it was startling to realize that I felt too insecure about peer learning to respond effectively. There are substantive questions about the limitations of peer learning, especially when there is only one right answer. “Peer learning” sounds nice but also vague. Can it be trusted? How do you know that everyone else is not also making the same mistake? Who would rather learn from peers with uncertain and disparate expertise rather than from an established expert? Doubts lingered despite my own experience in recent courses, where I observed peers teaching each other how to improve action planning for routine immunization, analyze safer access for humanitarians, improve remote partnering, or develop sampling procedures for vaccination coverage surveys.

    Learning technologists’ interest in peer review is premised on the need for a scalable solution for grading. They have mostly failed to acknowledge, much less leverage, its pedagogical significance. Reviewing the education research literature, I find mostly anecdotal studies on K-12 schooling, interesting but unproven theories, and very little evidence that I can use. This is strange, given that peer education is nothing new.

    This reinforces my conviction that we are breaking new ground with #DigitalScholar. Building on Scholar’s ground-breaking system for structured, rubric-based peer review and feedback, we are adding new layers of activity and scaffolding that can more fully realize the potential of peers as learners and teachers. I do not know where this exploration will take us. It feels like uncharted territory. That is precisely what makes it interesting and exciting. And, following this most recent course, my own confidence has grown, thanks to the audacity and invention of those learners who learned to trust and support each other.

    Image: Two trees in Manigot. Personal collection.