Tag: expertise

  • Why guidelines fail: on consequences of the false dichotomy between global and local knowledge in health systems

    Global health continues to grapple with a persistent tension between standardized, evidence-based interventions developed by international experts and the contextual, experiential local knowledge held by local health workers. This dichotomy – between global expertise and local knowledge – has become increasingly problematic as health systems face unprecedented complexity in addressing challenges from climate change to emerging diseases.

    The limitations of current approaches

    The dominant approach privileges global technical expertise, viewing local knowledge primarily through the lens of “implementation barriers” to be overcome. This framework assumes that if only local practitioners would correctly apply global guidance, health outcomes would improve.

    This assumption falls short in several critical ways:

    1. It fails to recognize that local health workers often possess sophisticated understanding of how interventions need to be adapted to work in their contexts.
    2. It overlooks the way that local knowledge, built through direct experience with communities, often anticipates problems that global guidance has yet to address.
    3. It perpetuates power dynamics that systematically devalue knowledge generated outside academic and global health institutions.

    The hidden costs of privileging global expertise

    When we examine actual practice, we find that privileging global over local knowledge can actively harm health system performance:

    • It creates a “capability trap” where local health workers become dependent on external expertise rather than developing their own problem-solving capabilities.
    • It leads to the implementation of standardized solutions that may not address the real needs of communities.
    • It demoralizes community-based staff who see their expertise and experience consistently undervalued.
    • It slows the spread of innovative local solutions that could benefit other contexts.

    Evidence from practice

    Recent experiences from the COVID-19 pandemic provide compelling evidence for the importance of local knowledge. While global guidance struggled to keep pace with evolving challenges, local health workers had to figure out how to keep health services going:

    • Community health workers in rural areas adapted their outreach and service delivery strategies.
    • District health teams created new approaches to maintain essential services during lockdowns.
    • Facility staff developed creative solutions to manage PPE shortages.

    These innovations emerged not from global technical assistance, but from local practitioners applying their deep understanding of community needs and system constraints – and from their exploring new ways to connect with each other and contribute to global knowledge.

    Towards a new synthesis

    Rather than choosing between global and local knowledge, we need a new synthesis that recognizes their complementary strengths. This requires three fundamental shifts:

    1. Reframing local knowledge

    • Moving from viewing local knowledge as merely contextual to seeing it as a source of innovation.
    • Recognizing frontline health workers as knowledge creators, not just knowledge recipients.
    • Valuing experiential learning alongside formal evidence.

    2. Rethinking technical assistance

    • Shifting from knowledge transfer to knowledge co-creation.
    • Building platforms for peer learning and exchange.
    • Supporting local problem-solving capabilities.

    3. Restructuring power relations

    • Creating mechanisms for local knowledge to inform global guidance.
    • Developing new metrics that value local innovation.
    • Investing in local knowledge documentation and sharing.

    Practical implications

    This new synthesis has important practical implications for how we approach health system strengthening:

    Investment priorities

    • Funding mechanisms need to support local knowledge creation and sharing
    • Technical assistance should focus on building local problem-solving capabilities
    • Technology investments should enable peer learning and knowledge exchange

    Capacity building

    Knowledge management (KM)

    New paths forward

    Moving beyond the false dichotomy between global and local knowledge opens new possibilities for strengthening health systems. By recognizing and valuing both forms of knowledge, we can create more effective, resilient, and equitable health systems.

    The challenges facing health systems are too complex for any single source of knowledge to address alone. Only by bringing together global expertise and local knowledge can we develop the solutions needed to improve health outcomes for all.

    References

    Braithwaite, J., Churruca, K., Long, J.C., Ellis, L.A., Herkes, J., 2018. When complexity science meets implementation science: a theoretical and empirical analysis of systems change. BMC Med 16, 63. https://doi.org/10.1186/s12916-018-1057-z

    Farsalinos, K., Poulas, K., Kouretas, D., Vantarakis, A., Leotsinidis, M., Kouvelas, D., Docea, A.O., Kostoff, R., Gerotziafas, G.T., Antoniou, M.N., Polosa, R., Barbouni, A., Yiakoumaki, V., Giannouchos, T.V., Bagos, P.G., Lazopoulos, G., Izotov, B.N., Tutelyan, V.A., Aschner, M., Hartung, T., Wallace, H.M., Carvalho, F., Domingo, J.L., Tsatsakis, A., 2021. Improved strategies to counter the COVID-19 pandemic: Lockdowns vs. primary and community healthcare. Toxicology Reports 8, 1–9. https://doi.org/10.1016/j.toxrep.2020.12.001

    Jerneck, A., Olsson, L., 2011. Breaking out of sustainability impasses: How to apply frame analysis, reframing and transition theory to global health challenges. Environmental Innovation and Societal Transitions 1, 255–271. https://doi.org/10.1016/j.eist.2011.10.005

    Salve, S., Raven, J., Das, P., Srinivasan, S., Khaled, A., Hayee, M., Olisenekwu, G., Gooding, K., 2023. Community health workers and Covid-19: Cross-country evidence on their roles, experiences, challenges and adaptive strategies. PLOS Glob Public Health 3, e0001447. https://doi.org/10.1371/journal.pgph.0001447

    Yamey, G., 2012. What are the barriers to scaling up health interventions in low and middle income countries? A qualitative study of academic leaders in implementation science. Global Health 8, 11. https://doi.org/10.1186/1744-8603-8-11

  • What is a “rubric” and why use rubrics in global health education?

    Rubrics are well-established, evidence-based tools in education, but largely unknown in global health.

    At the Geneva Learning Foundation (TGLF), the rubric is a key tool that we use – as part of a comprehensive package of interventions – to transform high-cost, low-volume training dependent on the limited availability of global experts into scalable peer learning that improves access, quality, and outcomes.

    The more prosaic definition of the rubric – stripped of any pedagogical questioning – is “a type of scoring guide that assesses and articulates specific components and expectations for an assignment” (Source).

    The rubric is a practical solution to a number of complex issues that prevent effective teaching and learning in global health.

    Developing a rubric provides a practical method for turning complex content and expertise into a learning process in which learners will learn primarily from each other.

    Hence, making sense of a rubric requires recognizing and appreciating the value of peer learning.

    This may be difficult to understand for those working in global health, due to a legacy of scientifically and morally wrong norms for learning and teaching primarily through face-to-face training.

    The first norm is that global experts teach staff in countries who are presumed not to know.

    The second is that the expert who knows (their subject) also necessarily knows how to teach, discounting or dismissing the science of pedagogy.

    Experts consistently believe that they can just “wing it” because they have the requisite technical knowledge.

    This ingrained belief also rests on the third mistaken assumption: that teaching is the job of transmitting information to those who lack it.

    (Paradoxically, the proliferation of online information modules and webinars has strengthened this norm rather than weakened it.)

    Indeed, although almost everyone agrees in principle that peer learning is “great”, there remains deep skepticism about its value.

    Unfortunately, learner preferences do not correlate with outcomes.

    Given the choice, learners prefer to sit passively and listen to a great lecture from a globally renowned figure, rather than endure the drudgery of working in a group of peers whose level of expertise is unknown and who may or may not be engaged in the activities.

    (Yet, when assessed formally, the group that works together will outperform the group that was lectured.)

    For subject matter experts, there can even be an existential question: if peers can learn without me, the expert, then am I still needed? What is my value to learners? What is my role?

    Developing a rubric provides a way to resolve such tensions and augment rather than diminish the significance of expertise.

    This requires, for the subject matter expert, a willingness to rethink and reframe their role from sage on the stage to guide on the side.

    Rubric development requires:

    1. expert input and review to think through what set of instructions and considerations will guide learners in developing useful knowledge they can use; and
    2. expertise to select the specific resources (such as guidance documents, case studies, etc.) that will help the learner as they develop this new knowledge.

    In this approach, an information module, a webinar, a guidance document, or any other piece of knowledge becomes a potential resource for learning that can be referenced in a rubric, with specific indications of when and how it may be used to support learning.

    In a peer learning context, a rubric is also a tool for reflection, stirring metacognition (thinking about thinking) that helps build critical thinking “muscles”.

    Our rubrics combine didactic instructions (“do this, do that”), reflective and exploratory questions, and as many considerations as necessary to guide the development of high-quality knowledge.

    These instructions are organized into versatile, specific criteria that can be as simple as “Calculate sample size” (where there is only one correct answer), focus on practicalities (“Formulate your three top recommendations to your national manager”), or allow for exploration (“Reflect on the strategic value of your vaccination coverage survey for your country’s national immunization programme”).

    Yes, we use a scoring guide on a 0–4 scale, where a 4 out of 4 for each criterion summarizes what excellent work looks like.

    This often initially confuses both learners and subject matter experts, who assume that peers (whose prior expertise has not been evaluated) are being asked to grade each other.

    It turns out that, with a well-designed rubric, a neophyte can provide useful, constructive feedback to a seasoned expert – and vice versa.

    Both are using the same quality standard, so they are not sharing personal opinions but applying that standard, exercising their critical thinking capabilities to do so.

    Before using the rubric to review the work of peers, each learner has had to use it to develop their own work.

    This ensures a kind of parity between peers: whatever the differences in experience and expertise, countries, or specializations, everyone has first practiced using the rubric for their own needs.

    In such a context, the key is not the rating but the explanation that the peer reviewer provides for it, with the requirement that she offer constructive, practical suggestions for how the author can improve their work.

    In some cases, learners are surprised to receive contradictory feedback: two reviewers give opposite ratings – one very high, and the other very low – together with conflicting explanations for these ratings.

    In such cases, learners have an opportunity to review the rubric again while critically examining the feedback received, in order to adjudicate between the two.

    Ultimately, rubric-based feedback allows for significantly more learner agency in deciding what to do with the feedback received – as the next task is to translate that feedback into practical revisions that improve the work.

    This is, in and of itself, conducive to significant learning.
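    For readers who think in schemas, the elements described above – criteria with level descriptors from 0 to 4, and reviews that pair a rating with an explanation and practical suggestions – can be sketched as simple data structures. This is an illustrative sketch only: the criterion, descriptors, and field names are invented for this example and are not TGLF’s actual rubric or system.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Criterion:
        # One rubric criterion, e.g. a practical instruction to the learner
        prompt: str
        # Descriptors for score levels 0-4; index 4 describes excellent work
        levels: list[str]

    @dataclass
    class PeerReview:
        # The rating matters less than the explanation and the suggestion
        criterion: Criterion
        rating: int        # 0-4, against the shared standard, not personal opinion
        explanation: str   # why this rating, with reference to the level descriptors
        suggestion: str    # constructive, practical way for the author to improve

    # Hypothetical criterion, loosely modeled on the examples in the text
    recommendations = Criterion(
        prompt="Formulate your three top recommendations to your national manager",
        levels=[
            "No recommendations given",
            "Recommendations unrelated to the data presented",
            "Recommendations stated but not actionable",
            "Actionable recommendations, but not clearly prioritized",
            "Three prioritized, actionable recommendations grounded in the data",
        ],
    )

    review = PeerReview(
        criterion=recommendations,
        rating=3,
        explanation="The recommendations are actionable but not ranked.",
        suggestion="Rank them by expected impact so your manager knows where to start.",
    )

    # A rating is only valid within the rubric's scale
    assert 0 <= review.rating < len(recommendations.levels)
    ```

    Note that the structure makes the pedagogy visible: the `explanation` and `suggestion` fields carry the learning value, while the numeric rating merely anchors them to the shared standard.
    
    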

    Learn more about rubrics as part of effective teaching and learning from Bill Cope and Mary Kalantzis, two education pioneers who taught me to use them.

    Image: Mondrian’s classroom. The Geneva Learning Foundation Collection © 2024

  • Missed opportunities (2): How one selfish learner can undermine peer learning

    The idea that adult learners have much to learn from each other is widely accepted. The practice of peer learning, however, requires un-learning much of what has been ingrained over years of schooling. We have internalized the conviction that significant learning requires expert feedback.

    In a recent course organized by the Geneva Learning Foundation in partnership with an international NGO, members of the group initially showed little or no interest in learning from each other. Even the remote coffee, an activity in which we randomly twin participants who then connect informally, generated only moderate enthusiasm… whereas in other courses, we have to remind folks to stop socializing and focus on the course work. One participant told us that “peer support was quite unexpected”, adding that “it is the first time I see it in a course.” When we reached out to participants to help those among them who had not completed the first week’s community assignment, another wrote in to explain she was “really uncomfortable with this request”…

    That participant turned out to be the same one demanding validation from an expert, speaking not just for herself but in the name of the group to declare: “We do not feel we are really learning, because we do not know if what we are producing is of any quality”.

    Yet, by the third week, other participants had begun to recognize the value of peer feedback as they experienced it. One explained: “I found reviewing other people’s work was particularly interesting this week because we all took the same data and presented it in so many different ways – in terms of what we emphasised, what we left out and the assertions we made.” Another reported: “I am still learning a lot from doing the assignments and reading what others have done [emphasis mine].”

    Here is how one learner summed up her experience: “Fast and elaborative response to the queries. […] The peer system is really great arrangement [emphasis mine]. The course is live where you can also learn from the comments and inputs from course participants. I feel like I am taking this course in a class room with actual physical presence with the rest.” (She also acknowledged the “follow-up from the organizers and course leaders in case of any lag”.)

    This is about more than Daphne Koller’s 2012 TED Talk assertion (quoted in Glance et al.’s 2013 article on the pedagogical foundations of MOOCs) that “more often than not students were responding to each other’s posts before a moderator was able to”, which addresses the concern that peers may not be able to find the one correct answer (when there is one). It is not only about peers learning from each other, but also about the relevance of artefact creation for learning.

    Week after week, I observed participation grow. Discussion threads grew organically from this shared solidarity in learning, leading to self-directed exploration and, in a few instances, serendipitous discovery. This went above and beyond my own expectations: “The more we work with peers and get validation, [the more] confidence grows.” After having peer reviewed three projects, one participant wrote: “This is a great experience. Every time I comment to a peer, I actually feel that I am telling the same thing to myself.”

    And yet, that one lone wolf who displayed negative attitudes stuck to her guns, reiterating her demands: “I would really like to get more feedback on the assignments. I know individual feedback might not be feasible but it would be great to see a good example to see what we could have done better. I would like to learn how I could improve.” Furthermore, she then ascribed her negative attitudes to the entire group… while completely ignoring, denying, or dismissing the group’s experience. (A request for expert feedback is entirely legitimate, but it does not require disparaging the value of peer feedback.)

    Admittedly, for various logistical reasons, the course’s subject matter experts were not as present as we had intended in the first three weeks of the course. This, combined with aggressive, negative clamoring for expert feedback, put the course team on the defensive.

    That led to a week in which subject matter experts impressively scrambled to prepare, compile, and share a ton of expert feedback. That they were able to do so, above and beyond expectations, is to their credit. As for me, it was startling to realize that I felt too insecure about peer learning to respond effectively. There are substantive questions about the limitations of peer learning, especially when there is only one right answer. “Peer learning” sounds nice but also vague. Can it be trusted? How do you know that everyone else is not also making the same mistake? Who would rather learn from peers with uncertain and disparate expertise than from an established expert? Doubts lingered despite my own experience in recent courses, where I observed peers teaching each other how to improve action planning for routine immunization, analyze safer access for humanitarians, improve remote partnering, or develop sampling procedures for vaccination coverage surveys.

    Learning technologists’ interest in peer review is premised on the need for a scalable solution for grading. They have mostly failed to acknowledge, much less leverage, its pedagogical significance. Reviewing the education research literature, I find mostly anecdotal studies on K-12 schooling, interesting but unproven theories, and very little evidence that I can use. This is strange, given that peer education is nothing new.

    This reinforces my conviction that we are breaking new ground with #DigitalScholar. Building on Scholar’s ground-breaking system for structured, rubric-based peer review and feedback, we are adding new layers of activity and scaffolding that can more fully realize the potential of peers as learners and teachers. I do not know where this exploration will take us. It feels like uncharted territory. That is precisely what makes it interesting and exciting. And, following this most recent course, my own confidence has grown, thanks to the audacity and invention of those learners who learned to trust and support each other.

    Image: Two trees in Manigot. Personal collection.