Category: Learning

  • How to overcome limitations of expert-led fellowships for global health

    How to overcome limitations of expert-led fellowships for global health

Coaching and mentoring programs, sometimes called “fellowships”, have been upheld as the gold standard for developing leaders in global health.

    For example, a fellowship in the field of immunization was recently advertised in the following manner.

    • Develop your skills and become an advocate and leader: The fellowship will begin with two months of weekly mandatory live engagements led by [global] staff and immunization experts around topics relating to rebuilding routine immunization, including catch-up vaccination, integration and life course immunization. […]
    • Craft an implementation plan: Throughout the live engagement series, fellows will develop, revise and submit a COVID-19 recovery strategic plan.
    • Receive individualized mentoring: Participants with strong plans will be considered for a mentorship program to work 1:1 with experts in the field to further develop and implement their strategies and potentially publish their case studies.

We will not dwell here on the ‘live engagements’, which are expert-led presentations of technical knowledge. We already know that such ‘webinars’ have very limited learning efficacy and are unlikely to affect outcomes. (This may seem like a harsh statement to global health practitioners who have grown comfortable with webinars, but it is substantiated by decades of evidence from learning science research.)

    On the surface, the rest of the model sounds highly effective, promising personalized attention and expert guidance.

    The use of a project-based learning approach is promising, but it is unclear what support is provided once the implementation plan has been crafted.

    It is when you consider the logistical aspects that the cracks begin to show.

    The essence of traditional coaching lies in the quality of the one-to-one interaction, making it an inherently limited resource.

    Take, for example, a fellowship programme where interest outstrips availability—say, 1,600 aspiring global health leaders are interested, but only 30 will be selected for one-on-one mentoring.

    Tailored, one-on-one coaching can be incredibly effective in small, controlled environments.

    While these 30 may receive an invaluable experience, what happens to those left behind?

    There is an ‘elitist spiral’.

    Coaching and mentoring, while intensive, remain exclusive by design, limited to the select few.

    This not only restricts scale but also concentrates knowledge among the selected group, perpetuating hierarchies.

    Those chosen gain invaluable support.

    The majority left out are denied access and implicitly viewed as passive recipients rather than partners in a collective solution.

    Doubling the number of ‘fellows’ only marginally improves this situation.

Even if the mentor pool were to grow exponentially, the personalized nature of the engagement would still limit the rate of diffusion.

    When we step back and look at the big picture, we realize there is a problem: these programs are expensive and difficult to scale.

    And, in global health, if it does not scale, it is not solving the problem.

Learn more: How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

Learn more: Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

    So while these programs can make a real difference for a small group of people, they are unlikely to move the needle on a global scale.

    That is like trying to fill a swimming pool with a teaspoon—you might make some progress, but you will never get the job done.

    The model creates a paradox: the attributes making it effective for individuals intrinsically limit systemic impact.

    There is another paradox related to complexity.

    Global health issues are inextricably tied to cultural, political and economic factors unique to each country and community.

    Complex problems require nuanced solutions.

    Yet coaching promotes generalized expertise from a few global, centralized institutions rather than fostering context-specific knowledge.

    Even the most brilliant, experienced coach or mentor cannot single-handedly impart the multifaceted understanding needed to drive impact across diverse settings.

    A ‘fellowship’ structure also subtly perpetuates the existing hierarchies within global health.

    It operates on the tacit assumption that the necessary knowledge and expertise reside in certain centralized locations and among a select cadre of experts.

    This sends an implicit message that knowledge flows unidirectionally—from the seasoned experts to the less-experienced practitioners who are perceived as needing to be “coached.”

    Learn more: How does peer learning compare to expert-led coaching ‘fellowships’?

    Peer learning: Collective wisdom, collective progress

    In global health, no one individual or institution can be expected to possess solutions for all settings.

    Sustainable change requires mobilizing collective intelligence, not just centralized expertise.

    Learn more: The COVID-19 Peer Hub as an example of Collective Intelligence (CI) in practice

    This means transitioning from hierarchical, top-down development models to flexible platforms amplifying practitioners’ contextual insights.

    The gap between need and availability of quality training in global health is too vast for conventional approaches to ever bridge alone.

    Instead of desperately chasing an asymptote of expanding elite access, we stand to gain more by embracing approaches that democratize development.

    Complex challenges demand platforms unleashing collective wisdom through collaboration. The technologies exist.

    In the “fellowship” example, less than five percent of participants were selected to receive feedback from global experts.

    A peer learning platform can provide high-quality peer feedback for everyone.

    • Such a platform democratizes access to knowledge and disrupts traditional hierarchies.
    • It also moves away from the outdated notion that expertise is concentrated in specific geographical or institutional locations.

    What learning science underpins peer learning for global health? Watch this 14-minute presentation at the 2023 annual meeting of the American Society for Tropical Medicine and Hygiene (ASTMH).

    What about the perceived trade-off between quality and scale?

    Effective digital peer learning platforms negate this zero-sum game.

Research on MOOCs (massive open online courses) has conclusively demonstrated that structured, rubric-based peer review, in which each learner gives feedback to and receives feedback from three peers, achieves reliability comparable to that of expert feedback alone when properly supported.

    If we are going to make a dent in the global health crises we face, we have to shift from a model that relies on the expertise of the few to one that harnesses the collective wisdom of the many.

    • Peer learning isn’t a Band-Aid. It is an innovative leap forward that disrupts the status quo, and it’s exactly what the global health sector needs.
    • Peer learning is not just an incremental improvement. It is a seismic shift in the way we think about learning and capacity-building in global health.
    • Peer learning is not a compromise. It is an upgrade. We move from a model of scarcity, bound by the limits of individual expertise, to one of collective wisdom.
    • Peer learning is more than just a useful tool. It is a challenge to the traditional epistemology of global health education.

    Read about a practical example: Movement for Immunization Agenda 2030 (IA2030): grounding action in local realities to reach the unreached

    As we grapple with urgent issues in global health—from pandemic recovery to routine immunization—it is clear that we need collective intelligence and resource sharing on a massive scale.

    And for that, we need to move beyond the selective, top-down models of the past.

    The collective challenges we face in global health require collective solutions.

    And collective solutions require us to question established norms, particularly when those norms serve to maintain existing hierarchies and power imbalances.

    Now it is up to us to seize this opportunity and move beyond outmoded, hierarchical models.

    There is a path – now, not tomorrow – to truly democratize knowledge, make meaningful progress, and tackle the global health challenges that confront us all.

  • How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

    How does the scalability of peer learning compare to expert-led coaching ‘fellowships’?

By connecting practitioners to learn from each other, peer learning facilitates collaborative development. How does it compare to expert-led coaching and mentoring “fellowships” that are seen as the ‘gold standard’ for professional development in global health?

    Scalability in global health matters. (See this article for a comparison of other aspects.)

    Simplified mathematical modeling can compare the scalability of expert coaching (“fellowships”) and peer learning

    Let N be the total number of learners and M be the number of experts available. Assuming that each expert can coach K learners effectively:

    $latex \text{Total Number of Coached Learners} = M \times K&s=3$

For N ≫ M × K, it is evident that expert coaching is costly and difficult to scale.

Expert coaching “fellowships” depend on the availability of experts, an optimistic assumption in highly specialized fields.

    The number of learners (N) greatly exceeds the product of the number of experts (M) and the capacity per expert (K).
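To make the arithmetic concrete, here is a minimal Python sketch of the formula above. The 1,600 interested learners come from the fellowship example earlier in this article; the number of experts and the per-expert capacity are assumptions, chosen only so that M × K equals the 30 mentored fellows in that example.

```python
# Minimal sketch of expert coaching capacity: coached learners = M x K.
# N is taken from the fellowship example (1,600 interested learners);
# M and K are assumed values chosen so that M x K = 30 mentored fellows.

N = 1_600   # total learners interested
M = 10      # available experts (assumption)
K = 3       # learners each expert can coach effectively (assumption)

coached = M * K
print(f"Coached learners: {coached} of {N} ({coached / N:.1%})")
# Coached learners: 30 of 1600 (1.9%)
```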

    Scalability of one-to-one peer learning

    By comparison, peer learning turns the conventional model on its head by transforming each learner into a potential coach who can provide peer feedback.

    This has significant advantages in scalability.

    Let N be the total number of learners. Assuming a peer-to-peer model, where each learner can learn from any other learner:

    $latex \text{Total Number of Learning Interactions} = \frac{N \times (N – 1)}{2}&s=3$

    $latex \text{The number of learning interactions scales with: } O(N^2)&s=3$

    In this context, the number of learning interactions scales quadratically with the number of learners. This means that if the number of learners doubles, the total number of learning interactions increases by a factor of four. This quadratic relationship highlights the significant increase in interactions (and potential scalability challenges) as more learners participate in the model.

    However, this one-to-one model is difficult to implement: not every learner is going to interact with every other learner in meaningful ways.

    A more practical ‘triangular’ peer learning model with no upper limit to scalability

    In The Geneva Learning Foundation’s peer learning model, learners give feedback to three peers, and receive feedback from three peers. This is a structured, time-bound process of peer review, guided by an expert-designed rubric.

    When each learner gives feedback to 3 different learners and receives feedback from 3 different learners, the model changes significantly from the one-to-one model where every learner could potentially interact with every other learner. In this specific configuration, the total number of interactions can be calculated based on the number of learners N, with each learner being involved in 6 interactions (3 given + 3 received).

Each learner is involved in six interactions (three given and three received). However, each interaction involves two learners, a giver and a receiver, so summing six interactions per learner across all N learners counts every interaction twice. The total number of distinct interactions in the system is therefore (6 × N) / 2 = 3 × N.

    Therefore, the total number of learning interactions in the system can be represented as:

    $latex \text{Total Number of Learning Interactions} = N \times 3&s=3$

    Given this setup, the complexity or scalability of the system in terms of learning interactions relative to the number of participants N is linear. This is because the total number of interactions increases directly in proportion to the number of learners. Thus, the Big O notation would be:

    $latex O(N)&s=3$

    This indicates that the total number of learning interactions scales linearly with the number of learners. In this configuration, as the number of learners increases, the total number of interactions increases at a linear rate, which is more scalable and manageable than the quadratic rate seen in the peer-to-peer model where every learner interacts with every other learner. Learn more: There is no scale.
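The contrast between the two peer learning configurations can be checked with a short Python sketch. This is only an illustration of the two formulas above, not a description of any platform’s implementation.

```python
# Interaction counts for the two peer learning configurations described above.

def one_to_one_interactions(n: int) -> int:
    # Every learner can interact with every other learner: N(N-1)/2, i.e. O(N^2).
    return n * (n - 1) // 2

def triangular_interactions(n: int, peers: int = 3) -> int:
    # Each learner gives feedback to `peers` learners and receives from `peers` learners;
    # each interaction involves two learners, so the system total is peers * N, i.e. O(N).
    return peers * n

for n in (100, 1_000, 10_000):
    print(f"N={n:>6}: one-to-one={one_to_one_interactions(n):>9}, "
          f"triangular={triangular_interactions(n):>6}")
# N=   100: one-to-one=     4950, triangular=   300
# N=  1000: one-to-one=   499500, triangular=  3000
# N= 10000: one-to-one= 49995000, triangular= 30000
```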

    Illustration: The Geneva Learning Foundation © 2024

  • Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

    Calculating the relative effectiveness of expert coaching, peer learning, and cascade training

A formula for calculating learning efficacy (E), considering the importance (weight) of each criterion and the specific ratings for a given learning method, is:

    $latex \text{Efficacy} = \frac{S \cdot w_S + I \cdot w_I + C \cdot w_C + F \cdot w_F + U \cdot w_U}{w_S + w_I + w_C + w_F + w_U}&s=3$

    This abstract formula provides a way to quantify learning efficacy, considering various educational criteria and their relative importance (weights) for effective learning.

Variable | Definition | Description
S | Scalability | Ability to accommodate a large number of learners
I | Information fidelity | Quality and reliability of information
C | Cost effectiveness | Financial efficiency of the learning method
F | Feedback quality | Quality of feedback received
U | Uniformity | Consistency of learning experience

Summary of five variables that contribute to learning efficacy

Weights for each variable are derived from empirical data and expert consensus.

    All values are on a scale of 0-4, with a “4” representing the highest level.

Scalability | Information fidelity | Cost effectiveness | Feedback quality | Uniformity
$latex w_S&s=3$ | $latex w_I&s=3$ | $latex w_C&s=3$ | $latex w_F&s=3$ | $latex w_U&s=3$
4.00 | 3.00 | 4.00 | 3.00 | 1.00

Assigned weights

    Here is a summary table including all values for each criterion, learning efficacy calculated with weights, and Efficacy-Scale Score (ESS) for peer learning, cascade training, and expert coaching.

    The Efficacy-Scale Score (ESS) can be calculated by multiplying the efficacy (E) of a learning method by the number of learners (N).

    $latex \text{ESS} = E \times N&s=3$

    This table provides a detailed comparison of the values for each criterion across the different learning methods, the calculated learning efficacy values considering the specified weights, and the Efficacy-Scale Score (ESS) for each method.

Type of learning | Scalability | Information fidelity | Cost effectiveness | Feedback quality | Uniformity | Learning efficacy | # of learners | Efficacy-Scale Score
Peer learning | 4.00 | 2.50 | 4.00 | 2.50 | 1.00 | 3.20 | 1000 | 3200
Cascade training | 2.00 | 1.00 | 2.00 | 0.50 | 0.50 | 1.40 | 500 | 700
Expert coaching | 0.50 | 4.00 | 1.00 | 4.00 | 3.00 | 2.20 | 60 | 132

Of course, many nuances of individual programmes could affect real-world effectiveness. The model is grounded in empirical data and deliberately simplified to highlight core determinants of learning efficacy: it uses statistical weighting to prioritize key educational factors, and its abstractions and assumptions may not capture every nuance of individual learning scenarios.
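For readers who want to check the arithmetic, the short Python sketch below reproduces the efficacy and ESS values in the table from the assigned weights and ratings. It is a worked example of the formulas above, not a tool used to generate them.

```python
# Weighted learning efficacy E and Efficacy-Scale Score (ESS = E x N),
# computed from the weights and ratings given in the tables above.

weights = {"S": 4.0, "I": 3.0, "C": 4.0, "F": 3.0, "U": 1.0}

methods = {
    # method: (ratings on the 0-4 scale, number of learners N)
    "Peer learning":    ({"S": 4.0, "I": 2.5, "C": 4.0, "F": 2.5, "U": 1.0}, 1000),
    "Cascade training": ({"S": 2.0, "I": 1.0, "C": 2.0, "F": 0.5, "U": 0.5}, 500),
    "Expert coaching":  ({"S": 0.5, "I": 4.0, "C": 1.0, "F": 4.0, "U": 3.0}, 60),
}

for name, (ratings, n) in methods.items():
    efficacy = sum(ratings[k] * weights[k] for k in weights) / sum(weights.values())
    print(f"{name}: E = {efficacy:.2f}, ESS = {efficacy * n:.0f}")
# Peer learning: E = 3.20, ESS = 3200
# Cascade training: E = 1.40, ESS = 700
# Expert coaching: E = 2.20, ESS = 132
```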

    Peer learning

    The calculated learning efficacy for peer learning, $latex (E_{\text {peer}})&s=2$ , is 3.20. This value reflects the weighted assessment of peer learning’s strengths and characteristics according to the provided criteria and their importance.

By virtue of its scalability, the ESS for peer learning is 24 times higher than that of expert coaching.

    Cascade training

    For Cascade Training, the calculated learning efficacy, $latex (E_{\text {cascade}})&s=2$, is approximately 1.40. This reflects the weighted assessment based on the provided criteria and their importance, indicating lower efficacy compared to peer learning.

    Cascade training has a higher ESS than expert coaching, due to its ability to achieve scale.

    Learn more: Why does cascade training fail?

    Expert coaching

    For Expert Coaching, the calculated learning efficacy, $latex (E_{\text {expert}})&s=2$, is approximately 2.20. This value indicates higher efficacy than cascade training but lower than peer learning.

    However, the ESS is the lowest of the three methods, primarily due to its inability to scale. Read this article for a scalability comparison between expert coaching and peer learning.

    Image: The Geneva Learning Foundation Collection © 2024

  • Why does cascade training fail?

    Why does cascade training fail?

    Cascade training remains widely used in global health.

    Cascade training can look great on paper: an expert trains a small group who, in turn, train others, thereby theoretically scaling the knowledge across an organization.

    It attempts to combine the advantages of expert coaching and peer learning by passing knowledge down a hierarchy.

    However, despite its promise and persistent use, cascade training is plagued by several factors that often lead to its failure.

    This is well-documented in the field of learning, but largely unknown (or ignored) in global health.

    What are the mechanics of this known inefficacy?

    Here are four factors that contribute to the failure of cascade training

    1. Information loss

Consider a model where an expert holds a knowledge set K. At each subsequent layer of the cascade, only a fraction α of the knowledge is retained (a fraction 1 − α is lost):

    $latex K_n = K \cdot \alpha^n&s=3$

• Where $latex K_n$ is the knowledge at the nth level of the cascade. As n grows (for α < 1), $latex K_n$ decreases exponentially, leading to severe information loss.
    • Each layer in the cascade introduces a potential for misunderstanding the original information, leading to the training equivalent of the ‘telephone game’.
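A brief numerical illustration of the formula follows. The retention fraction α = 0.7 is an assumed value chosen only to show the shape of the decay, not an empirical estimate.

```python
# Knowledge remaining at each cascade layer: K_n = K * alpha**n,
# where alpha is the fraction of knowledge retained at each hand-off.
# alpha = 0.7 is assumed for illustration only.

K = 1.0      # normalize the expert's knowledge to 1
alpha = 0.7  # assumed retention per layer

for n in range(5):
    print(f"Layer {n}: {K * alpha ** n:.2f} of the original knowledge remains")
# Layer 0: 1.00, Layer 1: 0.70, Layer 2: 0.49, Layer 3: 0.34, Layer 4: 0.24
```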

    2. Lack of feedback

    In a cascade model, only the first layer receives feedback from an actual expert.

    • Subsequent layers have to rely on their immediate ‘trainers,’ who might not have the expertise to correct nuanced mistakes.
    • The hierarchical relationship between trainer and trainee is different from peer learning, in which it is assumed that everyone has something to learn from others, and expertise is produced through collaborative learning.

    3. Skill variation

    • Not everyone is equipped to teach others.
    • The people who receive the training first are not necessarily the best at conveying it to the next layer, leading to unequal training quality.

    4. Dilution of responsibility

    • As the cascade flows down, the sense of responsibility for the quality and fidelity of the training dilutes.
    • The absence of feedback to drive a quality development process exacerbates this.

    Image: The Geneva Learning Foundation Collection © 2024

  • The capability trap: Nobody ever gets credit for fixing problems that never happened

    The capability trap: Nobody ever gets credit for fixing problems that never happened

    Here is a summary of the key points about the capability trap, from the article “Nobody ever gets credit for fixing problems that never happened: creating and sustaining process improvement”.

    What is the capability trap?

    • Many companies invest heavily in process improvement programs, yet few efforts actually produce significant results. This is called the “improvement paradox”.
    • The problem lies not with the specific tools, but rather how the introduction of new programs interacts with existing organizational structures and dynamics.
    • Using system dynamics modeling, the authors studied implementation challenges in depth through over a dozen case studies. Their models reveal insights into why improvement programs often fail.

    Core causal loops

    • The “Work Harder” loop – managers pressure people to spend more time working to immediately boost throughput and close performance gaps. But this is only temporary.
    • The “Work Smarter” loop – managers encourage improvement activities which enhance process capability over time for more enduring gains, but there is a delay before benefits are seen.
    • The “Reinvestment” reinforcing loop – successfully improving capability frees up more time for further improvement. But the reverse vicious cycle often dominates instead.
    • The “Shortcuts” loop – facing pressure, people cut corners on improvement activities which temporarily frees up more time for work. But this gradually erodes capability.

    The capability trap

    • Short-term “Work Harder” and “Shortcuts” decisions eventually hurt capability and require heroic work efforts to maintain performance, creating a downward spiral.
    • However, because capability erodes slowly, managers fail to connect problems to past decisions and blame poor worker motivation instead, leading to a self-confirming cycle.
    • Even improvement programs just increase pressure and drive more shortcuts, making stereotypes and conflicts worse. This “capability trap” causes programs to fail.

    The “capability trap” refers to the downward spiral organizations can get caught in, where attempting to boost performance by pressuring people to “work harder” actually erodes process capability over time. This trap works through a few key mechanisms:

    1. Facing pressure, people cut corners and reduce time spent on improvement activities in order to free up more time for immediate work. This temporarily boosts throughput.
    2. However, this comes at a cost of gradually declining process capability, as less time is invested in maintenance, training, and problem solving.
    3. Capability erosion then reduces performance, widening the gap versus desired performance levels.
    4. Managers falsely attribute this to poor motivation or effort from the workforce. They lack awareness of the capability trap dynamics, and the delays between pressing people to “work harder” and the capability declines that eventually ensue.
    5. Management increases pressure further, demanding heroic work efforts, which causes workers to cut even more corners. This spirals capability downward while confirming management’s incorrect attribution even more.
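The authors use full system dynamics models; the toy Python sketch below is our own simplification of the trap’s core feedback loop, with all parameters invented for illustration. It shows how sustained pressure that cuts improvement time lifts throughput briefly while capability drifts steadily downward.

```python
# Toy sketch of the capability trap (not the authors' model; all parameters assumed).
# A performance gap creates pressure; pressure cuts improvement time ("shortcuts"),
# which boosts throughput in the short term but slowly erodes capability.

capability = 1.0   # relative process capability
target = 1.0       # desired throughput

for period in range(12):
    throughput = 0.8 * capability             # output supported by current capability
    gap = max(0.0, target - throughput)       # performance shortfall
    shortcuts = min(1.0, 2.0 * gap)           # share of improvement time cut under pressure
    throughput += 0.1 * shortcuts             # "work harder": temporary boost
    capability *= 1.0 - 0.05 * shortcuts      # erosion from skipped improvement work
    capability += 0.02 * (1.0 - shortcuts)    # "work smarter" gains only if time is protected
    print(f"period {period:2d}: throughput {throughput:.3f}, capability {capability:.3f}")
```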

    Key takeaway for learning leaders

Learning leaders must understand the systemic traps identified in the article that underlie failed improvement initiatives, and facilitate shifts in mental models. This helps build sustainable, effective learning programs through productive, capability-enhancing cycles.

    Key takeaway for immunization leaders

    It is reasonable to hypothesize that poor health worker performance is a symptom rather than the cause of poor immunization programme performance. Short-term decisions, often responding to top-down targets and donor requirements, hurt capability and require, as the authors say, “heroic work efforts to maintain performance, creating a downward spiral.” Managers then incorrectly diagnose this as a performance problem due to motivation.

    How to escape the capability trap

    The key to avoiding or escaping this trap is therefore shifting the mental models that reinforce the incorrect attributions about motivation. Some ways to do this include:

    • Educating managers on the systemic structures causing the capability trap through methods like system dynamics modeling
    • Allowing time for capability-enhancing improvements to take effect before judging performance
    • Incentivizing quality and sustainability of throughput rather than just short-term volume alone
    • Seeking input from workers on the barriers to improvement they face

    With awareness of the structural causes and delays, managers can avoid erroneously attributing blame. Patience and a systems perspective are critical for companies to invest their way out of the capability trap.

    • Shift mental models to recognize system structures leading to the capability trap, rather than blaming people. Then improvement tools can work.
    • A useful example could be system dynamics workshops that achieved this shift and enabled successful programs, dramatically enhancing performance.

    Reference

    Repenning, N.P., Sterman, J.D., 2001. Nobody ever gets credit for fixing problems that never happened: creating and sustaining process improvement. California management review 43, 64–88. https://doi.org/10.2307/41166101

    Illustration: The Geneva Learning Foundation Collection © 2024

  • Making sense of sensemaking

    Making sense of sensemaking

    In her article “A Shared Lens for Sensemaking in Learning Analytics”, Sasha Poquet argues that the field of learning analytics lacks a shared conceptual language to describe the process of sensemaking around educational data. She reviews prominent theories of sensemaking, delineating tensions between assumptions in dominant paradigms. Poquet then demonstrates the eclectic use of sensemaking frameworks across empirical learning analytics research. For instance, studies frequently conflate noticing dashboard information with interpreting its significance. To advance systematic inquiry, she calls for revisiting epistemic assumptions to reconcile tensions between cognitive and sociocultural traditions. Adopting a transactional perspective, Poquet suggests activity theory, conceptualizations of perceived situational definitions, and ecological affordance perception can jointly illuminate subjective and objective facets of sensemaking. This preliminary framework spotlights the interplay of internal worldviews, external systemic contexts, and emergent perceptual processes in appropriating analytics.

    The implications span research and practice. The proposed constructs enable precise characterization of variability in stakeholder sensemaking to inform dashboard design. They also facilitate aggregating insights across implementations. Moreover, explicitly mapping situational landscapes and tracking affording relations between users and tools reveals rapid shifts in adoption phenomena frequently obscured in learning analytics. Capturing sensemaking dynamics through this multidimensional lens promises more agile, context-sensitive interventions. It compels a human-centered orientation to analytics aligned with longstanding calls to catalyze latent systemic wisdom rather than control complex educational processes.

    The Geneva Learning Foundation’s mission centers on fostering embedded peer learning networks scaling across boundaries. This vision resonates deeply with calls to transition from fragmented insights towards fostering collective coherence. The Foundation already employs a complexity meta-theory treating learning as an emergent phenomenon arising from cross-level interactions between minds and cultures. Adopting Poquet’s shared vocabulary for examining sensemaking processes driving appropriation of insights can help, as we continue to explore how to describe, explain, and understand our own work, large parts of which remain emergent. For instance, analysis could trace how contextual definitions interact with perceived affordances and activity systems to propagate innovative practices during Teach to Reach events spanning thousands worldwide. More broadly, the lens proposed mobilizes analytics to illuminate rather than dictate stakeholder wayfinding through complex challenges.

    Poquet, O. (2024). A shared lens around sensemaking in learning analytics: What activity theory, definition of a situation and affordances can offer. British Journal of Educational Technology, 00, 1–21.

    Illustration: The Geneva Learning Foundation Collection © 2024

  • Education as a system of systems: rethinking learning theory to tackle complex threats to our societies

    Education as a system of systems: rethinking learning theory to tackle complex threats to our societies

    In their 2014 article, Jacobson, Kapur, and Reimann propose shifting the paradigm of learning theory towards the conceptual framework of complexity science. They argue that the longstanding dichotomy between cognitive and situative theories of learning fails to capture the intricate dynamics at play. Learning arises across a “bio-psycho-social” system involving interactive feedback loops linking neuronal processes, individual cognition, social context, and cultural milieu. As such, what emerges cannot be reduced to any individual component.

    To better understand how macro-scale phenomena like learning manifest from micro-scale interactions, the authors invoke the notion of “emergence” prominent in the study of complex adaptive systems. Discrete agents interacting according to simple rules can self-organize into sophisticated structures through across-scale feedback.

    For instance, the formation of a traffic jam results from the cumulative behavior of individual drivers. The jam then constrains their ensuing decisions.

    Similarly, in learning contexts, the construction of shared knowledge, norms, values and discourses proceeds through local interactions, which then shape future exchanges. Methodologically, properly explicating emergence requires attending to co-existing linear and non-linear dynamics rather than viewing the system exclusively through either lens.

    By adopting a “trees-forest” orientation that observes both proximal neuronal firing and distal cultural evolution, researchers can transcend outmoded dichotomies. Beyond scrutinizing whether learner or environment represents the more suitable locus of analysis, the complex systems paradigm directs focus towards their multifaceted transactional synergy, which gives rise to learning. This avoids ascribing primacy to any single level, as well as positing reductive causal mechanisms, instead elucidating circular self-organizing feedback across hierarchically nested systems.

    The implications are profound. Treating learning as emergence compels educators to appreciate that curricular inputs and pedagogical techniques designed based upon linear extrapolation will likely yield unexpected results. Our commonsense notions that complexity demands intricacy fail to recognize that simple nonlinear interactions generate elaborate outcomes. This epistemic shift suggests practice should emphasize creating conditions conducive for adaptive growth rather than attempting to directly implant mental structures. Specifically, adopting a complexity orientation may entail providing open-ended creative experiences permitting self-guided exploration, establishing a learning culture that values diversity, dissent and ambiguity as catalysts for sensemaking, and implementing distributed network-based peer learning.

    Overall, the article explores how invoking a meta-theory grounded in complex systems science can dissolve dichotomies that have plagued the field. It compels implementing flexible, decentralized and emergent pedagogies far better aligned to the nonlinear complexity of learner development in context.

    Sophisticated learning theories often fail to translate into meaningful practice. Yet what this article describes closely corresponds to how The Geneva Learning Foundation (TGLF) is actually implementing its vision of education as a philosophy for change, in the face of complex threats to our societies. The Foundation conceives of learning as an emergent phenomenon arising from interactions between individuals, their social contexts, and surrounding systems. Our programs aim to catalyze this emergence by connecting practitioners facing shared challenges to foster collaborative sensemaking. For example, our Teach to Reach events connect tens of thousands of health professionals to share experience on their own terms, in relation to their own contextual needs. This emphasis on open-ended exploration and decentralized leadership exemplifies the flexible pedagogy demanded by a complexity paradigm. Overall, the Foundation’s work – deliberately situated outside the constraints of vestigial Academy – embodies the turn towards nonlinear models that can help transcend stale dichotomies. Our practice demonstrates the concrete value of recasting learning as the product of embedded agents interacting to generate systemic wisdom greater than their individual contributions.

Jacobson, M.J., Kapur, M., Reimann, P., 2014. Towards a complex systems meta-theory of learning as an emergent phenomenon: Beyond the cognitive versus situative debate. Boulder, Colorado: International Society of the Learning Sciences. https://doi.org/10.22318/icls2014.362

    Illustration © The Geneva Learning Foundation Collection (2024)

  • The design of intelligent environments for education

    The design of intelligent environments for education

    Warren M. Brodey, writing in 1967, advocated for “intelligent environments” that evolve in tandem with inhabitants rather than rigidly conditioning behaviors. The vision described deeply interweaves users and contexts, enabling environments to respond in real-time to boredom and changing needs with shifting modalities.

    Core arguments state that industrial-model education trains obedience over creativity through standardized, conformity-demanding environments that waste potential. Optimal learning requires tuning instruction to each student. Rigid spaces reflecting hard architecture must give way to soft, living systems adaptively promoting growth. His article categorizes environment and system intelligence across axes like passive/active, simple/complex, stagnant/self-improving.

    Significant themes include emancipating achievement through tailored guidance per preferences and abilities, architecting feedback loops between human and machine, and progressing through predictive insight rather than blunt insistence. Overarching takeaways reveal that intelligence emerges from environments and inhabitants synergistically improving one another, not stationary enforcement of tradition.

    For education, this analysis indicates transformative power in platforms sensing needs and seamlessly adjusting in response. Systems incorporating complex feedback architectures could gently reengage before boredom or fatigue arise. Structures may transform to suit changing activities and aptitudes. As described for next-generation spacecraft, education environments might proactively provide implements predicted as useful.

    The breakthrough conceptually resides in transitioning from monolithic demands constraining uniformity, to intimate learning partnerships actively fostering growth along personalized trajectories. The implications suggest education serving each student as they are, not as imposed expectations require them to be at given ages. Flexibility, enrichment, and jointly elevating potential represent primary goals rather than regimented metrics. Realizing this future demands evolving connections of those who teach and learn with their environment, recognizing the potential of such connections unlocking self-actualization.

    Brodey, W.M., 1967. The design of intelligent environments: Soft architecture.

  • How do we reframe health performance management within complex adaptive systems?

    How do we reframe health performance management within complex adaptive systems?

    We need a conceptual framework that situates health performance management within complex adaptive systems.

This is a summary of an important paper by Tom Newton-Lewis et al. that describes such a framework, identifying the factors that determine the appropriate balance between directive and enabling approaches to health performance management in complex systems.

    Existing health performance management approaches in many low- and middle-income country health systems are largely directive, aiming to control behaviour using targets, performance monitoring, incentives, and answerability to hierarchies.

    Health systems are complex and adaptive: performance outcomes arise from interactions between many interconnected system actors and their ability to adapt to pressures for change.

    In my view, this paper mends an important broken link in theories of change that try to consider learning beyond training.

    The complex, dynamic, multilevel nature of health systems makes outcomes difficult to control, so directive approaches to performance management need to be balanced with enabling approaches that foster collective responsibility and empower teams to self-organise and use data for shared sensemaking and decision-making.

    Directive approaches may be more effective where workers are primarily extrinsically motivated, in less complex systems where there is higher certainty over how outcomes should be achieved, where there are sufficient resources and decision space, and where informal relationships do not subvert formal management levers.

    Enabling approaches may be more effective in contexts of higher complexity and uncertainty and where there are higher levels of trust, teamwork, and intrinsic motivation, as well as appropriate leadership.

    Directive and enabling approaches are not ‘either-or’: designers of health performance management systems must strive for an appropriate balance between them.

The greater the dissonance between the design of a health performance management system and the real context in which it is implemented, the more likely it is to trigger perverse, unintended consequences.

    Interventions must be carefully calibrated to the context of the health system, the culture of its organisations, and the motivations of its individuals.

By considering each factor and its interdependencies, actors can minimise perverse unintended consequences while attaining a contextually appropriate balance between directive and enabling approaches in complex adaptive systems.

    The complexity of the framework and the interdependencies it describes reinforce that there is no ‘one-size-fits-all’ blueprint for health performance management.

    For higher-order learning and whole-system improvement to occur, practical and tacit knowledge needs to flow among complex adaptive systems’ actors and organisations, thus leveraging the power of networks and social connections (eg, learning exchanges and communities of practice).

    Reference

Newton-Lewis, T., Munar, W., Chanturidze, T., 2021. Performance management in complex adaptive systems: a conceptual framework for health systems. BMJ Glob Health 6, e005582. https://doi.org/10.1136/bmjgh-2021-005582

  • What is a “rubric” and why use rubrics in global health education?

    What is a “rubric” and why use rubrics in global health education?

    Rubrics are well-established, evidence-based tools in education, but largely unknown in global health.

At the Geneva Learning Foundation (TGLF), the rubric is a key tool that we use – as part of a comprehensive package of interventions – to transform high-cost, low-volume training dependent on the limited availability of global experts into scalable peer learning to improve access, quality, and outcomes.

The more prosaic definition of the rubric – stripped of any pedagogical questioning – is “a type of scoring guide that assesses and articulates specific components and expectations for an assignment” (Source).

    The rubric is a practical solution to a number of complex issues that prevent effective teaching and learning in global health.

    Developing a rubric provides a practical method for turning complex content and expertise into a learning process in which learners will learn primarily from each other.

    Hence, making sense of a rubric requires recognizing and appreciating the value of peer learning.

    This may be difficult to understand for those working in global health, due to a legacy of scientifically and morally wrong norms for learning and teaching primarily through face-to-face training.

    The first norm is that global experts teach staff in countries who are presumed to not know.

    The second is that the expert who knows (their subject) also necessarily knows how to teach, discounting or dismissing the science of pedagogy.

    Experts consistently believe that they can just “wing it” because they have the requisite technical knowledge.

    This ingrained belief also rests on the third mistaken assumption: that teaching is the job of transmitting information to those who lack it.

    (Paradoxically, the proliferation of online information modules and webinars has strengthened this norm, rather than weakened it).

    Indeed, although almost everyone agrees in principle that peer learning is “great”, there remains deep skepticism about its value.

    Unfortunately, learner preferences do not correlate with outcomes.

    Given the choice, learners prefer sitting passively to listen to a great lecture from a globally-renowned figure, rather than the drudgery of working in a group of peers whose level of expertise is unknown and who may or may not be engaged in the activities.

(Yet, when assessed formally, the group that works together will out-perform the group that was lectured.)

For subject matter experts, there can even be an existential question: if peers can learn without me, the expert, am I still needed? What is my value to learners? What is my role?

    Developing a rubric provides a way to resolve such tensions and augment rather than diminish the significance of expertise.

    This requires, for the subject matter expert, a willingness to rethink and reframe their role from sage on the stage to guide on the side.

    Rubric development requires:

    1. expert input and review to think through what set of instructions and considerations will guide learners in developing useful knowledge they can use; and
    2. expertise to select the specific resources (such as guidance documents, case studies, etc.) that will help the learner as they develop this new knowledge.

In this approach, an information module, a webinar, a guidance document, or any other piece of knowledge becomes a potential learning resource that can be referenced in a rubric, with specific indications of when and how it may be used to support learning.

    In a peer learning context, a rubric is also a tool for reflection, stirring metacognition (thinking about thinking) that helps build critical thinking “muscles”.

    Our rubrics combine didactic instructions (“do this, do that”), reflective and exploratory questions, and as many considerations as necessary to guide the development of high-quality knowledge.

These instructions are organized into versatile, specific criteria that can be as simple as “Calculate sample size” (where there is only one correct answer), focus on practicalities (“Formulate your three top recommendations to your national manager”), or allow for exploration (“Reflect on the strategic value of your vaccination coverage survey for your country’s national immunization programme”).

    Yes, we use a scoring guide on a 0-4 scale, where the 4 out of 4 for each criterion summarizes what excellent work looks like.

    This often initially confuses both learners and subject matter experts, who assume that peers (whose prior expertise has not been evaluated) are being asked to grade each other.

    It turns out that, with a well-designed rubric, a neophyte can provide useful, constructive feedback to a seasoned expert – and vice versa.

Both are using the same quality standard, so they are not sharing a personal opinion but applying that standard, using their critical thinking capabilities to do so.
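Purely as a hypothetical illustration of this structure (not The Geneva Learning Foundation’s actual rubric format), a single criterion with its 0-4 scoring guide might be sketched as follows; the instructions, reflective question, and level descriptors are invented.

```python
# Hypothetical sketch of one rubric criterion with a 0-4 scoring guide.
# The criterion title comes from the example in the text; everything else
# is invented for illustration.

criterion = {
    "title": "Formulate your three top recommendations to your national manager",
    "instructions": "List three recommendations and explain how each is feasible in your context.",
    "reflective_question": "What evidence from your own data supports each recommendation?",
    "scoring_guide": {
        4: "Three specific, feasible recommendations, each grounded in local data.",
        3: "Three recommendations, mostly specific, with partial grounding in data.",
        2: "Recommendations are generic or only loosely tied to the context.",
        1: "Fewer than three recommendations, or none that is actionable.",
        0: "Criterion not addressed.",
    },
}

def peer_feedback(score: int, explanation: str, suggestion: str) -> dict:
    # Peer feedback pairs a rating with the explanation for it and a practical
    # suggestion for improvement, as described above.
    assert score in criterion["scoring_guide"]
    return {"score": score, "explanation": explanation, "suggestion": suggestion}
```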

    Before using the rubric to review the work of peers, each learner has had to use it to develop their own work.

    This ensures a kind of parity between peers: whatever the differences in experience and expertise, countries, or specializations, everyone has first practiced using the rubric for their own needs.

In such a context, the key is not the rating but the explanation the peer reviewer provides for it, with the requirement that they offer constructive, practical suggestions for how the author can improve their work.

    In some cases, learners are surprised to receive contradictory feedback: two reviewers give opposite ratings – one very high, and the other very low – together with conflicting explanations for these ratings.

Such cases are an opportunity for learners to review the rubric again while critically examining the feedback received, in order to adjudicate between the conflicting reviews.

    Ultimately, rubric-based feedback allows for significantly more learner agency in making the determination of what to do with the feedback received – as the next task is to translate this feedback into practical revisions to improve their work.

    This is, in and of itself, conducive to significant learning.

    Learn more about rubrics as part of effective teaching and learning from Bill Cope and Mary Kalantzis, two education pioneers who taught me to use them.

    Image: Mondrian’s classroom. The Geneva Learning Foundation Collection © 2024