Category: Theory

  • Why does cascade training fail?

    Why does cascade training fail?

    Cascade training remains widely used in global health.

    Cascade training can look great on paper: an expert trains a small group who, in turn, train others, thereby theoretically scaling the knowledge across an organization.

    It attempts to combine the advantages of expert coaching and peer learning by passing knowledge down a hierarchy.

    However, despite its promise and persistent use, cascade training is plagued by several factors that often lead to its failure.

    This is well-documented in the field of learning, but largely unknown (or ignored) in global health.

    What are the mechanics of this known inefficacy?

    Here are four factors that contribute to the failure of cascade training:

    1. Information loss

    Consider a model where an expert holds a knowledge set K. In each subsequent layer of the cascade, only a fraction α of the knowledge is retained (so 1 − α is lost at every layer):

    $latex K_n = K \cdot \alpha^n&s=3$

    • Where $latex K_n$ is the knowledge at the nth level of the cascade. As n grows, $latex K_n$ exponentially decreases, leading to severe information loss.
    • Each layer in the cascade introduces a potential for misunderstanding the original information, leading to the training equivalent of the ‘telephone game’.
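
    Using the same symbols as the formula above (with α read as the fraction of knowledge retained per layer, consistent with K_n = K·α^n), a short Python sketch shows how quickly knowledge decays across layers:

```python
def knowledge_at_level(k0: float, alpha: float, n: int) -> float:
    """Knowledge remaining after n cascade layers: K_n = K * alpha**n."""
    return k0 * alpha ** n

# With 80% retention per layer (alpha = 0.8), a four-layer cascade
# delivers well under half of the original knowledge:
for n in range(5):
    print(f"Level {n}: {knowledge_at_level(100.0, 0.8, n):.1f}% of K")
# prints 100.0, 80.0, 64.0, 51.2, 41.0 across levels 0-4
```

    The retention figure of 80% is an illustrative assumption; in practice α is unknown and likely varies by layer, which only worsens the compounding.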

    2. Lack of feedback

    In a cascade model, only the first layer receives feedback from an actual expert.

    • Subsequent layers have to rely on their immediate ‘trainers,’ who might not have the expertise to correct nuanced mistakes.
    • The hierarchical relationship between trainer and trainee is different from peer learning, in which it is assumed that everyone has something to learn from others, and expertise is produced through collaborative learning.

    3. Skill variation

    • Not everyone is equipped to teach others.
    • The people who receive the training first are not necessarily the best at conveying it to the next layer, leading to unequal training quality.

    4. Dilution of responsibility

    • As the cascade flows down, the sense of responsibility for the quality and fidelity of the training dilutes.
    • The absence of feedback to drive a quality development process exacerbates this.

    Image: The Geneva Learning Foundation Collection © 2024

  • Making sense of sensemaking

    Making sense of sensemaking

    In her article “A Shared Lens for Sensemaking in Learning Analytics”, Sasha Poquet argues that the field of learning analytics lacks a shared conceptual language to describe the process of sensemaking around educational data. She reviews prominent theories of sensemaking, delineating tensions between assumptions in dominant paradigms. Poquet then demonstrates the eclectic use of sensemaking frameworks across empirical learning analytics research. For instance, studies frequently conflate noticing dashboard information with interpreting its significance. To advance systematic inquiry, she calls for revisiting epistemic assumptions to reconcile tensions between cognitive and sociocultural traditions. Adopting a transactional perspective, Poquet suggests activity theory, conceptualizations of perceived situational definitions, and ecological affordance perception can jointly illuminate subjective and objective facets of sensemaking. This preliminary framework spotlights the interplay of internal worldviews, external systemic contexts, and emergent perceptual processes in appropriating analytics.

    The implications span research and practice. The proposed constructs enable precise characterization of variability in stakeholder sensemaking to inform dashboard design. They also facilitate aggregating insights across implementations. Moreover, explicitly mapping situational landscapes and tracking affording relations between users and tools reveals rapid shifts in adoption phenomena frequently obscured in learning analytics. Capturing sensemaking dynamics through this multidimensional lens promises more agile, context-sensitive interventions. It compels a human-centered orientation to analytics aligned with longstanding calls to catalyze latent systemic wisdom rather than control complex educational processes.

    The Geneva Learning Foundation’s mission centers on fostering embedded peer learning networks scaling across boundaries. This vision resonates deeply with calls to transition from fragmented insights towards fostering collective coherence. The Foundation already employs a complexity meta-theory treating learning as an emergent phenomenon arising from cross-level interactions between minds and cultures. Adopting Poquet’s shared vocabulary for examining sensemaking processes driving appropriation of insights can help, as we continue to explore how to describe, explain, and understand our own work, large parts of which remain emergent. For instance, analysis could trace how contextual definitions interact with perceived affordances and activity systems to propagate innovative practices during Teach to Reach events spanning thousands worldwide. More broadly, the lens proposed mobilizes analytics to illuminate rather than dictate stakeholder wayfinding through complex challenges.

    Poquet, O. (2024). A shared lens around sensemaking in learning analytics: What activity theory, definition of a situation and affordances can offer. British Journal of Educational Technology, 00, 1–21.

    Illustration: The Geneva Learning Foundation Collection © 2024

  • Education as a system of systems: rethinking learning theory to tackle complex threats to our societies

    Education as a system of systems: rethinking learning theory to tackle complex threats to our societies

    In their 2014 article, Jacobson, Kapur, and Reimann propose shifting the paradigm of learning theory towards the conceptual framework of complexity science. They argue that the longstanding dichotomy between cognitive and situative theories of learning fails to capture the intricate dynamics at play. Learning arises across a “bio-psycho-social” system involving interactive feedback loops linking neuronal processes, individual cognition, social context, and cultural milieu. As such, what emerges cannot be reduced to any individual component.

    To better understand how macro-scale phenomena like learning manifest from micro-scale interactions, the authors invoke the notion of “emergence” prominent in the study of complex adaptive systems. Discrete agents interacting according to simple rules can self-organize into sophisticated structures through across-scale feedback.

    For instance, the formation of a traffic jam results from the cumulative behavior of individual drivers. The jam then constrains their ensuing decisions.

    Similarly, in learning contexts, the construction of shared knowledge, norms, values and discourses proceeds through local interactions, which then shape future exchanges. Methodologically, properly explicating emergence requires attending to co-existing linear and non-linear dynamics rather than viewing the system exclusively through either lens.

    By adopting a “trees-forest” orientation that observes both proximal neuronal firing and distal cultural evolution, researchers can transcend outmoded dichotomies. Beyond scrutinizing whether learner or environment represents the more suitable locus of analysis, the complex systems paradigm directs focus towards their multifaceted transactional synergy, which gives rise to learning. This avoids ascribing primacy to any single level, as well as positing reductive causal mechanisms, instead elucidating circular self-organizing feedback across hierarchically nested systems.

    The implications are profound. Treating learning as emergence compels educators to appreciate that curricular inputs and pedagogical techniques designed based upon linear extrapolation will likely yield unexpected results. Our commonsense notions that complexity demands intricacy fail to recognize that simple nonlinear interactions generate elaborate outcomes. This epistemic shift suggests practice should emphasize creating conditions conducive for adaptive growth rather than attempting to directly implant mental structures. Specifically, adopting a complexity orientation may entail providing open-ended creative experiences permitting self-guided exploration, establishing a learning culture that values diversity, dissent and ambiguity as catalysts for sensemaking, and implementing distributed network-based peer learning.

    Overall, the article explores how invoking a meta-theory grounded in complex systems science can dissolve dichotomies that have plagued the field. It compels implementing flexible, decentralized and emergent pedagogies far better aligned to the nonlinear complexity of learner development in context.

    Sophisticated learning theories often fail to translate into meaningful practice. Yet what this article describes closely corresponds to how The Geneva Learning Foundation (TGLF) is actually implementing its vision of education as a philosophy for change, in the face of complex threats to our societies. The Foundation conceives of learning as an emergent phenomenon arising from interactions between individuals, their social contexts, and surrounding systems. Our programs aim to catalyze this emergence by connecting practitioners facing shared challenges to foster collaborative sensemaking. For example, our Teach to Reach events connect tens of thousands of health professionals to share experience on their own terms, in relation to their own contextual needs. This emphasis on open-ended exploration and decentralized leadership exemplifies the flexible pedagogy demanded by a complexity paradigm. Overall, the Foundation’s work – deliberately situated outside the constraints of vestigial Academy – embodies the turn towards nonlinear models that can help transcend stale dichotomies. Our practice demonstrates the concrete value of recasting learning as the product of embedded agents interacting to generate systemic wisdom greater than their individual contributions.

    Jacobson, M.J., Kapur, M., Reimann, P., 2014. Towards a complex systems meta-theory of learning as an emergent phenomenon: Beyond the cognitive versus situative debate. Boulder, Colorado: International Society of the Learning Sciences. https://doi.org/10.22318/icls2014.362

    Illustration © The Geneva Learning Foundation Collection (2024)

  • The design of intelligent environments for education

    The design of intelligent environments for education

    Warren M. Brodey, writing in 1967, advocated for “intelligent environments” that evolve in tandem with inhabitants rather than rigidly conditioning behaviors. The vision described deeply interweaves users and contexts, enabling environments to respond in real-time to boredom and changing needs with shifting modalities.

    Core arguments state that industrial-model education trains obedience over creativity through standardized, conformity-demanding environments that waste potential. Optimal learning requires tuning instruction to each student. Rigid spaces reflecting hard architecture must give way to soft, living systems adaptively promoting growth. His article categorizes environment and system intelligence across axes like passive/active, simple/complex, stagnant/self-improving.

    Significant themes include emancipating achievement through tailored guidance per preferences and abilities, architecting feedback loops between human and machine, and progressing through predictive insight rather than blunt insistence. Overarching takeaways reveal that intelligence emerges from environments and inhabitants synergistically improving one another, not stationary enforcement of tradition.

    For education, this analysis indicates transformative power in platforms sensing needs and seamlessly adjusting in response. Systems incorporating complex feedback architectures could gently reengage before boredom or fatigue arise. Structures may transform to suit changing activities and aptitudes. As described for next-generation spacecraft, education environments might proactively provide implements predicted as useful.

    The breakthrough conceptually resides in transitioning from monolithic demands constraining uniformity, to intimate learning partnerships actively fostering growth along personalized trajectories. The implications suggest education serving each student as they are, not as imposed expectations require them to be at given ages. Flexibility, enrichment, and jointly elevating potential represent primary goals rather than regimented metrics. Realizing this future demands evolving the connections between those who teach and learn and their environment, recognizing the potential of such connections to unlock self-actualization.

    Brodey, W.M., 1967. The design of intelligent environments: Soft architecture.

  • What is a “rubric” and why use rubrics in global health education?

    What is a “rubric” and why use rubrics in global health education?

    Rubrics are well-established, evidence-based tools in education, but largely unknown in global health.

    At the Geneva Learning Foundation (TGLF), the rubric is a key tool that we use – as part of a comprehensive package of interventions – to transform high-cost, low-volume training dependent on the limited availability of global experts into scalable peer learning to improve access, quality, and outcomes.

    The more prosaic definition of the rubric – stripped of any pedagogical questioning – is “a type of scoring guide that assesses and articulates specific components and expectations for an assignment” (Source).

    The rubric is a practical solution to a number of complex issues that prevent effective teaching and learning in global health.

    Developing a rubric provides a practical method for turning complex content and expertise into a learning process in which learners will learn primarily from each other.

    Hence, making sense of a rubric requires recognizing and appreciating the value of peer learning.

    This may be difficult to understand for those working in global health, due to a legacy of scientifically and morally wrong norms for learning and teaching primarily through face-to-face training.

    The first norm is that global experts teach staff in countries who are presumed to not know.

    The second is that the expert who knows (their subject) also necessarily knows how to teach, discounting or dismissing the science of pedagogy.

    Experts consistently believe that they can just “wing it” because they have the requisite technical knowledge.

    This ingrained belief also rests on the third mistaken assumption: that teaching is the job of transmitting information to those who lack it.

    (Paradoxically, the proliferation of online information modules and webinars has strengthened this norm, rather than weakened it).

    Indeed, although almost everyone agrees in principle that peer learning is “great”, there remains deep skepticism about its value.

    Unfortunately, learner preferences do not correlate with outcomes.

    Given the choice, learners prefer sitting passively to listen to a great lecture from a globally-renowned figure, rather than the drudgery of working in a group of peers whose level of expertise is unknown and who may or may not be engaged in the activities.

    (Yet, when assessed formally, the group that works together will out-perform the group that was lectured.)

    For subject matter experts, there can even be an existential question: if peers can learn without me, the expert, then am I still needed? What is my value to learners? What is my role?

    Developing a rubric provides a way to resolve such tensions and augment rather than diminish the significance of expertise.

    This requires, for the subject matter expert, a willingness to rethink and reframe their role from “sage on the stage” to “guide on the side”.

    Rubric development requires:

    1. expert input and review to think through the set of instructions and considerations that will guide learners in developing knowledge they can put to use; and
    2. expertise to select the specific resources (such as guidance documents, case studies, etc.) that will help the learner as they develop this new knowledge.

    In this approach, an information module, a webinar, a guidance document, or any other piece of knowledge becomes a potential resource for learning that can be referenced in a rubric, with specific indications of when and how it may be used to support learning.

    In a peer learning context, a rubric is also a tool for reflection, stirring metacognition (thinking about thinking) that helps build critical thinking “muscles”.

    Our rubrics combine didactic instructions (“do this, do that”), reflective and exploratory questions, and as many considerations as necessary to guide the development of high-quality knowledge.

    These instructions are organized into versatile, specific criteria that can be as simple as “Calculate sample size” (where there will be only one correct answer), focus on practicalities (“Formulate your three top recommendations to your national manager”), or allow for exploration (“Reflect on the strategic value of your vaccination coverage survey for your country’s national immunization programme”).

    Yes, we use a scoring guide on a 0-4 scale, where the 4 out of 4 for each criterion summarizes what excellent work looks like.
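
    To make this concrete, here is a purely illustrative sketch – the criteria and level descriptors below are invented for this example, not taken from an actual TGLF rubric – of how a rubric with a 0–4 scoring guide might be represented:

```python
# Hypothetical rubric: each criterion maps scale points to descriptors.
# The descriptor at 4 summarizes what excellent work looks like.
rubric = {
    "Calculate sample size": {
        4: "Correct sample size, with every parameter justified",
        2: "Correct formula applied, but parameters not justified",
        0: "No calculation attempted",
    },
    "Formulate your three top recommendations": {
        4: "Three specific, actionable recommendations grounded in the analysis",
        2: "Recommendations present but generic",
        0: "No recommendations given",
    },
}

def describe_excellence(rubric: dict) -> list:
    """Summarize what 4 out of 4 looks like for each criterion."""
    return [f"{criterion}: {levels[4]}" for criterion, levels in rubric.items()]
```

    Because the top-level descriptors spell out excellence explicitly, both authors and reviewers work against the same standard rather than personal opinion.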

    This often initially confuses both learners and subject matter experts, who assume that peers (whose prior expertise has not been evaluated) are being asked to grade each other.

    It turns out that, with a well-designed rubric, a neophyte can provide useful, constructive feedback to a seasoned expert – and vice versa.

    Both are using the same quality standard, so they are not sharing their personal opinion but applying that standard by using their critical thinking capabilities to do so.

    Before using the rubric to review the work of peers, each learner has had to use it to develop their own work.

    This ensures a kind of parity between peers: whatever the differences in experience and expertise, countries, or specializations, everyone has first practiced using the rubric for their own needs.

    In such a context, the key is not the rating, but the explanation that the peer reviewer provides to justify it, with the requirement that they offer constructive, practical suggestions for how the author can improve their work.

    In some cases, learners are surprised to receive contradictory feedback: two reviewers give opposite ratings – one very high, and the other very low – together with conflicting explanations for these ratings.

    In such cases, the learner has an opportunity to review the rubric again, critically examining the feedback received in order to adjudicate between the conflicting reviews.
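
    A learning platform could surface such divergence automatically. A minimal sketch – the two-point threshold is an illustrative assumption, not a TGLF rule:

```python
def divergent_reviews(ratings, threshold=2):
    """Flag a submission whose peer ratings (0-4 scale) spread wider than
    the threshold, prompting the author to revisit the rubric itself."""
    return max(ratings) - min(ratings) > threshold

# One very high and one very low rating trigger the flag:
assert divergent_reviews([4, 1]) is True
assert divergent_reviews([3, 2, 3]) is False
```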

    Ultimately, rubric-based feedback allows for significantly more learner agency in making the determination of what to do with the feedback received – as the next task is to translate this feedback into practical revisions to improve their work.

    This is, in and of itself, conducive to significant learning.

    Learn more about rubrics as part of effective teaching and learning from Bill Cope and Mary Kalantzis, two education pioneers who taught me to use them.

    Image: Mondrian’s classroom. The Geneva Learning Foundation Collection © 2024

  • Which is better for global health: online, blended, or face-to-face learning?

    Which is better for global health: online, blended, or face-to-face learning?

    Question 1. Does supplementing face-to-face instruction with online instruction enhance learning?

    No. Positive effects associated with blended learning should not be attributed to the media, per se. (It is more likely that positive effects are due to people doing more work in blended learning, once online and then again in a physical space.)

    This is the conclusion of the U.S. Department of Education’s “Evaluation of evidence-based practices in online learning: a meta-analysis and review of online learning studies” (September 2010).

    Question 2. Is the final academic performance of students in distance learning programs better than that of those enrolled in traditional face-to-face (FTF) programs, over the last twenty years?

    Yes. Distance learning has produced increasingly better learning outcomes since 1991 – when learning technologies to support distance learning were far more rudimentary than they are now.

    This is the conclusion of the meta-analysis by Mickey Shachar and Yoram Neumann, “Twenty years of research on the academic performance differences between traditional and distance learning: summative meta-analysis and trend examination,” MERLOT Journal of Online Learning and Teaching, Vol. 6, No. 2, June 2010.

    A long time ago, I asked Bill Cope what the evidence says about the superiority of online learning over blended and face-to-face. My experience had already consistently been that you could achieve so much more with the confines and constraints of physical space removed.

    Of course, it is complicated. But Bill pointed me to the two meta-analyses published in 2010 that provided fair and definitive evidence to answer two questions. Yet, in the field of global health, the underlying assumption of funders and technical partners remains that there is no better way to learn than by flying bodies and materials at high cost. This is scientifically and morally wrong, does not scale, and has created a per diem economy of perverse incentives. It is wrong even if it is easy to understand why international trainers and trainees both express a preference for the least effective, low-volume, high-cost approach to learning.

    Image: Online learning networks. Personal collection, generated by Midjourney.

  • Meeting of the minds

    Meeting of the minds

    This is my presentation for the Geneva Learning Foundation, first made at the Swiss Knowledge Management Forum (SKMF) round table held on 8 September 2016 at the École polytechnique fédérale de Lausanne (EPFL). Its title is “Meeting of the minds: Rethinking our assumptions about the superiority of face-to-face encounters.” It is an exploration of the impact of rapid change that encompasses learning at scale, the performance revolution, complexity and volatility, and what Nathan Jurgenson calls the IRL fetish.

    The point is not to invert assumptions about the superiority of one medium over another. Rather, it is to look at the context for change, thinking through the challenges we face, with a specific, pragmatic focus on learning problems such as:

    • You have an existing high-cost, low-volume face-to-face learning initiative, but need to train more people (scale).
    • You want learning to be immediately practical and relevant for practitioners (performance).
    • You need to achieve higher-order learning (complexity), beyond information transmission to develop analytical and evaluation competencies that include mindfulness and reflection.
    • You have a strategy, but individuals in their silos think the way they already do things is just fine (networks).
    • You need to develop case studies, but a consultant will find it difficult to access tacit knowledge and experience (experience).
    • You want to build a self-organizing community of practice, in a geographically distributed organization, to sharpen the mission through decentralized means.

    These are the kinds of problems that we solve for organizations and networks through digital learning. Can such challenges be addressed through action or activities that take place solely in the same time and (physical) space? Of course not. Is it correct to describe what happens at a distance, by digital means, as not in-real-life (IRL)? No – a less obvious but equally logical conclusion.

    If we begin to question this assumption – which, as Andrew Feenberg pointed out in 1989, was first formulated by Plato – what happens next? What are the consequences and the implications? We need new ways to teach and learn. It is the new economy of effort provided by the Internet that enables us to afford these new ways of doing new things. Digital dualism blinds us to the many ways in which technology has seeped into our lives, to the point where “real life” (and therefore learning) happens across both physical and digital spaces.

    The idea for this round table emerged from conversations with the SKMF’s Véronique Sikora and Gil Regev. Véronique and I were chatting on LSi’s Slack about the pedagogy of New Learning that underpins Scholar, the learning technology we are using at the Geneva Learning Foundation.

    Cooking up a round table

    With Scholar, we can quickly organize an exercise in which hundreds of learners from anywhere can co-develop new knowledge, using peer review with a structured rubric that empowers participants to learn from each other. This write-review-revise process is incredibly efficient, and generates higher-order learning outcomes that make Scholar suitable to build analysis, evaluation, and reflection through connected learning.

    Scholar process: write-review-revise

    Obviously, such a process does not work at scale in a physical space. However, could the Scholar process be replicated in the purely physical space of a small round table with 15–20 participants? What would be the experience of participants and facilitators?

    It took quite a bit of effort to figure out how we could model this. Some aspects could not be reproduced due to the limitations of physical space. There was much less time than one could afford online, and therefore less space for reflection. The stimulation to engage through conversation was constant, unlike the online experience of sitting alone in front of one’s device. Diversity was limited to the arbitrary subset of people who happened to show up for this round table. This provided comfort to some but narrowed the realm of possibilities for discovery and questioning.

    I have learned to read subtle clues and to infer behavior from comments, e-mail messages, and other signals in a purely digital course where everything happens at a distance. That made it fascinating to directly observe the behavior of participants, in particular the social dimension of their interactions that seemed to be wonderfully enjoyable and terribly inefficient at the same time.

    Only one of the round table participants (Véronique, who finished the first-ever #DigitalScholar course during the Summer) had used Scholar, so the activity, in which they shared a story and then peer reviewed it using a structured rubric, seemed quite banal. At a small scale, it turned out to be quite manageable. I had envisioned a round robin process in which participants would have to move around constantly to complete their three peer reviews. However, since they were already sitting in groups of four, it was easier to have the review process take place at each table, minimizing the need for movement. This felt like an analog to what we often end up doing in an online learning environment when an activity takes shape due to the constraints of the digital space…
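
    The round-robin process I had envisioned can be sketched in a few lines – a generic rotation scheme, not Scholar’s actual assignment algorithm: each participant reviews the k peers who follow them in a circular ordering, so nobody reviews themselves and everyone receives exactly k reviews.

```python
def round_robin_reviews(participants, k=3):
    """Assign each participant the next k peers in a circular ordering,
    so nobody reviews themselves and everyone receives exactly k reviews."""
    n = len(participants)
    if k >= n:
        raise ValueError("need more participants than reviews per person")
    return {
        reviewer: [participants[(i + j) % n] for j in range(1, k + 1)]
        for i, reviewer in enumerate(participants)
    }

assignments = round_robin_reviews(["Ana", "Ben", "Chen", "Dia", "Eli"])
# Ana reviews Ben, Chen, Dia; Eli wraps around to review Ana, Ben, Chen.
```

    In the room, of course, this rotation lost out to the simpler option of reviewing within each table of four.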

    Image: Flowers in Thor. Personal collection (August 2016).


  • Elements of a learning dashboard

    Elements of a learning dashboard

    “What is clear is that a learning rich culture will emphasize informal learning and more open learning designs rather than relying only on formal training approaches. The learning infrastructure consists of all of the formal, informal, and incidental activities, systems, and policies that promote individual, team, and organizational learning and knowledge creation.”

    Image: Elements of a learning dashboard (Watkins)

    Source: Watkins, K., 2013. Building a Learning Dashboard. The HR Review 16–21.

  • Education is the science of sciences

    Education is the science of sciences

    “We want to talk about science as a certain kind of ‘knowing’.

    Specifically, we want to use it to name those deeper forms of knowing that are the purpose of education.

    Science in this broader sense consists of things you do to know that are premeditated, things you set out to know in a carefully considered way.

    It involves out-of-the-ordinary knowledge-making efforts that have a peculiar intensity of focus, rather than things you get to know as an incidental consequence of doing something or being somewhere.

    Science has special methods or techniques for knowing.

    These methods are connected with specialized traditions of knowledge making and bodies of knowledge.

    In these senses, history, language studies and mathematics are sciences, as are chemistry, physics and biology.

    Education is the science of learning (and, of course, teaching).

    Its subject is how people come to know.

    It teaches learners the methods for making knowledge that is, in our broad sense, scientific.

    It teaches what has been learned and can be learned using these methods.

    In this sense, education is privileged to be the science of sciences.

    As a discipline itself, the science of education develops knowledge about the processes of coming to know.”

    Kalantzis, M., Cope, B., 2012. New learning: elements of a science of education, Second edition. ed. Cambridge University Press.

    Image: Neurons in the brain. Bryan Jones, University of Utah


  • Flow

    Flow

    In our studies, we found that every flow activity, whether it involved competition, chance, or any other dimension of experience, had this in common: It provided a sense of discovery, a creative feeling of transporting the person into a new reality. It pushed the person to higher levels of performance, and led to previously undreamed-of states of consciousness. In short, it transformed the self by making it more complex. In this growth of the self lies the key to flow activities.

    Flow channel states

    Source: Csikszentmihalyi, M., 1990. Flow: The Psychology of Optimal Experience, 1st ed. Harper & Row, New York. Photo: Fluid Painting 79 Acrylic On Canvas (Mark Chadwick/Flickr).