Category: Learning design

  • Taking the pulse: why and how we change everything in response to learner signals

    Taking the pulse: why and how we change everything in response to learner signals

    The ability to analyze and respond to learner behavior as it happens is crucial for educators.

    In complex learning that takes place in digital spaces, separating the design of instruction from its delivery does not make sense.

    Here is the practical approach we use in The Geneva Learning Foundation’s learning-to-action model to implement responsive learning environments by listening to learner signals and adapting design, activities, and feedback accordingly.

    Listening for and interpreting learner signals

    Educators must pay close attention to various signals that learners emit throughout their learning journey. These signals appear in several key ways:

    1. Engagement levels: This includes participation rates, the quality of contributions in discussions, how learners interact with each other, and the knowledge artefacts they produce.
    2. Emotional responses: The tone and content of learner feedback can indicate enthusiasm, frustration, or confusion.
    3. Performance patterns: Trends in speed and volume of responses tend to strongly correlate with more significant learning outcome indicators.
    4. Interaction dynamics: Learners can feel a facilitator’s conviction (or lack thereof) in the learning process. Observing the interaction should focus first on the facilitator’s own behavior: what are they modeling for learners?
    5. Technical interactions: The way learners navigate the learning platform, which resources they access most, and any technical challenges they face are important indicators.

    Making sense of learner signals

    Once these signals are identified, a nuanced approach to analysis is necessary:

    1. Contextual consideration: Understanding the broader context of learners’ experiences is vital. For example, differences between language cohorts might reflect varying levels of real-world experience and cultural contexts.
    2. Holistic view: Look beyond immediate learning objectives to understand all aspects of learners’ experiences, including factors outside the course that may affect their engagement.
    3. Temporal analysis: Track changes in learner behavior over time to reveal important trends and patterns as the course progresses.
    4. Comparative assessment: Compare behavior across different cohorts, language groups, or demographic segments to identify unique needs and preferences (illustrated in the sketch after this list).
    5. Feedback loop analysis: Examine how learners respond to different types of feedback and instructional interventions to provide valuable insights.
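
    To make the temporal and comparative analyses above more concrete, here is a minimal sketch in Python of how weekly engagement could be tracked per cohort and how a cohort whose participation is dropping could be flagged for attention. The data, field names, and threshold are purely illustrative assumptions, not TGLF’s actual analytics.

    ```python
    from collections import defaultdict
    from statistics import mean

    # Hypothetical event records: one row per learner, per week.
    # Field names and values are illustrative only.
    events = [
        {"learner": "a01", "cohort": "French", "week": 1, "contributions": 3},
        {"learner": "a01", "cohort": "French", "week": 2, "contributions": 1},
        {"learner": "a02", "cohort": "French", "week": 1, "contributions": 4},
        {"learner": "a02", "cohort": "French", "week": 2, "contributions": 0},
        {"learner": "b07", "cohort": "English", "week": 1, "contributions": 2},
        {"learner": "b07", "cohort": "English", "week": 2, "contributions": 4},
    ]

    def weekly_engagement(events):
        """Temporal analysis: average contributions per learner, by cohort and week."""
        buckets = defaultdict(list)
        for e in events:
            buckets[(e["cohort"], e["week"])].append(e["contributions"])
        return {key: mean(values) for key, values in buckets.items()}

    def flag_declining_cohorts(weekly, drop_threshold=0.5):
        """Comparative assessment: flag cohorts whose latest weekly average has
        fallen below drop_threshold times the average of their earlier weeks."""
        by_cohort = defaultdict(dict)
        for (cohort, week), avg in weekly.items():
            by_cohort[cohort][week] = avg
        flagged = []
        for cohort, weeks in by_cohort.items():
            ordered = [weeks[w] for w in sorted(weeks)]
            if len(ordered) >= 2 and ordered[-1] < drop_threshold * mean(ordered[:-1]):
                flagged.append(cohort)
        return flagged

    weekly = weekly_engagement(events)
    print("Cohorts needing attention:", flag_declining_cohorts(weekly))
    ```

    A flag like this never triggers a change on its own; it simply tells the facilitation team where to look first, in context.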

    Adapting learning design in situ

    What can we change in response to learner behavior, signals, and patterns?

    1. Customized content: Tailor case studies, examples, and scenarios to match the real-world experiences and cultural contexts of different learner groups.
    2. Flexible pacing: Adjust the rhythm of content delivery and activities based on observed engagement patterns and feedback.
    3. Varied support mechanisms: Implement a range of support options, from technical assistance to emotional support, based on identified learner needs.
    4. Dynamic group formations: Adapt group activities and peer learning opportunities based on observed interaction dynamics and skill levels.
    5. Multimodal delivery: Offer content and activities in various formats to cater to different learning preferences and technical capabilities.

    Responding to learner signals

    Feedback plays a crucial role in the learning process:

    1. Comprehensive acknowledgment: Feedback mechanisms should demonstrate to learners that their input is valued and considered. This might involve creating, at least once, detailed summaries of learner feedback to show that every voice has been heard.
    2. Timely interventions: Using real-time feedback to address emerging issues or confusion quickly can prevent small challenges from becoming major obstacles.
    3. Personalized guidance: Tailor feedback to individual learners based on their unique progress, challenges, and goals.
    4. Peer feedback facilitation: Create opportunities for learners to provide feedback to each other to foster a collaborative learning environment.
    5. Metacognitive prompts: Incorporate feedback that encourages learners to reflect on their learning process to promote self-awareness and self-directed learning.

    Balancing act

    When combined, these analyses provide clues to inform decisions.

    Nothing should be set in stone.

    Decisions need to be pragmatic and rapid.

    In order to respond to the pattern formed by signals, what are the trade-offs?

    The digital economy of effort makes rapid changes possible.

    Nevertheless, we consider the cost of each change versus its benefit.

    This adaptive approach involves careful balancing of various factors:

    1. Depth versus speed: Navigate the tension between providing comprehensive feedback and maintaining a timely pace of instruction.
    2. Structure versus flexibility: Maintain a coherent course structure while allowing for adaptations based on learner needs.
    3. Individual versus group needs: Balance addressing individual learner challenges with maintaining the momentum of the entire cohort.
    4. Emotional support versus learning structure: Provide necessary emotional support, especially in challenging contexts, while maintaining focus on learning objectives.

    Learning is research

    Each learning experience should be treated as a research opportunity:

    1. Data collection: Systematically collect data on learner behavior, feedback, and outcomes.
    2. Team reflection: Conduct regular debriefs with the instructional team to share insights and adjust strategies.
    3. Iterative design: Use insights gained from each cohort to refine the learning design for future iterations.
    4. Cross-cohort learning: Apply lessons learned from one language or cultural group to enhance the experience of others, while respecting unique contextual differences.

    Image: The Geneva Learning Foundation Collection © 2024

  • What is a “rubric” and why use rubrics in global health education?

    What is a “rubric” and why use rubrics in global health education?

    Rubrics are well-established, evidence-based tools in education, but largely unknown in global health.

    At the Geneva Learning Foundation (TGLF), the rubric is a key tool that we use – as part of a comprehensive package of interventions – to transform high-cost, low-volume training dependent on the limited availability of global experts into scalable peer learning to improve access, quality, and outcomes.

    The more prosaic definition of the rubric – stripped of any pedagogical questioning – is “a type of scoring guide that assesses and articulates specific components and expectations for an assignment” (Source).

    The rubric is a practical solution to a number of complex issues that prevent effective teaching and learning in global health.

    Developing a rubric provides a practical method for turning complex content and expertise into a learning process in which learners will learn primarily from each other.

    Hence, making sense of a rubric requires recognizing and appreciating the value of peer learning.

    This may be difficult to understand for those working in global health, due to a legacy of scientifically and morally wrong norms for learning and teaching primarily through face-to-face training.

    The first norm is that global experts teach staff in countries who are presumed not to know.

    The second is that the expert who knows (their subject) also necessarily knows how to teach, discounting or dismissing the science of pedagogy.

    Experts consistently believe that they can just “wing it” because they have the requisite technical knowledge.

    This ingrained belief also rests on the third mistaken assumption: that teaching is the job of transmitting information to those who lack it.

    (Paradoxically, the proliferation of online information modules and webinars has strengthened this norm, rather than weakened it).

    Indeed, although almost everyone agrees in principle that peer learning is “great”, there remains deep skepticism about its value.

    Unfortunately, learner preferences do not correlate with outcomes.

    Given the choice, learners prefer sitting passively to listen to a great lecture from a globally-renowned figure, rather than the drudgery of working in a group of peers whose level of expertise is unknown and who may or may not be engaged in the activities.

    (Yet, when assessed formally, the group that works together will out-perform the group that was lectured.) For subject matter experts, there can even be an existential question: if peers can learn without me, the expert, then am I still needed? What is my value to learners? What is my role?

    Developing a rubric provides a way to resolve such tensions and augment rather than diminish the significance of expertise.

    This requires, for the subject matter expert, a willingness to rethink and reframe their role from sage on the stage to guide on the side.

    Rubric development requires:

    1. expert input and review to think through what set of instructions and considerations will guide learners in developing knowledge they can use; and
    2. expertise to select the specific resources (such as guidance documents, case studies, etc.) that will help the learner as they develop this new knowledge.

    In this approach, an information module, a webinar, a guidance document, or any other piece of knowledge becomes a potential resource for learning that can be referenced in a rubric, with specific indications of when and how it may be used to support learning.

    In a peer learning context, a rubric is also a tool for reflection, stirring metacognition (thinking about thinking) that helps build critical thinking “muscles”.

    Our rubrics combine didactic instructions (“do this, do that”), reflective and exploratory questions, and as many considerations as necessary to guide the development of high-quality knowledge.

    These instructions are organized into versatile, specific criteria that can be as simple as “Calculate sample size” (where there will be only one correct answer), focus on practicalities (“Formulate your three top recommendations to your national manager”), or allow for exploration (“Reflect on the strategic value of your vaccination coverage survey for your country’s national immunization programme”).

    Yes, we use a scoring guide on a 0-4 scale, where the 4 out of 4 for each criterion summarizes what excellent work looks like.

    This often initially confuses both learners and subject matter experts, who assume that peers (whose prior expertise has not been evaluated) are being asked to grade each other.

    It turns out that, with a well-designed rubric, a neophyte can provide useful, constructive feedback to a seasoned expert – and vice versa.

    Both are using the same quality standard, so they are not sharing their personal opinion but applying that standard by using their critical thinking capabilities to do so.
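
    As an illustration of the structure described above – criteria combining instructions and reflective questions, a 0-4 scale whose top level describes excellent work, and peer reviews where the explanation matters more than the rating – here is a minimal sketch in Python. The criterion, level descriptors, and review text are invented for illustration and do not come from an actual TGLF rubric.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Criterion:
        """One rubric criterion, scored on a 0-4 scale (4 = what excellent work looks like)."""
        prompt: str              # a didactic instruction or a reflective question
        level_descriptors: dict  # score level -> description of work at that level

    @dataclass
    class PeerReview:
        """One peer review of a submission against the rubric."""
        reviewer: str
        ratings: dict            # criterion prompt -> rating on the 0-4 scale
        explanations: dict       # criterion prompt -> explanation and practical suggestions

    # An invented criterion, not taken from an actual TGLF rubric.
    recommendations = Criterion(
        prompt="Formulate your three top recommendations to your national manager",
        level_descriptors={
            4: "Three specific, feasible recommendations, each justified with evidence",
            2: "Recommendations listed, but generic or without justification",
            0: "No recommendations provided",
        },
    )

    # Author and reviewer work from the same standard; the rating is secondary to
    # the explanation and the suggestions for improvement.
    review = PeerReview(
        reviewer="peer_17",
        ratings={recommendations.prompt: 2},
        explanations={
            recommendations.prompt: (
                "Relevant but not yet specific to your district; name the data source "
                "you would use to monitor each recommendation."
            )
        },
    )
    ```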

    Before using the rubric to review the work of peers, each learner has had to use it to develop their own work.

    This ensures a kind of parity between peers: whatever the differences in experience and expertise, countries, or specializations, everyone has first practiced using the rubric for their own needs.

    In such a context, the key is not the rating, but the explanation that the peer reviewer provides for it, together with the requirement that they offer constructive, practical suggestions for how the author can improve their work.

    In some cases, learners are surprised to receive contradictory feedback: two reviewers give opposite ratings – one very high, and the other very low – together with conflicting explanations for these ratings.

    In such cases, there is an opportunity for learners to review the rubric again, critically examining the feedback received in order to adjudicate between the conflicting reviews.

    Ultimately, rubric-based feedback allows for significantly more learner agency in deciding what to do with the feedback received – as the next task is to translate this feedback into practical revisions to improve their work.

    This is, in and of itself, conducive to significant learning.

    Learn more about rubrics as part of effective teaching and learning from Bill Cope and Mary Kalantzis, two education pioneers who taught me to use them.

    Image: Mondrian’s classroom. The Geneva Learning Foundation Collection © 2024

  • How we work

    How we work

    We achieve operational excellence to provide a high-quality, personalized and transformative learning experience for each learner – no matter how many are in the cohort.

    We achieve this by:

    • Building on the best available evidence from research and our own practice in adult learning to address, engage, and retain busy, working professionals;
    • Responding as quickly as we possibly can to learner queries and problems – and ensuring that individual problem-solving is used to improve the experience of the entire group;
    • Finding the sweet spot between structure (unambiguous instructions, schedule, and process) and process agility (adapting activities to improve support to learners); and
    • Designing for facilitation to empower learners, scaffolding their journey but recognizing that they are the ones who best know their context and needs.

    Together, these capabilities combine to:

    • Offer a personalized learning experience in which each learner receives the support they need, and feels a growing sense of belonging.
    • Recreate an experience of collaboration that surpasses that of the physical world – still imperfect, but augmenting capabilities and recognizing that this is increasingly how we get things done in the real world, where physical and digital are fused.
    • Accelerate knowledge acquisition by connecting knowledge shards to activities and tasks directly related to the context of work.
    • Guide knowledge development and problem-solving using rubrics that define the quality standard.

    These may seem like abstract principles. Yet they are the ones that have enabled our team to:

    • achieve completion rates above ninety percent with cohorts of hundreds of learners;
    • kindle high motivation; and
    • foster the emergence of new forms of leadership for learning.

    Building on the idea that education is a philosophy for change, our focus has shifted from learning outcomes – necessary but not sufficient – to a focus on supporting learners all the way to the finish line of impact.

  • What lies beyond the event horizon of the ‘webinar’?

    What lies beyond the event horizon of the ‘webinar’?

    It is very hard to convey to learners and newcomers to digital learning alike that asynchronous modes of learning are proven to be far more effective. There is an immediacy to a sage-on-the-stage lecture – whether it is plodding or enthralling – or to being connected simultaneously with others to do group work.

    Asynchronous learning goes against the way our brains work, driven by prompts, events, and immediacy. Yet people already enjoy the benefit of “time-shifting” their TV shows, and “on demand” is now the norm for media consumption.

    Most webinars still require you to show up at a specific time. With live streaming of the Foundation’s events, we are observing growing appreciation for asynchronous “I’ll watch it when I want to” availability of recorded events. This behavior seems different from the stated intention to watch a recorded webinar later, which almost never happens. (This is, in part, the motivation question: does anyone watch recordings of webinars without being forced to?)

    It is wonderful that the big video platforms immediately make the recording available, at the same URL, after a livestreamed event. Right now, this is better than Zoom, which does not (yet) offer a simple, automated way to share the recording with everyone who missed a live session, nor a mechanism for post-event viewers to contribute comments or questions.

    Image: Time travel (Wikipedia Commons).

  • New learning and leadership for front-line community health workers facing danger

    New learning and leadership for front-line community health workers facing danger

    This presentation was prepared for the second global meeting of the Health Care in Danger (HCiD) project in Geneva, Switzerland (17–18 May 2017).

    In October 2016, over 700 pre-hospital emergency workers from 70 countries signed up for the #Ambulance! initiative to “share experience and document situations of violence”. This initiative was led by Norwegian Red Cross and IFRC in partnership with the Geneva Learning Foundation, as part of the Health Care in Danger project. Over four weeks (equivalent to two days of learning time), participants documented 72 front-line incidents of violence and similar risks, and came up with practical approaches to dealing with such risks.

    This initiative builds on the Scholar Approach, developed by the University of Illinois College of Education, the Geneva Learning Foundation, and Learning Strategies International. In 2013, IFRC had piloted this approach to produce 105 case studies documenting learning in emergency operations.

    These are some of the questions which I address in the video presentation below:

    • Mindfulness: Can behaviors and mindfulness change through a digital learning initiative? If so, what kind of pedagogical approach (and technology to scaffold it) is needed to achieve such meaningful outcomes?
    • Leadership: How can learners become leaders through connected learning? What does leadership mean in a global community – and how does it connect back to the ground?
    • Diversity: What does leadership mean in a global knowledge community where every individual’s context is likely to be different?
    • Local relevance: What is the value of a global network when one’s work is to serve a local community?
    • Credential: What is the credential of value (badges and other gimmicks won’t do) that can appropriately recognize the experience of front-line humanitarians?
    • Pedagogy: Why are MOOCs (information transmission) and gamification (behaviorism) unlikely to deliver meaningful outcomes for the sustainable development or disaster preparedness of communities?

    The video presentation below (31 minutes):

    • examines a few of the remarkable outcomes produced in 2016 and
    • explains how they led to growing the initiative in 2017.

    To learn more about or join the #Ambulance! activities in 2017, please click here. You may also view below the selfie videos recorded by #Ambulance! course team volunteers to call fellow pre-hospital emergency health practitioners to join the initiative.

    Image credit: #Ambulance! project course team volunteers.

  • Can analysis and critical thinking be taught online in the humanitarian context?

    Can analysis and critical thinking be taught online in the humanitarian context?

    This is my presentation at the First International Forum on Humanitarian Online Training (IFHOLT) organized by the University of Geneva on 12 June 2015.

    I describe some early findings from research and practice that aim to go beyond “click-through” e-learning that stops at knowledge transmission. Such transmissive approaches replicate traditional training methods prevalent in the humanitarian context, but are both ineffective and irrelevant when it comes to teaching and learning the critical thinking skills that are needed to operate in volatile, uncertain, complex and ambiguous environments faced by humanitarian teams. Nor can such approaches foster collaborative leadership and team work.

    Most people recognize this, but then invoke blended learning as the solution. Is it that – or is it just a cop-out to avoid deeper questioning and enquiry of our models for teaching and learning in the humanitarian (and development) space? If not, what is the alternative? This is what I explore in just under twenty minutes.

    This presentation was first made as a Pecha Kucha at the University of Geneva’s First International Forum on Online Humanitarian Training (IFHOLT), on 12 June 2015. Its content is based in part on LSi’s first white paper written by Katia Muck with support from Bill Cope to document the learning process and outcomes of Scholar for the humanitarian context.

    Photo: All the way down (Amancay Maahs/flickr.com)

  • Experience and blended learning: two heads of the humanitarian training chimera

    Experience and blended learning: two heads of the humanitarian training chimera

    Experience is the best teacher, we say. This is a testament to our lack of applicable quality standards for training and its professionalization, our inability to act on what has consequently become the fairly empty mantra of 70-20-10, and the blinders that keep the economics of humanitarian education out of the picture (low-volume, high-cost face-to-face training with no measurable outcomes pays the bills of many humanitarian workers, and per diem feeds many trainees…).

    We are still dropping people into the deep end of the pool (i.e., mission) and hoping that they somehow figure out how to swim. We are where the National Basketball Association in the United States was in 1976. However, if the Kermit Washingtons in our space were to call our Pete Newells (i.e., those of us who design, deliver, or manage humanitarian training), what would we have to offer?

    The corollary to this question is: why does no one seem to care? How else could an independent impact review of DFID’s five-year £1.2 billion investment in research, evaluation and personnel development conclude that the British agency for international development “does not clearly identify how its investment in learning links to its performance and delivering better impact”… with barely anybody noticing?

    Let us just use blended learning, we say. Yet the largest meta-analysis and review of online learning studies, led by Barbara Means and her colleagues in 2010, found no positive effects associated with blended learning (other than the fact that learners typically do more work in such set-ups, once online and then again face-to-face). Rather, the call for blended learning is a symptom of two ills.

    First, there is our lingering skepticism about the effectiveness of online learning (of which we make demands in terms of outcomes, efficacy, and results that we almost never make for face-to-face training), magnified by fear of machines taking away our training livelihoods.

    Second, there is the failure of the prevailing transmissive model of e-learning which, paradoxically, is also responsible for its growing acceptance in the humanitarian sector. We have reproduced the worst kind of face-to-face training in the online space with our click-through PowerPoints that get a multiple-choice quiz tacked on at the end. This is unfair, if only because the only person it spares from boredom is the trainer, relieved of the drudgery of delivery by a machine.

    So the litany about blended learning is ultimately a failure of imagination: are we really incapable of creating new ways of teaching and learning that model the ways we work in volatile, uncertain, complex and ambiguous (VUCA) humanitarian contexts? We actually dialogue, try, fail, learn and iterate all the time – outside of training. How can humanitarians who share a profoundly creative problem-solving learning culture, who operate on the outer cusp of complexity and chaos… do so poorly when it comes to organizing how we teach and learn? How can organizations and donors that preach accountability and results continue to unquestioningly pour money into training with nothing but a fresh but thin coat of capacity-building paint splashed on?

    Transmissive learning – whatever the medium – remains the dominant mode of formal learning in the humanitarian context, even though it is patently clear that such an approach is both ineffective and irrelevant when it comes to teaching and learning the critical thinking skills that are needed to deliver results and, even more crucially, to see around the corner of the next challenge. Such approaches do not foster collaborative leadership and team work, do not provide experience, and do not confront the learner with complexity. In other words, they fail to do anything of relevance to improved preparedness and performance.

    If you find yourself appalled at the polemical nature of the blanket statements above – that’s great! I believe that the sector should be ripe for such a debate. So please do share the nature of your disagreement and take me to task for getting it all wrong (here is why I don’t have a comments section). If you at least reluctantly acknowledge that there is something worryingly accurate about my observations, let’s talk. Finally, if you find this to be darkly depressing, then check back tomorrow (or subscribe) on this blog when I publish my presentation at the First International Forum on Online Humanitarian training. It is all about new learning and assessment practice that models the complexity and creativity of the work that humanitarians do in order to survive, deliver, and thrive.

    Painting: Peter Paul Rubens. From 1577 to 1640. Antwerp. Medusa’s head. KHM Vienna.

  • 7 key questions when designing a learning system

    7 key questions when designing a learning system

    In the design of a learning system for humanitarians, the following questions should be given careful consideration:

    1. Does each component of the system foster cross-cutting analysis and critical thinking competencies that are key to humanitarian leadership?
    2. Is the curriculum standardized across all components, with shared learning objectives and a common competency framework?
    3. Is the curriculum modular so that components may be tailored to focus on context-specific performance gaps?
    4. Does the system provide experiential learning (through scenario-based simulations) and foster collaboration (through social, peer-to-peer knowledge co-construction) in addition to knowledge transmission (instruction)?
    5. How are learning and performance outcomes evaluated?
    6. Are synergies between components of the learning system leveraged to minimize costs?
    7. Have the costs over time been correctly calculated by estimating both development and delivery costs?

    These questions emerged from the development of a learning system for market assessment last year, thinking through how to use learning innovation to achieve efficiency and effectiveness despite limited resources.

    Photo: The Infinity Room (The House on the Rock) (Justin Kern/Flickr)

  • Unified Knowledge Universe

    Unified Knowledge Universe

    “Knowledge is the economy. What used to be the means has today become the end. Knowledge is a river, not a reservoir. A process, not a product. It’s the pipes that matter, because learning is in the network.” – George Siemens in Knowing Knowledge (2006)

    Harnessing the proliferation of knowledge systems and the rapid pace of technological change is a key problem for 21st century organizations. When knowledge is more of a deluge than a trickle, old command-and-control methods of creating, controlling, and distributing knowledge – rooted in a container view – do little to tame the flood. How do you scaffold continual improvement in learning and knowledge production to maximize depth, dissemination and impact? A new approach is needed that applies multiple lenses to a specific organizational context.

    What the organization wants to enable, improve and accelerate:

    1. Give decision makers instant, ubiquitous and predictive access to all the knowledge in its universe – and connect it everywhere.
    2. Rapidly curate, collate and circulate most-current content as a publication (print on demand, ebooks, etc.) when it is thick knowledge, and for everything else as a set of web pages (micro-site or blog), or individual, granular bits of content suitable for embedding anywhere.
    3. Accelerate co-construction of new, most-current knowledge using peer review to deliver high-quality case studies, strategies, implementation plans, etc.

    How do you crack this? Here are some of the steps:

    1. Benchmark existing knowledge production workflows and identify bottlenecks, using multiple lenses and mixed methods.
    2. In the short term, fix publishing bottlenecks by improving existing systems (software) and performance support (people).
    3. In the longer view, adopt a total quality management (TQM) approach to build ‘scaffolding’ and ‘pipes’ that maximize production, capture, flow, and impact of high-quality, most-current knowledge production, with everything replicated in a centralized, unstructured repository.

    Multiple lenses are needed as no single way of seeing can unravel the complexity of knowledge flows:

    • The lens of complexity: Systems thinking recognizes that we do not need a full understanding of the constituent objects in order to benchmark, analyze, or make decisions to improve processes, outcomes, and quality.
    • The lens of learning: Learning theory provides the framework to map knowledge flows beyond production to dissemination to impact. The co-construction of knowledge provides a ‘deeper’, less fleeting perspective than conventional social media approaches. More pragmatically, a number of tools from learning and development and education research can be used to benchmark.
    • The lens of talent: Staff lose precious time and experience frustration due to duplication of effort, repetitive tasks, and anxiety about the risk of errors. They may feel overwhelmed by the complexity and intricacies of multiple systems, as well as by the requirement to learn and adapt to each one. Informal learning communities can bring people together in the workflow to identify potential, develop competencies, and drive performance. Hiring, onboarding and handover can be used to identify gaps and improve fitness for purpose.
    • The lens of culture: For example, determinants of quality in print-centric publishing processes are grounded in a rich cultural legacy. Other specialists (IT, comms, etc.) also have their own, overlapping universes. Correct analysis of these and how they interact is indispensable.
    • The lens of total quality management (TQM): This lens includes quality development, business process improvement (BPI), and risk management. It can help both in the initial diagnosis (process maps) and in designing systems and procedures for continual improvement.
    • The lens of IT: Information technology management includes both agile methods as well as traditional requirements-and-specifications. Although such approaches on their own are unlikely to achieve the desired outcomes, their familiarity may facilitate acceptance and usage of the other lenses.

    The remaining pieces of the puzzle involve standards, mixed methods, and deliverables.


    Photo: Lenses rainbow (csaveanu/flickr).

  • Practicum

    Practicum

    Individually, team members continually learn in their respective area of work, by both formal and informal means. Most learning today happens by accretion, as a continual, networked (‘know-where’), and embedded process. However, occasions to share and reflect on best practice are rare, and may be felt to be interruptions or distractions from the ‘real work’ in one’s silo. Furthermore, online learning events (“webinars”) tend to be long (one hour is typical), require professionals to take “time out” from their work in order to learn, and do not provide the necessary linkage between knowledge acquired and its application to work (the “applicability problem”).

    To further continual learning, the practicum offers a 15-minute online presentation from a global thought leader on a topic directly relevant to the business. Participants are invited to watch the presentation together, and to stay on for facilitated face-to-face discussion (beyond the 15 minutes) to determine practical ways in which the concepts and ideas may be applied (collaborative, collective, and creative). This provides the opportunity to learn with respect to real challenges rather than from generic content and cases, reducing the tensions and tradeoffs felt when having to choose between attending a learning program and getting the work done. The recording of each presentation will be made available one week after the live event (captured and codified), adding an incentive to participate live.

    Presenters will share a short list of key resources and links so that participants may independently investigate the topic and its application to their context, encouraging needs-based learning (“acquisition”) and learning as cognition and reflection (“emergence”).