Tag: Artificial Intelligence

  • Why YouTube is obsolete: From linear video content consumption to AI-mediated multimodal knowledge production

    Why YouTube is obsolete: From linear video content consumption to AI-mediated multimodal knowledge production

    Does the educational purpose of video change with AI?

    The purpose of video in education is undergoing a fundamental transformation in the age of artificial intelligence. This medium, long established in digital learning environments, is changing not just in how we consume it, but in its very role within the learning process.

    Video has always been a problem in education

    Video has always presented significant challenges in educational contexts. Its linear format makes it difficult to skim or scan. Unlike text, which lets learners jump between sections, glance at headings, or scan for key information, video requires sequential consumption – a constraint that has long undermined effective learning.

    Furthermore, in many regions where our learners are based, internet access remains expensive, unreliable, or limited. Downloading or streaming video content can be prohibitively costly in terms of both data usage and time. The result is straightforward: few learners will watch educational videos, regardless of their potential value.

    The bandwidth and attention divide

    This reality creates a significant divide in educational access. While instructional designers and educators in high-resource settings continue to produce video-heavy content, learners in bandwidth-constrained environments have been systematically excluded from these resources. Even when videos are technically accessible, the time investment required to watch linear content often exceeds what busy professionals can allocate to learning activities.

    Emerging AI platforms are scanning YouTube video transcripts to extract precisely what users need. This capability suggests a transformation of video’s role. YouTube and other video platforms are evolving into what might be called “interstitial processors”: mediating layers that support knowledge production and dissemination for subsequent extraction and analysis by both humans and machines.
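
    To make this extraction pattern concrete, here is a minimal sketch that pulls a video’s existing captions and flattens them into searchable text. It assumes the third-party youtube-transcript-api Python package (0.x API); the video ID is a placeholder.

    ```python
    # Minimal sketch: extract a YouTube video's captions as plain text.
    # Assumes: pip install youtube-transcript-api (0.x API).
    from youtube_transcript_api import YouTubeTranscriptApi

    VIDEO_ID = "dQw4w9WgXcQ"  # placeholder: any public video with captions

    # Each entry has 'text', 'start', and 'duration' fields.
    segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID, languages=["en"])

    # Flatten the timed segments into one lightweight, searchable document.
    transcript = " ".join(segment["text"] for segment in segments)
    print(transcript[:500])  # preview the opening of the transcript
    ```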

    A more inclusive workflow for knowledge extraction

    This changing relationship with video content could enable more inclusive approaches to learning. When I discover a potentially valuable educational webinar, I now follow a structured approach to maximize efficiency and accessibility:

    1. Download the video file.
    2. Transcribe it using Whisper AI technology.
    3. Ask targeted questions to extract meaningful insights from the transcript.
    4. Request direct quotes as evidence of key points.

    This method circumvents the traditional requirement to invest 60 minutes or more in viewing content that may ultimately offer limited value. More importantly, it transforms bandwidth-heavy video into lightweight text that can be accessed, searched, and processed even in low-connectivity environments.
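
    For readers who want to reproduce the first three steps, here is a minimal sketch, assuming the yt-dlp command-line tool, ffmpeg, and the open-source openai-whisper package are installed; the URL, filenames, and model size are illustrative placeholders.

    ```python
    # Minimal sketch of steps 1-3: download, transcribe, save as text.
    # Assumes: yt-dlp and ffmpeg on PATH, plus `pip install openai-whisper`.
    import subprocess
    import whisper

    VIDEO_URL = "https://www.youtube.com/watch?v=..."  # placeholder URL

    # Step 1: download audio only -- far lighter on bandwidth than full video.
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3",
         "-o", "webinar.%(ext)s", VIDEO_URL],
        check=True,
    )

    # Step 2: transcribe locally with Whisper.
    model = whisper.load_model("base")  # larger models trade speed for accuracy
    result = model.transcribe("webinar.mp3")

    # Step 3: keep only the lightweight text for questioning and quoting.
    with open("webinar_transcript.txt", "w", encoding="utf-8") as f:
        f.write(result["text"])
    ```

    An hour of speech collapses into tens of kilobytes of plain text – something that can be shared and searched even over a constrained connection.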

    I suspect that it is no accident that YouTube has recently placed additional restrictions on downloading videos from its platform.

    Bridging the resource gap with AI

    Current consumer-grade AI systems like Claude.ai have limitations: they cannot yet process full videos directly. For now, we are restricted to text-based interactions with video content – hence my practice of transcribing downloaded videos. However, this constraint will likely dissolve as AI capabilities continue to advance.
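
    As one illustration of such a text-based interaction, the sketch below sends the transcript saved earlier to a model for targeted questioning. It assumes Anthropic’s Python SDK with an API key configured in the environment; the model name is a placeholder.

    ```python
    # Minimal sketch: targeted questioning of a saved transcript.
    # Assumes: `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
    import anthropic

    client = anthropic.Anthropic()

    with open("webinar_transcript.txt", encoding="utf-8") as f:
        transcript = f.read()

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Here is a webinar transcript:\n\n" + transcript
                + "\n\nList the three most actionable insights for a district "
                  "health worker, each supported by a direct quote."
            ),
        }],
    )
    print(response.content[0].text)
    ```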

    The immediate benefit is that this approach can help bridge the resource gap that has disadvantaged learners in bandwidth-constrained environments. By extracting the knowledge essence from videos, we could make educational content more accessible and equitable across diverse learning contexts.

    The continuing value of educational video production

    Despite these consumption challenges, educational video production remains a relevant way for humans and machines to share what they know. What we are witnessing, then, is not the diminishing relevance of educational video, but a transformation in how its knowledge value is extracted and utilized. The production of video content remains valuable; it is our methods of processing and consuming it that are evolving.

    Aligning with effective networked learning theory

    This shift aligns with contemporary understanding of effective learning. Research consistently demonstrates that passive consumption of information, whether through video or text, remains insufficient for meaningful learning. Genuine knowledge development emerges through active construction – the processes of questioning, connecting, applying, and adapting information within broader contexts.

    The AI-enabled extraction of insights from video content represents a step toward more active engagement with educational materials – transforming passive viewing into targeted interaction with the specific knowledge elements most relevant to individual learning needs.

    Knowledge networks trump media formats

    Our experience with global learning networks demonstrates the importance of moving beyond media format limitations. When health professionals from diverse contexts share practices and adapt them to their specific environments, the medium of exchange becomes secondary to the knowledge being constructed.

    AI tools that can extract and process information from videos help overcome the medium’s inherent limitations, turning static content into formats that can not only be read, viewed, or listened to, but also remixed and fused with other sources. This approach allows learners to engage more directly with knowledge, freed from the constraints of linear consumption and bandwidth requirements.

    Rethinking video as a dual-purpose knowledge production format

    We are witnessing the development of new approaches to educational content in which media exists simultaneously for direct human consumption and as structured data for AI processing. The boundaries between content formats are becoming increasingly permeable, with value residing not in the medium itself but in the knowledge that can be extracted and constructed from it.

    Despite the consumption challenges, video remains an exceptional medium for content production that serves both humans and machines. For content creators, video offers unmatched richness in communicating complex ideas through visual demonstration, tone, and emotional connection.

    What is emerging is not a devaluation of video creation but a transformation in how its knowledge is accessed. As AI tools evolve, video becomes increasingly valuable as a comprehensive knowledge repository where information is encoded in multiple dimensions – visual, auditory, and textual through transcripts.

    This makes video uniquely positioned as a “dual-purpose” content format: rich and engaging for those who can consume it directly, while simultaneously serving as a structured data source from which AI can extract targeted insights.

    In this paradigm, video production remains vital while consumption patterns evolve toward more efficient, personalized knowledge extraction.

    The creator’s effort in producing quality video content now yields value across multiple consumption pathways rather than being limited to linear viewing.

    How to cite this article: Sadki, R. (2025). Why YouTube is obsolete: From linear video content consumption to AI-mediated multimodal knowledge production. Learning to make a difference. https://doi.org/10.59350/rfr2z-h4y93

    Image: The Geneva Learning Foundation Collection © 2025

  • Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis

    Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis

    I have known Joseph, a Kenyan health leader from Murang’a County, for years, and have appreciated his diligent leadership and contributions as a Scholar of The Geneva Learning Foundation (TGLF). Recently, he began submitting AI-generated responses to Teach to Reach Questions that were meant to elicit narratives grounded in his personal experience.

    Seemingly unrelated to this, OpenAI just announced plans for specialized AI agents—autonomous systems designed to perform complex cognitive tasks—with pricing ranging from $2,000 monthly for a “high-income knowledge worker” equivalent to $20,000 monthly for “PhD-level” research capabilities.

    This is happening at a time when traditional funding structures in global health, development, and humanitarian response face unprecedented volatility.

    These developments intersect around fundamental questions of knowledge economics, authenticity, and power in global health contexts.

    I want to explore three questions:

    • What happens when health professionals in resource-constrained settings experiment with AI technologies within accountability systems that often penalize innovation?
    • How might systems claiming to replicate human knowledge work transform the economics and ethics of knowledge production?
    • And how should we navigate the tensions between technological adoption and authentic knowledge creation?

    Artificial intelligence within punitive accountability structures of global health

    For years, Joseph had shared thoughtful, context-rich contributions based on his direct experiences. All of a sudden, he was submitting generic mush with all the trappings of bad generative AI content.

    Should we interpret this as disengagement from peer learning?

    Given his history of diligence and commitment, I could not dismiss his exploration of AI tools as diminished engagement. Instead, I understood it as an attempt to incorporate new capabilities into his professional repertoire. This was confirmed when I got to chat with him on a WhatsApp call.

    Our current Teach to Reach Questions system has not yet incorporated the use of AI. Our “old” system did not provide any way for Joseph to communicate what he was exploring.

    Hence, the quality limitations in AI-generated narratives highlight not ethical failings but a developmental process requiring support rather than judgment.

    But what does this look like when situated within global health accountability structures?

    Health workers frequently operate within highly punitive systems where performance evaluation directly impacts funding decisions. International donors maintain extensive surveillance of program implementation, creating environments where experimentation carries significant risk. When knowledge sharing becomes entangled with performance evaluation, the incentives for transparency about AI “co-working” (i.e., collaboration between human and AI in work) diminish dramatically.

    Seen through this lens, the question becomes not whether to prohibit AI-generated contributions but how to create environments where practitioners can explore technological capabilities without fear that disclosure will lead to automatic devaluation of their knowledge, regardless of its substantive quality. This heavily depends on the learning culture, which remains largely ignored or dismissed in global health.

    The transparency paradox: disclosure and devaluation of artificial intelligence in global health

    This case illustrates what might be called the “transparency paradox”—when disclosure or recognition of AI contribution triggers automatic devaluation regardless of substantive quality. Current attitudes create a problematic binary: acknowledge AI assistance and have contributions dismissed regardless of quality, or withhold disclosure and risk accusations of misrepresentation or worse.

    This paradox creates perverse incentives against transparency, particularly in contexts where knowledge production undergoes intensive evaluation linked to resource allocation. The global health sector’s evaluation systems often emphasize compliance over innovation, creating additional barriers to technological experimentation. When every submission potentially affects funding decisions, incentives for technological experimentation become entangled with accountability pressures.

    This dynamic particularly affects practitioners in Global South contexts, who face more intense scrutiny while having less institutional protection for experimentation. The punitive nature of global health accountability systems deserves particular emphasis. Health workers operate within hierarchical structures where performance is consistently monitored by both national governments and international donors. Surveillance extends from quantitative indicators to qualitative assessments of knowledge and practice.

    In environments where funding depends on demonstrating certain types of knowledge or outcomes, the incentive to leverage artificial intelligence in global health may conflict with values of authenticity and transparency. This surveillance culture creates uniquely challenging conditions for technological experimentation. When performance evaluation drives resource allocation decisions, health workers face considerable risk in acknowledging technological assistance—even as they face pressure to incorporate emerging technologies into their practice.

    The economics of knowledge in global health contexts

    OpenAI’s announced “agents” represent a substantial evolution beyond simple chatbots or language models. If OpenAI can deliver what it has announced, these specialized systems would autonomously perform complex tasks, simulating the cognitive work of highly skilled professionals. The most expensive tier, priced at $20,000 monthly, purportedly offers “PhD-level” research capabilities, working continuously without the limitations of human scheduling or attention.

    These claims, while unproven, suggest a potential future in which the economics of knowledge work fundamentally change. For global health organizations operating in Geneva, where even a basic intern position for a recent master’s graduate costs more than 200 times a ChatGPT subscription (at roughly $20 per month, that multiple implies upwards of $4,000 per month), the economic proposition of systems working 24/7 for potentially comparable costs merits careful examination.

    However, the global health sector has historically operated with significant labor stratification, where personnel in Global North institutions command substantially higher compensation than those working in Global South contexts. Local health workers often provide critical knowledge at compensation rates far below those of international consultants or staff at Northern institutions. This creates a different economic equation than suggested by Geneva-based comparisons. Many organizations have long relied on substantially lower local labor costs, often justified through capacity-building narratives that mask underlying power asymmetries.

    Given this history, the risk that artificial intelligence in global health would replace local knowledge workers might initially appear questionable. Furthermore, the sector has demonstrated considerable resistance to technological adoption, particularly when it might disrupt established operational patterns. However, this analysis overlooks how economic pressures interact with technological change during periods of significant disruption.

    The recent decisions of many government donors to suddenly and drastically cut funding and shut down programs illustrate how rapidly even established funding structures can collapse. In such environments, organizations face existential questions about maintaining operational capacity, potentially creating conditions where technological substitution becomes more attractive despite institutional resistance.

    A new AI divide

    ChatGPT and other generative AI tools were initially “geo-locked”, making them more difficult to access from outside Europe and North America.

    Now, the stratified pricing structure of OpenAI’s announced agents raises profound equity concerns. With the most sophisticated capabilities reserved for those able to pay the highest prices, we face the potential emergence of an “AI divide” that threatens to reinforce existing knowledge power imbalances.

    This divide presents particular challenges for global health organizations working across diverse contexts. If advanced AI capabilities remain the exclusive province of Northern institutions while Southern partners operate with limited or no AI augmentation, how might this affect knowledge dynamics already characterized by significant inequities?

    The AI divide extends beyond simple access to include quality differentials in available systems. Even as simple AI tools become widely available, sophisticated capabilities that genuinely enhance knowledge work may remain concentrated within well-resourced institutions. This could lead to a scenario where practitioners in resource-constrained settings use rudimentary AI tools that produce low-quality outputs, further reinforcing perceptions of capability gaps between North and South.

    Confronting power dynamics in AI integration

    Traditional knowledge systems in global health position expertise in academic and institutional centers, with information flowing outward to practitioners who implement standardized solutions. This existing structure reflects and reinforces global power imbalances. 

    The integration of AI within these systems could either exacerbate these inequities—by further concentrating knowledge production capabilities within well-resourced institutions—or potentially disrupt them by enabling more distributed knowledge creation processes.

    Joseph’s journey demonstrates this tension. His adoption of AI tools might be viewed as an attempt to access capabilities otherwise reserved for those with greater institutional resources. The question becomes not whether to allow such adoption, but how to ensure it serves genuine knowledge democratization rather than simply producing more sophisticated simulations of participation.

    These emerging dynamics require us to fundamentally rethink how knowledge is valued, created, and shared within global health networks. The transparency paradox, economic pressures, and emerging AI divide suggest that technological integration will not occur within neutral space but rather within contexts already characterized by significant power asymmetries.

    Developing effective responses requires moving beyond simple prescriptions about AI adoption toward deeper analysis of how these technologies interact with existing power structures—and how they might be intentionally directed toward either reinforcing or transforming these structures.

    My framework for Artificial Intelligence as co-worker to support networked learning and local action is intended to contribute to such efforts.

    Illustration: The Geneva Learning Foundation Collection © 2025

    References

    Frehywot, S., Vovides, Y., 2024. Contextualizing algorithmic literacy framework for global health workforce education. AIH 0, 4903. https://doi.org/10.36922/aih.4903

    Hazarika, I., 2020. Artificial intelligence: opportunities and implications for the health workforce. International Health 12, 241–245. https://doi.org/10.1093/inthealth/ihaa007

    John, A., Newton-Lewis, T., Srinivasan, S., 2019. Means, Motives and Opportunity: determinants of community health worker performance. BMJ Glob Health 4, e001790. https://doi.org/10.1136/bmjgh-2019-001790

    Newton-Lewis, T., Munar, W., Chanturidze, T., 2021. Performance management in complex adaptive systems: a conceptual framework for health systems. BMJ Glob Health 6, e005582. https://doi.org/10.1136/bmjgh-2021-005582

    Newton-Lewis, T., Nanda, P., 2021. Problematic problem diagnostics: why digital health interventions for community health workers do not always achieve their desired impact. BMJ Glob Health 6, e005942. https://doi.org/10.1136/bmjgh-2021-005942

    OECD, 2024. Artificial Intelligence and the health workforce: Perspectives from medical associations on AI in health (OECD Artificial Intelligence Papers No. 28). https://doi.org/10.1787/9a31d8af-en

    Sadki, R., 2025. A global health framework for Artificial Intelligence as co-worker to support networked learning and local action. Reda Sadki. https://doi.org/10.59350/gr56c-cdd51

  • A global health framework for Artificial Intelligence as co-worker to support networked learning and local action

    A global health framework for Artificial Intelligence as co-worker to support networked learning and local action

    The theme of International Education Day 2025, “AI and education: Preserving human agency in a world of automation,” invites critical examination of how artificial intelligence might enhance rather than replace human capabilities in learning and leadership. Global health education offers a compelling context for exploring this question, as mounting challenges from climate change to persistent inequities demand new approaches to building collective capability.

    The promise of connected communities

    Recent experiences like the Teach to Reach initiative demonstrate the potential of structured peer learning networks. The platform has connected over 60,000 health workers, primarily government workers from districts and facilities across 82 countries, including those serving in conflict zones, remote rural areas, and urban settlements. For example, their exchanges about climate change impacts on community health point the way toward more distributed forms of knowledge creation in global health. 

    Analysis of these networks suggests possibilities for integrating artificial intelligence not merely as tools but as active partners in learning and action. However, realizing this potential requires careful attention to how AI capabilities might enhance rather than disrupt the human connections that drive current success.

    Artificial Intelligence (AI) partnership could provide crucial support for tackling mounting challenges. More importantly, it could help pioneer new approaches to learning and action that genuinely serve community needs while advancing our understanding of how human and machine intelligence might work together in service of global health.

    Understanding Artificial Intelligence (AI) as partner, not tool

    The distinction between AI tools and AI partners merits careful examination. Early AI applications in global health have primarily automated existing processes – analyzing data, delivering content, or providing recommendations. While valuable, this tool-based approach maintains a clear separation between human and machine capabilities.

    AI partnership suggests a different relationship, where artificial intelligence participates actively in learning networks alongside human practitioners. This could mean AI systems that:

    • Engage in dialogue with health workers about local observations
    • Help validate emerging insights through pattern analysis
    • Support adaptation of solutions across contexts
    • Facilitate connections between practitioners facing similar challenges

    The key difference lies in moving from algorithmic recommendations to collaborative intelligence that combines human wisdom with machine capabilities.

    A framework for AI partnership in global health

    Analysis of current peer learning networks suggests several dimensions where AI partnership could enhance collective capabilities:

    • Knowledge creation: Current peer learning networks enable health workers to share observations and experiences across borders. AI partners could enrich this process by engaging in dialogue about patterns and connections, while preserving the central role of human judgment in validating insights.
    • Learning process: Teach to Reach demonstrates how structured peer learning accelerates knowledge sharing and adaptation. AI could participate in these networks by contributing additional perspectives, supporting rapid synthesis of experiences, and helping identify promising practices.
    • Local leadership: Health workers develop and implement solutions based on deep understanding of community needs. AI partnership could enhance decision-making by exploring options, modeling potential outcomes, and validating approaches while maintaining human agency.
    • Network formation: Digital platforms currently enable lateral connections between health workers across regions. AI could actively facilitate network development by identifying valuable connections and supporting knowledge flow across boundaries.
    • Implementation support: Peer review and structured feedback drive current learning-to-action cycles. AI partners could engage in ongoing dialogue about implementation challenges while preserving the essential role of human judgment in local contexts.
    • Evidence generation: Networks document experiences and outcomes through structured processes. AI collaboration could help develop and test hypotheses about effective practices while maintaining focus on locally-relevant evidence.

    Applications across three global health challenges

    This framework suggests new possibilities for addressing persistent challenges.

    1. Immunization systems

    Current global immunization goals face significant obstacles in reaching zero-dose children and strengthening routine services. AI partnership could enhance efforts by:

    • Supporting microplanning by mediating dialogue about local barriers
    • Facilitating rapid learning about successful engagement strategies
    • Enabling coordinated action across health system levels
    • Modeling potential impacts of different intervention approaches

    2. Neglected Tropical Diseases (NTDs)

    The fight against NTDs suffers from critical information gaps and weak coordination at local levels. Many communities, including health workers, lack basic knowledge about these diseases. AI partnership could help address these gaps through:

    • Facilitating knowledge flow between affected communities
    • Supporting coordination of control efforts
    • Enabling rapid validation of successful approaches
    • Strengthening surveillance and response networks

    3. Climate change and health

    Health workers’ observations of climate impacts on community health provide crucial early warning of emerging threats. AI partnership could enhance response capability by:

    • Engaging in dialogue about changing disease patterns
    • Supporting rapid sharing of adaptation strategies
    • Facilitating coordinated action across regions
    • Modeling potential impacts of interventions

    Pandemic preparedness beyond early warning

    The experience of digital health networks during recent disease outbreaks reveals both the power of distributed response capabilities and the potential for enhancement through AI partnership. When COVID-19 emerged, networks of health workers demonstrated remarkable ability to rapidly share insights and adapt practices. For example, the Geneva Learning Foundation’s COVID-19 Peer Hub connected over 6,000 frontline health professionals who collectively generated and implemented recovery strategies at rates seven times faster than isolated efforts.

    This networked response capability suggests new possibilities for pandemic preparedness that combines human and machine intelligence. Heightened preparedness could emerge from the interaction between health workers, communities, and AI partners engaged in continuous learning and adaptation.

    Current pandemic preparedness emphasizes early detection through formal surveillance. However, health workers in local communities often observe concerning patterns before these register in official systems.

    AI partnership could enhance this distributed sensing capability while maintaining its grounding in local realities. Rather than simply analyzing reports, AI systems could engage in ongoing dialogue with health workers about their observations, helping to:

    • Explore possible patterns and connections
    • Test hypotheses about emerging threats
    • Model potential trajectories
    • Identify similar experiences across regions

    The key lies in combining human judgment about local significance with AI capabilities for pattern recognition across larger scales.

    The focus remains on accelerating locally-led learning rather than imposing standardized solutions.

    Perhaps most importantly, AI partnership could enhance the collective intelligence that emerges when practitioners work together to implement solutions. Current networks enable health workers to share implementation experiences and adapt strategies to local contexts. Adding AI capabilities could support this through:

    • Ongoing dialogue about implementation challenges
    • Analysis of patterns in successful adaptation
    • Support for rapid testing of modifications
    • Facilitation of cross-context learning

    Success requires maintaining human agency in implementation while leveraging machine capabilities to strengthen collective problem-solving.

    This networked vision of pandemic preparedness, enhanced through AI partnership, represents a fundamental shift from current approaches. Rather than attempting to predict and control outbreaks through centralized systems, it suggests building distributed capabilities for continuous learning and adaptation. The experience of existing health worker networks provides a foundation for this transformation, while artificial intelligence offers new possibilities for strengthening collective response capabilities.

    Investment for innovation

    Realizing this vision requires strategic investment in:

    • Network development: Supporting growth of peer learning platforms that accelerate local action while maintaining focus on human connection.
    • AI partnership innovation: Developing systems designed to participate in learning networks while preserving human agency.
    • Implementation research: Studying how AI partnership affects collective capabilities and health outcomes.
    • Capacity strengthening: Building health worker capabilities to effectively collaborate with AI while maintaining critical judgment.

    Looking forward

    The transformation of global health learning requires moving beyond both conventional practices of technical assistance and simple automation. Experience with peer learning networks demonstrates what becomes possible when health workers connect to share knowledge and drive change.

    Adding artificial intelligence as partners in these networks – rather than replacements for human connection – could enhance collective capabilities to protect community health. However, success requires careful attention to maintaining human agency while leveraging technology to strengthen rather than supplant local leadership.

    7 key principles for AI partnership

    1. Maintain human agency in decision-making
    2. Support rather than replace local leadership
    3. Enhance collective intelligence
    4. Enable rapid learning and adaptation
    5. Preserve context sensitivity
    6. Facilitate knowledge flow across boundaries
    7. Build sustainable learning systems

    Listen to an AI-generated podcast about this article

    🤖 This podcast was generated by AI, discussing Reda Sadki’s 24 January 2025 article “A global health framework for Artificial Intelligence as co-worker to support networked learning and local action”. While the conversation is AI-generated, the framework and examples discussed are based on the published article.

    Framework: AI partnership for learning and local action in global health

    | Dimension | Current State | AI as Tools | AI as Partners | Potential Impact |
    | --- | --- | --- | --- | --- |
    | Knowledge creation | Health workers share observations and experiences through peer networks | AI analyzes patterns in shared data | AI engages in dialogue with health workers, asking questions, suggesting connections, validating insights | New forms of collective intelligence combining human and machine capabilities |
    | Learning process | Structured peer learning through digital platforms and networks | AI delivers content and analyzes performance | AI participates in peer learning networks, contributes insights, supports adaptation | Accelerated learning through human-AI collaboration |
    | Local leadership | Health workers develop and implement solutions for community challenges | AI provides recommendations based on data analysis | AI works alongside local leaders to explore options, model scenarios, validate approaches | Enhanced decision-making combining local wisdom with AI capabilities |
    | Network formation | Lateral connections between health workers across regions | AI matches similar profiles or challenges | AI actively facilitates network development, identifies valuable connections | More effective knowledge networks leveraging both human and machine intelligence |
    | Implementation support | Peer review and structured feedback on action plans | AI checks plans against best practices | AI engages in iterative dialogue about implementation challenges and solutions | Improved implementation through combined human-AI problem-solving |
    | Evidence generation | Documentation of experiences and outcomes through structured processes | AI analyzes implementation data | AI collaborates with health workers to develop and test hypotheses about what works | New approaches to generating practice-based evidence |

    Image: The Geneva Learning Foundation Collection © 2024

  • Digital health: The Geneva Learning Foundation to bring AI-driven training to health workers in 90 countries

    Digital health: The Geneva Learning Foundation to bring AI-driven training to health workers in 90 countries

    GENEVA, 23 April 2019 – The Geneva Learning Foundation (GLF) is partnering with artificial intelligence (AI) learning pioneer Wildfire to pilot cutting-edge learning technology with over 1,000 immunization professionals in 90 countries, many working at the district level.

    British startup Wildfire, an award-winning innovator, is helping the Swiss non-profit tackle a wicked problem: while international organizations publish global guidelines, norms, and standards, they often lack an effective, scalable mechanism to support countries to turn these into action that leads to impact.

    By using machine learning to automate the conversion of such guidelines into learning modules, Wildfire’s AI reduces the cost of training health workers to recall critical information. This is a key step in translating global norms and standards into real impact on people’s health.

    If the pilot is successful, Wildfire’s AI will be included in TGLF’s Scholar Approach, a state-of-the-art evidence-based package of pedagogies to deliver high-quality, multi-lingual learning. This unique Approach has already been shown to not only enhance competencies but also to foster collaborative implementation of transformative projects that began as course work.

    TGLF President Reda Sadki (@redasadki) said: “The global community allocates considerable human and financial resources to training (1). This investment should go into pedagogical innovation to revolutionize health (2).”

    Wildfire CEO Donald Clark (@donaldclark) said: “As a Learning Innovation Partner to the Geneva Learning Foundation, our aim is to improve the adoption and application of digital learning toward achievement of the Sustainable Development Goals (SDGs).”

    Three learning modules based on the World Health Organization’s Global Routine Immunization Strategies and Practices (GRISP) guidelines are now available to pilot participants, including Alumni of the WHO Scholar Level 1 GRISP certification in routine immunization planning. They will be asked to evaluate the relevance of such modules for their own training needs.

    About Wildfire

    Wildfire is one of the Foundation’s first Learning Innovation Partners. It is an award-winning educational technology startup based in the United Kingdom.

    • Described by the company as the “first AI driven content creation tool”, Wildfire’s system takes any document, PowerPoint or video to automatically create online learning.
    • This may reduce costs and time required to produce self-guided e-learning that can help improve the ability to recall information.

    About the Geneva Learning Foundation

    The mission of the Geneva Learning Foundation (TGLF) is to research, invent, and trial breakthrough approaches for new learning, talent and leadership as a way of shaping humanity and society for the better.

    • Learning Innovation Partners (LIP) are startups selected by the Foundation to trial new ways of doing new things to tackle ‘wicked’ problems that have resisted conventional approaches.
    • The Foundation is currently developing the first Impact Accelerator to support learners using the Scholar Approach beyond training, with support from the Bill and Melinda Gates Foundation (BMGF).

    References

     (1) The Bill and Melinda Gates Foundation. “Framework for Immunization Training and Learning.” Seattle, USA: The Bill and Melinda Gates Foundation, August 2017.

    (2) Sadki, R., 2013. The significance of technology for humanitarian education, in: World Disasters Report 2013: Technology and the Effectiveness of Humanitarian Action. International Federation of Red Cross and Red Crescent Societies, Geneva.