Tag: generative AI

  • How do we stop AI-generated ‘poverty porn’ fake images?

    There is an important and necessary conversation happening right now about the use of generative artificial intelligence in global health and humanitarian communications.

    Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering – the very tropes many of us have worked for decades to banish.

    The alarms are valid.

    The images are harmful.

    But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

    The problem is not the tool.

    The problem is the user.

    Generative artificial intelligence is not the cause of poverty porn.

    The root cause is the deep-seated racism and colonial mindset that have defined the humanitarian aid and global health sectors since their inception.

    This is not a new phenomenon.

    It is a long-standing pattern.

    In my private conversations with colleagues and researchers like Alenichev, I find we often agree on this point.

    Yet, the public-facing writing and research seem to stop short, focusing on the technological symptom rather than the systemic illness.

    It is vital we correct this focus before we implement the wrong solutions.

    The old poison in a new bottle

    Long before Midjourney, large organizations and their communications teams were propagating the worst kinds of caricatures.

    I know this.

    Many of us know this.

    We remember the history of award-winning photographers being sent from the Global North to “find… miserable kids” and stage images to meet the needs of funders. Organizations have always been willing to manufacture narratives that “show… people on the receiving end of aid as victims”.

    These working cultures — which demand images of suffering, which view Black and Brown bodies as instruments for fundraising, and which prioritize the “western gaze” — existed decades before artificial intelligence.

    Artificial intelligence did not create this impulse.

    It just made it cheaper, faster, and easier to execute.

    It is an enabler, not an originator.

    If an organization’s communications philosophy is rooted in colonial stereotypes, it will produce colonial stereotypes, whether it is using a 1000-dollar-a-day photographer or a 30-dollar-a-month software subscription.

    The danger of a misdiagnosis

    If we incorrectly identify artificial intelligence as the cause of this problem, our “solution” will be to ban the technology.

    This would be a catastrophic mistake.

    First, it is a superficial fix.

    It allows the very organizations producing this content to performatively cleanse themselves by banning a tool, all while evading the fundamental, painful work of challenging their own underlying racism and colonial impulses.

    The problem will not be solved; it will simply revert to being expressed through traditional (and often staged) photography.

    Second, it punishes the wrong people.

    For local actors and other small organizations, generative artificial intelligence is not necessarily a tool for creating poverty porn.

    It is a tactical advantage in a fight for survival.

    Such organizations may lack the resources for a full communications team.

    They are then “punished by algorithms” that demand a constant stream of visuals, burying stories of organizations that cannot provide them.

    Furthermore, some organizations committed to dignity in representation are also using artificial intelligence to solve other deep ethical problems.

    They use it to create dignified portraits for stories without having to navigate the complex and often extractive issues of child protection and consent.

    They use it to avoid exploiting real people.

    A blanket ban on artificial intelligence in our sector would disarm small, local organizations.

    It would silence those of us trying to use the tool ethically, while allowing the large, wealthy organizations to continue their old, harmful practices unchanged.

    The real work ahead

    This is why I must insist we reframe the debate.

    The question is not if we should use artificial intelligence.

    The question is, and has always been, how we challenge the racist systems that demand these images in the first place.

    My Algerian ancestors fought colonialism.

    I cannot separate my work at The Geneva Learning Foundation from the struggle against racism and the fight for the right to tell our own stories.

    That philosophy guides how I use any tool, whether it is a word processor or an image generator.

    The tool is not the ethic.

    We need to demand accountability from organizations like the World Health Organization, Plan International, and even the United Nations.

    We must challenge the working cultures that green-light these campaigns.

    We should also, as Alenichev rightly points out, support local photographers and artists.

    But we must not let organizations off the hook by allowing them to blame a piece of software for their own lack of imagination and their deep, unaddressed colonial legacies.

    Artificial intelligence is not the problem.

    Our sector’s colonial mindset is.

    Image: The Geneva Learning Foundation Collection © 2025

  • A generative AI podcast dialogue exploring The Geneva Learning Foundation’s progress in 2024

    This experimental podcast, created in collaboration with generative AI, explores complex learning concepts through a conversational framework intended to support dialogic learning. Based on TGLF’s 2024 end-of-year message and supplementary materials, the conversation examines the Foundation’s peer learning model through a combination of concrete examples and theoretical reflection. The dialogue format shows how knowledge can emerge through structured interaction, even in AI-generated content.

    Experimental nature and limitations of generative AI for dialogic learning

    This content is being shared as an exploration of how generative AI might contribute to learning and knowledge construction. While based on TGLF’s actual 2024 message, the dialogue includes AI-generated elaborations that may contain inaccuracies. However, these limitations themselves provide interesting insights into how knowledge emerges through interaction, even in artificial contexts.

    You can read our actual 2024 Year in review message here.

    Pedagogical value and theoretical implications of a generative AI conversational framework

    Structured knowledge construction: The conversational framework illustrates how knowledge can emerge through structured dialogue, even when artificially generated. This mirrors TGLF’s own insights about how structure enables rather than constrains dialogic learning.

    Multi-level learning: The dialogue operates on multiple levels:

    • Direct information sharing about TGLF’s work
    • Modeling of reflective dialogue
    • Meta-level exploration of how knowledge emerges through interaction
    • Integration of concrete examples with theoretical reflection

    Network effects in learning: The conversation demonstrates how different types of knowledge (statistical, narrative, theoretical, practical) can be woven together through dialogue to create deeper understanding. This parallels TGLF’s observations about how learning emerges through structured networks of interaction.

    We invite listeners to consider:

    • How a conversational framework enables exploration of complex ideas
    • The role of structure in enabling knowledge emergence
    • The relationship between concrete examples and theoretical understanding
    • The potential and limitations of AI in supporting dialogic learning

    This experiment invites reflection not just on the content itself, but on how knowledge and understanding emerge through structured interaction – whether human or artificial.

    Your insights about how this generative AI format affects your understanding will help inform future explorations of AI’s role in learning:

    • What aspects of the conversational framework enhanced or hindered your understanding?
    • How did the interplay of concrete examples and reflective discussion affect your learning?
    • What difference did it make that you knew before listening that the conversation was created using generative AI?

    We welcome your thoughts on these deeper questions about how learning happens through structured interaction.