Category: Artificial intelligence

  • The future of work: remarks at the 9th 1M1B Impact Summit held at the United Nations in Geneva


    On November 7, 2025, Reda Sadki, Executive Director of The Geneva Learning Foundation, joined the panel “The Future of Work: AI and Green Skills” at the 9th 1M1B Impact Summit held at the United Nations in Geneva. Moderated by Elizabeth Saunders, the discussion explored the rapid redefinition of the workforce by artificial intelligence and the green transition. The following is an edited transcript of Mr. Sadki’s remarks.

    Living with artificial intelligence

    Moderator: You have just seen some of these really incredible changemaker ideas and so what skills and mindsets stood out to you and how do you think those can be scaled to build a workforce that is living with AI and not competing with it?

    That is a wonderful question.

    I would answer that the key skill is learning to work with artificial intelligence.

    It is likely that your generation will be the first one learning to work side-by-side with AI as a partner or a co-worker, in the same way my generation learned to navigate the Internet.

    This requires three things.

    First, being ambitious.

    Second, being bold.

    And third, being courageous.

    Things are going to change dramatically in the next three to six years.

    There is a convergence of belief among those building these systems—what some call the “San Francisco Consensus”—that within this short timeframe, AI will fundamentally transform every aspect of human activity.

    We are facing the arrival of a new, non-human intelligence that is likely to have better reasoning skills than humans.

    This is not just about new tools.

    We are already seeing AI automate the routine tasks that make up the first rungs of a professional career.

    Some may tell you AI is not coming for your job, but I struggle to see that as anything other than misleading at best.

    In our programmes at The Geneva Learning Foundation, we have already used AI to replace key functions previously performed by humans.

    So, the sooner we are thinking, learning, and getting ready to navigate those changes, the better.

    The challenge is not to compete with AI in knowledge transmission.

    The risk is what some call “metacognitive laziness”, outsourcing our critical thinking to the machine.

    What is left for humans, and what we must cultivate, is facilitation, interpretation, and uniquely human-centric skills.

    These include creativity, curiosity, critical thinking, collective care, and consciousness.

    We must cultivate judgment, contextual wisdom, and ethical reasoning.

    We are navigating the unknown, and learning to do so together – by strengthening the connections between us, by asking what it means to be connected as humans – will be critical to our survival.

    Peer learning and democratizing access

    Moderator: I have a question for you about your foundation, because you have pioneered peer learning networks that have reached thousands globally. So what can we learn from this model about how to democratize access to AI and green skills, and make lifelong learning more inclusive and action-driven?

    The Geneva Learning Foundation’s mission, since 2016, has been to research, develop, and implement new ways to learn and lead.

    Our initial premise was that our traditional education systems are broken.

    They often rely on a top-down, “transmission model” of learning, where knowledge flows from experts to practitioners.

    This model is too slow, too expensive, and often fails to reach the people and communities that are facing extinction-level threats, whether that is climate change or artificial intelligence.

    In today’s world, these broken systems create significant risks when it comes to the critical threats facing our societies, including climate change and artificial intelligence.

    In the last three years, we have made key breakthroughs in solving four problems:

    • The problem of scale: how do we simultaneously connect tens of thousands of people in a single initiative, rather than one classroom at a time?
    • The problem of speed: how do we share knowledge at the speed problems emerge?
    • The problem of cost: how do we make this affordable?
    • And the problem of sustainability: how do we create systems people will continue to use because they are relevant?

    We have developed a model that we have tested in over 137 countries, working with international partners as well as ministries of health and, most importantly, with people on the ground in local communities.

    The first lesson learned is that in today’s complex, hyper-connected world, where there is an abundance of knowledge, simply knowing things is necessary, but not sufficient.

    The second lesson is recognizing the significance of what people know because they are there every day.

    We operate within knowledge systems that tend to devalue this “experiential knowledge”, often dismissing it as “anecdotal”.

    This is a form of “epistemic injustice”.

    We believe we must value what the health worker knows, what the mother or grandmother knows, and what the youth know, in order to solve the challenges before us.

    The third lesson is the power of digital networks to enable connections.

    In the past, learning from experience was constrained by our local environment.

    With digital networks, you can make connections with people from all over the world.

    This led us to the central piece of our innovation: peer learning mediated through digital networks.

    This could be so much more than the informal chatter and negative feedback loops of social media.

    It is a structured process where participants develop concrete projects addressing real challenges, review each other’s work, and engage in facilitated dialogue to share insights.

    Knowledge flows horizontally, from peer to peer, rather than just vertically.

    This model solves our four problems.

    It gives us scale.

    There is no upper limit.

    It gives us speed.

    It turns out to be incredibly cheap.

    And it is sustainable: people keep using it because it actually helps them meet their needs.

    To give a specific example, in July 2023 we launched our program on climate change and health.

    We started by listening to the voices of thousands of health workers from all over the world, who painted a very scary picture of the impacts of climate change on the health of those they serve.

    But we also found that health workers were being incredibly creative with very limited resources.

    They had already begun to solve the problems they were facing in their communities, but unfortunately, very often with no one helping or supporting them.

    That led us to calculate that if we can connect one million health workers by 2030, learning from and supporting each other, that group of health workers could use the power of those connections to save seven million lives.

    And for the “bean counters” in the room, this would be at a cost of less than two dollars per life saved, which is actually cheaper than vaccination, one of the most effective interventions we have in health today.

    This is such an incredible equation that some of our partners say it sounds too good to be true.

    There is an incredible opportunity to link up health workers with other segments of society, including youth.

    We see the potential from building these coalitions and networks.

    This brings us back to AI.

    We really see peer learning as key to our survival as human beings.

    We may end up working with machines that already exceed our cognitive capacities, and will almost certainly do so definitively in pretty much every area of work within the next three to six years.

    We are going to have to respond to that by strengthening the connections we have as human beings.

    AI systems are trained on global data, but humans possess deep “contextual intelligence”.

    Peer learning is the bridge.

    It is how we learn together how to adapt AI’s powerful analytics to our local realities, cultural contexts, trust networks, and resource constraints.

    We have to think about what it means to be human in the Age of AI, and learning from each other will be very critical, very key to that survival.

    Image: The Geneva Learning Foundation Collection © 2025. Suspended between earth and ether, Cathedral of Circuits and Roots evokes a world where technology and nature, thought and matter, coalesce in fragile harmony. Its oxidized cubes, hues of turquoise, gold, and quiet rust, resemble relics of a civilization both ancient and yet to come. The sculpture’s floating architecture suggests a digital forest, each metallic block a leaf of knowledge, each connection a pulse of shared intelligence. It speaks to the dual call of our age: to grow roots deep in human wisdom, even as we build circuits reaching toward artificial minds. In this shimmering equilibrium, the work asks: can progress be both luminous and humane — and can learning itself become an act of restoration?


  • How do we stop AI-generated ‘poverty porn’ fake images?


    There is an important and necessary conversation happening right now about the use of generative artificial intelligence in global health and humanitarian communications.

    Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering – the very tropes many of us have worked for decades to banish.

    The alarms are valid.

    The images are harmful.

    But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

    The problem is not the tool.

    The problem is the user.

    Generative artificial intelligence is not the cause of poverty porn.

    The root cause is the deep-seated racism and colonial mindset that have defined the humanitarian aid and global health sectors since their inception.

    This is not a new phenomenon.

    It is a long-standing pattern.

    In my private conversations with colleagues and researchers like Alenichev, I find we often agree on this point.

    Yet, the public-facing writing and research seem to stop short, focusing on the technological symptom rather than the systemic illness.

    It is vital we correct this focus before we implement the wrong solutions.

    The old poison in a new bottle

    Long before Midjourney, large organizations and their communications teams were propagating the worst kinds of caricatures.

    I know this.

    Many of us know this.

    We remember the history of award-winning photographers being sent from the Global North to “find… miserable kids” and stage images to meet the needs of funders. Organizations have always been willing to manufacture narratives that “show… people on the receiving end of aid as victims”.

    These working cultures — which demand images of suffering, which view Black and Brown bodies as instruments for fundraising, and which prioritize the “western gaze” — existed decades before artificial intelligence.

    Artificial intelligence did not create this impulse.

    It just made it cheaper, faster, and easier to execute.

    It is an enabler, not an originator.

    If an organization’s communications philosophy is rooted in colonial stereotypes, it will produce colonial stereotypes, whether it is using a 1000-dollar-a-day photographer or a 30-dollar-a-month software subscription.

    The danger of a misdiagnosis

    If we incorrectly identify artificial intelligence as the cause of this problem, our “solution” will be to ban the technology.

    This would be a catastrophic mistake.

    First, it is a superficial fix.

    It allows the very organizations producing this content to performatively cleanse themselves by banning a tool, all while evading the fundamental, painful work of challenging their own underlying racism and colonial impulses.

    The problem will not be solved; it will simply revert to being expressed through traditional (and often staged) photography.

    Second, it punishes the wrong people.

    For local actors and other small organizations, generative artificial intelligence is not necessarily a tool for creating poverty porn.

    It is a tactical advantage in a fight for survival.

    Such organizations may lack the resources for a full communication team.

    They are then “punished by algorithms” that demand a constant stream of visuals, burying stories of organizations that cannot provide them.

    Furthermore, some organizations committed to dignity in representation are also using artificial intelligence to solve other deep ethical problems.

    They use it to create dignified portraits for stories without having to navigate the complex and often extractive issues of child protection and consent.

    They use it to avoid exploiting real people.

    A blanket ban on artificial intelligence in our sector would disarm small, local organizations.

    It would silence those of us trying to use the tool ethically, while allowing the large, wealthy organizations to continue their old, harmful practices unchanged.

    The real work ahead

    This is why I must insist we reframe the debate.

    The question is not if we should use artificial intelligence.

    The question is, and has always been, how we challenge the racist systems that demand these images in the first place.

    My Algerian ancestors fought colonialism.

    I cannot separate my work at The Geneva Learning Foundation from the struggle against racism and the fight for the right to tell our own stories.

    That philosophy guides how I use any tool, whether it is a word processor or an image generator.

    The tool is not the ethic.

    We need to demand accountability from organizations like the World Health Organization, Plan International, and even the United Nations.

    We must challenge the working cultures that green-light these campaigns.

    We should also, as Arsenii rightly points out, support local photographers and artists.

    But we must not let organizations off the hook by allowing them to blame a piece of software for their own lack of imagination and their deep, unaddressed colonial legacies.

    Artificial intelligence is not the problem.

    Our sector’s colonial mindset is.


    Image: The Geneva Learning Foundation Collection © 2025

  • What the 2025 State of AI Report means for global health and humanitarian action


    The 2025 State of AI Report has arrived, painting a picture of an industry being fundamentally reshaped by “The Squeeze.”

    This is a critical, intensifying constraint on three key resources: the massive-scale compute (processing power) required for training, the availability of high-quality data, and the specialized human talent to build frontier models.

    This squeeze, the report details, is accelerating a consolidation of power.

    It favors the “hyperscalers”—the handful of large technology corporations that can afford to build their own power plants to run their data centers.

    For leaders in global health and humanitarian action, the report is essential reading.

    However, it must be read with a critical eye.

    The report’s narrative is, in many ways, the narrative of the hyperscalers.

    It focuses on the benchmarks they dominate, the closed models they are building, and the resource problems they face.

    This “view from the top” is valuable, but it is not the only reality.

    What does this consolidation of power mean for our sector, and where should we be focusing our attention?

    The new AI divide: A focus on closed-model dominance

    The report documents a clear trend: closed, proprietary models are pulling ahead of open-source alternatives in raw performance benchmarks.

    This is a direct result of the compute squeeze.

    When training costs become astronomical, only the wealthiest organizations can compete at the frontier.

    This focus on state-of-the-art performance, while informative, can be a distraction.

    For humanitarian action, the “best” model is not necessarily the one that tops a leaderboard, but the one that is affordable, adaptable, and deployable in low-resource settings.

    The true implication for our sector is the emergence of a new “AI divide”.

    This divide is not just about access but about capability.

    We may face a future in which Global North institutions license “PhD-level” specialized AI agents at a cost lower than their human counterparts, while practitioners in the Global South are left with rudimentary or geolocked tools.

    This dynamic threatens to reinforce, rather than disrupt, existing knowledge power imbalances and risks a new era of “digital colonialism”, where the sector becomes entirely dependent on a few private companies for its most critical technology.

    Opportunities in the State of AI: Breakthroughs in science and health

    The most unambiguous good news in the 2025 report is the dramatic acceleration of AI in science and medicine.

    AI is no longer just a research assistant; it is demonstrating expert-level accuracy in diagnostics and is actively designing novel therapeutics.

    This is a profound opportunity for global health.

    Where the report’s perspective is incomplete, however, is on the gap between this capability and its real-world application.

    An AI can provide a brilliant medical insight, but it lacks the “contextual intelligence” of a local practitioner.

    An AI model may not know that people in a specific district avoid the clinic on Tuesdays because it is market day – unless humans are working side-by-side with the model to share such qualitative and experiential data.

    Read more: Why peer learning is critical to survive the Age of Artificial Intelligence

    Therefore, the report’s findings on medical AI should not prompt us to simply buy new tools.

    They should prompt us to invest in the human infrastructure—like structured peer learning networks—where health workers can collectively learn how to blend AI’s power with their deep understanding of local realities.

    The State of AI report’s risks and our own

    The 2025 report rightly identifies a shift in risk, moving from passive issues like model bias to active, malicious threats like accelerated cyber capabilities and new “bio-risks.”

    These are critical concerns for the health and humanitarian sectors.

    But the report misses the most immediate barrier to AI adoption in our field: our own organizational culture.

    Many of our institutions operate within “highly punitive accountability systems”.

    These systems, which tie performance evaluation directly to funding, create an environment where experimentation carries significant personal and institutional risk.

    This leads to a “transparency paradox”.

    Health workers and field staff are already experimenting with AI, but they are forced to hide their use.

    If they disclose that a report was AI-assisted, they risk having their work subjected to “automatic devaluation,” regardless of its quality.

    This punitive culture prevents open discussion and makes collective learning difficult.

    State of AI: A strategic response to the squeeze

    The 2025 State of AI Report confirms that we cannot compete in the compute squeeze.

    Our strategy must therefore be one of smart adaptation and collective action.

    For global health and humanitarian leaders, key takeaways include:

    1. Do not be distracted by the “SOTA” race. Our goal is not to have the highest-performing model, but the most applicable and equitable one.
    2. Invest in human networks, not just technology. The greatest gains will come from building the collaborative capacity of our workforce to use AI tools effectively in context.
    3. Fix our internal culture. We must create environments where staff can experiment with AI openly and safely, without fear of reprisal. We cannot adapt to this technology if we are punishing our innovators.
    4. Unite for collective power. The report’s theme of consolidation is a warning. As individual non-governmental organizations, we have no power to negotiate with hyperscalers. We must explore forming a “cooperative” to gain a “seat at the table” and co-shape an AI ecosystem that serves the public interest, not just corporate agendas.

    These risks and opportunities are part and parcel of why The Geneva Learning Foundation is offering the AI4Health certificate programme. Learn more here: https://www.learning.foundation/ai.


  • The great unlearning: notes on the Empower Learners for the Age of AI conference


    Artificial intelligence is forcing a reckoning not just in our schools, but in how we solve the world’s most complex problems. 

    When ChatGPT exploded into public consciousness, the immediate fear that rippled through our institutions was singular: the corruption of process.

    The specter of students, professionals, and even leaders outsourcing their intellectual labor to a machine seemed to threaten the very foundation of competence and accountability.

    In response, a predictable arsenal was deployed: detection software, outright bans, and policies hastily drafted to contain the threat.

    Three years later, a more profound and unsettling truth is emerging.

    The Empowering Learners AI 2025 global conference (7-10 October 2025) was a fascinating location to observe how academics – albeit mostly white men from the Global North centers that concentrate resources for research – are navigating these troubled waters.

    The impacts of AI in education matter because, as the OECD’s Stéphan Vincent-Lancrin explained: “performance in education is the learning, whereas in many other businesses, the performance is performing the task that you’re supposed to do.”

    The problem is not that AI will do our work for us.

    The problem is that in doing so, it may cause us to forget how to think.

    This is not a distant, dystopian fear.

    It is happening now.

    A landmark study presented by Vincent-Lancrin delivered a startling verdict: students who used a generic, answer-providing chatbot to study for a math exam performed significantly worse than those who used no AI at all.

    The tool, designed for efficiency, had become a shortcut around the very cognitive struggle that builds lasting knowledge.

    Jason Lodge of the University of Queensland captured the paradox with a simple analogy.

    “It’s like an e-bike,” he explained. “An e-bike will help you get to a destination… But if you’re using an e-bike to get fit, then getting the e-bike to do all the work is not going to get you fit. And ultimately our job… is to help our students be fit in their minds”.

    This phenomenon, dubbed “cognitive offloading,” is creating what Professor Dragan Gasevic of Monash University calls an epidemic of “metacognitive laziness”.

    Metacognition – the ability to think about our own thinking – is the engine of critical inquiry.

    Yet, generative AI is masterfully engineered to disarm it.

    By producing content that is articulate, confident, and authoritative, it exploits a fundamental human bias known as “processing fluency,” our tendency to be less critical of information that is presented cleanly. 

    “Generative AI articulates content… that basically sounds really good, and that can potentially disarm us as the users of such content,” Gasevic warned.

    The risk is not merely that a health worker will use AI to draft a report, but that they will trust its conclusions without the rigorous, critical validation that prevents catastrophic errors.

    Empower Learners for the Age of AI: the human algorithm

    If AI is taking over the work of assembling and synthesizing information, what, then, is left for us to learn and to do?

    This question has triggered a profound re-evaluation of our priorities.

    The consensus emerging is a radical shift away from what can be automated and toward what makes us uniquely human.

    The urgency of this shift is not just philosophical.

    It is economic.

    Matt Sigelman, president of The Burning Glass Institute, presented sobering data showing that AI is already automating the routine tasks that constitute the first few rungs of a professional career ladder.

    “The problem is that if AI overlaps with… those humble tasks… then employers tend to say, well, gee, why am I hiring people at the entry level?” Sigelman explained.

    The result is a shrinking number of entry-level jobs, forcing us to cultivate judgment and adaptive skills from day one.

    This new reality demands a focus on what machines cannot replicate.

    For Pinar Demirdag, an artist and co-founder of the creative AI company Cuebric, this means a focus on the “5 Cs”: Creativity, Curiosity, Critical Thinking, Collective Care, and Consciousness.

    She argues that true creativity remains an exclusively human domain. “I don’t believe any machine can ever be creative because it doesn’t lie in their nature,” she asserted.

    She believes that AI is confined to recombining what is already in its data, while human creativity stems from presence and a capacity to break patterns.

    This sentiment was echoed by Rob English, a creative director who sees AI not as a threat, but as a catalyst for a deeper humanity.

    “It creates an opportunity for us to sort of have to amplify the things that make us more human,” he argued.

    For English, the future of learning lies in transforming it from a transactional task into a “lifestyle,” a mode of being grounded in identity and personal meaning.

    He believes that as the value of simply aggregating information diminishes, what becomes more valuable is our ability “to dissect… to interpret or to infer”.

    In this new landscape, the purpose of learning – whether for a student or a seasoned professional – shifts from knowledge transmission to the cultivation of human-centric capabilities.

    It is no longer enough to know things.

    The premium is on judgment, contextual wisdom, ethical reasoning, and the ability to connect with others – skills forged through the very intellectual and social struggles that generic AI helps us avoid.

    Empower Learners for the Age of AI: Collaborate or be colonized

    While the pedagogical challenge is profound, the institutional one may be even greater.

    For all the talk of disruptive change, the current state in many of our organizations is one of inertia, indecision, and a dangerous passivity.

    As George Siemens lamented after investing several years in trying to move the needle at higher education institutions, leadership has been “too passive,” risking a repeat of the era when institutions outsourced online learning to corporations known as “OPMs” (online programme managers) that did not share their values: “I’m worried that we’re going to do the same thing with AI, that we’re just going to sit on our hands, leadership’s going to be too passive… and the end result is we’re going to be reliant down the road on handing off the visioning and the capabilities of AI to external partners.”

    The presidents of two of the largest nonprofit universities in the United States, Dr. Mark Milliron of National University and Dr. Lisa Marsh Ryerson, president of Southern New Hampshire University, offered a candid diagnosis of the problem.

    Ryerson set the stage: “We don’t see it as a tool. We see it as a true framework redesign for learning for the future.” 

    However, before any institution can deploy sophisticated AI, it must first undertake the unglamorous, foundational work of fixing its own data infrastructure.

    “A lot of universities aren’t willing to take two steps back before they take three steps forward on this,” Dr. Milliron stated. “They want to jump to the advanced AI… when they actually need to go back and really… get the basics done”.

    This failure to fix the “plumbing” leaves organizations vulnerable, unable to build their own strategic capabilities.

    Such a dynamic is creating what keynote speaker Howard Brodsky termed a new form of “digital colonialism,” where a handful of powerful tech companies dictate the future of critical public goods like health and education.

    His proposed solution is for institutions to form a cooperative, a model that has proven successful for over a billion people globally.

    “I don’t believe at the current that universities have a seat at the table,” Brodsky argued. “And the only way you get a seat at the table is scale. And it’s to have a large voice”.

    A cooperative would give organizations the collective power to negotiate with tech giants and co-shape an AI ecosystem that serves public interest, not just commercial agendas.

    Without such collective action, the fear is that our health systems and educational institutions will become mere consumers of technologies designed without their input, ceding their agency and their future to Silicon Valley.

    The choice is stark: either become intentional builders of our own solutions, or become passive subjects of a transformation orchestrated by others.

    The engine of equity

    Amid these profound challenges, a powerfully optimistic vision for AI’s role is also taking shape.

    If harnessed intentionally, AI could become one of the greatest engines for equity in our history.

    The key lies in recognizing the invisible advantages that have long propped up success.

    As Dr. Mark Milliron explained in a moment of striking clarity: “I actually think AI has the potential to level the playing field… second, third, fourth generation higher ed students have always had AI. They were extended families… who came in and helped them navigate higher education because they had a knowing about it.”

    For generations, those from privileged backgrounds have had access to a human support network that functions as a sophisticated guidance system.

    First-generation students and professionals in under-resourced settings are often left to fend for themselves.

    AI offers the possibility of democratizing that support system.

    A personalized AI companion can serve as that navigational guide for everyone, answering logistical questions, reducing administrative friction, and connecting them with the right human support at the right time.

    This is not about replacing human mentors.

    It is about ensuring that every learner and every practitioner has the foundational scaffolding needed to thrive.

    As Dr. Lisa Marsh Ryerson put it, the goal is to use AI to “serve more learners, more equitably, with equitable outcomes, and more humanely”.

    This vision recasts AI not as a threat to be managed, but as a moral imperative to be embraced.

    It suggests that the technology’s most profound impact may not be in how it changes our interaction with knowledge, but in how it changes our access to opportunity.

    Technology as culture

    The debates from the conference make one thing clear.

    The AI revolution is not, at its core, a technological event.

    Read the article: Why learning technologists are obsolete

    It is a pedagogical, ethical, and institutional one.

    It forces us to ask what we believe the purpose of learning is, what skills are foundational to a flourishing human life, and what kind of world we want to build.

    The technology will not provide the answers.

    It will only amplify the choices we make.

    As we stand at this inflection point, the most critical task is not to integrate AI, but to become more intentional about our own humanity.

    The future of our collective ability to solve the world’s most pressing challenges depends on it.

    Do you work in health?

    As AI capabilities advance rapidly, health leaders need to prepare, learn, and adapt. The Geneva Learning Foundation’s new AI4Health Framework equips you to harness AI’s potential while protecting what matters most—human experience, local leadership, and health equity. Learn more: https://www.learning.foundation/ai.


    Image: The Geneva Learning Foundation Collection © 2025

  • Eric Schmidt’s San Francisco Consensus about the impact of artificial intelligence


    “We are at the beginning of a new epoch,” Eric Schmidt declared at the RAISE Summit in Paris on 9 July 2025. The former Google CEO’s message, grounded in what he calls the San Francisco Consensus, carries unusual weight—not necessarily because of his past role leading one of tech’s giants, but because of his current one: advising heads of state and industry on artificial intelligence.

    “When I talk to governments, what I tell them is, one, ChatGPT is great, but that was two years ago. Everything’s changed again. You’re not prepared for it. And two, you better get organized around it—the good and the bad.”

    At the Paris summit, he shared what he calls the “San Francisco Consensus”—a convergence of belief among Silicon Valley’s leaders that within three to six years AI will fundamentally transform every aspect of human activity.

    Whether one views this timeline as realistic or delusional matters less than the fact that the people building AI systems—and investing hundreds of billions in infrastructure—believe it. Their conviction alone makes the Consensus a force shaping our immediate future.

    “There is a group of people that I work with. They are all in San Francisco, and they have all basically convinced themselves that in the next two to six years—the average is three years—the entire world will change,” Schmidt explained. (He initially referred to the Consensus as a kind of inside joke.)

    He carefully framed this as a consensus rather than fact: “I call it a consensus because it’s true that we agree… but it’s not necessarily true that the consensus is true.”

    Schmidt’s own position became clear as he compared the arrival of artificial general intelligence (“AGI”) to the Enlightenment itself. “During the Enlightenment, we as humans learned from going from direct faith in God to using our reasoning skills. So now we have the arrival of a new non-human intelligence, which is likely to have better reasoning skills than humans can have.”

    The three pillars of the San Francisco Consensus

    The Consensus rests on three converging technological revolutions:

    1. The language revolution

    Large language models like ChatGPT captured public attention by demonstrating AI’s ability to understand and generate human language. But Schmidt emphasized these are already outdated. The real transformation lies in language becoming a universal interface for AI systems—enabling them to process instructions, maintain context, and coordinate complex tasks through natural language.

    2. The agentic revolution

    “The agentic revolution can be understood as language in, memory in, language out,” Schmidt explained. These are AI systems that can pursue goals, maintain state across interactions, and take actions in the world.

    His deliberately mundane example illustrated the profound implications: “I have a house in California, I want to build another one. I have an agent that finds the lot, I have another agent that works on what the rules are, another agent that works on designing the building, selects the contractor, and at least in America, you have an agent that then sues the contractor when the house doesn’t work.”

    The punchline: “I just gave you a workflow example that’s true of every business, every government, and every group human activity.”
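
    To make the “language in, memory in, language out” pattern easier to picture, here is a minimal, purely illustrative sketch in Python. It is not from Schmidt’s talk: the agent roles mirror his house-building example, and call_llm is a hypothetical placeholder for whatever language model an implementation might use.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted or local language model.
    return f"[model response to: {prompt[:60]}...]"

def run_agent(role: str, task: str, memory: list) -> str:
    context = "\n".join(memory)  # "memory in": everything earlier agents produced
    output = call_llm(f"You are the {role} agent.\nContext so far:\n{context}\nTask: {task}")
    memory.append(f"{role}: {output}")  # persist output so later agents can build on it
    return output

memory = []
workflow = [
    ("site-finder", "Find a suitable lot for a new house."),
    ("permits", "Work out which building rules apply to the lot."),
    ("architect", "Design the building within those rules."),
    ("procurement", "Select a contractor for the design."),
]
for role, task in workflow:
    print(run_agent(role, task, memory))
```

    Each agent reads the shared memory built up by the previous agents, which is what lets natural language serve as both the interface and the coordination mechanism.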

    3. The reasoning revolution

    Most significant is the emergence of AI systems that can engage in complex reasoning through what experts call “inference”—the process of drawing conclusions from data—enhanced by “reinforcement learning,” where systems improve by learning from outcomes.

    “Take a look at o3 from ChatGPT,” Schmidt urged. “Watch it go forward and backward, forward and backward in its reasoning, and it will blow your mind away.” These systems use vastly more computational power than traditional searches—”many, many thousands of times more electricity, queries, and so forth”—to work through problems step by step.

    The results are striking. Google’s math model, says Schmidt, now performs “at the 90 percentile of math graduate students.” Similar breakthroughs are occurring across disciplines.
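
    As a rough illustration of “improving by learning from outcomes,” the toy loop below estimates which of two actions pays off better purely from observed results. It is a deliberately simplified sketch (a two-armed bandit with hypothetical payoff probabilities), not a description of how frontier reasoning models are actually trained.

```python
import random

values = [0.0, 0.0]        # running estimate of each action's payoff
counts = [0, 0]
true_reward = [0.3, 0.7]   # hidden payoff probabilities (hypothetical)

for step in range(1000):
    # Explore occasionally; otherwise pick the action currently believed best.
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    outcome = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    values[action] += (outcome - values[action]) / counts[action]  # incremental average

print(values)  # the estimates drift toward the true payoffs; the better action dominates
```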

    What is the timeline of the San Francisco Consensus?

    The Consensus timeline seems breathtaking: three years on average, six in Schmidt’s more conservative estimate. But the direction matters more than the precise date.

    “Recursive self-improvement” represents the critical threshold—when AI systems begin improving themselves. “The system begins to learn on itself where it goes forward at a rate that is impossible for us to understand.”

    After AGI comes superintelligence, which Schmidt defines with precision: “It can prove something that we know to be true, but we cannot understand the proof. We humans, no human can understand it. Even all of us together cannot understand it, but we know it’s true.”

    His timeline? “I think this will occur within a decade.”

    The infrastructure gamble

    The Consensus drives unprecedented infrastructure investment. Schmidt addressed this directly when asked about whether massive AI capital expenditures represent a bubble:

    “If you ask most of the executives in the industry, they will say the following. They’ll say that we’re in a period of overbuilding. They’ll say that there will be overcapacity in two or three years. And when you ask them, they’ll say, but I’ll be fine and the other guys are going to lose all their money. So that’s a classic bubble, right?”

    But Schmidt sees a different logic at work: “I’ve never seen a situation where hardware capacity was not taken up by software.” His point: throughout tech history, new computational capacity enables new applications that consume it. Today’s seemingly excessive AI infrastructure will likely be absorbed by tomorrow’s AI applications, especially if reasoning-based AI systems require “many, many thousands of times more” computational power than current models.

    The network effect trap

    Schmidt’s warnings about international competition reveal why AI development resembles a “network effect business”—where the value increases exponentially with scale and market dominance becomes self-reinforcing. In AI, this manifests through the following loop (a toy sketch follows the list):

    • More data improving models;
    • Better models attracting more users;
    • More users generating more data; and
    • Greater resources enabling faster improvement.
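
    A toy numerical sketch, not from Schmidt’s remarks, of why such a loop produces the “slopes of gains” he describes: capability growth that feeds on itself lets even a small initial lead widen every cycle. The starting values and growth rate below are arbitrary assumptions chosen only to show the shape of the curve.

```python
# Two competitors whose capability compounds with the data and users it attracts.
leader, follower = 1.05, 1.00   # arbitrary starting capability levels
growth = 0.02                   # base improvement rate per cycle

for cycle in range(1, 11):
    # Better capability attracts more users and data, which accelerates
    # the next round of improvement: the feedback loop in the list above.
    leader *= 1 + growth * leader
    follower *= 1 + growth * follower
    print(f"cycle {cycle:2d}: leader {leader:.2f}  follower {follower:.2f}  gap {leader - follower:.2f}")
```

    Run for more cycles, the gap keeps accelerating, which is the sense in which a trailing competitor “may never catch up.”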

    “What happens when you’ve got two countries where one is ahead of the other?” Schmidt asked. “In a network effect business, this is likely to produce slopes of gains at this level,” he said, gesturing sharply upward. “The opponent may realize that once you get there, they’ll never catch up.”

    This creates what he calls a “race condition of preemption”—a term from computer science describing a situation where the outcome depends critically on the sequence of events. In geopolitics, it means countries might take aggressive action to prevent rivals from achieving irreversible AI advantage.

    The scale-free domains

    Schmidt believes that some fields will transform faster due to their “scale-free” nature—domains where AI can generate unlimited training data without human input. Exhibit A: mathematics. “Mathematicians with whiteboards or chalkboards just make stuff up all day. And they do it over and over again.”

    Software development faces similar disruption. When Schmidt asked a programmer what language they code in, the response—”Why does it matter?”—captured how AI makes specific technical skills increasingly irrelevant.

    Critical perspectives on the San Francisco Consensus

    The San Francisco Consensus could be wrong. Silicon Valley has predicted imminent breakthroughs in artificial intelligence before—for decades, in fact. Today’s optimism might reflect the echo chamber of Sand Hill Road. Fundamental challenges remain: reliability, alignment, the leap from pattern matching to genuine reasoning.

    But here is what matters: the people building AI systems believe their own timeline. This belief, held by those controlling hundreds of billions in capital and the world’s top technical talent, becomes self-fulfilling. Investment flows, talent migrates, governments scramble to respond.

    Schmidt speaks to heads of state because he understands this dynamic. The consensus shapes reality through sheer force of capital and conviction. Even if wrong about timing, it is setting the direction. The infrastructure being built, the talent being recruited, the systems being designed—all point toward the same destination.

    The imperative of speed

    Schmidt’s message to leaders carried the urgency of hard-won experience: “If you’re going to [invest], do it now and move very, very fast. This market has so many players. There’s so much money at stake that you will be bypassed if you spend too much time worrying about anything other than building incredible products.”

    His confession about Google drove the point home: “Every mistake I made was fundamentally one of time… We didn’t move fast enough.”

    This was not generic startup advice but specific warning about exponential technologies. In AI development, being six months late might mean being forever behind. The network effects Schmidt described—where leaders accumulate insurmountable advantages—are already visible in the concentration of AI capabilities among a handful of companies.

    For governments crafting AI policy, businesses planning strategy, or educational institutions charting their futures, the timeline debate misses the point. Whether recursive self-improvement arrives in three years or six, the time to act is now. The changes ahead—in labor markets, in global power dynamics, in the very nature of intelligence—demand immediate attention.

    Schmidt’s warning to world leaders was not about a specific date but about a mindset: those still debating whether AI represents fundamental change have already lost the race.

    Photo credit: Paris RAISE Summit (8-9 July 2025) © Sébastien Delarque

  • Why peer learning is critical to survive the Age of Artificial Intelligence


    María, a pediatrician in Argentina, works with an AI diagnostic system that can identify rare diseases, suggest treatment protocols, and draft reports in perfect medical Spanish. But something crucial is missing. The AI provides brilliant medical insights, yet María struggles to translate them into action in her community. What is needed to realize the promise of the Age of Artificial Intelligence?

    Then she discovers the missing piece. Through a peer learning network—where health workers develop projects addressing real challenges, review each other’s work, and engage in facilitated dialogue—she connects with other health professionals across Latin America who are learning to work with AI as a collaborative partner. Together, they discover that AI becomes far more useful when combined with their understanding of local contexts, cultural practices, and community dynamics.

    This speculative scenario, based on current AI developments and existing peer learning successes, illuminates a crucial insight as we enter the age of artificial intelligence. Eric Schmidt’s San Francisco Consensus predicts that within three to six years, AI will reason at expert levels, coordinate complex tasks through digital agents, and understand any request in natural language.

    Understanding how peer learning can bridge AI capabilities and human thinking and action is critical to prepare for this future.

    Collaboration in the Age of Artificial Intelligence

    The three AI revolutions—language interfaces, reasoning systems, and agentic coordination—will offer unprecedented capabilities. If access is equitable, this will be available to any health worker, anywhere. Yet having access to these tools is just the beginning. The transformation will require humans to learn together how to collaborate effectively with AI.

    Consider what becomes possible when health workers combine AI capabilities with collective human insight:

    • AI analyzes disease patterns; peer networks share which interventions work in specific cultural contexts.
    • AI suggests optimal treatment protocols; practitioners adapt them based on local resource availability.
    • AI identifies at-risk populations; community workers know how to reach them effectively.

    The magic happens in the integration of AI and human capabilities through peer learning. Think of it this way: AI can analyze millions of health records to identify disease patterns, but it may not know that in your district, people avoid the Tuesday clinic because that is market day, or that certain communities trust traditional healers more than government health workers.

    When epidemiologists share these contextual insights with peers facing similar challenges—through structured discussions and collaborative problem-solving—they learn together how to adapt AI’s analytical power to local realities.

    For example, when an AI system identifies a disease cluster, epidemiologists in a peer network can share strategies for investigating it: one colleague might explain how they gained community trust for contact tracing, another might share how they adapted AI-generated survey questions to be culturally appropriate, and a third might demonstrate how they used AI predictions alongside traditional knowledge to improve outbreak response.

    This collective learning—where professionals teach each other how to blend AI’s computational abilities with human understanding of communities—creates solutions more effective than either AI or individual expertise could achieve alone.

    Understanding peer learning in the Age of Artificial Intelligence

    Peer learning is not about professionals sharing anecdotes. It is a structured learning process where:

    • Participants develop concrete projects addressing real challenges in their contexts, such as improving vaccination coverage or adapting AI tools for local use.
    • Peers review each other’s work using expert-designed rubrics that ensure quality while encouraging innovation.
    • Facilitated dialogue sessions help surface patterns across different contexts and generate collective insights.
    • Continuous cycles of action, reflection, and revision transform individual experiences into shared wisdom.
    • Every participant becomes both teacher and learner, contributing their unique insights while learning from others.

    This approach differs fundamentally from traditional training because knowledge flows horizontally between peers rather than vertically from experts. When applied to human-AI collaboration, it enables rapid collective learning about what works, what fails, and why.

    Why peer networks unlock the potential of the Age of Artificial Intelligence

    Contextual intelligence through collective wisdom

    AI systems train on global data and identify universal patterns. This is their strength. Human practitioners understand local contexts intimately. This is theirs. Peer learning networks create bridges between these complementary intelligences.

    When a health worker discovers how to adapt AI-generated nutrition plans for local food availability, that insight becomes valuable to peers in similar contexts worldwide. Through structured sharing and review processes, the network creates a living library of contextual adaptations that make AI recommendations actionable.

    Trust-building in the age of AI

    Communities often view new technologies with suspicion. The most sophisticated AI cannot overcome this alone. But when local health workers learn from peers how to introduce AI as a helpful tool rather than a threatening replacement, acceptance grows.

    In peer networks, practitioners share not just technical knowledge but communication strategies through structured dialogue: how to explain AI recommendations to skeptical patients, how to involve community leaders in AI-assisted health programs, how to maintain the human touch while using digital tools. This collective learning makes AI acceptable and valuable to communities that might otherwise reject it.

    Distributed problem-solving

    When AI provides a diagnosis or recommendation that seems inappropriate for local conditions, isolated practitioners might simply ignore it. But in peer networks with structured review processes, they can explore why the discrepancy exists and how to bridge it.

    A teacher receives AI-generated lesson plans that assume resources her school lacks. Through her network’s collaborative problem-solving process, she finds teachers in similar situations who have created innovative adaptations. Together, they develop approaches that preserve AI’s pedagogical insights while working within real constraints.

    The new architecture of collaborative learning

    Working effectively with AI requires new forms of human collaboration built on three essential elements:

    Reciprocal knowledge flows

    When everyone has access to AI expertise, the most valuable learning happens between peers who share similar contexts and challenges. They teach each other not what AI knows, but how to make AI knowledge useful in their specific situations through:

    • Structured project development and peer review;
    • Regular assemblies where practitioners share experiences;
    • Documentation of successful adaptations and failures;
    • Continuous refinement based on collective feedback.

    Structured experimentation

    Peer networks provide safe spaces to experiment with AI collaboration. Through structured cycles of action and reflection, practitioners:

    • Try AI recommendations in controlled ways;
    • Document what works and what needs adaptation using shared frameworks;
    • Share failures as valuable learning opportunities through facilitated sessions;
    • Build collective knowledge about human-AI collaboration.

    Continuous capability building

    As AI capabilities evolve rapidly, no individual can keep pace alone. Peer networks create continuous learning environments where:

    • Early adopters share new AI features through structured presentations;
    • Groups explore emerging capabilities together in hands-on sessions;
    • Collective intelligence about AI use grows through documented experiences;
    • Everyone stays current through shared discovery and regular dialogue.

    Evidence-based speculation: imagining peer networks that include both machines and humans

    While the following examples are speculative, they build on current evidence from existing peer learning networks and emerging AI capabilities to imagine near-future possibilities.

    The Nigerian immunization scenario

    Based on Nigeria’s successful peer learning initiatives and current AI development trajectories, we can envision how AI-assisted immunization programs might work. AI could help identify optimal vaccine distribution patterns and predict which communities are at risk. Success would come when health workers form peer networks to share:

    • Techniques for presenting AI predictions to community leaders effectively;
    • Methods for adapting AI-suggested schedules to local market days and religious observances;
    • Strategies for using AI insights while maintaining personal relationships that drive vaccine acceptance.

    This scenario extrapolates from current successes in peer learning for immunization in Nigeria to imagine enhanced outcomes with AI partnership.

    Climate health innovation networks

    Drawing from existing climate health responses and AI’s growing environmental analysis capabilities, we can project how peer networks might function. As climate change creates unprecedented health challenges, AI models will predict impacts and suggest interventions. Community-based health workers could connect these ‘big data’ insights with their own local observations and experience to take action, sharing innovations like:

    • Using AI climate predictions to prepare communities for heat waves;
    • Adapting AI-suggested cooling strategies to local housing conditions;
    • Combining traditional knowledge with AI insights for water management.

    These possibilities build on documented peer learning successes in sharing health workers’ observations and insights about the impacts of climate change on the health of local communities.

    Addressing AI’s limitations through collective wisdom

    While AI offers powerful capabilities, we must acknowledge that technology is not neutral—AI systems carry biases from their training data, reflect the perspectives of their creators, and can perpetuate or amplify existing inequalities. Peer learning networks provide a crucial mechanism for identifying and addressing these limitations collectively.

    Through structured dialogue and shared experiences, practitioners can:

    • Document when AI recommendations reflect biases inappropriate for their contexts;
    • Develop collective strategies for identifying and correcting AI biases;
    • Share techniques for adapting AI outputs to ensure equity;
    • Build shared understanding of AI’s limitations and appropriate use cases.

    This collective vigilance and adaptation becomes essential for ensuring AI serves all communities fairly.

    What this means for different stakeholders

    For funders: Investing in collaborative capacity

    The highest return on AI investment comes not from technology alone but from building human capacity to use it effectively. Peer learning networks:

    • Multiply the impact of AI tools through shared adaptation strategies;
    • Create sustainable capacity that grows with technological advancement;
    • Generate innovations that improve AI applications for specific contexts;
    • Build resilience through distributed expertise.

    For practitioners: New collaborative competencies

    Working effectively with AI requires skills best developed through structured peer learning:

    • Partnership mindset: Seeing AI as a collaborative tool requiring human judgment.
    • Adaptive expertise: Learning to blend AI capabilities with contextual knowledge.
    • Reflective practice: Regularly examining what works in human-AI collaboration through structured reflection.
    • Knowledge sharing: Contributing insights through peer review and dialogue that help others work better with AI.

    For policymakers: Enabling collaborative ecosystems

    Policies should support human-AI collaboration by:

    • Funding peer learning infrastructure alongside AI deployment;
    • Creating time and space for structured peer learning activities;
    • Recognizing peer learning as essential professional development;
    • Supporting documentation and spread of effective practices.

    AI-human transformation through collaboration: A comparative view

    Working with AI individually versus working with AI through structured peer networks:

    • Powerful tools but limited adaptation → Continuous adaptation through structured sharing
    • Insights remain isolated → Insights multiply across the network through peer review
    • Success depends on individual skill → Collective wisdom enhances individual capability
    • AI recommendations may miss local context → Context-aware applications emerge through dialogue
    • Trial and error in isolation → Structured experimentation with collective learning
    • Slow spread of effective practices → Rapid diffusion through documented innovations
    • Overwhelmed by rapid AI changes → Collective sense-making through facilitated sessions
    • Struggling to keep pace alone → Shared discovery in peer projects
    • Uncertainty about appropriate use → Growing confidence through structured support

    The collaborative future

    As AI capabilities expand, two paths emerge:

    Path 1: Individuals struggle alone to make sense of AI tools, leading to uneven adoption, missed opportunities, and growing inequality between those who figure it out and those who do not.

    Path 2: Structured peer networks enable collective learning about human-AI collaboration, leading to widespread effective use, continuous innovation, and shared benefit from AI advances.

    What determines outcomes is how humans organize to learn and work together with AI through structured peer learning processes.

    María’s projected transformation

    Six months after her initial struggles, we can envision how María’s experience might transform. Through structured peer learning—project development, peer review, and facilitated dialogue—she could learn to see AI not as a foreign expert imposing solutions, but as a knowledgeable colleague whose insights she can adapt and apply.

    Based on current peer learning practices, she might discover techniques from colleagues across Latin America and the rest of the world:

    • Methods for using AI diagnosis as a conversation starter with traditional healers;
    • Strategies for validating AI recommendations through community health committees;
    • Approaches for using AI analytics to support (not replace) community knowledge.

    Following the pattern of peer learning networks, María would begin contributing her own innovations through structured sharing, particularly around integrating AI insights with indigenous healing practices. Her documented approaches would spread through peer review and dialogue, helping thousands of health workers make AI truly useful in their communities.

    Conclusion: The multiplication effect

    AI transformation promises to augment human capabilities dramatically. Language interfaces will democratize access to advanced tools. Reasoning systems will provide expert-level analysis. Agentic AI will coordinate complex operations. These capabilities are beginning to transform what individuals can accomplish.

    But the true multiplication effect will come through structured peer learning networks. When thousands of practitioners share how to work effectively with AI through systematic project work, peer review, and facilitated dialogue, they create collective intelligence about human-AI collaboration that no individual could develop alone. They transform AI from an impressive but alien technology into a natural extension of human capability.

    For funders, this means the highest-impact investments combine AI tools with structured peer learning infrastructure. For policymakers, it means creating conditions where collaborative learning flourishes alongside technological deployment. For practitioners, it means embracing both AI partnership and peer collaboration through structured processes as essential to professional practice.

    The future of human progress may rest on our ability to find effective ways to build powerful collaboration in networks that combine human and artificial intelligence. When we learn together through structured peer learning how to work with AI, we multiply not just individual capability but collective capacity to address the complex challenges facing our world.

    AI is still emergent, changing constantly and rapidly. The peer learning methods are proven: we know a lot about how humans learn and collaborate. The question is how quickly we can scale this collaborative approach to match the pace of AI advancement. In that race, structured peer learning is not optional—it is essential.

    Image: The Geneva Learning Foundation Collection © 2025

  • Language as AI’s universal interface: What it means and why it matters

    Language as AI’s universal interface: What it means and why it matters

    Imagine if you could control every device, system, and process in the world simply by talking to it in plain English—or any language you speak. No special commands to memorize. No programming skills required. No technical manuals to study. Just explain what you want in your own words, and it happens.

    This is the transformation Eric Schmidt described when he spoke about language becoming the “universal interface” for artificial intelligence. To understand why this matters, we need to step back and see how radically this changes everything.

    The old way: A tower of Babel

    Today, interacting with technology requires learning its language, not the other way around. Consider what you need to know:

    • To use your smartphone, you must understand apps, settings, swipes, and taps
    • To search the internet effectively, you need the right keywords and search operators
    • To work with a spreadsheet, you must learn formulas, functions, and formatting
    • To program a computer, you need years of training in coding languages
    • To operate specialized software—from medical systems to industrial controls—you need extensive training

    Each system speaks its own language. Humans must constantly translate their intentions into forms machines can understand. This creates barriers everywhere: between people and technology, between different systems, and between those who have technical skills and those who do not.

    The new way: Natural language as universal interface

    What changes when AI systems can understand and act on natural human language? Everything.

    Instead of learning how to use technology, you simply tell it what you want:

    • “Find all our customers who haven’t ordered in six months and draft a personalized re-engagement email for each”
    • “Look at this medical scan and highlight anything unusual compared to healthy tissue”
    • “Monitor our factory equipment and alert me if any patterns suggest maintenance is needed soon”
    • “Take this contract and identify any terms that differ from our standard agreement”

    The AI system translates your natural language into whatever technical operations are needed—database queries, image analysis, pattern recognition, document comparison—without you needing to know how any of it works.
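
    To make that translation step concrete, here is a minimal sketch in Python. The `ask_llm` function is a hypothetical stand-in for any language model API (stubbed here with a canned answer so the example runs on its own); the request, customer records, and JSON operation format are invented for illustration, not a description of any particular product.

```python
# Minimal sketch of "natural language in, technical operation out".
# ask_llm() is a hypothetical stand-in for a language model API call,
# stubbed here with a canned answer so the example runs on its own.
import json
from datetime import date, timedelta

def ask_llm(prompt: str) -> str:
    """Stub: a real system would send the prompt to a model and get back
    a structured plan. Here we return a fixed JSON string."""
    return json.dumps({
        "operation": "filter_customers",
        "condition": "last_order_before",
        "cutoff_days": 180,
        "action": "draft_reengagement_email",
    })

def handle_request(request: str, customers: list) -> list:
    # 1. The user states an intention in plain language.
    # 2. The model translates it into a structured operation (JSON).
    plan = json.loads(ask_llm(f"Turn this request into a JSON operation: {request}"))
    cutoff = date.today() - timedelta(days=plan["cutoff_days"])
    lapsed = [c for c in customers if c["last_order"] < cutoff]
    # 3. Ordinary code carries out the operation the user never had to learn.
    return [f"Dear {c['name']}, we miss you..." for c in lapsed]

customers = [
    {"name": "Amina", "last_order": date(2024, 1, 10)},
    {"name": "Li", "last_order": date.today()},
]
print(handle_request(
    "Find all our customers who haven't ordered in six months "
    "and draft a personalized re-engagement email for each",
    customers,
))
```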

    Why a universal interface changes everything

    1. Democratization of capability

    When language becomes the interface, advanced capabilities become available to everyone who can explain what they want. A small business owner can perform complex data analysis without hiring analysts. A teacher can create customized learning materials without programming skills. A farmer can optimize irrigation without understanding algorithms.

    The divide between technical and non-technical people begins to disappear. What matters is not knowing how to code but knowing what outcomes you want to achieve.

    2. System integration without friction

    Today, making different systems work together is a nightmare of APIs, data formats, and compatibility issues. But when every system can be controlled through natural language, integration becomes as simple as explaining the connection you want:

    “When a customer complains on social media, create a support ticket, alert the appropriate team based on the issue type, and draft a public response acknowledging their concern”

    The AI handles all the technical complexity of connecting social media monitoring, ticketing systems, team communications, and response generation.
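
    As an illustration of that kind of integration, the sketch below assumes a hypothetical planner (`plan_workflow`, stubbed here) and toy stand-ins for the ticketing, alerting, and drafting systems. A real deployment would replace each stub with calls to actual services; the point is only the pattern of one plain-language instruction being turned into a chain of system actions.

```python
# Minimal sketch of natural-language workflow integration. plan_workflow()
# stands in for a planner model; the "systems" are toy functions.
def plan_workflow(instruction: str) -> list:
    """Stub: a real planner would turn the instruction into an ordered
    list of tool calls; here the plan is canned."""
    return ["create_ticket", "alert_team", "draft_response"]

def create_ticket(complaint: str) -> dict:
    return {"ticket_id": 101, "text": complaint}

def alert_team(ticket: dict) -> str:
    return f"Alerted the support team about ticket {ticket['ticket_id']}"

def draft_response(ticket: dict) -> str:
    return f"We're sorry to hear this (ref. {ticket['ticket_id']}) and are on it."

TOOLS = {"create_ticket": create_ticket,
         "alert_team": alert_team,
         "draft_response": draft_response}

def run(instruction: str, complaint: str) -> None:
    ticket = None
    for step in plan_workflow(instruction):   # the plan chains the systems together
        if step == "create_ticket":
            ticket = TOOLS[step](complaint)
        else:
            print(TOOLS[step](ticket))

run("When a customer complains on social media, create a support ticket, "
    "alert the appropriate team, and draft a public response",
    "My order arrived broken!")
```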

    3. Context that travels

    Unlike traditional interfaces that reset with each interaction, language-based AI systems can maintain context across time and tasks. They remember previous conversations, understand ongoing projects, and track evolving situations.

    Imagine telling an AI: “Remember that analysis we did last month on customer churn? Update it with this quarter’s data and highlight what’s changed.” The system knows exactly what you’re referring to and can build on previous work.
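
    A minimal sketch of that persistence is shown below. A plain Python dictionary stands in for whatever memory store a real assistant would use (a database, a vector index, and so on); the topic name and churn figures are invented for illustration.

```python
# Minimal sketch of context that persists across interactions. A plain
# dictionary stands in for a real assistant's memory store.
from typing import Optional

memory = {}

def remember(topic: str, result: dict) -> None:
    memory[topic] = result

def recall(topic: str) -> Optional[dict]:
    return memory.get(topic)

# Last month: the assistant ran a churn analysis and stored it.
remember("customer_churn", {"quarter": "Q1", "churn_rate": 0.08})

# Today: "Update the churn analysis we did last month with this quarter's data."
previous = recall("customer_churn")
current = {"quarter": "Q2", "churn_rate": 0.06}
if previous:
    change = current["churn_rate"] - previous["churn_rate"]
    print(f"Churn moved from {previous['churn_rate']:.0%} in {previous['quarter']} "
          f"to {current['churn_rate']:.0%} in {current['quarter']} ({change:+.0%}).")
remember("customer_churn", current)
```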

    4. Coordination at scale

    When AI agents can communicate through natural language, they can coordinate complex operations without human intervention. Schmidt’s example of building a house illustrates this—multiple AI agents handling different aspects of a project, all coordinating through language:

    • The land-finding agent tells the regulation agent about the plot it found
    • The regulation agent informs the design agent about building restrictions
    • The design agent coordinates with the contractor agent on feasibility
    • Each agent can explain its actions and reasoning in plain language

    Real-world implications

    For business

    Companies can automate complex workflows by describing them in natural language rather than programming them. A marketing manager could say: “Monitor our competitor’s pricing daily, alert me to any changes over 5%, and prepare a report on their promotional patterns.” No need for programmers, database experts, or data analysts.

    For healthcare

    Doctors can interact with AI diagnostic tools using medical terminology they already know, rather than learning proprietary interfaces. “Compare this patient’s symptoms with similar cases in our database and suggest additional tests based on what we might be missing.”

    For education

    Teachers can create personalized learning experiences by describing what they want: “Create practice problems for my students who are struggling with fractions, make them progressively harder as they improve, and let me know who needs extra help.”

    For government

    Policy makers can analyze complex data and model scenarios using plain language: “Show me how proposed changes to tax policy would affect families earning under $50,000 in rural areas versus urban areas.”

    Four challenges ahead

    This transformation is not without risks and challenges:

    1. Accuracy: Natural language is ambiguous. Ensuring AI systems correctly interpret intentions requires sophisticated understanding of context and nuance.
    2. Security: If anyone can control systems through language, protecting against malicious use becomes critical.
    3. Verification: When complex operations happen through simple commands, how do we verify the AI did what we intended?
    4. Dependency: As we rely more on AI to translate our intentions into actions, what happens to human technical skills?

    The bottom line

    Language as a universal interface represents a fundamental shift in how humans relate to technology. Instead of humans learning to speak machine languages, machines are learning to understand human intentions expressed naturally.

    This is not just about making technology easier to use. It is about removing the barriers between human intention and digital capability. When that barrier falls, we enter Eric Schmidt’s “new epoch”—where the distance between thinking something and achieving it collapses to nearly zero.

    The implications ripple through every industry, every job, every aspect of daily life. Those who understand this shift and adapt quickly will find themselves with almost magical capabilities. Those who do not may find themselves bypassed by others who can achieve in minutes what once took months.

    The universal interface is coming. The question is not whether to prepare, but how quickly you can begin imagining what becomes possible when the only limit is your ability to describe what you want.

  • What does AI reasoning mean for global health?

    What does AI reasoning mean for global health?

    When epidemiologists investigate a disease outbreak, they do not just match symptoms to known pathogens. They work through complex chains of evidence, test hypotheses, reconsider assumptions when data does not fit, and sometimes completely change their approach based on new information. This deeply human process of systematic reasoning is what artificial intelligence systems are now learning to do.

    This capability represents a fundamental shift from AI that recognizes patterns to AI that can work through complex problems the way a skilled professional would. For those working in global health and education, understanding this transformation is essential.

    The difference between answering and reasoning

    To understand this revolution, consider how most AI works today versus how reasoning AI operates.

    Traditional AI excels at pattern recognition. Show it a chest X-ray, and it can identify pneumonia by matching patterns it learned from millions of examples. Ask it about disease symptoms, and it retrieves information from its training data. This is sophisticated, but it is fundamentally different from reasoning.

    Consider this scenario: An unusual cluster of respiratory illness appears in a rural community. The symptoms partially match several known diseases but perfectly match none. Environmental factors are unclear. Some patients respond to standard treatments. Others do not.

    A pattern-matching AI might list possible diseases based on symptom similarity. But a reasoning AI would approach it like an epidemiologist:

    • “Let me examine the symptom progression timeline.”
    • “The geographic clustering suggests environmental or infectious cause. Let me investigate both paths.”
    • “Wait, these treatment responses do not align with any single pathogen. Could this be co-infection?”
    • “I need to reconsider. What if the environmental factor is not the cause but is affecting treatment efficacy?”

    The AI actually works through the problem, forms hypotheses, recognizes when evidence contradicts its assumptions, and adjusts its approach accordingly.

    How reasoning AI thinks through problems

    Advanced AI systems now demonstrate visible thinking processes. When analyzing complex health data, they might:

    • “First, let me identify the key variables affecting disease transmission in this population.”
    • “I will start by calculating the basic reproduction number using standard methods.”
    • “These results seem inconsistent with the observed spread pattern. Let me check my assumptions.”
    • “I may have overlooked the role of asymptomatic carriers. Let me recalculate.”
    • “This aligns better with observations. Now I can project intervention outcomes.”

    This is not scripted behavior. The AI works through problems, recognizes errors, and corrects its approach—much like a researcher reviewing their analysis.

    Why reasoning requires massive computational power

    Reasoning AI systems require thousands of times more computational resources than traditional AI. Understanding why helps explain both their power and limitations.

    Think about the difference between recognizing a disease from symptoms versus investigating a novel outbreak. Recognition happens quickly: an experienced clinician identifies malaria almost instantly. But investigating an unusual disease cluster requires sustained analysis, exploring multiple hypotheses, checking each against evidence.

    The same applies to AI. Traditional pattern-matching AI makes a single pass through its neural network. But reasoning AI must:

    • Explore multiple hypotheses simultaneously;
    • Check each reasoning step for logical consistency;
    • Backtrack when evidence contradicts assumptions;
    • Verify conclusions against all available data; and
    • Consider alternative explanations.

    Each step requires intensive computation. The AI might explore hundreds of reasoning paths before reaching sound conclusions.
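
    The toy sketch below illustrates this explore-check-backtrack loop in Python. The hypotheses, their predicted evidence, and the observations are invented for illustration and are vastly simpler than what a reasoning model actually does; the point is only the shape of the loop.

```python
# Minimal sketch of the explore-check-backtrack loop described above.
observations = {"geographic_cluster": True,
                "uniform_treatment_response": False,
                "person_to_person_spread": True}

# Each hypothesis predicts what the evidence should look like.
hypotheses = {
    "single_pathogen": {"geographic_cluster": True,
                        "uniform_treatment_response": True,
                        "person_to_person_spread": True},
    "environmental_exposure": {"geographic_cluster": True,
                               "uniform_treatment_response": True,
                               "person_to_person_spread": False},
    "co_infection": {"geographic_cluster": True,
                     "uniform_treatment_response": False,
                     "person_to_person_spread": True},
}

def consistent(prediction: dict, evidence: dict) -> bool:
    """Check each reasoning path against every observation."""
    return all(prediction[key] == value for key, value in evidence.items())

surviving = []
for name, prediction in hypotheses.items():
    if consistent(prediction, observations):
        surviving.append(name)          # keep exploring this path
    else:
        print(f"Backtracking: '{name}' contradicts the evidence.")

print("Hypotheses still on the table:", surviving)
```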

    Matching expert performance

    By mid-2025, leading AI systems could perform at roughly the level of graduate students on problems in mathematics and other technical fields. For global health, this means AI that can:

    • Design epidemiological studies with appropriate controls;
    • Identify confounding variables in complex datasets;
    • Recognize when standard statistical methods do not apply; and
    • Develop novel approaches to emerging health challenges.

    This is not about calculating faster—computers have done that for decades. It is about understanding concepts, recognizing which analytical techniques to apply, and working through novel problems.

    Applications in global health

    Reasoning AI transforms multiple aspects of global health work:

    Outbreak investigation: AI that can integrate diverse data sources—clinical reports, environmental data, travel patterns, genetic sequences—to identify outbreak sources and transmission patterns.

    Treatment optimization: Systems that reason through drug interactions, comorbidities, and local factors to recommend personalized treatment protocols.

    Resource allocation: AI that understands trade-offs between prevention and treatment, immediate needs and long-term capacity building, to optimize limited resources.

    Research design: Systems that can identify weaknesses in study designs, suggest improvements, and recognize when findings may not generalize to other populations.

    Policy analysis: AI that reasons through complex interventions, anticipating unintended consequences and identifying implementation barriers.

    What makes AI reasoning different

    Five capabilities distinguish reasoning AI from pattern-matching systems:

    1. Working memory: Reasoning AI holds multiple pieces of information active while working through problems, like a human tracking several hypotheses simultaneously.
    2. Logical consistency: Each conclusion must follow logically from evidence and prior reasoning steps.
    3. Error recognition: When results do not make sense, the system recognizes the problem and adjusts its approach.
    4. Abstraction: The AI recognizes general principles and applies them to specific situations, not just memorizing solutions.
    5. Explanation: Reasoning AI can explain its logic, making its conclusions verifiable and trustworthy.

    The path forward

    The reasoning revolution does not replace human expertise but augments it in powerful ways. For global health professionals, this means:

    • AI partners that can work through complex epidemiological puzzles;
    • Systems that help design culturally appropriate interventions;
    • Tools that identify patterns humans might miss while respecting local knowledge.

    Understanding reasoning AI is no longer optional for those shaping global health. These systems are becoming intellectual partners capable of working through complex problems alongside human experts. The question is not whether to engage with this technology but how to use it effectively while maintaining human agency, judgment, and values in decisions that affect human lives.

    The ability to reason—to work systematically through complex problems—has always been central to advancing human health and knowledge. Now that machines are learning this capability, we must thoughtfully consider how to harness it for global benefit while ensuring human wisdom guides its application.

  • The agentic AI revolution: what does it mean for workforce development?

    The agentic AI revolution: what does it mean for workforce development?

    Imagine hiring an assistant who never sleeps, never forgets, can work on a thousand tasks simultaneously, and communicates with you in your own language. Now imagine having not just one such assistant, but an entire team of them, each specialized in different areas, all coordinating seamlessly to achieve your goals. This is the “agentic AI revolution”—a transformation where AI systems become agents that can understand objectives, remember context, plan actions, and work together to complete complex tasks. It represents a shift from AI as a tool you use to AI as a workforce that you collaborate with.

    Understanding AI agents: More than chatbots

    When most people think of AI today, they think of ChatGPT or similar systems—you ask a question, you get an answer. That interaction ends, and the next time you return, you start fresh. These are powerful tools, but they are fundamentally reactive and limited to single exchanges.

    AI agents are different. They work on a principle of “language in, memory in, language out.” Let’s break down what this means:

    1. Language in: You describe what you want in natural language, not computer code. “Find me a house in California that meets these criteria…”
    2. Memory in: The agent remembers everything relevant—your preferences, previous searches, budget constraints, past interactions. It maintains this memory across days, weeks, or months.
    3. Language out: The agent reports back in plain language, explains what it did, and asks for clarification when needed. “I found three properties matching your criteria. Here’s why each might work…”

    But here is the crucial part: between receiving your request and reporting back, the agent can take actions in the world. It can search databases, fill out forms, make appointments, send emails, analyze documents, and coordinate with other agents.
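
    A minimal sketch of that loop is shown below. The `HouseSearchAgent` class and its search step are hypothetical stubs invented for illustration; a real agent would call a language model and external services at each step rather than returning canned listings.

```python
# Minimal sketch of the "language in, memory in, language out" loop.
class HouseSearchAgent:
    def __init__(self) -> None:
        self.memory = []                      # persists across requests

    def _search(self, criteria: str) -> list:
        # Stub for an action in the world (e.g., querying real listings).
        return [f"Listing matching '{criteria}' #1",
                f"Listing matching '{criteria}' #2"]

    def handle(self, request: str) -> str:    # language in
        self.memory.append(request)           # memory in
        results = self._search(request)
        self.memory.extend(results)
        return (f"I now remember {len(self.memory)} facts about your search "
                f"and found {len(results)} candidates: {results}")  # language out

agent = HouseSearchAgent()
print(agent.handle("Find me a house in California under the budget we discussed"))
print(agent.handle("Narrow it to places near a good school"))
```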

    The house that agentic AI built

    The example of building a house perfectly illustrates how agents transform complex projects. In the traditional approach, you would:

    1. Spend weeks searching real estate listings yourself.
    2. Hire a lawyer to research zoning laws and regulations.
    3. Work with an architect to design the building.
    4. Interview and select contractors.
    5. Manage the construction process.
    6. Deal with disputes if things go wrong.

    Each step requires your active involvement, coordination between different professionals, and enormous amounts of time.

    In the agentic model, you simply state your goal: “I want to build a house in California with these specifications and this budget.” Then:

    • Agent 1 searches for suitable lots, analyzing thousands of options against your criteria.
    • Agent 2 researches all applicable regulations, permits, and restrictions for each potential lot.
    • Agent 3 creates design options that maximize your preferences while meeting all regulations.
    • Agent 4 identifies and vets contractors, checking licenses, reviews, and past performance.
    • Agent 5 monitors construction progress and prepares documentation if issues arise.

    These agents do not work in isolation. They communicate constantly, as sketched after the list below:

    • The lot-finding agent tells the regulation agent which properties to research.
    • The regulation agent informs the design agent about height restrictions and setback requirements.
    • The design agent coordinates with the contractor agent about feasibility and costs.
    • All agents update you on progress and escalate decisions that need human judgment.
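
    The sketch below illustrates that message-passing pattern with stubbed agents. The functions, the lot, and the rules they exchange are invented for illustration; a real system would involve language models, external data sources, and far richer negotiation between agents.

```python
# Minimal sketch of agents coordinating through plain-language messages.
def lot_agent(goal: str) -> str:
    return "Found a 500 m² lot in Sacramento within budget."

def regulation_agent(lot_report: str) -> str:
    return f"Reviewed rules for '{lot_report}': max height 9 m, 3 m setbacks."

def design_agent(rules_report: str) -> str:
    return f"Drafted a two-storey design respecting: {rules_report}"

def escalate_to_human(message: str) -> None:
    # Decisions that need human judgment are surfaced, not made silently.
    print("DECISION NEEDED:", message)

goal = "Build a house in California with these specifications and this budget"
lot = lot_agent(goal)              # agent 1 reports to agent 2
rules = regulation_agent(lot)      # agent 2 reports to agent 3
design = design_agent(rules)       # agent 3 reports back in plain language
escalate_to_human(f"Approve this design before contractors are engaged? {design}")
```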

    Why agentic AI changes everything

    This workflow pattern applies to every business, every government, and every form of collective human activity. In other words, this transformation has universal relevance.

    Every complex human endeavor involves similar patterns:

    • Multiple steps that must happen in sequence;
    • Different types of expertise needed at each step;
    • Coordination between various parties;
    • Information that must flow between stages; and
    • Decisions based on accumulated knowledge.

    Today, humans do all this coordination work. We are the project managers, the communicators, the information carriers, the decision makers at every level. The agentic revolution means AI agents can handle much of this coordination, freeing humans to focus on setting goals and making key judgments.

    The memory advantage

    What makes agents truly powerful is their memory. Unlike human workers who might forget details or need to be briefed repeatedly, agents maintain perfect recall of:

    • Every interaction and decision;
    • All relevant documents and data;
    • The complete history of a project; and
    • Relationships between different pieces of information.

    This memory persists across time and can be shared between agents. When you return to a project months later, the agents remember exactly where things stood and can continue seamlessly.

    Agentic AI: from individual tools to digital teams

    The revolutionary aspect is not just individual agents but how they work together. Like a well-functioning human team, AI agents can:

    • Divide complex tasks based on specialization;
    • Share information and coordinate actions;
    • Escalate issues that need human decision-making;
    • Learn from outcomes to improve future performance; and
    • Scale up or down based on workload.

    But unlike human teams, they can:

    • Work 24/7 without breaks;
    • Handle thousands of tasks in parallel;
    • Communicate instantly without misunderstandings;
    • Maintain perfect consistency; and
    • Never forget critical details.

    The new human role as co-worker to agentic AI

    In this world, humans do not become obsolete—our role fundamentally changes. Instead of doing routine coordination and information processing, we:

    • Set goals and priorities;
    • Make value judgments;
    • Handle exceptions requiring creativity or empathy;
    • Build relationships and trust;
    • Ensure ethical considerations are met; and
    • Provide the vision and purpose that guides agent actions.

    Challenges and considerations

    The agentic revolution raises important questions:

    • Trust: How do we verify agents are acting in our interest?
    • Control: What happens when agents make decisions we did not anticipate?
    • Accountability: Who is responsible when an agent makes an error?
    • Privacy: What data do agents need access to, and how is it protected?
    • Employment: What happens to jobs based on coordination and information processing?

    What can agentic AI do in 2025?

    Early versions of these agents already exist in limited forms. Organizations and individuals who understand this shift early will have significant advantages. Those who continue operating as if human coordination is the only option may find themselves struggling to compete with those augmented by agentic AI teams.

    Where do we go from here?

    The agentic revolution represents something humanity has never had before: the ability to multiply our capacity for complex action without proportionally increasing human effort. It is as if every person could have their own team of tireless, brilliant assistants who understand their goals and work together seamlessly to achieve them.

    This is not about replacing human intelligence but augmenting human capability. When we can delegate routine coordination and information processing to agents, we can focus on what humans do best: creating meaning, building relationships, making ethical judgments, and pursuing purposes that matter to us.

    The world we imagine—where building a house or running a business or navigating healthcare becomes as simple as stating your goal clearly—represents a fundamental shift in how complex tasks get accomplished. Whatever the timeline for this transformation, understanding how AI agents work and what they make possible has become essential for anyone trying to make sense of where our societies are heading.

    The concept is clear: AI systems that can understand goals, remember context, and coordinate actions to achieve complex outcomes. What we do with this capability remains an open question—one that will be answered not by the technology itself, but by how we choose to use it.

  • The business of artificial intelligence and the equity challenge

    The business of artificial intelligence and the equity challenge

    Since 2019, when The Geneva Learning Foundation (TGLF) launched its first AI pilot project, we have been exploring how the Second Machine Age is reshaping learning. Ahead of the release of the first framework for AI in global health, I had a chance to sit down with a group of Swiss business leaders at the PanoramAI conference in Lausanne on 5 June 2025 to share TGLF’s insights about the significance and potential of artificial intelligence for global health and humanitarian response. Here is the article posted by the conference to recap a few of the take-aways.

    The Global Equity Challenger

    At the PanoramAI Summit, Reda Sadki, leader of The Geneva Learning Foundation, delivered provocative insights about AI’s impact on global equity and the future of human work. Drawing from humanitarian emergency response and global health networks, he challenged comfortable assumptions about AI’s societal implications.

    The job displacement reality

    Reda directly confronted panel optimism about job preservation: “One of the things I’ve heard from fellow panelists is this idea that we can tell employees AI is not coming for your job. And I struggle to see that as anything other than deceitful or misleading at best.”

    Eliminating knowledge worker positions in education

    “In one of our programmes, after six months we were able to use AI to replace key functions initially performed by humans. Humans helped us figure out how to do it. We then refocused a smaller team on tasks that we cannot or do not want to automate. We tried to do this openly.”

    What’s left for humans to do?

    “These machines are already learning faster and better than us, and they are doing so exponentially. Right now, what’s left for humans is facilitation: facilitating connections in a peer learning system. We do not yet have agents that can facilitate, that can read the room, that can help humans understand.”

    Global access inequities

    Reda highlighted three critical equity challenges: geographic access restrictions (‘geolocking’), transparency expectations around AI usage, and punitive accountability systems that discourage innovation in humanitarian contexts. “Somebody who uses AI in that context is more likely to be punished than rewarded, even if the outcomes are better and the costs are lower.”

    Emerging markets disconnect

    Reda observed limited engagement with Africa, Asia, and Latin America among attendees, “even though that’s where the future markets are likely to be for AI,” highlighting a strategic blind spot in how the global AI market is evolving.

    Organizational evolution question

    Reda posed fundamental questions about future organizational structures, questioning whether traditional hierarchical models with management layers will remain dominant “two years or five years down the line.”

    Network-based innovation vision

    “We’ve nurtured the emergence of a global network of health workers sharing their observations of climate change impacts on the health of communities they serve. This is already powerful for preparedness and response, but we’re trying to find ways to weave in and embed AI as co-workers and co-thinkers to help health workers harness messy, complex, large-volume climate data.”

    Exponential learning challenge

    “These machines are already learning faster and better than us, and they’re doing so exponentially. What keeps me awake at night is what’s left for humans.”

    Key Achievement: Reda demonstrated how honest assessment of AI’s transformative impact requires abandoning comfortable narratives about job preservation, positioning global leaders to address equity challenges while identifying uniquely human capabilities in an AI-augmented world.

    Reda Sadki serves as Executive Director of The Geneva Learning Foundation (TGLF), a Swiss non-profit. He has also served as Chief Learning Officer at Learning Strategies International (LSi) since 2013, where he helps international organizations improve their change execution capabilities. TGLF, under his guidance, catalyzes large-scale peer networks of frontline actors across 137 countries, developing learning experiences that transform local expertise into innovation and measurable results.

    Image: PanoramAI (Raphaël Briner).