Tag: San Francisco Consensus

  • Eric Schmidt’s San Francisco Consensus about the impact of artificial intelligence

    “We are at the beginning of a new epoch,” Eric Schmidt declared at the RAISE Summit in Paris on 9 July 2025. The former Google CEO’s message, grounded in what he calls the San Francisco Consensus, carries unusual weight—not necessarily because of his past role leading one of tech’s giants, but because of his current one: advising heads of state and industry on artificial intelligence.

    “When I talk to governments, what I tell them is, one, ChatGPT is great, but that was two years ago. Everything’s changed again. You’re not prepared for it. And two, you better get organized around it—the good and the bad.”

    At the Paris summit, he shared what he calls the “San Francisco Consensus”—a convergence of belief among Silicon Valley’s leaders that within three to six years AI will fundamentally transform every aspect of human activity.

    Whether one views this timeline as realistic or delusional matters less than the fact that the people building AI systems—and investing hundreds of billions in infrastructure—believe it. Their conviction alone makes the Consensus a force shaping our immediate future.

    “There is a group of people that I work with. They are all in San Francisco, and they have all basically convinced themselves that in the next two to six years—the average is three years—the entire world will change,” Schmidt explained. (He initially referred to the Consensus as a kind of inside joke.)

    He carefully framed this as a consensus rather than fact: “I call it a consensus because it’s true that we agree… but it’s not necessarily true that the consensus is true.”

    Schmidt’s own position became clear as he compared the arrival of artificial general intelligence (“AGI”) to the Enlightenment itself. “During the Enlightenment, we as humans learned from going from direct faith in God to using our reasoning skills. So now we have the arrival of a new non-human intelligence, which is likely to have better reasoning skills than humans can have.”

    The three pillars of the San Francisco Consensus

    The Consensus rests on three converging technological revolutions:

    1. The language revolution

    Large language models like ChatGPT captured public attention by demonstrating AI’s ability to understand and generate human language. But Schmidt emphasized these are already outdated. The real transformation lies in language becoming a universal interface for AI systems—enabling them to process instructions, maintain context, and coordinate complex tasks through natural language.

    2. The agentic revolution

    “The agentic revolution can be understood as language in, memory in, language out,” Schmidt explained. These are AI systems that can pursue goals, maintain state across interactions, and take actions in the world.

    His deliberately mundane example illustrated the profound implications: “I have a house in California, I want to build another one. I have an agent that finds the lot, I have another agent that works on what the rules are, another agent that works on designing the building, selects the contractor, and at least in America, you have an agent that then sues the contractor when the house doesn’t work.”

    The punchline: “I just gave you a workflow example that’s true of every business, every government, and every group human activity.”
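
    Schmidt’s “language in, memory in, language out” formulation maps naturally onto code. The sketch below is a minimal illustration in Python, not anything Schmidt described: the call_model function, the Agent class, and the agent roles are all placeholders invented here. It shows only the pattern itself, where each agent reads shared context, performs its step, and writes language back for the next agent to consume.

    ```python
    # Minimal sketch of "language in, memory in, language out" agent chaining.
    # call_model and the agent roles are placeholders invented for illustration.

    from dataclasses import dataclass

    def call_model(prompt: str) -> str:
        """Stand-in for a call to any large language model API."""
        return f"[model response to: {prompt[:40]}...]"

    @dataclass
    class Agent:
        role: str  # e.g. "site scout"

        def run(self, task: str, shared_memory: list[str]) -> str:
            # Language in + memory in: combine the task with prior agents' output.
            context = "\n".join(shared_memory)
            answer = call_model(f"Role: {self.role}\nContext:\n{context}\nTask: {task}")
            # Language out: plain text the next agent can consume.
            shared_memory.append(f"{self.role}: {answer}")
            return answer

    # Schmidt's house-building workflow as a chain of agents sharing one memory.
    shared: list[str] = []
    for role, task in [
        ("site scout", "find a suitable lot"),
        ("regulations analyst", "summarize the zoning rules"),
        ("architect", "design a building that satisfies the rules"),
        ("procurement agent", "select a contractor"),
    ]:
        print(Agent(role).run(task, shared))
    ```

    The design choice worth noticing is that coordination happens entirely through natural language held in a shared memory, which is why the same loop transfers to the businesses, governments, and group activities Schmidt lists.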

    3. The reasoning revolution

    Most significant is the emergence of AI systems that can engage in complex reasoning through what experts call “inference”—the process of drawing conclusions from data—enhanced by “reinforcement learning,” where systems improve by learning from outcomes.

    “Take a look at o3 from ChatGPT,” Schmidt urged. “Watch it go forward and backward, forward and backward in its reasoning, and it will blow your mind away.” These systems use vastly more computational power than traditional searches—“many, many thousands of times more electricity, queries, and so forth”—to work through problems step by step.
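
    One simple way to picture that “forward and backward” movement is backtracking search, sketched below in Python. The puzzle and the solver are invented for illustration and are not how o3 works internally; the point is the cost profile, where many exploratory steps are expended for one short final answer.

    ```python
    # Toy illustration of "forward and backward" reasoning as backtracking search.
    # The puzzle is invented: choose 4 distinct digits whose product is 360.

    from math import prod

    expansions = 0

    def search(chosen: list[int], start: int) -> list[int] | None:
        global expansions
        expansions += 1  # every step "forward" costs compute
        if len(chosen) == 4:
            return chosen if prod(chosen) == 360 else None  # check, else back up
        for d in range(start, 10):
            result = search(chosen + [d], d + 1)  # go forward one step
            if result:
                return result
            # falling through here is the "backward" move: abandon the branch
        return None

    print("solution:", search([], 1))
    print("steps explored:", expansions)  # many steps for one short answer
    ```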

    The results are striking. Google’s math model, says Schmidt, now performs “at the 90th percentile of math graduate students.” Similar breakthroughs are occurring across disciplines.

    What is the timeline of the San Francisco Consensus?

    The Consensus timeline seems breathtaking: three years on average, six in Schmidt’s more conservative estimate. But the direction matters more than the precise date.

    “Recursive self-improvement” represents the critical threshold—when AI systems begin improving themselves. “The system begins to learn on itself where it goes forward at a rate that is impossible for us to understand.”

    After AGI comes superintelligence, which Schmidt defines with precision: “It can prove something that we know to be true, but we cannot understand the proof. We humans, no human can understand it. Even all of us together cannot understand it, but we know it’s true.”

    His timeline? “I think this will occur within a decade.”

    The infrastructure gamble

    The Consensus drives unprecedented infrastructure investment. Schmidt addressed this directly when asked whether massive AI capital expenditures represent a bubble:

    “If you ask most of the executives in the industry, they will say the following. They’ll say that we’re in a period of overbuilding. They’ll say that there will be overcapacity in two or three years. And when you ask them, they’ll say, but I’ll be fine and the other guys are going to lose all their money. So that’s a classic bubble, right?”

    But Schmidt sees a different logic at work: “I’ve never seen a situation where hardware capacity was not taken up by software.” His point: throughout tech history, new computational capacity enables new applications that consume it. Today’s seemingly excessive AI infrastructure will likely be absorbed by tomorrow’s AI applications, especially if reasoning-based AI systems require “many, many thousands of times more” computational power than current models.

    The network effect trap

    Schmidt’s warnings about international competition reveal why AI development resembles a “network effect business”—where the value increases exponentially with scale and market dominance becomes self-reinforcing. In AI, this manifests through a feedback loop (a toy simulation follows the list):

    • More data improving models;
    • Better models attracting more users;
    • More users generating more data; and
    • Greater resources enabling faster improvement.
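
    A toy simulation can make this flywheel concrete. The growth rule and numbers below are invented purely for illustration, and only the qualitative behavior matters: capability compounds with relative share, so a small early lead keeps widening.

    ```python
    # Toy simulation of the AI flywheel: more data -> better model -> more
    # users -> more data. Parameter values are invented for illustration.

    leader, laggard = 1.10, 1.00  # slightly different starting capability

    for year in range(1, 6):
        total = leader + laggard
        # Each player's growth is proportional to its relative share
        # (the network effect), so capability compounds with usage.
        leader  *= 1 + 0.5 * (leader / total)
        laggard *= 1 + 0.5 * (laggard / total)
        print(f"year {year}: leader {leader:.2f}  laggard {laggard:.2f}  "
              f"gap {leader - laggard:.2f}")

    # A small initial lead widens every cycle: the laggard never catches up.
    ```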

    “What happens when you’ve got two countries where one is ahead of the other?” Schmidt asked. “In a network effect business, this is likely to produce slopes of gains at this level,” he said, gesturing sharply upward. “The opponent may realize that once you get there, they’ll never catch up.”

    This creates what he calls a “race condition of preemption”—a term from computer science describing a situation where the outcome depends critically on the sequence of events. In geopolitics, it means countries might take aggressive action to prevent rivals from achieving irreversible AI advantage.
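
    The borrowed term is standard computer science, and a few generic lines of Python unpack it. This example has nothing to do with AI; it simply shows a race condition in its original sense, where two threads update shared state without coordination and the result depends on who gets there first.

    ```python
    # A classic race condition: two threads increment a shared counter without
    # a lock, so updates interleave and the result depends on scheduling order.

    import threading

    counter = 0

    def worker() -> None:
        global counter
        for _ in range(100_000):
            current = counter      # read
            counter = current + 1  # write: another thread may write in between

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

    # Expected 200000, but lost updates usually leave it lower, and the
    # value differs run to run: the interleaving decides the outcome.
    print("counter:", counter)
    ```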

    The scale-free domains

    Schmidt believes that some fields will transform faster due to their “scale-free” nature—domains where AI can generate unlimited training data without human input. Exhibit A: mathematics. “Mathematicians with whiteboards or chalkboards just make stuff up all day. And they do it over and over again.”
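
    What makes a domain “scale-free” in this sense is that problems can be generated and checked mechanically, with no human in the loop. Below is a hedged sketch in Python, with toy arithmetic standing in for real theorem proving; the functions are invented for illustration.

    ```python
    # Sketch of why mathematics is "scale-free": a machine can invent unlimited
    # problems and verify the answers itself, so training data needs no humans.
    # Toy arithmetic stands in for real theorem proving.

    import random

    def generate_problem() -> tuple[str, int]:
        """Invent a fresh problem and compute its ground-truth answer."""
        a, b = random.randint(1, 999), random.randint(1, 999)
        return f"{a} * {b}", a * b

    def verify(problem: str, proposed: int) -> bool:
        """Checking is mechanical: no human labeler required."""
        return eval(problem) == proposed  # toy check; a prover would check proofs

    # Unlimited (problem, answer) training pairs, generated and verified on demand.
    dataset = []
    for _ in range(5):
        problem, answer = generate_problem()
        assert verify(problem, answer)
        dataset.append((problem, answer))
        print(problem, "=", answer)
    ```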

    Software development faces similar disruption. When Schmidt asked a programmer what language they code in, the response—“Why does it matter?”—captured how AI makes specific technical skills increasingly irrelevant.

    Critical perspectives on the San Francisco Consensus

    The San Francisco Consensus could be wrong. Silicon Valley has predicted imminent breakthroughs in artificial intelligence before—for decades, in fact. Today’s optimism might reflect the echo chamber of Sand Hill Road. Fundamental challenges remain: reliability, alignment, the leap from pattern matching to genuine reasoning.

    But here is what matters: the people building AI systems believe their own timeline. This belief, held by those controlling hundreds of billions in capital and the world’s top technical talent, becomes self-fulfilling. Investment flows, talent migrates, governments scramble to respond.

    Schmidt speaks to heads of state because he understands this dynamic. The consensus shapes reality through sheer force of capital and conviction. Even if wrong about timing, it is setting the direction. The infrastructure being built, the talent being recruited, the systems being designed—all point toward the same destination.

    The imperative of speed

    Schmidt’s message to leaders carried the urgency of hard-won experience: “If you’re going to [invest], do it now and move very, very fast. This market has so many players. There’s so much money at stake that you will be bypassed if you spend too much time worrying about anything other than building incredible products.”

    His confession about Google drove the point home: “Every mistake I made was fundamentally one of time… We didn’t move fast enough.”

    This was not generic startup advice but a specific warning about exponential technologies. In AI development, being six months late might mean being forever behind. The network effects Schmidt described—where leaders accumulate insurmountable advantages—are already visible in the concentration of AI capabilities among a handful of companies.

    For governments crafting AI policy, businesses planning strategy, or educational institutions charting their futures, the timeline debate misses the point. Whether recursive self-improvement arrives in three years or six, the time to act is now. The changes ahead—in labor markets, in global power dynamics, in the very nature of intelligence—demand immediate attention.

    Schmidt’s warning to world leaders was not about a specific date but about a mindset: those still debating whether AI represents fundamental change have already lost the race.

    Photo credit: Paris RAISE Summit (8-9 July 2025) © Sébastien Delarque

  • Why peer learning is critical to survive the Age of Artificial Intelligence

    María, a pediatrician in Argentina, works with an AI diagnostic system that can identify rare diseases, suggest treatment protocols, and draft reports in perfect medical Spanish. But something crucial is missing. The AI provides brilliant medical insights, yet María struggles to translate them into action in her community. What is needed to realize the promise of the Age of Artificial Intelligence?

    Then she discovers the missing piece. Through a peer learning network—where health workers develop projects addressing real challenges, review each other’s work, and engage in facilitated dialogue—she connects with other health professionals across Latin America who are learning to work with AI as a collaborative partner. Together, they discover that AI becomes far more useful when combined with their understanding of local contexts, cultural practices, and community dynamics.

    This speculative scenario, based on current AI developments and existing peer learning successes, illuminates a crucial insight as we enter the age of artificial intelligence. Eric Schmidt’s San Francisco Consensus predicts that within three to six years, AI will reason at expert levels, coordinate complex tasks through digital agents, and understand any request in natural language.

    Understanding how peer learning can bridge AI capabilities and human thinking and action is critical to prepare for this future.

    Collaboration in the Age of Artificial Intelligence

    The three AI revolutions—language interfaces, reasoning systems, and agentic coordination—will offer unprecedented capabilities. If access is equitable, this will be available to any health worker, anywhere. Yet having access to these tools is just the beginning. The transformation will require humans to learn together how to collaborate effectively with AI.

    Consider what becomes possible when health workers combine AI capabilities with collective human insight:

    • AI analyzes disease patterns; peer networks share which interventions work in specific cultural contexts.
    • AI suggests optimal treatment protocols; practitioners adapt them based on local resource availability.
    • AI identifies at-risk populations; community workers know how to reach them effectively.

    The magic happens in the integration of AI and human capabilities through peer learning. Think of it this way: AI can analyze millions of health records to identify disease patterns, but it may not know that in your district, people avoid the Tuesday clinic because that is market day, or that certain communities trust traditional healers more than government health workers.

    When epidemiologists share these contextual insights with peers facing similar challenges—through structured discussions and collaborative problem-solving—they learn together how to adapt AI’s analytical power to local realities.

    For example, when an AI system identifies a disease cluster, epidemiologists in a peer network can share strategies for investigating it: one colleague might explain how they gained community trust for contact tracing, another might share how they adapted AI-generated survey questions to be culturally appropriate, and a third might demonstrate how they used AI predictions alongside traditional knowledge to improve outbreak response.

    This collective learning—where professionals teach each other how to blend AI’s computational abilities with human understanding of communities—creates solutions more effective than either AI or individual expertise could achieve alone.

    Understanding peer learning in the Age of Artificial Intelligence

    Peer learning is not about professionals sharing anecdotes. It is a structured learning process where:

    • Participants develop concrete projects addressing real challenges in their contexts, such as improving vaccination coverage or adapting AI tools for local use.
    • Peers review each other’s work using expert-designed rubrics that ensure quality while encouraging innovation.
    • Facilitated dialogue sessions help surface patterns across different contexts and generate collective insights.
    • Continuous cycles of action, reflection, and revision transform individual experiences into shared wisdom.
    • Every participant becomes both teacher and learner, contributing their unique insights while learning from others.

    This approach differs fundamentally from traditional training because knowledge flows horizontally between peers rather than vertically from experts. When applied to human-AI collaboration, it enables rapid collective learning about what works, what fails, and why.

    Why peer networks unlock the potential of the Age of Artificial Intelligence

    Contextual intelligence through collective wisdom

    AI systems train on global data and identify universal patterns. This is their strength. Human practitioners understand local contexts intimately. This is theirs. Peer learning networks create bridges between these complementary intelligences.

    When a health worker discovers how to adapt AI-generated nutrition plans for local food availability, that insight becomes valuable to peers in similar contexts worldwide. Through structured sharing and review processes, the network creates a living library of contextual adaptations that make AI recommendations actionable.

    Trust-building in the age of AI

    Communities often view new technologies with suspicion. The most sophisticated AI cannot overcome this alone. But when local health workers learn from peers how to introduce AI as a helpful tool rather than a threatening replacement, acceptance grows.

    In peer networks, practitioners share not just technical knowledge but communication strategies through structured dialogue: how to explain AI recommendations to skeptical patients, how to involve community leaders in AI-assisted health programs, how to maintain the human touch while using digital tools. This collective learning makes AI acceptable and valuable to communities that might otherwise reject it.

    Distributed problem-solving

    When AI provides a diagnosis or recommendation that seems inappropriate for local conditions, isolated practitioners might simply ignore it. But in peer networks with structured review processes, they can explore why the discrepancy exists and how to bridge it.

    A teacher receives AI-generated lesson plans that assume resources her school lacks. Through her network’s collaborative problem-solving process, she finds teachers in similar situations who have created innovative adaptations. Together, they develop approaches that preserve AI’s pedagogical insights while working within real constraints.

    The new architecture of collaborative learning

    Working effectively with AI requires new forms of human collaboration built on three essential elements:

    Reciprocal knowledge flows

    When everyone has access to AI expertise, the most valuable learning happens between peers who share similar contexts and challenges. They teach each other not what AI knows, but how to make AI knowledge useful in their specific situations through:

    • Structured project development and peer review;
    • Regular assemblies where practitioners share experiences;
    • Documentation of successful adaptations and failures;
    • Continuous refinement based on collective feedback.

    Structured experimentation

    Peer networks provide safe spaces to experiment with AI collaboration. Through structured cycles of action and reflection, practitioners:

    • Try AI recommendations in controlled ways;
    • Document what works and what needs adaptation using shared frameworks;
    • Share failures as valuable learning opportunities through facilitated sessions;
    • Build collective knowledge about human-AI collaboration.

    Continuous capability building

    As AI capabilities evolve rapidly, no individual can keep pace alone. Peer networks create continuous learning environments where:

    • Early adopters share new AI features through structured presentations;
    • Groups explore emerging capabilities together in hands-on sessions;
    • Collective intelligence about AI use grows through documented experiences;
    • Everyone stays current through shared discovery and regular dialogue.

    Evidence-based speculation: imagining peer networks that include both machines and humans

    While the following examples are speculative, they build on current evidence from existing peer learning networks and emerging AI capabilities to imagine near-future possibilities.

    The Nigerian immunization scenario

    Based on Nigeria’s successful peer learning initiatives and current AI development trajectories, we can envision how AI-assisted immunization programs might work. AI could help identify optimal vaccine distribution patterns and predict which communities are at risk. Success would come when health workers form peer networks to share:

    • Techniques for presenting AI predictions to community leaders effectively;
    • Methods for adapting AI-suggested schedules to local market days and religious observances;
    • Strategies for using AI insights while maintaining personal relationships that drive vaccine acceptance.

    This scenario extrapolates from current successes in peer learning for immunization in Nigeria to imagine enhanced outcomes with AI partnership.

    Climate health innovation networks

    Drawing from existing climate health responses and AI’s growing environmental analysis capabilities, we can project how peer networks might function. As climate change creates unprecedented health challenges, AI models will predict impacts and suggest interventions. Community-based health workers could connect these ‘big data’ insights with their own local observations and experience to take action, sharing innovations like:

    • Using AI climate predictions to prepare communities for heat waves;
    • Adapting AI-suggested cooling strategies to local housing conditions;
    • Combining traditional knowledge with AI insights for water management.

    These possibilities build on documented peer learning successes in sharing health workers’ observations and insights about the impacts of climate change on the health of local communities.

    Addressing AI’s limitations through collective wisdom

    While AI offers powerful capabilities, we must acknowledge that technology is not neutral—AI systems carry biases from their training data, reflect the perspectives of their creators, and can perpetuate or amplify existing inequalities. Peer learning networks provide a crucial mechanism for identifying and addressing these limitations collectively.

    Through structured dialogue and shared experiences, practitioners can:

    • Document when AI recommendations reflect biases inappropriate for their contexts;
    • Develop collective strategies for identifying and correcting AI biases;
    • Share techniques for adapting AI outputs to ensure equity;
    • Build shared understanding of AI’s limitations and appropriate use cases.

    This collective vigilance and adaptation becomes essential for ensuring AI serves all communities fairly.

    What this means for different stakeholders

    For funders: Investing in collaborative capacity

    The highest return on AI investment comes not from technology alone but from building human capacity to use it effectively. Peer learning networks:

    • Multiply the impact of AI tools through shared adaptation strategies;
    • Create sustainable capacity that grows with technological advancement;
    • Generate innovations that improve AI applications for specific contexts;
    • Build resilience through distributed expertise.

    For practitioners: New collaborative competencies

    Working effectively with AI requires skills best developed through structured peer learning:

    • Partnership mindset: Seeing AI as a collaborative tool requiring human judgment.
    • Adaptive expertise: Learning to blend AI capabilities with contextual knowledge.
    • Reflective practice: Regularly examining what works in human-AI collaboration through structured reflection.
    • Knowledge sharing: Contributing insights through peer review and dialogue that help others work better with AI.

    For policymakers: Enabling collaborative ecosystems

    Policies should support human-AI collaboration by:

    • Funding peer learning infrastructure alongside AI deployment;
    • Creating time and space for structured peer learning activities;
    • Recognizing peer learning as essential professional development;
    • Supporting documentation and spread of effective practices.

    AI-human transformation through collaboration: A comparative view

    Working with AI individually → Working with AI through structured peer networks

    • Powerful tools but limited adaptation → Continuous adaptation through structured sharing
    • Insights remain isolated → Insights multiply across network through peer review
    • Success depends on individual skill → Collective wisdom enhances individual capability
    • AI recommendations may miss local context → Context-aware applications emerge through dialogue
    • Trial and error in isolation → Structured experimentation with collective learning
    • Slow spread of effective practices → Rapid diffusion through documented innovations
    • Overwhelmed by rapid AI changes → Collective sense-making through facilitated sessions
    • Struggling to keep pace alone → Shared discovery in peer projects
    • Uncertainty about appropriate use → Growing confidence through structured support

    The collaborative future

    As AI capabilities expand, two paths emerge:

    Path 1: Individuals struggle alone to make sense of AI tools, leading to uneven adoption, missed opportunities, and growing inequality between those who figure it out and those who do not.

    Path 2: Structured peer networks enable collective learning about human-AI collaboration, leading to widespread effective use, continuous innovation, and shared benefit from AI advances.

    What determines outcomes is how humans organize to learn and work together with AI through structured peer learning processes.

    María’s projected transformation

    Six months after her initial struggles, we can envision how María’s experience might transform. Through structured peer learning—project development, peer review, and facilitated dialogue—she could learn to see AI not as a foreign expert imposing solutions, but as a knowledgeable colleague whose insights she can adapt and apply.

    Based on current peer learning practices, she might discover techniques from colleagues across Latin America and the rest of the world:

    • Methods for using AI diagnosis as a conversation starter with traditional healers;
    • Strategies for validating AI recommendations through community health committees;
    • Approaches for using AI analytics to support (not replace) community knowledge.

    Following the pattern of peer learning networks, María would begin contributing her own innovations through structured sharing, particularly around integrating AI insights with indigenous healing practices. Her documented approaches would spread through peer review and dialogue, helping thousands of health workers make AI truly useful in their communities.

    Conclusion: The multiplication effect

    AI transformation promises to augment human capabilities dramatically. Language interfaces will democratize access to advanced tools. Reasoning systems will provide expert-level analysis. Agentic AI will coordinate complex operations. These capabilities are beginning to transform what individuals can accomplish.

    But the true multiplication effect will come through structured peer learning networks. When thousands of practitioners share how to work effectively with AI through systematic project work, peer review, and facilitated dialogue, they create collective intelligence about human-AI collaboration that no individual could develop alone. They transform AI from an impressive but alien technology into a natural extension of human capability.

    For funders, this means the highest-impact investments combine AI tools with structured peer learning infrastructure. For policymakers, it means creating conditions where collaborative learning flourishes alongside technological deployment. For practitioners, it means embracing both AI partnership and peer collaboration through structured processes as essential to professional practice.

    The future of human progress may rest on our ability to find effective ways to build powerful collaboration in networks that combine human and artificial intelligence. When we learn together through structured peer learning how to work with AI, we multiply not just individual capability but collective capacity to address the complex challenges facing our world.

    AI is still emergent, changing constantly and rapidly. The peer learning methods are proven: we know a lot about how humans learn and collaborate. The question is how quickly we can scale this collaborative approach to match the pace of AI advancement. In that race, structured peer learning is not optional—it is essential.

    Image: The Geneva Learning Foundation Collection © 2025
