Tag: Eric Schmidt

  • Eric Schmidt’s San Francisco Consensus about the impact of artificial intelligence

    “We are at the beginning of a new epoch,” Eric Schmidt declared at the RAISE Summit in Paris on 9 July 2025. The former Google CEO’s message, grounded in what he calls the San Francisco Consensus, carries unusual weight—not necessarily because of his past role leading one of tech’s giants, but because of his current one: advising heads of state and industry on artificial intelligence.

    “When I talk to governments, what I tell them is, one, ChatGPT is great, but that was two years ago. Everything’s changed again. You’re not prepared for it. And two, you better get organized around it—the good and the bad.”

    At the Paris summit, he shared what he calls the “San Francisco Consensus”—a convergence of belief among Silicon Valley’s leaders that within two to six years AI will fundamentally transform every aspect of human activity.

    Whether one views this timeline as realistic or delusional matters less than the fact that the people building AI systems—and investing hundreds of billions in infrastructure—believe it. Their conviction alone makes the Consensus a force shaping our immediate future.

    “There is a group of people that I work with. They are all in San Francisco, and they have all basically convinced themselves that in the next two to six years—the average is three years—the entire world will change,” Schmidt explained. (He initially referred to the Consensus as a kind of inside joke.)

    He carefully framed this as a consensus rather than fact: “I call it a consensus because it’s true that we agree… but it’s not necessarily true that the consensus is true.”

    Schmidt’s own position became clear as he compared the arrival of artificial general intelligence (“AGI”) to the Enlightenment itself. “During the Enlightenment, we as humans learned from going from direct faith in God to using our reasoning skills. So now we have the arrival of a new non-human intelligence, which is likely to have better reasoning skills than humans can have.”

    The three pillars of the San Francisco Consensus

    The Consensus rests on three converging technological revolutions:

    1. The language revolution

    Large language models like ChatGPT captured public attention by demonstrating AI’s ability to understand and generate human language. But Schmidt emphasized these are already outdated. The real transformation lies in language becoming a universal interface for AI systems—enabling them to process instructions, maintain context, and coordinate complex tasks through natural language.

    2. The agentic revolution

    “The agentic revolution can be understood as language in, memory in, language out,” Schmidt explained. These are AI systems that can pursue goals, maintain state across interactions, and take actions in the world.

    His deliberately mundane example illustrated the profound implications: “I have a house in California, I want to build another one. I have an agent that finds the lot, I have another agent that works on what the rules are, another agent that works on designing the building, selects the contractor, and at least in America, you have an agent that then sues the contractor when the house doesn’t work.”

    The punchline: “I just gave you a workflow example that’s true of every business, every government, and every group human activity.”

    3. The reasoning revolution

    Most significant is the emergence of AI systems that can engage in complex reasoning through what experts call “inference”—the process of drawing conclusions from data—enhanced by “reinforcement learning,” where systems improve by learning from outcomes.

    “Take a look at o3 from ChatGPT,” Schmidt urged. “Watch it go forward and backward, forward and backward in its reasoning, and it will blow your mind away.” These systems use vastly more computational power than traditional searches—“many, many thousands of times more electricity, queries, and so forth”—to work through problems step by step.

    The results are striking. Google’s math model, says Schmidt, now performs “at the 90th percentile of math graduate students.” Similar breakthroughs are occurring across disciplines.

    What is the timeline of the San Francisco Consensus?

    The Consensus timeline seems breathtaking: three years on average, six in Schmidt’s more conservative estimate. But the direction matters more than the precise date.

    “Recursive self-improvement” represents the critical threshold—when AI systems begin improving themselves. “The system begins to learn on itself where it goes forward at a rate that is impossible for us to understand.”

    After AGI comes superintelligence, which Schmidt defines with precision: “It can prove something that we know to be true, but we cannot understand the proof. We humans, no human can understand it. Even all of us together cannot understand it, but we know it’s true.”

    His timeline? “I think this will occur within a decade.”

    The infrastructure gamble

    The Consensus drives unprecedented infrastructure investment. Schmidt addressed this directly when asked whether massive AI capital expenditures represent a bubble:

    “If you ask most of the executives in the industry, they will say the following. They’ll say that we’re in a period of overbuilding. They’ll say that there will be overcapacity in two or three years. And when you ask them, they’ll say, but I’ll be fine and the other guys are going to lose all their money. So that’s a classic bubble, right?”

    But Schmidt sees a different logic at work: “I’ve never seen a situation where hardware capacity was not taken up by software.” His point: throughout tech history, new computational capacity enables new applications that consume it. Today’s seemingly excessive AI infrastructure will likely be absorbed by tomorrow’s AI applications, especially if reasoning-based AI systems require “many, many thousands of times more” computational power than current models.

    The network effect trap

    Schmidt’s warnings about international competition reveal why AI development resembles a “network effect business”—where the value increases exponentially with scale and market dominance becomes self-reinforcing. In AI, this manifests through:

    • More data improving models;
    • Better models attracting more users;
    • More users generating more data; and
    • Greater resources enabling faster improvement.

    “What happens when you’ve got two countries where one is ahead of the other?” Schmidt asked. “In a network effect business, this is likely to produce slopes of gains at this level,” he said, gesturing sharply upward. “The opponent may realize that once you get there, they’ll never catch up.”

    This creates what he calls a “race condition of preemption”—a term from computer science describing a situation where the outcome depends critically on the sequence of events. In geopolitics, it means countries might take aggressive action to prevent rivals from achieving irreversible AI advantage.

    The scale-free domains

    Schmidt believes that some fields will transform faster due to their “scale-free” nature—domains where AI can generate unlimited training data without human input. Exhibit A: mathematics. “Mathematicians with whiteboards or chalkboards just make stuff up all day. And they do it over and over again.”

    Software development faces similar disruption. When Schmidt asked a programmer what language they code in, the response—”Why does it matter?”—captured how AI makes specific technical skills increasingly irrelevant.

    Critical perspectives on the San Francisco Consensus

    The San Francisco Consensus could be wrong. Silicon Valley has predicted imminent breakthroughs in artificial intelligence before—for decades, in fact. Today’s optimism might reflect the echo chamber of Sand Hill Road. Fundamental challenges remain: reliability, alignment, the leap from pattern matching to genuine reasoning.

    But here is what matters: the people building AI systems believe their own timeline. This belief, held by those controlling hundreds of billions in capital and the world’s top technical talent, becomes self-fulfilling. Investment flows, talent migrates, governments scramble to respond.

    Schmidt speaks to heads of state because he understands this dynamic. The consensus shapes reality through sheer force of capital and conviction. Even if wrong about timing, it is setting the direction. The infrastructure being built, the talent being recruited, the systems being designed—all point toward the same destination.

    The imperative of speed

    Schmidt’s message to leaders carried the urgency of hard-won experience: “If you’re going to [invest], do it now and move very, very fast. This market has so many players. There’s so much money at stake that you will be bypassed if you spend too much time worrying about anything other than building incredible products.”

    His confession about Google drove the point home: “Every mistake I made was fundamentally one of time… We didn’t move fast enough.”

    This was not generic startup advice but a specific warning about exponential technologies. In AI development, being six months late might mean being forever behind. The network effects Schmidt described—where leaders accumulate insurmountable advantages—are already visible in the concentration of AI capabilities among a handful of companies.

    For governments crafting AI policy, businesses planning strategy, or educational institutions charting their futures, the timeline debate misses the point. Whether recursive self-improvement arrives in three years or six, the time to act is now. The changes ahead—in labor markets, in global power dynamics, in the very nature of intelligence—demand immediate attention.

    Schmidt’s warning to world leaders was not about a specific date but about a mindset: those still debating whether AI represents fundamental change have already lost the race.

    Photo credit: Paris RAISE Summit (8-9 July 2025) © Sébastien Delarque

  • Language as AI’s universal interface: What it means and why it matters

    Imagine if you could control every device, system, and process in the world simply by talking to it in plain English—or any language you speak. No special commands to memorize. No programming skills required. No technical manuals to study. Just explain what you want in your own words, and it happens.

    This is the transformation Eric Schmidt described when he spoke about language becoming the “universal interface” for artificial intelligence. To understand why this matters, we need to step back and see how radically this changes everything.

    The old way: A tower of Babel

    Today, interacting with technology requires learning its language, not the other way around. Consider what you need to know:

    • To use your smartphone, you must understand apps, settings, swipes, and taps
    • To search the internet effectively, you need the right keywords and search operators
    • To work with a spreadsheet, you must learn formulas, functions, and formatting
    • To program a computer, you need years of training in coding languages
    • To operate specialized software—from medical systems to industrial controls—you need extensive training

    Each system speaks its own language. Humans must constantly translate their intentions into forms machines can understand. This creates barriers everywhere: between people and technology, between different systems, and between those who have technical skills and those who do not.

    The new way: Natural language as universal interface

    What changes when AI systems can understand and act on natural human language? Everything.

    Instead of learning how to use technology, you simply tell it what you want:

    • “Find all our customers who haven’t ordered in six months and draft a personalized re-engagement email for each”
    • “Look at this medical scan and highlight anything unusual compared to healthy tissue”
    • “Monitor our factory equipment and alert me if any patterns suggest maintenance is needed soon”
    • “Take this contract and identify any terms that differ from our standard agreement”

    The AI system translates your natural language into whatever technical operations are needed—database queries, image analysis, pattern recognition, document comparison—without you needing to know how any of it works.
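
    To make this concrete, here is a minimal Python sketch of the idea (all function names, the dummy data, and the keyword-based “intent parser” are invented for illustration; a real system would rely on a language model to choose tools and fill in parameters): the person asking never touches the query layer underneath.

    ```python
    # Toy sketch, not any vendor's API: route a plain-language request
    # to hypothetical tool functions that hide the technical operations.

    def find_inactive_customers(months: int) -> list[str]:
        """Hypothetical CRM query wrapped as a tool; returns dummy data."""
        return ["alice@example.com", "bob@example.com"]

    def draft_email(recipient: str, purpose: str) -> str:
        """Hypothetical text-generation tool."""
        return f"Subject: We miss you!\n\nHi {recipient}, ... ({purpose})"

    def handle_request(request: str) -> list[str]:
        """Translate a natural-language request into tool calls."""
        # Stand-in for LLM intent parsing: recognize the re-engagement use case.
        if "haven't ordered" in request:
            customers = find_inactive_customers(months=6)
            return [draft_email(c, "re-engagement") for c in customers]
        raise ValueError("Request not understood by this toy router.")

    drafts = handle_request(
        "Find all our customers who haven't ordered in six months "
        "and draft a personalized re-engagement email for each"
    )
    print(drafts[0])
    ```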

    Why a universal interface changes everything

    1. Democratization of capability

    When language becomes the interface, advanced capabilities become available to everyone who can explain what they want. A small business owner can perform complex data analysis without hiring analysts. A teacher can create customized learning materials without programming skills. A farmer can optimize irrigation without understanding algorithms.

    The divide between technical and non-technical people begins to disappear. What matters is not knowing how to code but knowing what outcomes you want to achieve.

    2. System integration without friction

    Today, making different systems work together is a nightmare of APIs, data formats, and compatibility issues. But when every system can be controlled through natural language, integration becomes as simple as explaining the connection you want:

    “When a customer complains on social media, create a support ticket, alert the appropriate team based on the issue type, and draft a public response acknowledging their concern”

    The AI handles all the technical complexity of connecting social media monitoring, ticketing systems, team communications, and response generation.
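
    As a sketch of what such a language-described integration might reduce to, consider the toy orchestrator below (the systems, team names, and handlers are all invented): one handler function carries out the whole cross-system rule, while each connector stub stands in for a real integration.

    ```python
    # Toy orchestrator for the rule "complaint on social media -> ticket,
    # team alert, draft public reply". Every connector here is a stub.
    from dataclasses import dataclass

    @dataclass
    class Complaint:
        author: str
        text: str
        topic: str  # e.g. "billing", "shipping"

    def create_ticket(c: Complaint) -> int:
        print(f"[ticketing] opened ticket for @{c.author}: {c.text!r}")
        return 4321  # dummy ticket id

    def alert_team(c: Complaint) -> None:
        team = {"billing": "#finance-support", "shipping": "#logistics"}.get(c.topic, "#general")
        print(f"[chat] alerting {team}")

    def draft_public_reply(c: Complaint) -> str:
        return f"@{c.author} We're sorry about this. Our team is looking into it now."

    def on_complaint(c: Complaint) -> None:
        """One handler expresses the whole cross-system workflow."""
        create_ticket(c)
        alert_team(c)
        print("[social] draft reply:", draft_public_reply(c))

    on_complaint(Complaint(author="jmartin", text="Charged twice this month!", topic="billing"))
    ```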

    3. Context that travels

    Unlike traditional interfaces that reset with each interaction, language-based AI systems can maintain context across time and tasks. They remember previous conversations, understand ongoing projects, and track evolving situations.

    Imagine telling an AI: “Remember that analysis we did last month on customer churn? Update it with this quarter’s data and highlight what’s changed.” The system knows exactly what you’re referring to and can build on previous work.

    4. Coordination at scale

    When AI agents can communicate through natural language, they can coordinate complex operations without human intervention. Schmidt’s example of building a house illustrates this—multiple AI agents handling different aspects of a project, all coordinating through language:

    • The land-finding agent tells the regulation agent about the plot it found
    • The regulation agent informs the design agent about building restrictions
    • The design agent coordinates with the contractor agent on feasibility
    • Each agent can explain its actions and reasoning in plain language

    Real-world implications

    For business

    Companies can automate complex workflows by describing them in natural language rather than programming them. A marketing manager could say: “Monitor our competitor’s pricing daily, alert me to any changes over 5%, and prepare a report on their promotional patterns.” No need for programmers, database experts, or data analysts.

    For healthcare

    Doctors can interact with AI diagnostic tools using medical terminology they already know, rather than learning proprietary interfaces. “Compare this patient’s symptoms with similar cases in our database and suggest additional tests based on what we might be missing.”

    For education

    Teachers can create personalized learning experiences by describing what they want: “Create practice problems for my students who are struggling with fractions, make them progressively harder as they improve, and let me know who needs extra help.”

    For government

    Policy makers can analyze complex data and model scenarios using plain language: “Show me how proposed changes to tax policy would affect families earning under $50,000 in rural areas versus urban areas.”

    Four challenges ahead

    This transformation is not without risks and challenges:

    1. Accuracy: Natural language is ambiguous. Ensuring AI systems correctly interpret intentions requires sophisticated understanding of context and nuance.
    2. Security: If anyone can control systems through language, protecting against malicious use becomes critical.
    3. Verification: When complex operations happen through simple commands, how do we verify the AI did what we intended?
    4. Dependency: As we rely more on AI to translate our intentions into actions, what happens to human technical skills?

    The bottom line

    Language as a universal interface represents a fundamental shift in how humans relate to technology. Instead of humans learning to speak machine languages, machines are learning to understand human intentions expressed naturally.

    This is not just about making technology easier to use. It is about removing the barriers between human intention and digital capability. When that barrier falls, we enter Eric Schmidt’s “new epoch”—where the distance between thinking something and achieving it collapses to nearly zero.

    The implications ripple through every industry, every job, every aspect of daily life. Those who understand this shift and adapt quickly will find themselves with almost magical capabilities. Those who do not may find themselves bypassed by others who can achieve in minutes what once took months.

    The universal interface is coming. The question is not whether to prepare, but how quickly you can begin imagining what becomes possible when the only limit is your ability to describe what you want.

  • The agentic AI revolution: what does it mean for workforce development?

    Imagine hiring an assistant who never sleeps, never forgets, can work on a thousand tasks simultaneously, and communicates with you in your own language. Now imagine having not just one such assistant, but an entire team of them, each specialized in different areas, all coordinating seamlessly to achieve your goals. This is the “agentic AI revolution”—a transformation where AI systems become agents that can understand objectives, remember context, plan actions, and work together to complete complex tasks. It represents a shift from AI as a tool you use to AI as a workforce that you collaborate with.

    Understanding AI agents: More than chatbots

    When most people think of AI today, they think of ChatGPT or similar systems—you ask a question, you get an answer. That interaction ends, and the next time you return, you start fresh. These are powerful tools, but they are fundamentally reactive and limited to single exchanges.

    AI agents are different. They work on a principle of “language in, memory in, language out.” Let’s break down what this means:

    1. Language in: You describe what you want in natural language, not computer code. “Find me a house in California that meets these criteria…”
    2. Memory in: The agent remembers everything relevant—your preferences, previous searches, budget constraints, past interactions. It maintains this memory across days, weeks, or months.
    3. Language out: The agent reports back in plain language, explains what it did, and asks for clarification when needed. “I found three properties matching your criteria. Here’s why each might work…”

    But here is the crucial part: between receiving your request and reporting back, the agent can take actions in the world. It can search databases, fill out forms, make appointments, send emails, analyze documents, and coordinate with other agents.
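
    A minimal sketch of that loop might look like the following (the class and method names are assumptions, and the “action” step is a stub standing in for real tools): the agent takes language in, consults and updates its memory, acts, and returns language out.

    ```python
    # Toy "language in, memory in, language out" loop; the action step is
    # where a real agent would search listings, send emails, and so on.

    class Agent:
        def __init__(self) -> None:
            self.memory: list[str] = []  # persists across requests

        def act(self, plan: str) -> str:
            # Stub for real-world actions taken between request and report.
            return f"executed: {plan}"

        def handle(self, request: str) -> str:
            context = " | ".join(self.memory[-3:])        # memory in
            plan = f"plan for {request!r} given [{context}]"
            result = self.act(plan)                       # actions happen here
            self.memory.append(f"{request} -> {result}")  # memory updated
            return f"Done. {result}"                      # language out

    agent = Agent()
    print(agent.handle("Find me a lot in California under $400k"))
    print(agent.handle("Check the zoning rules for the best option"))  # builds on memory
    ```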

    The house that agentic AI built

    The example of building a house perfectly illustrates how agents transform complex projects. In the traditional approach, you would:

    1. Spend weeks searching real estate listings yourself.
    2. Hire a lawyer to research zoning laws and regulations.
    3. Work with an architect to design the building.
    4. Interview and select contractors.
    5. Manage the construction process.
    6. Deal with disputes if things go wrong.

    Each step requires your active involvement, coordination between different professionals, and enormous amounts of time.

    In the agentic model, you simply state your goal: “I want to build a house in California with these specifications and this budget.” Then:

    • Agent 1 searches for suitable lots, analyzing thousands of options against your criteria.
    • Agent 2 researches all applicable regulations, permits, and restrictions for each potential lot.
    • Agent 3 creates design options that maximize your preferences while meeting all regulations.
    • Agent 4 identifies and vets contractors, checking licenses, reviews, and past performance.
    • Agent 5 monitors construction progress and prepares documentation if issues arise.

    These agents do not work in isolation. They communicate constantly, as the sketch after this list illustrates:

    • The lot-finding agent tells the regulation agent which properties to research.
    • The regulation agent informs the design agent about height restrictions and setback requirements.
    • The design agent coordinates with the contractor agent about feasibility and costs.
    • All agents update you on progress and escalate decisions that need human judgment.
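
    Here is a toy sketch of that coordination (the agent roles and return values are invented): each agent’s plain-language output becomes the next agent’s input, and the final proposal is escalated to the human.

    ```python
    # Toy message-passing between specialized agents in the house example.

    class LotAgent:
        def find_lot(self) -> str:
            return "0.4-acre lot at 12 Hillside Rd"

    class RegulationAgent:
        def check(self, lot: str) -> str:
            return f"{lot}: max height 9m, 5m setback required"

    class DesignAgent:
        def design(self, constraints: str) -> str:
            return f"2-storey plan respecting [{constraints}]"

    def build_house() -> str:
        lot = LotAgent().find_lot()                 # agent 1 reports in plain language
        constraints = RegulationAgent().check(lot)  # agent 2 consumes agent 1's message
        plan = DesignAgent().design(constraints)    # agent 3 consumes agent 2's message
        return f"Proposed for human approval: {plan}"

    print(build_house())
    ```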

    Why agentic AI changes everything

    This workflow pattern holds for every business, every government, and every organized group activity. In other words, this transformation has universal relevance.

    Every complex human endeavor involves similar patterns:

    • Multiple steps that must happen in sequence;
    • Different types of expertise needed at each step;
    • Coordination between various parties;
    • Information that must flow between stages; and
    • Decisions based on accumulated knowledge.

    Today, humans do all this coordination work. We are the project managers, the communicators, the information carriers, the decision makers at every level. The agentic revolution means AI agents can handle much of this coordination, freeing humans to focus on setting goals and making key judgments.

    The memory advantage

    What makes agents truly powerful is their memory. Unlike human workers who might forget details or need to be briefed repeatedly, agents maintain perfect recall of:

    • Every interaction and decision;
    • All relevant documents and data;
    • The complete history of a project; and
    • Relationships between different pieces of information.

    This memory persists across time and can be shared between agents. When you return to a project months later, the agents remember exactly where things stood and can continue seamlessly.
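
    One way to picture that shared persistence, purely as an illustrative assumption about how such memory could be stored, is an append-only log on disk that any agent or later session can read and extend:

    ```python
    # Toy shared, persistent agent memory: a JSON log that survives across
    # sessions and can be handed between agents. The file name is arbitrary.
    import json
    from pathlib import Path

    MEMORY_FILE = Path("project_memory.json")

    def recall() -> list[dict]:
        return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

    def remember(agent: str, note: str) -> None:
        entries = recall()
        entries.append({"agent": agent, "note": note})
        MEMORY_FILE.write_text(json.dumps(entries, indent=2))

    # Months later, a different agent picks up exactly where things stood.
    remember("lot-agent", "Shortlisted 12 Hillside Rd; owner open to offers")
    remember("design-agent", "Draft plan v2 respects the 9m height limit")
    print(recall())
    ```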

    Agentic AI: from individual tools to digital teams

    The revolutionary aspect is not just individual agents but how they work together. Like a well-functioning human team, AI agents can:

    • Divide complex tasks based on specialization;
    • Share information and coordinate actions;
    • Escalate issues that need human decision-making;
    • Learn from outcomes to improve future performance; and
    • Scale up or down based on workload.

    But unlike human teams, they can:

    • Work 24/7 without breaks;
    • Handle thousands of tasks in parallel;
    • Communicate instantly without misunderstandings;
    • Maintain perfect consistency; and
    • Never forget critical details.

    The new human role as co-worker to agentic AI

    In this world, humans do not become obsolete—our role fundamentally changes. Instead of doing routine coordination and information processing, we:

    • Set goals and priorities;
    • Make value judgments;
    • Handle exceptions requiring creativity or empathy;
    • Build relationships and trust;
    • Ensure ethical considerations are met; and
    • Provide the vision and purpose that guides agent actions.

    Challenges and considerations

    The agentic revolution raises important questions:

    • Trust: How do we verify agents are acting in our interest?
    • Control: What happens when agents make decisions we did not anticipate?
    • Accountability: Who is responsible when an agent makes an error?
    • Privacy: What data do agents need access to, and how is it protected?
    • Employment: What happens to jobs based on coordination and information processing?

    What can agentic AI do in 2025?

    Early versions of these agents already exist in limited forms. Organizations and individuals who understand this shift early will have significant advantages. Those who continue operating as if human coordination is the only option may find themselves struggling to compete with those augmented by agentic AI teams.

    Where do we go from here?

    The agentic revolution represents something humanity has never had before: the ability to multiply our capacity for complex action without proportionally increasing human effort. It is as if every person could have their own team of tireless, brilliant assistants who understand their goals and work together seamlessly to achieve them.

    This is not about replacing human intelligence but augmenting human capability. When we can delegate routine coordination and information processing to agents, we can focus on what humans do best: creating meaning, building relationships, making ethical judgments, and pursuing purposes that matter to us.

    The world we imagine—where building a house or running a business or navigating healthcare becomes as simple as stating your goal clearly—represents a fundamental shift in how complex tasks get accomplished. Whatever the timeline for this transformation, understanding how AI agents work and what they make possible has become essential for anyone trying to make sense of where our societies are heading.

    The concept is clear: AI systems that can understand goals, remember context, and coordinate actions to achieve complex outcomes. What we do with this capability remains an open question—one that will be answered not by the technology itself, but by how we choose to use it.
