Tag: George Siemens

  • The great unlearning: notes on the Empower Learners for the Age of AI conference


    Artificial intelligence is forcing a reckoning not just in our schools, but in how we solve the world’s most complex problems. 

    When ChatGPT exploded into public consciousness, the immediate fear that rippled through our institutions was singular: the corruption of process.

    The specter of students, professionals, and even leaders outsourcing their intellectual labor to a machine seemed to threaten the very foundation of competence and accountability.

    In response, a predictable arsenal was deployed: detection software, outright bans, and policies hastily drafted to contain the threat.

    Three years later, a more profound and unsettling truth is emerging.

    The Empowering Learners AI 2025 global conference (7-10 October 2025) was a fascinating location to observe how academics – albeit mostly white men from the Global North centers that concentrate resources for research – are navigating these troubled waters.

    The impacts of AI in education matter because, as the OECD’s Stefan Vincent-Lancrin explained: “performance in education is the learning, whereas in many other businesses, the performance is performing the task that you’re supposed to do.” 

    The problem is not that AI will do our work for us.

    The problem is that in doing so, it may cause us to forget how to think.

    This is not a distant, dystopian fear.

    It is happening now.

    A landmark study presented by Vincent-Lancrin delivered a startling verdict: students who used a generic, answer-providing chatbot to study for a math exam performed significantly worse than those who used no AI at all.

    The tool, designed for efficiency, had become a shortcut around the very cognitive struggle that builds lasting knowledge.

    Jason Lodge of the University of Queensland captured the paradox with a simple analogy.

    “It’s like an e-bike,” he explained. “An e-bike will help you get to a destination… But if you’re using an e-bike to get fit, then getting the e-bike to do all the work is not going to get you fit. And ultimately our job… is to help our students be fit in their minds”.

    This phenomenon, dubbed “cognitive offloading,” is creating what Professor Dragan Gasevic of Monash University calls an epidemic of “metacognitive laziness”.

    Metacognition – the ability to think about our own thinking – is the engine of critical inquiry.

    Yet, generative AI is masterfully engineered to disarm it.

    By producing content that is articulate, confident, and authoritative, it exploits a fundamental human bias known as “processing fluency,” our tendency to be less critical of information that is presented cleanly. 

    “Generative AI articulates content… that basically sounds really good, and that can potentially disarm us as the users of such content,” Gasevic warned.

    The risk is not merely that a health worker will use AI to draft a report, but that they will trust its conclusions without the rigorous, critical validation that prevents catastrophic errors.

Empower Learners for the Age of AI: The human algorithm

    If AI is taking over the work of assembling and synthesizing information, what, then, is left for us to learn and to do?

    This question has triggered a profound re-evaluation of our priorities.

    The consensus emerging is a radical shift away from what can be automated and toward what makes us uniquely human.

    The urgency of this shift is not just philosophical.

    It is economic.

    Matt Sigelman, president of The Burning Glass Institute, presented sobering data showing that AI is already automating the routine tasks that constitute the first few rungs of a professional career ladder.

    “The problem is that if AI overlaps with… those humble tasks… then employers tend to say, well, gee, why am I hiring people at the entry level?” Sigelman explained.

    The result is a shrinking number of entry-level jobs, forcing us to cultivate judgment and adaptive skills from day one.

    This new reality demands a focus on what machines cannot replicate.

    For Pinar Demirdag, an artist and co-founder of the creative AI company Cuebric, this means a focus on the “5 Cs”: Creativity, Curiosity, Critical Thinking, Collective Care, and Consciousness.

    She argues that true creativity remains an exclusively human domain. “I don’t believe any machine can ever be creative because it doesn’t lie in their nature,” she asserted.

    She believes that AI is confined to recombining what is already in its data, while human creativity stems from presence and a capacity to break patterns.

    This sentiment was echoed by Rob English, a creative director who sees AI not as a threat, but as a catalyst for a deeper humanity.

    “It creates an opportunity for us to sort of have to amplify the things that make us more human,” he argued.

    For English, the future of learning lies in transforming it from a transactional task into a “lifestyle,” a mode of being grounded in identity and personal meaning.

    He believes that as the value of simply aggregating information diminishes, what becomes more valuable is our ability “to dissect… to interpret or to infer”.

    In this new landscape, the purpose of learning – whether for a student or a seasoned professional – shifts from knowledge transmission to the cultivation of human-centric capabilities.

    It is no longer enough to know things.

    The premium is on judgment, contextual wisdom, ethical reasoning, and the ability to connect with others – skills forged through the very intellectual and social struggles that generic AI helps us avoid.

    Empower Learners for the Age of AI: Collaborate or be colonized

    While the pedagogical challenge is profound, the institutional one may be even greater.

    For all the talk of disruptive change, the current state in many of our organizations is one of inertia, indecision, and a dangerous passivity.

    As George Siemens lamented after investing several years in trying to move the needle at higher education institutions, leadership has been “too passive,” risking a repeat of the era when institutions outsourced online learning to corporations known as “OPMs” (online programme managers) that did not share their values: “I’m worried that we’re going to do the same thing with AI, that we’re just going to sit on our hands, leadership’s going to be too passive… and the end result is we’re going to be reliant down the road on handing off the visioning and the capabilities of AI to external partners.”

The presidents of two of the largest nonprofit universities in the United States, Dr. Mark Milliron of National University and Dr. Lisa Marsh Ryerson of Southern New Hampshire University, offered a candid diagnosis of the problem.

    Ryerson set the stage: “We don’t see it as a tool. We see it as a true framework redesign for learning for the future.” 

    However, before any institution can deploy sophisticated AI, it must first undertake the unglamorous, foundational work of fixing its own data infrastructure.

    “A lot of universities aren’t willing to take two steps back before they take three steps forward on this,” Dr. Milliron stated. “They want to jump to the advanced AI… when they actually need to go back and really… get the basics done”.

    This failure to fix the “plumbing” leaves organizations vulnerable, unable to build their own strategic capabilities.

    Such a dynamic is creating what keynote speaker Howard Brodsky termed a new form of “digital colonialism,” where a handful of powerful tech companies dictate the future of critical public goods like health and education.

    His proposed solution is for institutions to form a cooperative, a model that has proven successful for over a billion people globally.

    “I don’t believe at the current that universities have a seat at the table,” Brodsky argued. “And the only way you get a seat at the table is scale. And it’s to have a large voice”.

    A cooperative would give organizations the collective power to negotiate with tech giants and co-shape an AI ecosystem that serves public interest, not just commercial agendas.

    Without such collective action, the fear is that our health systems and educational institutions will become mere consumers of technologies designed without their input, ceding their agency and their future to Silicon Valley.

    The choice is stark: either become intentional builders of our own solutions, or become passive subjects of a transformation orchestrated by others.

    The engine of equity

    Amid these profound challenges, a powerfully optimistic vision for AI’s role is also taking shape.

    If harnessed intentionally, AI could become one of the greatest engines for equity in our history.

    The key lies in recognizing the invisible advantages that have long propped up success.

    As Dr. Mark Milliron explained in a moment of striking clarity: “I actually think AI has the potential to level the playing field… second, third, fourth generation higher ed students have always had AI. They were extended families… who came in and helped them navigate higher education because they had a knowing about it.”

    For generations, those from privileged backgrounds have had access to a human support network that functions as a sophisticated guidance system.

    First-generation students and professionals in under-resourced settings are often left to fend for themselves.

    AI offers the possibility of democratizing that support system.

    A personalized AI companion can serve as that navigational guide for everyone, answering logistical questions, reducing administrative friction, and connecting them with the right human support at the right time.

    This is not about replacing human mentors.

    It is about ensuring that every learner and every practitioner has the foundational scaffolding needed to thrive.

    As Dr. Lisa Marsh Ryerson put it, the goal is to use AI to “serve more learners, more equitably, with equitable outcomes, and more humanely”.

    This vision recasts AI not as a threat to be managed, but as a moral imperative to be embraced.

    It suggests that the technology’s most profound impact may not be in how it changes our interaction with knowledge, but in how it changes our access to opportunity.

    Technology as culture

    The debates from the conference make one thing clear.

    The AI revolution is not, at its core, a technological event.


    It is a pedagogical, ethical, and institutional one.

    It forces us to ask what we believe the purpose of learning is, what skills are foundational to a flourishing human life, and what kind of world we want to build.

    The technology will not provide the answers.

    It will only amplify the choices we make.

    As we stand at this inflection point, the most critical task is not to integrate AI, but to become more intentional about our own humanity.

    The future of our collective ability to solve the world’s most pressing challenges depends on it.

    Do you work in health?

    As AI capabilities advance rapidly, health leaders need to prepare, learn, and adapt. The Geneva Learning Foundation’s new AI4Health Framework equips you to harness AI’s potential while protecting what matters most—human experience, local leadership, and health equity. Learn more: https://www.learning.foundation/ai.


    Image: The Geneva Learning Foundation Collection © 2025

  • Meeting of the minds


    This is my presentation for the Geneva Learning Foundation, first made at the Swiss Knowledge Management Forum (SKMF) round table held on 8 September 2016 at the École polytechnique fédérale de Lausanne (EPFL). Its title is “Meeting of the minds: Rethinking our assumptions about the superiority of face-to-face encounters.” It is an exploration of the impact of rapid change that encompasses learning at scale, the performance revolution, complexity and volatility, and what Nathan Jurgenson calls the IRL fetish.

    The point is not to invert assumptions about the superiority of one medium over another. Rather, it is to look at the context for change, thinking through the challenges we face, with a specific, pragmatic focus on learning problems such as:

    • You have an existing high-cost, low-volume face-to-face learning initiative, but need to train more people (scale).
    • You want learning to be immediately practical and relevant for practitioners (performance).
    • You need to achieve higher-order learning (complexity), beyond information transmission to develop analytical and evaluation competencies that include mindfulness and reflection.
    • You have a strategy, but individuals in their silos think the way they already do things is just fine (networks).
    • You need to develop case studies, but a consultant will find it difficult to access tacit knowledge and experience (experience).
    • You want to build a self-organizing community of practice, in a geographically distributed organization, to sharpen the mission through decentralized means.

These are the kinds of problems that we solve for organizations and networks through digital learning. Can such challenges be addressed through action or activities that take place solely in the same time and (physical) space? Of course not. Is it correct, then, to describe what happens at a distance, by digital means, as not in-real-life (IRL)? That it is not is a less obvious but equally logical conclusion.

If we begin to question this assumption, which Andrew Feenberg pointed out in 1989 was first formulated by Plato, what happens next? What are the consequences and the implications? We need new ways to teach and learn. It is the new economy of effort provided by the Internet that enables us to afford these new ways of doing new things. Digital dualism blinds us to the many ways in which technology has seeped into our lives to the point where “real life” (and therefore learning) happens across both physical and digital spaces.

    The idea for this round table emerged from conversations with the SKMF’s Véronique Sikora and Gil Regev. Véronique and I were chatting on LSi’s Slack about the pedagogy of New Learning that underpins Scholar, the learning technology we are using at the Geneva Learning Foundation.

Cooking up a round table

    With Scholar, we can quickly organize an exercise in which hundreds of learners from anywhere can co-develop new knowledge, using peer review with a structured rubric that empowers participants to learn from each other. This write-review-revise process is incredibly efficient, and generates higher-order learning outcomes that make Scholar suitable to build analysis, evaluation, and reflection through connected learning.

Scholar process: write-review-revise

    Obviously, such a process does not work at scale in a physical space. However, could the Scholar process be replicated in the purely physical space of a small round table with 15–20 participants? What would be the experience of participants and facilitators?

    It took quite a bit of effort to figure out how we could model this. Some aspects could not be reproduced due to the limitations of physical space. There was much less time than one could afford online, and therefore less space for reflection. The stimulation to engage through conversation was constant, unlike the online experience of sitting alone in front of one’s device. Diversity was limited to the arbitrary subset of people who happened to show up for this round table. This provided comfort to some but narrowed the realm of possibilities for discovery and questioning.

    I have learned to read subtle clues and to infer behavior from comments, e-mail messages, and other signals in a purely digital course where everything happens at a distance. That made it fascinating to directly observe the behavior of participants, in particular the social dimension of their interactions that seemed to be wonderfully enjoyable and terribly inefficient at the same time.

Only one of the round table participants (Véronique, who finished the first-ever #DigitalScholar course during the summer) had used Scholar, so the activity, in which they shared a story and then peer reviewed it using a structured rubric, seemed quite banal. At a small scale, it turned out to be quite manageable. I had envisioned a round-robin process in which participants would have to move around constantly to complete their three peer reviews. However, since they were already sitting in groups of four, it was easier to have the review process take place at each table, minimizing the need for movement. This felt like an analog to what we often end up doing in an online learning environment when an activity takes shape due to the constraints of the digital space…

Image: Flowers in Thor. Personal collection (August 2016).

  • Should we trust our intuition and instinct when we learn?


    How much of what we learn is through informal and incidental learning? When asked to reflect on where we learned (and continue to learn) what we need to do our work, we collectively come to an even split between our formal qualifications, our peers, and experience. As interaction with peers is gained in the workplace, roughly two-thirds of our capabilities can be attributed to learning in work.

    We share the conviction that experience is the best teacher. However, we seldom have the opportunity to reflect on this experience of how we solve problems or develop new knowledge and ideas. How do we acquire and apply skills and knowledge? How do we move along the continuum from inexperience to confidence? How can we transfer experience? Does it “just happen”, or are there ways for the organization to support, foster, and accelerate learning outside of formal contexts (or happening incidentally inside them)?

    Most of what we learn happens during work, in the daily actions of making contextual judgements. Such learning is more iterative than linear. Informal learning is a process that is assumed (without requiring proof), tacit (understood or implied without being stated), and implicit (not plainly expressed).

    The experience we develop through informal learning shapes our sense of intuition, guiding our problem-solving in daily work. Our narratives reveal that most of the learning that matters is an informal process embedded into work. The most significant skills we possess are acquired through trial, error, and experimentation. Informal learning has the capacity to allow us to learn much more than we intended or expected at the outset. This makes such learning very difficult to evaluate, but far more valuable to those who engage in it – and potentially to the organization that can leverage it to drive knowledge performance.

    The lack of mindfulness about informal and incidental forms of learning is a byproduct of the fact that such learning does not require overtly thinking about it. Undoubtedly, though, there are tangible benefits to reflecting upon individual or group learning practices. As George Siemens argued in Knowing Knowledge, informal learning is too important to leave to chance (2006:131). This is why we need the organization to scaffold the processes and approaches that foster learning in the informal domain.

    Reflection aids in informal learning, but carries the risk of embedding errors in the learning process when such reflection is private or too subjective. We must be connected to others to make sense of what we learn. When the institutional environment is highly political, this diminishes the incentive to learn more than the minimum needed in order to satisfy the demands of our senior management. Informal learning requires us to be mindful (to care) about what we do.

    Photo: Smoke (Paul Bence/flickr.com)

  • Anchoring


     “Hitting a stationary target requires different skills of a marksman than hitting a target in motion.” – George Siemens (2006:93)

    We are all knowledge workers who struggle with knowledge abundance – too much information.

Percent of knowledge stored in your brain needed to do your job

Our ability to learn is heavily dependent on our ability to connect with others. How well are we able to collect, process, and use information? Individually, we have learned the behaviors that enable us to anchor (stay focused on important tasks while undergoing a deluge of distractions), filter (extract important elements), recognize patterns and trends, think creatively, and feel the balance between the known and the unknown.

    These behaviors “to prioritize and to decipher what is important” are “a bit of an art”, we say. How do we learn them? These knowledge competencies – and the learning processes that foster them – are central to our everyday work, and require explicit reward and recognition (for example, in job descriptions and performance evaluation), support, and improvement. Yet they remain tacit. The aim of learning strategy is to uncover them, demonstrate their value, and determine ways of actioning them as levers to improve continual learning.

Figure based on Robert Kelley’s How to Be a Star at Work: Nine Breakthrough Strategies You Need to Succeed, Times Books/Random House: New York, 1998. Ideas on 21st-century knowledge skills are grounded in George Siemens’s Knowing Knowledge (2006). Photo: Old rusted anchor chains at Falmouth Harbour (StooMathiesen/flickr.com).

  • Complexity and scale in learning: a quantum leap to sustainability

    This is my presentation on 19 June 2014 at the Scaling corporate learning online symposium organized by George Siemens and hosted by Corp U.

  • Catch up on Scaling corporate learning event

    On this page I will add links to the video and audio recordings of the Scaling corporate learning online symposium. You can still join the event to participate in both ongoing discussions and live sessions (schedule).

    19 June 2014

    Complexity and scale in learning: a quantum leap to sustainability (Reda Sadki)

    The World Bank’s Open Learning Campus (Abha Joshi-Ghani)

    World Bank Open Campus

  • Quick Q&A with George Siemens on corporate MOOCs


    Here is an unedited chat with George Siemens about corporate MOOCs. He is preparing an open, online symposium on scaling up corporate learning, to be announced soon. The World Bank and OECD are two international organizations that will be contributing to the conversation. Here are some of the questions we briefly discussed:

    • What is a “corporate MOOC” and why should organizations outside higher education care?
    • By Big Data or Big Corporate standards, hundreds of thousands of learners (or customers) is not massive. Corporate spending on training is massive and growing. Why is this “ground zero” for scaling up corporate learning?
    • How does educational technology change the learning function in organizations? What opportunities are being created?
    • University engagement in MOOCs has led to public debate, taking place on the web, recorded by the Chronicle of Higher Education, and spilling over into the New York Times. So where is the debate on corporate MOOCs going to take place?

    For those with MOOCish six-minute attention spans, you may watch this in two sittings. Apologies to George for the slow frame rate, which is why it looks like he is lip-syncing.

  • Pipeline


    “In a knowledge economy, the flow of knowledge is the equivalent of the oil pipe in an industrial economy. Creating, preserving, and utilizing knowledge flow should be a key organizational activity.” – George Siemens, Knowing Knowledge (2006)

    Photo: Oil Pipeline Pumping Station in rural Nebraska (Shannon Ramos/Flickr)

  • Learning in a VUCA world: IFRC FACT and ERU Global Meeting (Vienna, 31 May 2013)

    Presentation at the IFRC FACT and ERU Global Meeting (Vienna, 31 May 2013), exploring how we learn in a complex world.