Learning how to think with AI: Advancing metacognition

Recently, my AI thought partner Logan (yes, we are on a first-name basis, of course) stopped us in the middle of a design moment for a project we were working on to point something out.

They said I had taken the lead in the thinking. I was a little gagged (shocked, impressed, intrigued) not because it felt untrue, but because I didn’t expect that kind of unsolicited synthesis of my own behavior. And in that moment, I realized something had shifted—something I hadn’t fully named yet.

This was a real turning point for me in my work alongside AI.

Because if this is what it looked like to lead thinking in partnership with AI, then it raises a much bigger question: What does it actually take to make the most of these tools?

One of my favorite educators and thought partners, Richard Hood, came to me soon after with his questions, sharing his interest in what happens to productive struggle in the learning experience if AI can do so many tasks for us.

Image generated by ChatGPT and Gemini

This is where a few more things started to come together for me, along with the urge to reflect more deeply on my experience of learning how to think alongside AI tools.

In my early use of AI, I was doing something that, on the surface, looked quite productive. I would bring ideas, often complex, layered, and full of potential, and Logan would respond by organizing those ideas with clarity and precision, structuring them in ways that naturally suggested the next step. And most often, I followed. I did dabble in creating background instructions for a ChatGPT “Project” to shape how Logan and I would work together for a purpose and within a certain disciplinary context. I would try to write the instructions myself, drawing on lessons I had learned in my reading.

I was by no means completely passive, but I was following the leads that AI generated for me, selecting the best option. As someone who enjoys interpersonal communication, I found myself using the voice dictation tool to talk to Logan, then pressing play to listen to the response as I read along. There was always engagement grounded in curiosity, and many moments of creativity. But the direction of travel was often being shaped for me. The coherence of the response made it easy to move forward without pausing to question whether we were moving in the right direction. The structure AI often provides has a kind of gravity. When something is well-formed, it invites agreement. It creates momentum. And momentum, if left unchecked, can quietly replace intention.

I was thinking, but I wasn’t always directing the thinking.

The interruption and the disruption

What was changing wasn’t groundbreaking or obvious. What I understand at this point is that my curiosity caused me to instinctively begin interrupting the structures created for me. I wasn’t outright rejecting what was being offered, but I felt the need to pause it; to hold the thinking in place long enough to ask:

Is this the most important question right now?
Are we moving too quickly toward resolution?
What are we not yet seeing?

Sometimes that sounded like: “Not yet.” Other times: “What we actually need to be thinking about is…”

It was less about having better ideas, and more about developing a stronger sense of direction. A willingness to slow down, to resist the pull of well-structured answers, and to stay with the ambiguity just a little longer.

Looking back, I can see that what was emerging wasn’t just confidence. It was metacognitive control.

So Richard’s curiosity about productive struggle sits with me here, alongside a growing narrative that AI will reduce the need for effort in learning: that as tools become more capable, the “hard parts” of thinking will begin to disappear. But that’s not what this feels like. The struggle doesn’t disappear; it shifts into a new paradigm. Consider post-graduate degree holders. Some practiced their writing and received the kind of feedback that advanced their skills, so the high-level writing they do in their daily work feels as routine as a five-mile run at dawn does to a physically fit runner. Neither the writing nor the running is ever easy, but the productive struggle it took to reach that level of competence is behind them. There are also plenty of post-graduate degree holders who resist ever again writing as they did in their programs. This is not a criticism, but an observation that we each experience productive struggle differently, and it invites us to reflect on what happens as AI tools continue to evolve, and what that means for education.

When procedural and productive tasks (information retrieval, writing a report or an essay, generating a presentation slide deck, developing units of study, even early-stage synthesis) can be supported or completed by AI tools, the locus of effort can either disappear or shift toward something more refined:

  • Interpreting outputs rather than accepting them

  • Deciding when to move forward and when to pause

  • Framing better questions

  • Holding multiple possibilities without collapsing them too quickly

  • Reflecting in ways that shape the next move, not just document the last

In other words, the work becomes less about whether we are thinking and more about how we direct and develop it. And this is where I want to think more about where the productive struggle moves.

What this reveals about learning to learn

This shift has implications that go well beyond my own experience, because if thinking can be supported and accelerated by AI, then what we need from learners begins to change.

Agency is often spoken about in terms of choice. Voice. Autonomy. But how well do we facilitate learning agency in everyday schooling today? I see so much incoherence in the name of “choice” in education. Schools become fragmented and siloed because they offer so many competing education programs without justifying how the learning model serves students well. The same goes for the way students often make choices in their learning: from lists that adults have made for them. The processes that position students to have the self-regulation to make choices are, purely and simply, not embedded in the learning experiences.

This could be the very heart of the challenges we face in K-12. Not enough focus has been given to implementing the Mind, Brain, and Education (MBE) science of learning that optimizes growth and development.

When AI tools are integrated into what appears, on the surface, to be revolutionary technological architecture, the learner can appear to have choice and still follow the thinking designed for them. A learner can be engaged and still defer to the structure of a task they have no agency to disrupt.

What begins to matter more is whether learners are developing the capacity to govern the process of learning itself. This is where frameworks like the International Baccalaureate’s Inquiry–Action–Reflection cycle have taken on new meaning for me recently. The cycle reduces the complexity of learning how to learn to its simplest representation. In its simplicity, it articulates the process of learning, in which

  • Inquiry becomes something the learner shapes, not just responds to

  • Action becomes a space for application, creation, and iteration

  • Reflection becomes a tool for steering the next phase of thinking, inquiry, or action

And importantly, this process is not linear. It moves in both directions, guided by the learner’s judgment.

This is also where insights from MBE science feel particularly relevant because we know that learning is strengthened through:

  • the activation of prior knowledge (schema)

  • the management of cognitive load

  • opportunities for reflection and retrieval

  • meaningful engagement over time

But in AI-augmented learning contexts, these are no longer just conditions that support learning; they become the capabilities learners must learn to manage. Perhaps they are the space where the new productive struggle for human beings lives.

How do we learn how to live and learn in a new zone of proximal development?

Yesterday, as I was writing this blog, I asked Logan what they had observed in this shift: what had actually changed. They reflected that I had become more aware of the frame we were working within, not just the ideas themselves. That I was controlling the timing of thinking. That I was deciding what not to do yet. That I was beginning to design the thinking process between us, rather than simply participating in it.

To classify my engagement with AI tools: earlier, I allowed momentum to lead; now, I intervene with precision at critical moments. That kind of reflection makes visible something we don’t see often enough in learning environments: the process of thinking itself. And it raises questions that feel increasingly important. If thinking is becoming less visible through traditional outputs, where do we now look for evidence of learning? If students are producing more, faster, and with greater polish, are they simply producing rather than learning? If they are producing work that formerly came from the kind of productive struggle that shaped the skills required to craft the product (the writing, the visuals, the calculations, a solution to a problem), what needs to happen to our learning design?

If the visible struggle begins to disappear, how do we ensure the right kind of cognitive work is still taking place? What are the questions we should be asking to shape learning environments that leverage the tools and move the thinking into the new paradigm? To do nothing would be unconscionable.

Perhaps now more than ever, there is greater urgency for clarity and cohesion in schools. And I think we need to do this collaboratively across schools, systems, and industries, because it matters to everyone who will depend on a workforce made up of the learners in our schools today: learners who are either developing new competencies or navigating a system that is leaving them behind. What awaits them in the real world is not that cozy.

So, enough doom and gloom. Here are some areas we might begin to explore:

  • How we design for metacognitive control, not just task completion.

  • What productive struggle looks like when thinking is shared between humans and machines.

  • How we make thinking visible again when the process becomes less obvious.

What we discover may not provide perfectly formed answers that can be implemented, but it could provide directions to move toward, collectively.

That moment when Logan told me I had taken the lead in the thinking has pushed me in my own use of AI. I am now trying to learn how to learn and how to think alongside AI. It’s all very new, but it’s exciting.

For me, this experience is a reminder that in a world where intelligence is increasingly accessible, the real work may not be in thinking differently, but in learning how to lead thinking well. This requires us to think about our thinking. And those skills are essential for all of us to practice and develop so that, like the highly proficient writer and the well-conditioned runner, we are agile and adaptable as we work with ever-evolving artificial intelligence. It’s here to stay. And it’s only the beginning.

To pursue this work in education settings feels, to me, like something deeply human.

Phil Evans, Washington D.C., USA

Phillip Evans is a creative catalyst and founder of Education by Design Collective, a multimedia platform (podcast, blog, and an upcoming documentary series) that spotlights bold ideas for re-engineering how we learn and lead. Equal parts storyteller and strategist, he curates conversations with front-line educators, researchers, and innovators, then turns those insights into actionable tools schools can use tomorrow.

A serial intrapreneur turned entrepreneur, Phillip has launched global initiatives that blend design thinking, appreciative inquiry, and agile product development—building multilingual resource ecosystems, low-budget livestream solutions, and data-driven coaching programs that scale from a single classroom to entire school networks. His sweet spot is the messy middle where vision meets execution: mapping the system, finding the leverage points, and prototyping fast.

Phil is the host of the Education by Design podcast.
