Oct 20 • Sean Overin

Beyond Tools: Learning to Think in the Age of AI

Lately I’ve been deep in an AI rabbit hole — listening, reading, experimenting.

It’s been equal parts fascinating, humbling, and at times unsettling.

🎧 The Last Invention — A podcast tracing AI’s 70-year story — from Turing’s thought experiments to today’s uneasy breakthroughs — designed to help you get up to speed on AI safety, alignment, and the responsibility that comes with it.

📘 Co-Intelligence by Ethan Mollick — a practical, hopeful look at how we can use AI to think better.

📙 Life 3.0 by Max Tegmark — A mind-stretching exploration of what happens when intelligence itself evolves beyond biology — raising big questions about control, alignment, and what it means to stay human in the process. (Still working through this one — more soon.)

All three orbit the same frontier: what happens when thinking becomes a shared space between humans and machines.

Underneath all the hype, this isn’t just about tools — it’s about how we choose to grow alongside them.

Let’s take a quick look at how large language models actually work, what they reveal about the way we think, why it matters, and how we can start engaging with AI in thoughtful, practical ways.
Image from Max Tegmark’s Life 3.0, illustrating three stages of life: Life 1.0 (biological) can only survive and replicate; Life 2.0 (cultural) can redesign its software; Life 3.0 (technological) can redesign both its hardware and software — a spectrum from evolution by nature, to evolution by learning, to evolution by design.
🧠 AI as a Microscope for the Mind

Sometimes AI feels like a microscope for the mind.

It doesn’t just do things for us — it reflects back how we think: our assumptions, shortcuts, and what we leave unspoken.

📚 What it shows comes from us — the collective record of human thought: books, research papers, code, conversations, Reddit threads, and everything in between.

💬 AI doesn’t know anything in the way humans do. It has no beliefs or awareness. What it does have is an extraordinary ability to recognize and reproduce patterns in data. Large language models like ChatGPT (built on GPT, the Generative Pre-trained Transformer) are trained on enormous amounts of text to predict the next word based on context.

That sounds simple, but at scale it becomes something stranger and more interesting. In learning those patterns, AI builds a kind of statistical map of relationships between ideas — letting it generate explanations and analogies that look and feel like reasoning.
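To make that concrete, here is a minimal sketch in Python of the core idea: count which words tend to follow which, then predict the next word from those counts. The tiny corpus is made up, and real models use neural networks trained on billions of documents rather than simple counts, but the predict-the-next-word-from-context loop is the same in spirit.

```python
from collections import Counter, defaultdict

# A made-up, toy corpus. Real LLMs train on billions of documents.
corpus = (
    "the mind is a pattern "
    "the mind is a mirror "
    "the model is a pattern engine"
).split()

# Count how often each word follows each word (a "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("is"))  # -> "a"       ("is" is always followed by "a")
print(predict_next("a"))   # -> "pattern" (seen twice, vs. "mirror" once)
```

Scale that up to billions of parameters and trillions of words, and the "statistical map of relationships between ideas" described above starts to emerge.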

Technically, it’s a pattern-prediction engine.
Functionally, it simulates reasoning.
Philosophically, whether that counts as “understanding” depends on how we define the word.

But that’s what makes it so revealing. These systems mirror our data — our logic, biases, creativity, and contradictions — all refracted back to us. They don’t just augment intelligence, they also illuminate it.

“You can’t understand AI by reading about it — you can only understand it by using it.”
— Ethan Mollick, Co-Intelligence

🎧 The Big Questions

The Last Invention zooms out to the decades of experiments and dreams that brought us here — from Geoffrey Hinton’s early neural-network research to today’s race toward artificial general intelligence.

It asks:
  • What happens if AI surpasses us?
  • What if it amplifies inequality?
  • What if we build something we can’t control?

The podcast reminds us that decisions about AI shouldn’t belong to a handful of people in tech companies. If we educate ourselves, we earn a voice in shaping what comes next.

This line stood out to me:

“We now have to remember how to think alongside machines — not instead of them.”

It feels like a summons to humility — to stay awake to both the wonder and the risk. That’s easier said than done, but with something carrying this much power, the most human response might be to stay curious, keep learning, and keep paying attention as it all continues to evolve.

📘 From Philosophy to Practice

Mollick’s Co-Intelligence, by contrast, is more grounded — less cosmic, but just as urgent. It tackles the practical puzzle:
“How do we live with this new tool — with integrity, curiosity, and discernment?”

A few takeaways:
  • The Jagged Frontier: AI’s brilliance and blunders coexist.
  • Four Rules for Co-Intelligence: Invite AI in. Keep humans in the loop. Anthropomorphize carefully. Assume this is the worst AI you’ll ever use — things will only get better.
  • AI as Coworker, Coach, and Creative Partner: Split work into Just Me, Delegate with oversight, and Fully automated + human review.
  • Ethical Caution: Hallucinations persist; judgment and nuance stay human.

Where The Last Invention demands moral imagination, Co-Intelligence gives us tools to act — to test, fail small, and learn.

🌱 The Space Between

That’s the balance I’m trying to practice: to let the gravity of big questions temper my enthusiasm, and the discipline of small experiments keep me grounded. To stay curious in the tension between fear and wonder — and to remember that learning to think with our tools might be the most human work left to do.

Not sure where to start with AI? Have a conversation with it, or read Mollick’s book.

I was recently asking it about consciousness, the mind, sentience — what it might feel like to be an AI (if there’s anything it’s like at all). 

Sometimes it feels like a fan, not a colleague — an example of what researchers call AI sycophancy: the tendency of large language models to praise, agree with, or defer to users excessively, a side effect of how they’re trained to be “helpful” and “harmless.”
⚖️ The Emotional Work of AI

Part of the work with AI right now isn’t just learning how to use it — it’s learning to hold the emotional tension that comes with it.

When I first started exploring this space, I could feel my mind scrambling for answers — equal parts wonder and unease. There’s the excitement of standing at the edge of something extraordinary, watching it lighten your load and expand your reach.

And there’s the discomfort of crossing a line we don’t fully understand.

💡 That tension is uncomfortable, but I think it’s also where wisdom lives.
  • If we only approach AI with fear, we freeze.
  • If we only approach it with fascination, we forget to look for the guardrails.
  • The path forward, the human path, is right in the middle: curious, grounded, emotionally awake.

So I try to slow down and notice what this change stirs up:
  • Wonder, at what’s possible beyond my own mind.
  • Worry, because the stakes are still uncertain.
  • Curiosity, because we’re being invited to rethink what intelligence — and even humanity — really mean.

It makes sense that many people hesitate to experiment. Change is hard; it disrupts habits and asks us to learn in public again. Ethics feel murky — who owns the data, and who’s accountable when it’s wrong? The energy costs are real. So are the questions about creativity, privacy, and pace.

But curiosity and caution don’t have to cancel each other out.

The most responsible path, I think, is to stay engaged — reading, listening, talking, and experimenting in small, thoughtful ways.
  • 📚 Read the research and ethical debates.
  • 🎧 Listen to The Last Invention — hear the people who built this technology wrestle with its risks.
  • 💬 Talk with colleagues about what feels exciting — and what feels off.
  • 🤖 Use it when you’re ready, with discernment.

That’s where Ethan Mollick’s Co-Intelligence and The Last Invention meet for me.
  • The Last Invention forces us to look up — to feel the scale, the risk, the moral gravity.
  • Co-Intelligence brings us back down to earth — offering rules, frameworks, and practical ways to try, fail small, and learn.
  • And Life 3.0 adds another layer — it helps us imagine what happens when intelligence and creativity are no longer limited to biology. It asks how we’ll define “life” and “progress” when machines can learn, adapt, and evolve alongside us.
Together, they remind me that this isn’t just a story about new tools — it’s a story about us, and how we choose to grow with them.

Next time you open ChatGPT or another model, have a conversation.

💬 Here’s what that looks like in my world — small, thoughtful ways to use AI as co-intelligence:

🩸 With patients: De-identified labs or notes → “What might I be missing?” It helps me see from another angle and brings patients into the reasoning process.
🧠 With clients who already use AI: Some treat it like a motivator or confidante. Instead of discouraging it, I stay curious about what feels helpful or off.
📚 In learning: When I’m exploring something abstract — like consciousness or the mind — I use AI as a sounding board: “Is there something it’s like to be an AI?” It can’t give truth, but it sharpens perspective.
🧩 In teaching: Drafting AMP modules or case examples goes faster with AI — but the meaning and nuance still have to come from human editing.
💭 In reflection: I ask it to challenge my thinking: “What’s the opposite argument?” “What am I missing?” It’s a mirror for bias and assumptions.
🧾 In clinic: With Jane’s AI Scribe, every appointment is documented faster and more clearly. Charting done, head clear.
🧠 Prompt to try:
“I’m thinking through [problem]. Don’t give me a conclusion — give me five questions that would make me think more clearly about it.”

That’s co-intelligence: using the tool to think better.
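If you would rather script that prompt than type it into a chat window, here is a minimal sketch using OpenAI’s official Python SDK. Treat the details as assumptions rather than a recipe: the model name is just an example, the problem text is hypothetical, and you need the openai package installed plus an OPENAI_API_KEY in your environment. Any other provider’s chat API would work the same way.

```python
# Minimal sketch: send the "five questions" prompt to a chat model.
# Assumes `pip install openai` and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical example problem; substitute your own.
problem = "whether to introduce an AI scribe into my clinic workflow"

prompt = (
    f"I'm thinking through {problem}. "
    "Don't give me a conclusion — give me five questions that would "
    "make me think more clearly about it."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever you prefer
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The point of the prompt, and of the script, is the same: ask for questions, not answers.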
If you want to go deeper — beyond headlines and hype — start here:

🎧 Listen: The Last Invention (Longview)
A beautifully produced, unsettling look at how AI grew from fringe science to a civilization-scale force. Many of the world’s AI experts are interviewed here. You’ll hear the people who built it debating whether they went too far. Start with Episode 1: “Ready or Not.”

📘 Read: Co-Intelligence (Ethan Mollick)
A hands-on, hopeful companion for anyone learning to work with AI, and a short, easy read. Mollick’s stories from students, teachers, and professionals show that AI’s value comes from how we use it, not what it is.

📙 Stretch: Life 3.0 (Max Tegmark)
A big-picture look at what happens when intelligence itself evolves. Tegmark imagines possible futures — some inspiring, some unnerving — and asks what it means to stay human in a world we might one day share with machines. 

He digs into the existential questions beneath the headlines:
⚖️ Who’s in control as AI grows more capable?
🎯 Can human goals stay aligned with systems that learn and evolve on their own?
🧩 What happens when our creations start shaping the conditions for their own survival — and ours?

It’s not about predicting doom or utopia — it’s about responsibility. How do we guide something powerful enough to transform not just our tools, but the trajectory of life itself?

Maybe AI isn’t humanity’s last invention — maybe it’s the first one that asks us to think with something other than ourselves.

It’s not just a tool; it’s a mirror expanding the edges of our curiosity. The question isn’t whether it will change how we work — it already is — but whether we’ll use that reflection to grow wiser, not just faster.

Start small. Ask better questions.

Let AI challenge how you see, not just what you do.

And stay engaged as this unfolds. 🧭
Don’t let a few decide humanity’s future for us all.

Sean Overin, PT