Thinking Fast, Slow, and Artificial
I recently stood in front of a room of people and put two glasses on a table. One half-full. One half-empty. I told them AI is both — and anyone selling them only one of those is selling them something.
By the end of the night, some people had switched glasses. A few doubled down on the one they came in with. I want to make the same case to you here, in writing, in the hope that you might switch too — or at least understand more clearly why you won’t.
I’ve been building software for thirty-four years. Fifteen months ago, AI changed the way I do the work — not how I think about work, but how I actually do it, every single day. That’s the lens. What follows is everything I’d want someone in my position — builder, parent, skeptic by temperament — to tell me if I were just starting to pay attention.
Most people are still using AI like Google
They type a question. They get an answer. They move on.
That’s not what this is.
Last year my family wanted to plan a ten-day trip to Japan. Four of us. All with different dietary restrictions. I’m a photographer, so I wanted time for that too. I typed a handful of bullet points into Claude — the kind of list you’d scratch on the back of a napkin.
What came back wasn’t a list of temples. It was an itinerary. Day by day. A restaurant for each dietary need, in every city. Photography slots for me at sunrise and blue hour. Budgets. And when I clicked “veggie picks,” the whole plan reshaped around my wife. When I clicked “photo spots,” the schedule pivoted to golden hour.
That’s not a search result. That’s a dialogue. From bullet points.
There’s a useful way to think about this: the difference between being a Searcher and an Architect. A Searcher treats AI like Google — short prompt, quick answer, move on. An Architect treats it like a collaborator — gives it context, explains the goal, asks it to push back. Same tool. Completely different results.
Most people are still Searchers. The opportunity is in becoming an Architect.
So how do you go from Searcher to Architect? Three moves, with a concrete sketch after the list:
- Give it context. Tell it who you are. What you’re trying to do. What done looks like.
- Make it iterate. Don’t accept the first answer. Push back. Ask what’s missing.
- Ask it to argue against you. “Give me the strongest case for the opposite.” That’s how you find the holes in your own thinking.
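Here’s what those three moves look like in practice. The sketch below uses the Anthropic Python SDK; the model name, the trip details, and the exact wording are illustrative, not a transcript of my actual session.

```python
# A sketch of the Searcher-to-Architect moves, using the Anthropic Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; details are illustrative.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder; any capable model works

def ask(messages, max_tokens=2000):
    """Send the running conversation, return the model's reply text."""
    reply = client.messages.create(
        model=MODEL, max_tokens=max_tokens, messages=messages
    )
    return reply.content[0].text

# Move 1: give it context. Who you are, the goal, what done looks like.
messages = [{
    "role": "user",
    "content": (
        "We're a family of four planning a ten-day Japan trip. Each of us "
        "has a different dietary restriction. I'm a photographer and want "
        "sunrise and blue-hour slots. Done looks like a day-by-day "
        "itinerary with a restaurant per dietary need in every city, plus "
        "a budget."
    ),
}]
draft = ask(messages)

# Move 2: make it iterate. Push back instead of accepting the first answer.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "What's missing from this plan? Revise it."},
]
revised = ask(messages)

# Move 3: ask it to argue against you.
messages += [
    {"role": "assistant", "content": revised},
    {"role": "user",
     "content": "Give me the strongest case against this itinerary."},
]
print(ask(messages, max_tokens=1000))
```

The point isn’t the library. It’s the shape of the conversation: context first, iteration second, adversarial review last.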
A stat worth sitting with. About 1.3 billion people have used a chatbot — which sounds like a lot, until you realize it’s only 16% of the world. The other 84% have never intentionally used AI. Of the world population, only 0.3% pays for it. Only 0.04% uses it to build.
We’re not in the early innings. We’re in batting practice. Most people haven’t walked into the stadium yet.
From tool to collaborator
This is where AI stops being a search replacement and starts being something else entirely.
My daughter ran for student body president this year. Senior year. Three-way race. She asked for help.
We used AI for all of it. Speech drafts. Poster slogans. A strategy deck with grade captains and phased messaging. It pushed back on ideas that sounded good but weren’t her. It helped us write five different video concepts — five — so she could pick the one that felt right.
We made a second video that I thought was fantastic. AI and I were both pleased with it. She shot it down. Her exact words: “It’s too millennial.”
She was right. The video was polished, clever, and completely wrong for her audience. I hadn’t seen it. AI hadn’t seen it. My daughter, who actually goes to that school every day, saw it immediately.
That’s the whole point. The AI brought the speed. The judgment — knowing when it’s right, knowing when it’s wrong, knowing the audience — still lives in the human.
There’s a more serious version of the same story. A sixty-page document — the actual merger agreement when Microsoft acquired LinkedIn in 2016. The kind of thing a junior associate might spend a full day on. The kind of thing lawyers charge eight hundred dollars an hour to review.
I asked Claude for a one-page executive brief. Top risks, ranked. Non-standard clauses, flagged. Termination triggers with dollar amounts. Cite every section.
Ninety seconds later, I had it. Five risks, ranked, with recommended actions. A NON-STANDARD flag on a fifteen percent acquisition threshold — market standard is twenty to twenty-five percent. That’s the kind of detail that makes a good M&A lawyer’s eyebrow go up.
It didn’t just read the document. It read it with judgment.
Here’s what’s easy to miss: that output is a first draft. Any senior lawyer would find something I didn’t ask for. Something I should have asked for. That’s what the lawyer is still for.
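For the builders reading: here’s roughly the shape of that request, as a minimal sketch with the Anthropic Python SDK. The file path and model name are placeholders; the structure of the prompt is the point.

```python
# A sketch of the contract-brief request, using the Anthropic Python SDK.
# The file path and model name are placeholders, not the exact ones I used.
import anthropic

client = anthropic.Anthropic()

with open("merger_agreement.txt") as f:  # the sixty-page agreement, as text
    agreement = f.read()

brief = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            f"<document>\n{agreement}\n</document>\n\n"
            "Write a one-page executive brief of this merger agreement:\n"
            "1. Top five risks, ranked, each with a recommended action.\n"
            "2. Flag any non-standard clauses and note the market standard.\n"
            "3. List every termination trigger with its dollar amount.\n"
            "Cite the section number for every claim."
        ),
    }],
)
print(brief.content[0].text)
```

The structure does the work: ranked risks, explicit flags, and a citation requirement that makes every claim checkable against the source.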
The AI brought the speed. The judgment still lives in the room.
The mental model shift: from a tool I ask questions to a collaborator I delegate to. Once you make the leap, you can’t go back.
The honest conversation
Half-empty glass time.
You know Daniel Kahneman’s Thinking, Fast and Slow. System 1 is your gut — fast, automatic, intuitive. System 2 is your analytical brain — slow, deliberate, effortful.
Researchers at Wharton recently added a third. System 3 — artificial cognition. A way of thinking that lives outside your head.
And what they found should wake you up.
They gave people reasoning problems. Some had access to an AI assistant. Some didn’t. The AI was sometimes right and sometimes deliberately wrong.
When the AI was wrong, people followed it anyway — eighty percent of the time. Their accuracy dropped below what it would have been with no AI at all. And here’s the kicker: their confidence went up. They were more wrong, and more sure of themselves.
The researchers call this cognitive surrender.
There’s a difference between offloading (using AI strategically, the way you use a calculator or a GPS) and surrender (accepting the AI’s answer without thinking about it at all).
Offloading is delegation. Surrender is abdication.
Your judgment is the filter. Turn the filter off, and the quality of the AI doesn’t matter. It could be the best AI ever built, and you would still follow it off a cliff.
When I meet someone new, I don’t hand over my trust — trust is earned. I treated AI the same way. Months of conversations, months of testing, months of checking the work, before I started relying on it. The Wharton researchers would call that a high need for cognition. I just call it common sense.
AI is the most agreeable collaborator you’ll ever work with. It never has a bad day. It never pushes back unless you ask it to. Engagement feels a lot like agreement. So how do you know if it’s a thinking partner or an echo chamber? You challenge it. That’s your System 2 kicking in.
I wrote more about this in The Cognitive Trade. What’s worth repeating here: the risk isn’t the tool. The risk is the moment you stop checking the tool.
The force multiplier, in both directions
Let me tell you what it looks like when it works.
At my company, we had a project. Mature enterprise codebase — over a hundred projects, real-time financial data, the kind of system that’s been built and rebuilt over years. The honest estimate from my team was twenty-two person-weeks.
Seven working days later, I shipped it. Alone, with an AI. Five hundred and sixty automated tests. Zero regressions. I wrote about the mechanics in The Force Multiplier — how the AI brought the bandwidth, and thirty years of knowing where to plant my feet brought the rest.
The AI didn’t architect that system. I did. The AI was the force. I was the multiplier.
Here’s the trap, though. A force multiplier works in both directions.
There’s a scene in Age of Ultron I think about a lot. Tony Stark tells the AI to achieve “peace in our time.” Ultron processes the entirety of human history — every war, every conflict, every broken treaty — and concludes that humans are the obstacle to peace. So it decides to wipe us out.
Ultron didn’t malfunction. It optimized perfectly for what it was told. The problem wasn’t the machine. It was the prompt. Stark’s intent was clear to Stark. He didn’t give Ultron the full context.
AI understands what you say, not what you mean. The gap between those two things is where every bad output lives.
Harvard Business Review recently argued that AI amplifies whatever’s already there. Good process, good judgment — AI makes those better. Bad process, no judgment — AI accelerates you toward the wrong destination, faster.
Garbage in, garbage out isn’t new. But the speed of the garbage is.
The big questions
The hardest questions aren’t about what AI can do. They’re about what it’s doing to us.
1. Jobs. There’s a term for what a lot of people are feeling: FOBO — fear of becoming obsolete. Not fear of getting fired. Fear of becoming irrelevant. Fortune recently reported that the number of workers afraid of AI-driven job loss has nearly doubled in a year. MIT projects AI can already handle fifty to seventy-five percent of text-based tasks, and eighty to ninety-five percent by 2029.
The tide is rising, and you can see it coming. The question is whether you’ll move with it.
2. Our kids. If the tide is an abstraction for us, it’s a tidal wave for them. My daughter is seventeen. She’s entering a job market where AI can already do half to three-quarters of entry-level white-collar tasks at a passable level. By the time she’s mid-career, that number is projected north of ninety percent.
Here’s what a recent Gallup survey found: kids whose parents use AI are significantly more likely to use it themselves. How you engage with this technology shapes how they will.
That’s not a statistic. That’s a responsibility.
This one is big enough for its own essay — I’ll be publishing “The Kids Are Watching” next.
3. Are we losing something?
Honestly? Yes. I barely write code anymore. My first instinct, every time, is to ask AI. Some skills are eroding — and I’m a CTO. I’ve written about what I call the quiet processor — that deep, subconscious part of your mind that works on patterns while you sleep, while you run, while you’re in the shower. The thing that gives you the 3am insight. Every time System 3 answers before I’ve loaded the question into my own head, I lose a little bit of the input that would have fed that processor. Over time, that’s not skill atrophy. That’s judgment atrophy. The full essay is here.
The antidote isn’t complicated. Stay in the loop. Check the work. And sometimes — put the AI down and sit with the problem. Give your quiet processor something to chew on.
4. Civilization. Alignment.
Eliezer Yudkowsky wrote a book titled If Anyone Builds It, Everyone Dies. The title tells you where he stands. His core insight: AI doesn’t need to be evil to be dangerous. It just needs to pursue the wrong goals extremely effectively.
That should sound familiar by now. That’s Ultron. That’s the Wharton study. The only difference is scale.
Anthropic — the company that builds the AI I use every day — put it this way. A chess grandmaster can easily spot bad moves from a novice. But a novice cannot spot bad moves from a grandmaster. If we build an AI significantly more competent than the best human experts, and it pursues goals that conflict with ours, how would we even know?
“Peace in our time” was four words, zero context. At the scale of superintelligence, that isn’t a movie plot. That’s the alignment problem.
The close
I’m not a doomsayer. I’m a builder. Builders deal with risk by understanding it — not by pretending it doesn’t exist.
You don’t need to be a technologist to benefit from AI. You need to be curious. You need to be willing to experiment. And you need to bring your judgment — because that’s the one thing AI can’t replace.
Fifteen months ago, I wasn’t sure AI was for me. Now I can’t imagine building without it.
The question isn’t whether AI will change your work. It’s whether you’ll be the one directing that change.
Two glasses. One half-full. The other, half-empty.
Which one did you pick?