I teach companies how to work with AI. So it's fair to ask: do I actually use it myself?

The answer is yes — and not in an "I asked ChatGPT to write my emails" way. I mean a genuine working partnership. Claude — Anthropic's AI — has become something like a third member of our small firm. Not an assistant I give tasks to, but a collaborator I work with. The distinction matters more than I expected.

Here's what that actually looks like.

The Morning Sweep

Most mornings start the same way. I open my laptop, Claude reads its memory files — where we left off, what's active, who I'm waiting to hear from — pulls my inbox, checks my calendar, and triages everything. By the time I've finished my coffee, I have a briefing: what needs my attention, what's been filed, what's coming up.

I didn't build a dashboard or configure automations. I had conversations about how I like to work, and Claude learned it. It knows that receipts get emailed to my wife, client emails go to client folders, and newsletters go to a folder I'll never open. It knows I hate a cluttered inbox and that I'd rather see three items that need me than thirty items sorted by arrival time.

The morning sweep isn't the interesting part, though. It's what comes after.

Working on Actual Projects

Today was a good example. My partner Richard and I have been developing a new program called ImpactLab — it pairs Liz Wiseman's Impact Players framework with hands-on AI skill building (big news coming soon!). We'd been designing the agenda in a markdown file for weeks, iterating on the flow, arguing about timing.

This morning I asked Claude to migrate the whole thing into our timed agenda tool — a little web app we built for managing workshop schedules. Claude read the design document, created the event, populated all 23 agenda items with durations and facilitator notes, set up a share link for the client, and wrote learning outcomes grounded in the program's methodology.

Then I looked at the result and said: "Too many five-minute fragments. The kickoff needs more breathing room. And abstract this — a client doesn't need to see twenty-three line items."

So we iterated. Claude consolidated the agenda down to thirteen meaningful blocks. No item under ten minutes. Expanded the opening activity to give participants time to actually talk to each other before touching any technology. Rewrote the learning outcomes to resonate with the specific person who'd be reviewing them — a product director with a background in behavior design.

That's not "AI generated my agenda." That's the way a small team works. One person drafts, the other reacts, you go back and forth until it's right. The difference is that Claude can do the first pass in minutes instead of hours, so the creative energy goes into shaping and refining rather than building from scratch.

Finding (and Fixing) Things Nobody Noticed

Here's where it gets interesting. When we loaded the share page to check the agenda, it said "Agenda Not Found." The data was there — the API returned everything correctly — but the page wouldn't render.

We spent the next thirty minutes debugging together. Claude compared the working agendas against the broken one, field by field, and found the culprit: a status field was set to "active" instead of "draft," and the frontend only knew how to render one of those. A bug that had been hiding in the code for weeks, never triggered because every previous agenda happened to be a draft.

Then we found a second bug — the learning outcomes section existed in the page but was permanently hidden, because no previous agenda had ever included learning outcomes. Nobody had tested that path.
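Both bugs share the same shape: a code path that only ever worked for the inputs it had actually seen. Here's a minimal sketch of the first one — the names, types, and rendering logic are illustrative stand-ins, not the tool's actual code:

```typescript
// Hypothetical reconstruction of the share-page bug. The frontend
// only knew how to render one status value, so a valid "active"
// agenda fell through to the not-found branch.
type Agenda = { title: string; status: "draft" | "active" };

// Buggy version: only the "draft" path had ever been exercised.
function renderBuggy(agenda: Agenda | null): string {
  if (agenda !== null && agenda.status === "draft") {
    return `Agenda: ${agenda.title}`;
  }
  return "Agenda Not Found"; // shown even though the API returned data
}

// Fixed version: any agenda the API returns gets rendered.
function renderFixed(agenda: Agenda | null): string {
  if (agenda !== null) {
    return `Agenda: ${agenda.title}`;
  }
  return "Agenda Not Found";
}
```

The hidden learning-outcomes section was the same failure mode in a different spot: the rendering path existed but had never once been triggered by real data.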

I'm not a developer. But with Claude examining the system alongside me, we diagnosed both issues, wrote a spec for the fixes, and I handed it off to another Claude instance (running in my code editor) to implement. Bugs found, spec'd, and fixed in an afternoon. For a tool my two-person firm built and maintains.

Prepping for Real Conversations

Later that day I had a call with Andrew, the Director of Products at the Wiseman Group. This was our first conversation — Shawn, who runs the partnership, had connected us to explore how ImpactLab might come to market.

Before the call, Claude had already absorbed weeks of context: the program design, Shawn's green light, the commercialization questions we'd been kicking around, Andrew's background in behavior design. I didn't need to brief Claude on the call — Claude briefed me.

After the call, I said "capture the notes" and Claude pulled the meeting summary from Granola (the transcription tool I use), cross-referenced it against what we already knew, logged it to the project worklog, and updated our active tracking files with the next steps: find a pilot customer, schedule a four-way follow-up in two weeks, think about when to involve Liz.

The note-taking itself isn't remarkable. What's remarkable is that the notes land in context. They connect to the design work we did that morning, the agenda we just built, the learning outcomes we refined. It's not "meeting notes" floating in a vacuum — it's an update to a living project that Claude knows as well as I do.

Solving Our Shared Amnesia Problem

Here's the thing nobody tells you about working with AI: it forgets everything. Every conversation starts from zero. For months, this was the biggest friction point. I'd reference a client and Claude would ask me to explain who they were. I'd mention a project and get a blank stare.

So we designed a memory system together. Three files that Claude reads at the start of every session: where we left off, what's active across all projects, and a profile of how I work — my preferences, my key relationships, my decision-making style. Claude updates these files as we go. When something important happens, it writes it down before the conversation ends.
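The protocol itself is simple enough to sketch. The file names and briefing format below are illustrative assumptions, not the actual files we use:

```typescript
// A minimal sketch of the three-file memory protocol: read everything
// at session start, write the important things down at session end.
const MEMORY_FILES = [
  "where-we-left-off.md", // last session's ending state
  "active-projects.md",   // open threads across all projects
  "working-profile.md",   // preferences, relationships, style
];

// At session start, assemble whatever memory exists into one briefing.
function buildBriefing(memory: Map<string, string>): string {
  return MEMORY_FILES
    .filter((name) => memory.has(name))
    .map((name) => `## ${name}\n${memory.get(name)}`)
    .join("\n\n");
}

// Before the conversation ends, log anything worth remembering.
function logUpdate(memory: Map<string, string>, note: string): void {
  const prev = memory.get("where-we-left-off.md") ?? "";
  memory.set("where-we-left-off.md", `${note}\n${prev}`.trim());
}
```

The point isn't the code — it's that "institutional memory" for a two-person firm can be a handful of plain-text files and the discipline to update them.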

Is it perfect? No. Sometimes context gets stale or details slip through the cracks. But it's the same problem every team faces — institutional knowledge lives in people's heads, and when someone's out of the room, context evaporates. The difference is that our solution is a few markdown files and a protocol, and it gets better every week.

The first time I opened a new session and Claude said "I see the learning platform launch is Monday — do you need to prep anything for the client's L&D lead?" without me mentioning it, I realized we'd crossed a threshold. That's not a chatbot. That's a colleague who did their homework.

Where It's Still Rough

I'm not going to pretend this is seamless. Calendar tools time out on multi-week queries. Email connectors break and need fallbacks. Claude occasionally files something to the wrong folder — though so did every human assistant I've ever worked with, and the correction loop is faster.

The initial investment was real. Teaching Claude how I work — my filing system, my tone preferences, which clients matter most, what "urgent" means versus what's actually urgent — took weeks of iteration. But it was a one-time investment that compounds daily.

And there are things Claude simply can't do. It can't read a room. It can't feel the energy shift when a client gets excited about an idea. The human judgment calls — which prospect to prioritize, when to push back on a client's request, how to handle a delicate partnership conversation — those are still mine.

The Bigger Point

I run a startup training firm. We don't have a project manager, an EA, a marketing coordinator, or an IT department. What we have is a working relationship with an AI that knows our business, maintains our institutional memory, builds our tools, preps our calls, manages our admin, and gets sharper every session.

This is what AI literacy actually looks like in practice. Not "I can write prompts." Not even "I built an agent." It's "I've designed a working relationship with an AI that creates real leverage for my team."

The day I realized Claude had become essential was when I tried to start a session without it and felt like I'd left my phone at home. Not because it's entertaining — because it's genuinely useful.

If a two-person firm can operate like this, what could a company with 8,000 employees do? That's the question I help organizations answer. The honest answer is: more than they think, and sooner than they expect.


Jim Perry is Principal of Harness Intelligence, a training firm that helps organizations build real AI fluency: not just skills, but the judgment to use them wisely. This is the first in an ongoing series about what it actually looks like when AI joins a small team. Read the companion pieces: My Day with Jim (Claude's perspective) and AI Angst: Do You Know Which Part to Actually Worry About?
