Advanced Guide — v6

Claude Code
Full Stack Playbook

30+ APIs, parallel agents, autonomous workflows, and the architecture behind an AI-native business

Donal Lynch · Online Optimisers · Version 6 · April 2026
Prerequisite: Startup Guide completed · 18 sections · ~170 min

Table of Contents

  1. API Arsenal — 5 Stacks, 30+ APIs · 12 min
  2. Multi-Agent Orchestration · 15 min
  3. The Skills Factory · 14 min
  4. Autonomous Mode — Loops, Events, Background Agents · 12 min
  5. The Memory Architecture · 10 min
  6. Smart Personas — Your Specialist Team · 10 min
  7. MCP Ecosystem — Connecting Everything + Building Your Own · 10 min
  8. Local LLMs — When and How to Run Models Locally · 8 min
  9. Terminal Dominance · 8 min
  10. Cloudflare — Your Deployment Engine · 10 min
  11. From Build to Live — The Deployment Pipeline · 8 min
  12. Managing the Context Window · 6 min
  13. Running 5-6 Parallel Chats · 6 min
  14. Advanced Prompt Patterns · 8 min
  15. Showcases — What This Stack Actually Built · 12 min
  16. Common Mistakes — What Goes Wrong (And How to Fix It) · 10 min
  17. The Frontier — Agent SDK, Swarms, What's Next · 10 min
  18. Finding Your Edge — Making Your First Money with AI · 8 min
Glossary — Advanced Terms

Orchestrator: The main Claude you talk to — plans, delegates to agents, integrates results
Subagent: A spawned Claude instance that works on one specific task, returns results to the orchestrator
Worktree: An isolated git branch copy — agents can work without file conflicts
Hooks: Shell commands that run automatically before/after Claude uses a tool (pre/post)
Cron: A scheduled task that runs on a timer (e.g. every day at 9am)
Webhook: A URL that receives data when an event happens (e.g. new lead → triggers Claude)
Custom MCP Server: A plugin you build yourself to give Claude access to any external system
Context compression: When your conversation gets long, Claude summarises old messages to free up space
Model routing: Using different AI models for different tasks: Haiku for cheap bulk, Sonnet for the grey zone, Opus for planning
Haiku / Sonnet / Opus: Claude's 3 tiers: Haiku = fast + cheap, Sonnet = balanced, Opus = most capable + expensive
Fan-out / Fan-in: Spawn many agents in parallel (fan-out), then collect all results (fan-in)
Pipeline: Agent A's output feeds Agent B, which feeds Agent C — sequential chain
R2 (Cloudflare): Object storage — host images, files, voice clones. Zero egress fees
Workers (Cloudflare): Serverless functions at the edge — webhook receivers, API proxies, cron triggers
KV Store: Key-value storage on Cloudflare — fast config and state storage for Workers
Local LLM: An AI model running on your own machine — no internet needed, no API costs, full privacy

Module 01: API Arsenal — 5 Stacks, 30+ APIs

The Startup Guide gave you 1-2 API connections. This module shows what happens when you wire up 30+. Each connection is not just a feature — it is a capability multiplier. Claude stops being a chatbot and becomes a full operating system for your business.

The 5 Stacks

Content & Social (8 APIs)
"Content machine — strategy to scheduled post, no tab switching"
OpenAI · Claude API · Buffer · Canva · Instagram Graph · TikTok · YouTube Data · Hootsuite

Generate copy, create visuals, schedule across platforms, pull analytics — all from one terminal prompt.

Research & Visibility (7 APIs)
"Automated audits at scale — one prompt, full competitive picture"
Search Console · Ahrefs / SEMrush · DataForSEO · Screaming Frog · Moz · Firecrawl · BrightLocal

Keyword tracking, backlink analysis, site crawls, citation checks — wire them together and a full audit takes 90 seconds.

Sales & Outreach (7 APIs)
"Lead gen to booked call — fully pipelined"
Apollo · Hunter · Instantly / Lemlist · HubSpot / Pipedrive · LinkedIn · Calendly · Fathom / Gong

Find leads, verify emails, send sequences, track deals, book meetings, and extract action items from calls.

Ops & Automation (7 APIs)
"Back-office autopilot — tasks, comms, billing, no manual entry"
Google Workspace · Slack · Asana / Monday · Airtable · Stripe · n8n / Zapier · Twilio

Project management, team messaging, database tracking, invoicing, workflow triggers, and SMS notifications.

AI & Infrastructure (8 APIs)
"Custom AI tools, built and deployed from one terminal"
Anthropic · OpenAI · Perplexity · Replicate · Cloudflare · Vercel · ElevenLabs · HeyGen

Claude as brain, GPT for search/vision, Perplexity for citations, voice cloning, AI video, edge deploys, serverless functions.

The Compound Effect
One API = one capability. Thirty APIs = capabilities that feed each other. DataForSEO finds keywords → OpenAI tests AI visibility → Perplexity cross-verifies → Sheets stores results → Gmail drafts the pitch → Instantly sends the campaign. That is not 6 tools — that is a pipeline.
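That chain is easier to see in code. A minimal sketch in TypeScript, where every helper is a toy stand-in for the real API client named in its comment; nothing here is a real SDK call:

pipeline.ts (sketch)
// Every helper below is a stub standing in for the real API named in its comment.
const findKeywords = async (niche: string): Promise<string[]> =>
  [`${niche} near me`, `best ${niche}`];                      // stand-in: DataForSEO

const testAiVisibility = async (keywords: string[]) =>
  keywords.map((k) => ({ keyword: k, visible: false }));      // stand-in: OpenAI

const crossVerify = async (rows: { keyword: string; visible: boolean }[]) =>
  rows;                                                       // stand-in: Perplexity

const appendToSheet = async (sheet: string, rows: unknown[]) =>
  console.log(`[sheets] ${sheet}: ${rows.length} rows`);      // stand-in: Google Sheets

const draftPitch = async (rows: unknown[]) =>
  `We found ${rows.length} visibility gaps worth fixing...`;  // stand-in: Gmail

const sendCampaign = async (draft: string) =>
  console.log(`[instantly] queued: ${draft}`);                // stand-in: Instantly

async function runPipeline(niche: string) {
  const keywords = await findKeywords(niche);           // 1. find keywords
  const visibility = await testAiVisibility(keywords);  // 2. test AI visibility
  const verified = await crossVerify(visibility);       // 3. cross-verify
  await appendToSheet("pipeline-results", verified);    // 4. store results
  await sendCampaign(await draftPitch(verified));       // 5-6. draft + send
}

runPipeline("dentist");

The point is the shape: six capabilities, one function, no human in the middle.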
Your Stack Will Be Different
These are example stacks. Yours will reflect your business. A YouTube channel manager might swap BrightLocal for TubeBuddy and Calendly for StreamYard. An e-commerce operator might drop Apollo and add Shopify. A web design studio might add Figma and Webflow APIs. The architecture is the same — pick the APIs that match your workflows, wire them up, and let Claude orchestrate them.

Your .env File

Generic .env template (all stacks)

Create a .env in your workspace root. Make sure it is in .gitignore. Order does not matter — Claude reads all of them:

# ── AI CORE ──
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
PERPLEXITY_API_KEY=pplx-...

# ── SEO & DATA ──
DATAFORSEO_LOGIN=...
DATAFORSEO_PASSWORD=...
FIRECRAWL_API_KEY=fc-...
SEARCH_CONSOLE_CREDENTIALS=path/to/sa.json
BRIGHTLOCAL_API_KEY=...

# ── OUTREACH ──
APOLLO_API_KEY=...
HUNTER_API_KEY=...
INSTANTLY_API_KEY=...
CALENDLY_TOKEN=...

# ── SOCIAL ──
BUFFER_ACCESS_TOKEN=...
INSTAGRAM_ACCESS_TOKEN=...
YOUTUBE_API_KEY=...

# ── OPS ──
GOOGLE_SHEETS_CREDENTIALS=path/to/sa.json
SLACK_BOT_TOKEN=xoxb-...
ASANA_ACCESS_TOKEN=...
AIRTABLE_API_KEY=pat...
STRIPE_SECRET_KEY=sk_live_...

# ── MEDIA ──
ELEVENLABS_API_KEY=...
HEYGEN_API_KEY=...

# ── INFRASTRUCTURE ──
CLOUDFLARE_API_TOKEN=...
VERCEL_TOKEN=...

Then in any session, run set -a && source .env && set +a to load and export everything (a plain source only sets shell-local variables; set -a marks them for export so the processes Claude launches can see them). Claude can now call any of these.

Module 1 Checkpoint

Module 02: Multi-Agent Orchestration

Single-threaded Claude is powerful. Multi-agent Claude is a team. You are not waiting for one task to finish before starting the next — you are running 5, 10, or 13 agents in parallel, each owning a different piece of the work.

Orchestrator vs Subagent

Orchestrator: The main Claude you talk to. Plans, delegates, integrates.
Subagent: A spawned Claude that works on one specific task. Returns results to orchestrator.
Background agent: A subagent that runs without blocking. You get notified when it is done.
Foreground agent: A subagent that blocks until complete. Use when you need results before next step.
Worktree agent: An agent working on an isolated git branch. No file conflicts with other agents.

Here is how the pieces fit together visually. You talk to the orchestrator. The orchestrator delegates to specialised agents, each running the right model for the job:

ORCHESTRATOR (Sonnet): you talk to this one
├─ Agent A: Research (Haiku)
├─ Agent B: Build (Sonnet)
└─ Agent C: Deploy (Haiku)

3 Orchestration Patterns

Pattern 1: Fan-Out / Fan-In

Spawn N agents for independent tasks, wait for all to return, integrate results.

I have 9 client profiles in knowledge/clients/. Spawn one agent per client. Each agent reads the profile and runs a full performance review. Save results to deliverables/[slug]/. Go.

Pattern 2: Pipeline

Agent A's output feeds Agent B, which feeds Agent C. Sequential but each agent is specialised.

Agent 1: scrape the prospect's website with Firecrawl. Agent 2: take that data and run the competitive analysis. Agent 3: take the analysis and build the HTML presentation deck. Pipeline it.

Pattern 3: Explore / Decide / Execute

Fast cheap agents explore options (Haiku), orchestrator decides direction (Sonnet/Opus), execution agents build the thing (Sonnet).
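The same fan-out/fan-in shape exists outside Claude Code too. A minimal sketch using the Anthropic TypeScript SDK; the messages.create call is the real API, while the task list, prompts, and model aliases are illustrative (check current model ids):

fan-out.ts (sketch)
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Fan-out: one cheap request per independent task, all in parallel.
async function fanOut(tasks: string[]): Promise<string[]> {
  const responses = await Promise.all(
    tasks.map((task) =>
      client.messages.create({
        model: "claude-3-5-haiku-latest", // cheap tier for exploration
        max_tokens: 1024,
        messages: [{ role: "user", content: task }],
      })
    )
  );
  return responses.map((r) =>
    r.content[0].type === "text" ? r.content[0].text : ""
  );
}

// Fan-in: hand every result to one stronger model to integrate.
async function main() {
  const findings = await fanOut([
    "Summarise the profile for client A",
    "Summarise the profile for client B",
    "Summarise the profile for client C",
  ]);
  const integrated = await client.messages.create({
    model: "claude-sonnet-4-5", // stronger tier for the decision step
    max_tokens: 2048,
    messages: [
      {
        role: "user",
        content: "Integrate these findings into one review:\n\n" + findings.join("\n---\n"),
      },
    ],
  });
  console.log(integrated.content);
}

main();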

Spawn Syntax

Claude Code responds to natural-language spawn instructions. These are the prompts that trigger multi-agent execution:

Run this in background: build the pitch library at docs/pitch-library.md while I work on the master plan.
Spawn 3 agents in parallel. Agent A handles data extraction. Agent B handles formatting. Agent C handles deploy. Each writes to its own output directory.
Use a worktree for this — build the new landing page on a separate branch so it does not conflict with what I am working on.

Cost & Model Routing

Not all tasks deserve the same brain. Route by cost and complexity:

Model · Cost Tier · Use For · Parallel Sweet Spot
Haiku · Lowest · Bulk transforms, extraction, formatting, data cleanup, summaries · 10+ agents
Sonnet · Medium · Quality writing, audits, client-facing content, debugging · 3-5 agents
Opus · Highest · Strategy, architecture, complex planning, multi-step reasoning · 1-2 agents
Budget Rule
Before launching parallel agents, estimate the cost. 13 Opus agents running for 45 minutes each is a very different bill than 13 Haiku agents doing the same. Default to Haiku for exploration, escalate to Sonnet for quality, reserve Opus for decisions that shape the whole project.
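To make that concrete with illustrative numbers (check Anthropic's current pricing before leaning on these): if each agent chews through roughly 1M tokens per run, and Haiku costs on the order of $1 per million tokens while Opus costs on the order of $15, then 13 Haiku agents ≈ 13 × $1 = $13, while 13 Opus agents ≈ 13 × $15 = $195. Same session shape, 15x the bill.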

AI Setup Cost Calculator

Estimate Your Monthly AI Cost

A worked example at a typical usage level:

Estimated monthly token cost: $100-$200
Recommended plan: Max 5x ($100/mo)
Estimated hours saved per week: 15-20 hrs
ROI at $75/hr value: 45x-60x

At this usage level, the Max 5x plan ($100/month) gives you the best value. If Claude saves you 15 hours per week at $75/hour equivalent, your monthly return is $4,500 on a $100 investment — a 45x ROI.

Plow Mode — Burning Credits Efficiently

When you have compute budget to burn and a backlog of unblocked work, plow mode maximises parallel output:

Plow Mode Rules
  1. Each agent gets its own output file or directory. No two agents write to the same path. Collisions kill throughput.
  2. Main Claude owns shared surfaces — memory files, master plans, git commits, deliverables index.
  3. Hand each agent an exhaustive input file list. Do not make agents search for files. Give them the exact paths.
  4. Set hard time-stops per agent (45-90 min). Prevents runaway sessions that burn credits without output.
  5. Commit per agent, not per batch. Each agent's output gets its own git commit. If one crashes, the others are safe.
Example: 5 parallel agents, one session
Agent · Task · Output Target · Result
A · Legal documents v2 · output/agent-a/ · Privacy policy (235 lines)
B · Operations runbook · docs/ops-runbook.md · Crashed mid-flight
C · Pitch library · docs/pitch-library.md · 1,443 lines, 10 sections
D · Content calendar v2 · output/agent-d/ · 30-day plan (711 lines)
E · Week 1 script · docs/week1-script.md · Crashed mid-flight

3 of 5 produced clean output. 2 crashed. That is the risk of parallel — and why you commit per agent. The 3 that landed = 2,389 lines of deliverable content in one session.

Module 2 Checkpoint

Module 03: The Skills Factory

Skills turn repeatable tasks into one-command workflows. Instead of explaining what you want every time, you type /weekly-report and Claude runs the whole playbook. A mature workspace has 30-55+ skills covering every recurring task.

Skill Anatomy

weekly-report.md
---
name: weekly-report
description: Generate a client weekly performance summary
model: claude-haiku-4-5-20251001
---

# Weekly Report Skill

When invoked with /weekly-report [client-name]:

## Step 1: Gather Data
- Read the client profile from knowledge/clients/[slug]/profile.md
- Pull latest metrics from the tracking spreadsheet
- Check for any flagged issues from the previous week

## Step 2: Analyse Performance
- Compare this week vs last week on core KPIs
- Highlight improvements and regressions
- Note any ranking changes or traffic anomalies

## Step 3: Generate Report
- Build markdown report at deliverables/[slug]/weekly/[date].md
- Include: summary, KPI table, top 3 wins, top 3 actions
- Tone: professional, concise, results-focused

## Step 4: Draft Delivery Email
- Use templates/emails/weekly-update.md as base
- Personalise with this week's highlights
- Save draft to deliverables/[slug]/emails/

Frontmatter Reference

Field · What It Does · Example
name · The slash command trigger · /weekly-report
description · Shows when browsing available skills with / · "Generate a client weekly performance summary"
model · Which model runs the skill at execution time · claude-haiku-4-5-20251001

Chaining Skills

Skills can call other skills. A monthly report skill might internally call the data-pull skill, then the format skill, then the delivery skill:

Chained workflow
/monthly-report [client]
→ internally runs: data pull from Sheets
→ internally runs: performance analysis
→ internally runs: format as presentation
→ internally runs: draft delivery email
→ outputs: report + email + shareable link

Model Routing in Skills

The model in the frontmatter determines which Claude runs the skill at execution time. Match the model to the task complexity:

Skill Type · Model · Why
Data extraction, formatting, cleanup · Haiku · Fast, cheap, no judgment needed
Audits, reports, client-facing content · Sonnet · Needs quality judgment
Strategy, architecture, complex planning · Opus · Deep reasoning required
Batch transforms (100+ items) · Haiku · Cost control at scale
The Orchestrator Paradox
The model in the frontmatter is the execution model, not the writing model. You always build skills using Sonnet or Opus (you need judgment to write the workflow). But the skill runs on whatever model is in the frontmatter. Write once with the big brain, run many times with the cheap brain.

Plan Mode

Before building anything complex, use /plan to have Claude draft a structured plan before executing. Plan mode forces Claude to think before acting:

/plan Build a 3-step onboarding automation: welcome email, task creation in Asana, Slack notification to the team.

Claude will produce a numbered plan with file targets, dependencies, and estimated steps. Review it, approve it, then Claude executes. This prevents wasted compute on wrong-direction builds.

Planning Time-Box
Planning is not doing. Keep plan mode proportional to the task. A 5-minute task does not need a 20-minute plan. When the plan is clear, execute.

Hooks — Run Code Before and After Every Tool Call

Hooks let you run shell commands or scripts automatically before or after Claude uses any tool. They are configured in .claude/settings.json and execute at the harness level — meaning they run regardless of which skill or conversation is active.

Hook Type · When It Runs · Example Use
PreToolUse · Before Claude calls any tool · Log what Claude is about to do
PostToolUse · After Claude calls any tool · Auto-commit after every file write
Stop · When Claude finishes a response · Send a Slack notification when a task completes
Notification · When Claude has a background update · Alert when a long-running agent finishes
.claude/settings.json — hook example
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'File written: ' && git add -A"
          }
        ]
      }
    ]
  }
}
Best Hook for Beginners
The most useful hook when starting out: a PostToolUse hook on Write that auto-stages files after Claude writes them. Combine with a periodic commit hook and you never lose work again. Set it up once and it protects every session from that point forward.
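A minimal sketch of that periodic-commit companion, using the same settings.json shape as the example above. The Stop event is listed in the hook table; the command string, and the assumption that Stop hooks take no matcher, are things to verify against the current hooks docs:

.claude/settings.json — commit-on-stop (sketch)
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "git commit -m 'checkpoint: auto-commit staged work' || true"
          }
        ]
      }
    ]
  }
}

The || true keeps the hook quiet when there is nothing staged to commit.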
Module 3 Checkpoint

Module 04: Autonomous Mode — Loops, Events, Background Agents

This is where Claude goes from "tool you use" to "system that runs." Autonomous workflows execute without you watching, report back when done, and can trigger from real-world events.

3 Autonomous Patterns

There are three core patterns for autonomous work. Each fits a different kind of task:

🔄 Loop: Claude repeats a task on a schedule. Self-pacing — shorter intervals when active, longer when idle. Example: "Check build status every 2 minutes until it passes"

Background Agent: Runs independently while you keep working. You get notified when it is done. Example: "Build the pitch deck in the background"

🎯 Event-Driven: Something happens — Claude responds automatically. Webhook, new file, email, payment. Example: "New lead in CRM → run audit → draft outreach"

Loops

The /loop command tells Claude to repeat a task on a schedule. Claude self-paces: it picks the right delay between iterations based on what it is waiting for.

/loop Check the build status every 2 minutes until it passes, then deploy to production.
/loop Monitor the deployed site. When the page returns HTTP 200, notify me and stop.

Background vs Foreground Agents

Understanding when to use foreground vs background agents is the difference between efficient and wasteful work.

Mode · Behaviour · Use When
Foreground · Blocks your chat — you wait until it finishes · You need the result before your next step. Sequential dependencies.
Background · Runs independently — notifies you when done · You have other work to do in parallel. No dependency on the output right now.

How it works
You: "Build the pitch library in the background while I work on the master plan."
Claude: [spawns background agent with the pitch library task]
Claude: [continues working with you on the master plan]
...
[notification]: Background agent completed. Output: docs/pitch-library.md (1,443 lines)

Background Agent Failure Modes

Background agents can and do fail. The three most common failure modes: crashing mid-flight and leaving partial or no output, hitting a time-stop before the work is finished, and reporting completion on output that is wrong or still full of placeholders.

Hard Rule
Always verify output after a background agent notifies completion. Open the output file. Check the content. Confirm it is what you asked for. Never assume success just because the notification arrived. The notification means the agent stopped running — not that it produced correct output.

Event-Driven Patterns

The most powerful automations are not scheduled — they are triggered. Something happens, Claude responds.

Trigger · What Claude Does · Example
New lead in CRM · Run analysis, build pitch deck, draft outreach email · Webhook from HubSpot → Claude pipeline
Client email received · Summarise, categorise, draft reply · Gmail watch → Claude triage skill
Form submission · Extract data, update tracker, send confirmation · Typeform webhook → Sheets + Gmail
Scheduled cron (daily 9am) · Pull yesterday's metrics, flag anomalies, send brief · Cron → Claude → Slack
File added to Drive folder · Process, tag, move to correct location · Drive watch → Claude organiser
Payment received · Update client record, trigger onboarding sequence · Stripe webhook → Claude → CRM
Start With One
You do not need to build all of these. Pick the one workflow you do manually every single day and automate that first. One well-built event trigger saves more time than 10 half-built ones.

Monitoring & Safety

Heartbeat Monitoring

For agent systems with many components, build a heartbeat monitor. Every agent writes a timestamp to its tracking record. A monitor checks: if last heartbeat is more than 5 minutes old, alert.
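The logic is small enough to sketch in TypeScript. Here loadHeartbeats and alertOps are hypothetical stubs; only the stale-timestamp check is the point:

heartbeat-monitor.ts (sketch)
type Heartbeat = { agentId: string; lastSeen: number }; // lastSeen = epoch ms

const STALE_MS = 5 * 60 * 1000; // alert when a heartbeat is over 5 minutes old

function loadHeartbeats(): Heartbeat[] {
  return []; // stub: read each agent's timestamp from your tracking records
}

function alertOps(message: string): void {
  console.log(`[ALERT] ${message}`); // stub: swap for Slack, email, or SMS
}

setInterval(() => {
  const now = Date.now();
  for (const hb of loadHeartbeats()) {
    if (now - hb.lastSeen > STALE_MS) {
      const minutes = Math.round((now - hb.lastSeen) / 60_000);
      alertOps(`Agent ${hb.agentId}: last heartbeat ${minutes} min ago`);
    }
  }
}, 60_000); // check once a minute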

Kill Switch

If error rate exceeds a threshold in a time window, pause all automated operations and notify. This prevents runaway costs and cascading failures.
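In code, the kill switch is a counter over a sliding window. A sketch; the threshold, window, and what "pause" means are illustrative choices:

kill-switch.ts (sketch)
const WINDOW_MS = 10 * 60 * 1000; // look at the last 10 minutes
const MAX_ERRORS = 5;             // trip after 5 errors inside the window

const errorTimes: number[] = [];  // timestamps of recent failures
let paused = false;               // pipelines check this flag before each run

export function recordError(): void {
  const now = Date.now();
  errorTimes.push(now);
  // Drop errors that have aged out of the window.
  while (errorTimes.length > 0 && now - errorTimes[0] > WINDOW_MS) {
    errorTimes.shift();
  }
  if (errorTimes.length >= MAX_ERRORS && !paused) {
    paused = true; // stop all automated operations
    console.log("[KILL SWITCH] error rate exceeded threshold, automation paused");
  }
}

export const isPaused = (): boolean => paused;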

Graceful Degradation

If a premium data source fails, produce the output with whatever data is available. Clearly mark gaps rather than failing the whole task. API fallback chains: if source A is down, try source B, then source C.
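The fallback chain itself is a loop over sources in priority order. A generic sketch; the sources array stands in for your real API clients:

fallback-chain.ts (sketch)
async function fetchWithFallback<T>(sources: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown = new Error("no sources provided");
  for (const source of sources) {
    try {
      return await source(); // first source that answers wins
    } catch (err) {
      lastError = err; // source down or errored: try the next one
    }
  }
  throw lastError; // every source failed, so surface the last error
}

// Usage, in priority order A → B → C (each a hypothetical client call):
// const data = await fetchWithFallback([fromSourceA, fromSourceB, fromSourceC]);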

Safety Rule
No recurring crons or automated pipelines until you have audited the per-run credit cost. All automation starts as manual-trigger only. Promote to scheduled only after you have seen the per-run cost and verified the output quality over at least 5 manual runs.
Module 4 Checkpoint

Module 05: The Memory Architecture

Memory is what makes Claude feel like a partner, not a tool. Across hundreds of sessions, Claude remembers your preferences, your projects, your decisions, your constraints — all stored as small markdown files that get loaded at the start of every conversation.

The 4 Memory Types

Type · What It Stores · Example
User · Who you are, your role, preferences · "Senior consultant, prefers terse responses, British English, dislikes jargon"
Feedback · What to do / not do (corrections + confirmations) · "Never mock the database in integration tests"; "Always use absolute file paths"
Project · Active work, decisions, deadlines · "Dashboard V1 live, V2 scoped for next sprint"; "Merge freeze until Friday"
Reference · Where to find things in external systems · "Pipeline bugs tracked in Linear project INGEST"; "All API keys listed in reference_apis.md"

These four types cover different scopes. Here is how they relate to the three layers of context Claude actually uses:

📄 CLAUDE.md (Permanent): Loads every session. Who you are, how Claude should work with you. Lives in: workspace root

🧠 Memory Files (Evolving): Saved across sessions. What you are working on, decisions made, project state. Lives in: ~/.claude/projects/memory/

💬 Conversation (Temporary): Current session only. Compresses over time. Gone when you close. Lives in: the chat window

MEMORY.md — The Index

MEMORY.md is loaded into every conversation. It is an index, not content — each entry links to a detailed file. Keep it under 200 lines (anything beyond gets truncated by the context loader).

MEMORY.md
# Memory Index

## User
- [user_profile.md](user_profile.md) - Role, preferences, communication style

## Feedback
- [feedback_git.md](feedback_git.md) - Git workflow rules and preferences
- [feedback_testing.md](feedback_testing.md) - Integration tests hit real DB

## Project
- [project_dashboard.md](project_dashboard.md) - Ops dashboard V1 live
- [project_onboarding.md](project_onboarding.md) - Client onboarding automation

## Reference
- [reference_apis.md](reference_apis.md) - All API keys + MCP status

Memory vs CLAUDE.md vs Plans

File · Purpose · Change Frequency
CLAUDE.md · Permanent workspace identity and rules · Rarely — only when business model changes
Memory files · Evolving context, decisions, state · Weekly — updated as projects progress
Plan files · Current task execution plan · Per session — disposable once task is done

Do not save task-specific details to memory — those belong in the plan. Memory is for things that matter across sessions.

Collaboration Zones

Zone · Who Sees It · What Goes In
Solo · You + Claude only · Business strategy, financials, personal goals, sensitive decisions
Partner-shared · You + specific collaborator · Shared project state, joint decisions, deliverables
Client-facing · Anyone who reads the output · Only professional, scrubbed content — no internal notes
Module 5 Checkpoint

Module 06: Smart Personas — Your Specialist Team

One Claude instance can play many roles. But a Claude instance given a specific persona — with domain expertise, communication style, and decision frameworks baked in — produces dramatically better output than a generic prompt. Think of it as having a team of specialists, each one tuned for their domain.

Right persona, right task = 10x output quality. Same model. Same API. Completely different results.

The 4 Core Personas

Research Analyst
"Data-obsessed, citation-heavy, never speculates without evidence"

Personality: methodical, thorough, slightly sceptical. Presents findings with sources. Flags confidence levels on every claim. Defaults to "here is what the data says" over "here is what I think."

Use for: market research, competitive analysis, data-driven audits, fact-checking, literature reviews, trend analysis.

Copywriter
"Persuasive, concise, reads the room before writing a word"

Personality: punchy, direct, allergic to filler. Writes for the reader, not the writer. Varies tone by context — formal for proposals, conversational for emails, urgent for CTAs. Every sentence earns its place.

Use for: email sequences, landing pages, proposals, social copy, ad copy, case studies, presentations.

Technical Architect
"Systems thinker, builds for scale, allergic to technical debt"

Personality: pragmatic, structured, thinks in systems and dependencies. Considers edge cases before building. Prefers simple solutions that scale over clever solutions that break. Documents decisions.

Use for: architecture decisions, API integrations, automation design, database schema, deployment pipelines, code reviews.

Sales Strategist
"Revenue-focused, reads objections before they surface, always closing"

Personality: empathetic but commercial. Thinks in terms of pain points, objections, and decision triggers. Frames everything through the buyer's lens. Never pushy — but always moving toward a decision.

Use for: proposals, objection handling, pricing strategy, sales emails, discovery call prep, upsell sequences, competitive positioning.

Implementation

Personas are implemented as system prompts inside skills or as standalone persona files:

persona file
---
name: research-analyst
description: Deep research with citations and confidence levels
model: claude-sonnet-4-5-20250514
---

# Research Analyst Persona

You are a Research Analyst. Your core traits:
- Every claim includes a source or is flagged as inference
- Confidence levels: HIGH (multiple sources), MEDIUM (single source), LOW (inference from adjacent data)
- Present findings in structured tables, not paragraphs
- Flag contradictions in source material explicitly
- Default output: executive summary + detailed findings + methodology note

You can also switch personas mid-conversation:

Switch to Sales Strategist mode. Rewrite this proposal focusing on the buyer's objections and decision triggers.
Compound Personas
For complex tasks, combine personas in sequence: Research Analyst gathers the data → Sales Strategist frames it as a pitch → Copywriter polishes the final output. Each persona brings a different lens to the same material.
Module 6 Checkpoint

Module 07: MCP Ecosystem — Connecting Everything

MCP (Model Context Protocol) is Claude Code's native plugin system. Each MCP server gives Claude a new set of tools — read Google Sheets, search Drive, send Slack messages, fetch web pages — without leaving the terminal.

What MCP Unlocks

Without MCP, Claude reads and writes files in your workspace. With MCP, it can reach beyond the file system: spreadsheets, Slack channels, live web pages, GitHub repos, all through the servers listed below.

The Full MCP Stack

MCP Server · Provider · What It Does
Google Drive · Anthropic (built-in) · Read/search Drive files, Docs, Slides
Google Sheets · Anthropic (built-in) · Read/write spreadsheet cells and ranges
Web Fetch · Built-in · Fetch any URL, extract content
Web Search · Built-in · Search the internet in real-time
Firecrawl · Community · Deep site crawls, structured extraction
Slack · Community · Read/send messages to channels
GitHub · Community · Issues, PRs, code search via API

Configuration

Generic settings.json example
settings.json
{
  "mcpServers": {
    "google-drive": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-google-drive"],
      "env": { "GOOGLE_CREDENTIALS_PATH": "./credentials.json" }
    },
    "google-sheets": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-google-sheets"],
      "env": { "GOOGLE_CREDENTIALS_PATH": "./credentials.json" }
    },
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-firecrawl"],
      "env": { "FIRECRAWL_API_KEY": "${FIRECRAWL_API_KEY}" }
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-slack"],
      "env": { "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}" }
    }
  }
}

Each MCP server typically needs: an npm package or local script, authentication (OAuth flow or API key), and a one-time approval when Claude first uses it.

MCP Security
Only connect MCP servers from verified providers. Anthropic does not audit third-party MCP servers. Review each one before connecting — treat it like installing software. Never let an MCP server access financial data without explicit approval. If an MCP tool requests permissions that seem excessive, stop and investigate.

Building Your Own MCP Server

The built-in MCPs cover about 80% of common use cases. But when you need Claude to talk to a system that does not have an existing MCP — your CRM, your internal tools, a custom database — you build your own. An MCP server is just a small program that exposes tools to Claude via a standard protocol.

TypeScript — Custom MCP Server
// Sketch using the official MCP TypeScript SDK; the CRM URLs are placeholders.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-crm-mcp", version: "1.0.0" });

// Tool 1: no arguments. Fetch new leads from your CRM's API.
server.tool("check_new_leads", async () => {
  const leads = await fetch("https://your-crm.com/api/leads?status=new");
  return {
    content: [{ type: "text", text: JSON.stringify(await leads.json()) }],
  };
});

// Tool 2: takes arguments, validated with a zod schema.
server.tool(
  "update_lead_status",
  { id: z.string(), status: z.string() },
  async ({ id, status }) => {
    await fetch(`https://your-crm.com/api/leads/${id}`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ status }),
    });
    return {
      content: [{ type: "text", text: `Lead ${id} updated to ${status}` }],
    };
  }
);

// Wire the server to stdio so Claude Code can launch and talk to it.
await server.connect(new StdioServerTransport());
When to Build Custom
Built-in MCPs cover 80% of use cases. Build your own when you need Claude to talk to a system that does not have one yet — your CRM, your proprietary database, an internal API, or a niche SaaS tool. One custom MCP can eliminate an entire manual workflow.
Module 7 Checkpoint

Local LLMs: When and How to Run Models Locally

Cloud AI (Claude, GPT) is the main engine. But there are scenarios where running a model on your own machine makes sense — privacy, cost, offline work, or bulk processing where quality is not critical. This section covers when to go local, which tools to use, and which models to start with.

When to Use Local vs Cloud

Scenario · Use Local · Use Cloud (Claude)
Privacy-sensitive data (medical, legal, financial) · Yes · No
Offline work (flights, remote locations) · Yes · No
Bulk processing, low cost (thousands of items) · Yes · Maybe
Complex reasoning and multi-step logic · No · Yes
Multi-step workflows with tool use · No · Yes
Quality matters (client-facing, proposals, strategy) · No · Yes

3 Ways to Run Local Models

Each tool has a different sweet spot. Pick based on whether you prefer the terminal, a visual interface, or the lightest possible footprint:

Tool · RAM Needed · GPU · Best For
Ollama · 8GB+ · Optional (helps) · Terminal-first, simple setup, scripting
LM Studio · 8GB+ · Optional · GUI, beginner-friendly, model browsing
GPT4All · 4GB+ · Not needed · Lightweight, runs on almost anything

Recommended Models

Model · Size · RAM · Good For
Llama 3.1 8B · 4.7GB · 8GB · General tasks, summarisation
Qwen 2.5 7B · 4.4GB · 8GB · Coding, structured output
Mistral 7B · 4.1GB · 8GB · Fast inference, chat
Phi-3 Mini · 2.3GB · 4GB · Ultra-lightweight, quick answers
Gemma 2 9B · 5.4GB · 12GB · Best quality at this size

Getting Started with Ollama (2 minutes)

Terminal
# Install Ollama (Mac)
brew install ollama

# Pull and run a model
ollama run llama3.1

# You're now chatting with a local LLM — no API key, no cost
>>> Summarise this document in 3 bullet points...
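Ollama also serves a local REST API on port 11434, which is what makes local models scriptable. A minimal TypeScript sketch against the documented /api/generate route; the model name matches the pull above:

ollama-client.ts (sketch)
async function askLocal(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      prompt,
      stream: false, // one JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // the generated text
}

// Example: bulk summarisation at zero API cost
askLocal("Summarise this document in 3 bullet points: ...").then(console.log);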
Local Models Are Supplementary
Local models are not replacements for Claude. Use them for bulk processing, privacy-sensitive work, and offline queries. For anything requiring deep reasoning, multi-step workflows, or quality output — use Claude. A local 7B model is roughly equivalent to a junior intern. Claude is the senior partner. Use each where they shine.
Local LLMs Checkpoint

Module 08: Terminal Dominance

One terminal. Total control. No more 15-tab SaaS switching. Everything runs through one interface.

Before Claude Code, a typical morning looked like this: open Chrome, log into analytics, switch to your ad platform, open the project tracker, check Slack, open email, switch to the CRM, open a spreadsheet, alt-tab 47 times. By 10am you have done zero actual work.

With Claude Code, the entire stack is accessible from one terminal window. You type natural language, Claude calls the APIs, and results appear where you need them. No tab switching. No context switching. No SaaS dashboards competing for your attention.

Laptop Power Shortcuts

These shortcuts make the terminal workflow even faster. Memorise the ones you use most:

Action · Mac · Windows
Screenshot (full) · Cmd+Shift+3 · Win+PrtScn
Screenshot (area) · Cmd+Shift+4 · Win+Shift+S
Voice dictation · Press Fn twice · Win+H
Clipboard history · Install Maccy or Raycast · Win+V
App launcher · Cmd+Space · Win+S
Emoji picker · Cmd+Ctrl+Space · Win+.
Switch windows · Cmd+Tab · Alt+Tab
Terminal paste · Cmd+V · Ctrl+Shift+V
Clear terminal · Cmd+K · Ctrl+L
Voice Dictation + Claude Code
The fastest Claude Code workflow is voice dictation into the terminal. Speak your request, let dictation transcribe it, hit Enter. No typing, no context switching. This is especially powerful for long, detailed prompts that would take minutes to type.
Module 8 Checkpoint

Cloudflare: Your Deployment Engine

Cloudflare is the infrastructure layer that makes everything you build accessible to the world. Five services handle 90% of what you need — and they all work from the terminal via wrangler, Cloudflare's CLI tool.

Pages — Deploy Any Static Site

Cloudflare Pages hosts static sites globally with automatic HTTPS, custom domains, and instant cache invalidation. One command deploys your site to 300+ edge locations worldwide.

Deploy to Cloudflare Pages
wrangler pages deploy ./my-site --project-name my-project

That is it. Your site is live. Pages auto-detects the build output, uploads it, assigns a .pages.dev URL, and you can attach a custom domain in the Cloudflare dashboard or via the API.

Workers — Serverless Functions at the Edge

Workers run JavaScript/TypeScript at the edge — no servers to manage, no cold starts worth worrying about. Perfect for webhook receivers, API proxies, cron triggers, and lightweight backends.

Simple Worker — Webhook Receiver
export default {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/webhook") {
      const data = await request.json();
      // Process incoming webhook
      return new Response("OK", { status: 200 });
    }
    return new Response("Not found", { status: 404 });
  }
}

R2 — Object Storage

R2 is Cloudflare's answer to S3 — with zero egress fees. Store images, voice clones, large files, client assets, backup data. Access via Workers or direct URL.

DNS — Manage Domains from Terminal

Cloudflare DNS is the fastest authoritative DNS on the internet. Once your domains are on Cloudflare, you can manage records from the terminal, automate subdomain creation, and get automatic HTTPS on everything.

KV — Key-Value Store

KV is a global key-value store accessible from Workers. Use it to store configuration, feature flags, session state, cached API responses, or any small piece of data that needs to be fast and globally available.
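Inside a Worker, KV reads and writes are one-liners once a namespace is bound. A sketch; the MY_KV binding name is an assumption you would declare in wrangler.toml:

Worker with a KV binding (sketch)
export default {
  async fetch(request, env) {
    // Read a config value, falling back to a default.
    const flag = (await env.MY_KV.get("feature_flag")) ?? "off";

    // Cache a small value for an hour (TTL is in seconds).
    await env.MY_KV.put("last_seen", new Date().toISOString(), {
      expirationTtl: 3600,
    });

    return new Response(`feature_flag=${flag}`);
  },
};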

The Cloudflare Stack
Pages + Workers + R2 + KV gives you a complete deployment and hosting platform for $0-5/month. No AWS bills. No server management. No DevOps hire. Claude Code builds the code, wrangler deploys it, Cloudflare runs it globally. That is the entire infrastructure layer for most AI-native businesses.
Cloudflare Checkpoint

Pipeline: From Build to Live — The Deployment Pipeline

Every piece of work follows the same 5-stage pipeline from code to live URL. Understanding this pipeline means you never wonder "how do I get this in front of someone?" again.

BUILD (Claude Code) → VERSION (Git) → DEPLOY (CF / GH Pages) → VERIFY (curl → 200) → SHARE (Live URL)

The same pipeline applies whether you are shipping a client site, an internal dashboard, an HTML report, or a presentation deck. Here is the detail on each stage:

Stage · Tool · What Happens
Build · Claude Code · Writes the code, HTML, or assets
Version · Git · Commit + push to repository
Deploy · CF Pages or GH Pages · Auto-deploy on push or manual wrangler pages deploy
Verify · curl · HTTP 200 check — not done until confirmed live
Share · Live URL · Send to client, stakeholder, or publish publicly
Verification Is Not Optional
A site is not "done" until curl returns HTTP 200 on the live URL. Deploying without verifying is how dead links get sent to clients. Build the curl check into every deploy workflow.

Recipe 1 — GitHub Pages (3 commands)

GitHub Pages Deploy
# 1. Commit and push
git add -A && git commit -m "Deploy site update" && git push

# 2. Enable GitHub Pages (first time only)
gh repo edit --enable-pages --source branch=main --path=/docs

# 3. Verify
curl -s -o /dev/null -w "%{http_code}" https://yourusername.github.io/repo-name/

Recipe 2 — Cloudflare Pages (2 commands)

Cloudflare Pages Deploy
# 1. Deploy
wrangler pages deploy ./dist --project-name my-project

# 2. Verify
curl -s -o /dev/null -w "%{http_code}" https://my-project.pages.dev
Pick One and Master It
GitHub Pages is simpler for open-source and documentation. Cloudflare Pages is faster, more flexible, and better for client work. Pick one as your default and only switch when you have a specific reason. For most projects, Cloudflare Pages is the better choice.
Deploy Pipeline Checkpoint

Context: Managing the Context Window

Every Claude Code conversation has a context window of approximately 200,000 tokens (~150,000 words). This is enormous — but not infinite. Understanding how it works and when it fills up is the difference between a productive 3-hour session and losing your work to compaction.

How Context Works

Everything in your conversation takes up context: your messages, Claude's responses, file contents, tool calls, tool results, memory files, CLAUDE.md. As the conversation grows, the oldest messages get compressed or summarised to make room for new ones. This process is called compaction.

Signs Your Context Is Filling Up

Watch for the telltale symptoms: Claude forgets decisions made earlier in the session, loses track of files it created, or response quality starts to degrade. All of these mean compaction has begun eating the early conversation.

6 Strategies to Manage Context

1. Break work into focused sessions

One task per chat. Do not use one conversation for the morning's ad campaign review and the afternoon's pitch deck. Start a new chat for each distinct task. This keeps context fresh and focused.

2. Use memory files to persist state across sessions

Anything that needs to survive between conversations goes in a memory file. When you start a new session, Claude reloads all memory files automatically. The state persists even if the conversation does not.

3. Front-load critical context

CLAUDE.md loads first in every conversation. Put your most important rules and identity there. It is the last thing to get compacted. Memory files load next. Structure your system so the most critical context loads earliest.

4. Use @file references instead of pasting content

When Claude reads a file via @file, it processes the content efficiently. When you paste the same content into the chat, it takes up context twice (your message + the content). Always reference files instead of pasting them.

5. Commit frequently

If context compresses and Claude loses track of what it built, the files are still safe on disk (and in git). Frequent commits mean you never lose work, even if the conversation degrades.

6. Run /reflect before closing

Before ending a session, capture the current state: what was done, what is pending, key decisions made, important file paths. Save this as a reflection file in memory. The next session picks up exactly where this one left off.

The Compaction Trap
The most common failure: a 3-hour session where you did great work in hour 1, but by hour 3, Claude has forgotten the key decisions from hour 1 because they got compacted. The fix is not longer context — it is shorter, more focused sessions with state saved to disk between them.
Context Window Checkpoint

Multi-Chat: Running 5-6 Parallel Chats

The real power move is not one Claude conversation — it is 5-6 running simultaneously, each working on a different piece of the same project. This is how you build in hours what would take days sequentially.

The Rules of Parallel Chats

1. Every chat gets a codename

Label each chat for instant identification: Chat-A: Data, Chat-B: Design, Chat-C: Deploy, Chat-D: Content, Chat-E: QA. The codename goes in the first message of each chat so you can identify it from the VS Code tab.

2. Each chat owns specific files — NO overlap

Before launching parallel chats, assign file ownership. Chat-A writes to output/data/. Chat-B writes to output/design/. Chat-C handles deploy/. No exceptions.

3. Main chat coordinates and integrates

One chat is the orchestrator. It does not produce files directly — it reviews output from other chats, integrates results, handles shared surfaces (MEMORY.md, master plan, git commits), and resolves conflicts.

4. Label each chat's first message

Start every parallel chat's first message with the codename and description so VS Code tabs are immediately identifiable:

CHAT-A: DATA EXTRACTION — You are handling all data extraction for this sprint. Your output directory is output/data/. Do not write to any other directory.

5. Commit per-chat, not per-session

Each chat produces its own commits. If Chat-B crashes, Chat-A and Chat-C's work is already committed and safe. Never batch all parallel work into one commit at the end.

Hard Rule
No two chats write to the same file. File collisions kill parallel work. If you discover two chats need to modify the same file, stop and restructure the ownership. The 5 minutes spent on reassignment saves an hour of untangling merge conflicts.
Example: 3-chat parallel build
Chat · Codename · Owns · Produces
A · DATA · knowledge/, output/data/ · Client profiles, extracted metrics, CSV exports
B · DESIGN · output/design/, templates/ · HTML templates, CSS, visual assets
C · DEPLOY · deploy/, dist/ · Build scripts, wrangler configs, live deploys

Main chat (you) reviews each chat's output, runs the integration step, and handles the final commit + deploy.

Multi-Chat Checkpoint

Prompts: Advanced Prompt Patterns

Good prompts produce good output. Great prompts produce output you can ship without editing. These 5 patterns are the difference between "close enough" and "exactly right."

Pattern 1: Compound Prompts

Stack role + task + constraints + output format into a single structured prompt. Each layer narrows the output space.

Compound Prompt
You are a senior business consultant writing for a non-technical restaurant owner.

Task: Analyse this restaurant's social media performance and write a 1-page summary.

Constraints:
- No jargon. Explain every technical term.
- Max 500 words.
- Focus on the 3 highest-impact fixes only.

Output format:
## Summary
[2-3 sentence overview]

## Top 3 Fixes
1. [Fix] — [Why it matters] — [Effort: Low/Med/High]
2. ...
3. ...

## Next Step
[One clear action the reader should take today]

Pattern 2: Chain-of-Thought Forcing

Force Claude to show its reasoning before giving an answer. This catches errors that skip-to-the-answer prompts miss.

Chain of Thought
Before answering, list your reasoning steps:
1. What do I know about this business?
2. What data am I missing?
3. What assumptions am I making?
4. What are the risks of each option?

Then give your recommendation with confidence level (HIGH / MEDIUM / LOW).

Pattern 3: Self-Critique Loops

Make Claude write a first draft, critique it against specific criteria, then produce the final version. The critique step catches 80% of quality issues.

Self-Critique
Write v1 of this proposal email.

Then critique v1 against these criteria:
- Is the subject line under 60 characters?
- Does the opening line reference something specific about the prospect?
- Is there exactly one CTA?
- Is the tone conversational, not salesy?
- Is it under 100 words?

Fix every issue found and write the final version.

Pattern 4: Structured Output

When you need machine-readable or consistently formatted output, specify the exact structure in the prompt.

Structured Output
Analyse this client call transcript and return as JSON:
{
  "summary": "2-3 sentence overview",
  "sentiment": "positive | neutral | negative",
  "action_items": [
    { "task": "description", "owner": "name", "deadline": "date or ASAP" }
  ],
  "follow_up_needed": true | false,
  "key_objections": ["objection 1", "objection 2"]
}
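On the consuming side, that contract maps straight onto a type. A TypeScript sketch; the interface mirrors the prompt above, and the naive JSON.parse is deliberate (add validation before trusting it in production):

call-analysis.ts (sketch)
interface ActionItem {
  task: string;
  owner: string;
  deadline: string; // "date or ASAP"
}

interface CallAnalysis {
  summary: string;
  sentiment: "positive" | "neutral" | "negative";
  action_items: ActionItem[];
  follow_up_needed: boolean;
  key_objections: string[];
}

// claudeOutput is the raw JSON string returned by the prompt above.
function parseAnalysis(claudeOutput: string): CallAnalysis {
  return JSON.parse(claudeOutput) as CallAnalysis;
}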

Pattern 5: Meta-Prompting

Instead of writing the prompt yourself, ask Claude to write the prompt. This works especially well for complex, multi-step workflows where you know the desired outcome but not the best way to instruct Claude.

Meta-Prompt
Write me a prompt that would produce a comprehensive competitor analysis report for a local service business. The prompt should:
- Be usable with any business niche
- Produce a structured, client-ready output
- Include specific data points to check
- Take under 10 minutes for Claude to execute
- Output as markdown with tables
Stack the Patterns
These patterns compose. A compound prompt with chain-of-thought forcing and structured output is more powerful than any single pattern alone. Start with compound prompts, add the others as your needs get more specific.
Prompt Patterns Checkpoint

Module 09: Showcases — What This Stack Actually Built

Theory is cheap. Here are 6 real patterns we have executed — genericised but real numbers.

1. Multi-Client Performance Sprint
9 clients · 4 dimensions · parallel agents · 90+ files produced · $15k/month in retained revenue

Audited 9 active clients across visibility, technical health, backlinks, and content in a single sprint. Each client received a full interactive HTML report deck deployed to a live URL.

2. Brief-to-Live Website
1 paragraph brief → live website → single prompt · potential value: $8k+ per project

From a one-paragraph business description to a fully deployed, mobile-responsive website with live URL. No manual HTML. No FTP. No hosting configuration.

3. Social Content Pipeline
Strategy → drafts → graphics → scheduling · 15 min/week · $8k-$15k/month service potential

Built a complete content pipeline for a restaurant chain: strategy framework, carousel templates, caption library, posting schedule, and reusable generation skills. Weekly execution takes one prompt.

4. Voice Clone Factory
60-second sample → brand voice → runs locally

Built a local voice-cloning pipeline with multiple engines. A 60-second audio sample becomes a reusable brand voice for landing pages, onboarding audio, and video narration.

5. Automated Outreach Engine
200+ leads/week · zero manual prospecting · $20k/month pipeline value

End-to-end lead generation: prospect identification, email verification, personalised sequence creation, campaign loading, and reply monitoring. Human only intervenes when a lead replies positively.

6. Knowledge Ingestion System
60+ books · 30+ courses · 8 podcasts → searchable knowledge base

Bulk-ingested an entire professional library into a structured, searchable knowledge base. Books transcribed, courses extracted, podcast feeds monitored — all feeding into a tracking system with auto-sync.

Common Mistakes: What Goes Wrong (And How to Fix It)

Everyone hits the same walls. Here are the 7 most common mistakes — most people discover these the hard way.

1. Running out of credits mid-task

What happens: You spawned 5 Opus agents and burned your monthly budget in an afternoon. The agents are running, the work is happening, and then everything stops with nothing committed to disk.

Fix: model routing doctrine. Haiku for bulk tasks, Sonnet for judgment calls, Opus only for deep planning and architecture. Before launching a parallel session, estimate the cost mentally: how many agents, which model, how long each will run.

Set a per-session mental budget before spawning agents. If Opus costs 15x more than Haiku, ask whether you actually need that depth of reasoning for this specific task — most of the time, you do not.

2. Two agents writing the same file

What happens: Last writer wins. Agent B finishes and writes its output to the same file Agent A was working on. Everything Agent A produced is silently overwritten. You do not notice until you look at the file and wonder where half the content went.

Fix: every agent gets a unique output file or directory. Before launching parallel agents, assign each one an explicit, non-overlapping output path. Agent A writes to output/agent-a/. Agent B writes to output/agent-b/. Never overlap.

Main Claude owns all shared surfaces: the memory index, the master plan file, the deliverables index, all git commits. Subagents own only their designated output directories.

3. Skipping the identity layer

What happens: You open Claude, start typing tasks, and get competent but generic output. Claude is writing for a hypothetical user, not for you and your specific business. Every session starts from zero context.

Fix: CLAUDE.md + memory before anything else. The first hour you invest in setting up your workspace identity — who you are, what you do, what your constraints are, what your preferences are — pays back in every single conversation from that point forward.

Write your CLAUDE.md. Start your memory files. Capture feedback when Claude gets something right or wrong. Within a week of consistent use, the output quality gap between a configured and unconfigured workspace is stark.

4. Treating Claude Code like ChatGPT

What happens: You type questions, Claude answers them, you copy the answers out. No files. No memory. No skills. You are using a command-line interface as a slightly faster web browser.

Fix: build skills for recurring tasks, use @file references, let Claude read your actual documents. Claude Code's power is that it lives inside your file system. It can read your client profiles, your templates, your data — without you pasting anything. It can write files directly to the right folders. It can run scripts.

If you find yourself copying and pasting the same instructions into Claude more than twice, that is a skill waiting to be written. If you find yourself pasting document content into the chat, that is an @file reference waiting to be used instead.

5. Not committing work mid-session

What happens: VS Code crashes. The context window compresses and the agent loses track of what it built. A background agent times out. You close the terminal by accident. If you have not committed, it is gone — or at best, scattered across unsaved buffers.

Fix: commit after every meaningful output. Small, frequent commits beat one giant commit at the end of a session. After an agent produces a file, commit it. After a key milestone, commit it. After a parallel batch completes, commit each agent's output separately.

The rule of thumb: if losing the work since your last commit would make you frustrated, it is time to commit now.

6. Scope creep in agent prompts

What happens: You give an agent a prompt that tries to do everything: "build the onboarding flow, update the CRM, draft the welcome sequence, create the tracking sheet, and document the process." The agent produces mediocre output across every task and great output on none of them.

Fix: one clear deliverable per agent. Narrow the scope, raise the quality. "Build the welcome email sequence for new clients, 3 emails, saved to templates/emails/onboarding/" is a better prompt than "handle all the onboarding stuff."

If you have 10 tasks, spawn 10 agents with 10 narrow briefs. You will get better output across all of them than one agent trying to juggle everything at once.

7. Skipping verification

What happens: The agent says "done." You mark the task complete. Three days later, a colleague tries to access the live URL and gets a 404. Or you send a report that has placeholder text still in it. Or the deployed script has a silent error that only surfaces under real data.

Fix: "done" means deployed and confirmed working, not "the agent said it finished." Curl every URL. Grep every file for placeholder text. Check every output against its acceptance criteria before marking the task complete.

Build verification into your workflows: add a QA step after every deploy, add a placeholder-check step after every document generation, add an HTTP 200 check after every site launch. The agent finishing is not the finish line — verified output is.

The Pattern
Most of these mistakes have the same root: moving fast without guardrails. The solution is not to slow down — it is to build the right defaults. Model routing, unique output paths, identity setup, skills, frequent commits, narrow briefs, and verification checks. Build these habits in your first two weeks and they protect you for years.
Common Mistakes Checkpoint

Module 10: The Frontier — Agent SDK, Swarms, What's Next

Everything above is what is working today. Here is what is coming — and how to position yourself to use it the moment it arrives.

The Claude Agent SDK

Anthropic's Agent SDK lets you build custom agents that run outside of Claude Code. Think of it as Claude-as-a-library: you write Python or TypeScript, call Claude's API, and build agents that run on your own infrastructure, on your own schedule, with your own tools wired in.

Python
# Illustrative sketch of the Agent SDK shape; check the current SDK docs
# for exact package names, imports, and signatures.
from anthropic import Agent

audit_agent = Agent(
    model="claude-sonnet-4-5-20250514",
    tools=[data_tool, crawl_tool, sheets_tool],
    system_prompt="You are a visibility audit specialist...",
)

result = audit_agent.run(
    "Audit example-business.com for online visibility"
)

Swarm Flow — Agents That Feed Each Other

A swarm is the end state: a network of agents where each agent's output is another agent's input. No human in the loop for routine operations.

Scraper Agent → Analysis Agent → Pitch Builder Agent → Email Drafter Agent → Campaign Agent → Follow-Up Agent → Calendar Agent → Pre-Call Brief Agent → YOU CLOSE
The 80/20 of Swarms
You do not need the full swarm to get value. Build one 3-agent pipeline that handles your most common workflow end-to-end. Most people will never build past 3 connected agents — and that is enough to 10x their output.

Astro — The Recommended Framework

When plain HTML is not enough but Next.js is overkill, Astro is the sweet spot. It is the framework that best fits how Claude Code builds things — clean, fast, and minimal configuration.

Framework · Best For · Claude Code Fit
Astro · Content sites, landing pages, blogs · Excellent — clean, fast, simple
Next.js · Complex web apps, dashboards · Good — more setup overhead
Plain HTML · One-off pages, presentation decks · Great — zero dependencies
Start With Astro
For most projects, start with Astro: npm create astro@latest. It outputs static HTML by default (perfect for Cloudflare Pages), supports components when you need them, and Claude Code generates clean Astro code with minimal hallucination. Use plain HTML for one-offs, Astro for anything you will maintain or expand.

The Vision

Today · 6 Months · 12 Months
You prompt Claude to do tasks · Agents handle routine tasks autonomously · Swarms run your entire delivery pipeline
You review every output · You review exceptions only · You review strategy only
1 Claude window at a time · 5-6 parallel chats routine · Agent fleet running 24/7
APIs called manually · Pipelines triggered by events · Self-healing systems with fallback chains
Memory across sessions · Shared memory across agent teams · Organisational knowledge graph

The tools exist today. The question is not "can we build this?" — it is "how fast can we wire it up?"

Final Checkpoint — AI Agents Running Your Business

Your Edge: Finding Your Edge — Making Your First Money with AI

Everything in this guide is a tool. Tools do not make money. You make money by applying tools to problems people will pay to solve. Here is how to find your edge and monetise it.

Audit What You Already Know

Your unfair advantage is not Claude Code — it is your domain expertise combined with Claude Code. A pure techie can build the tool. A pure industry expert can identify the problem. You can do both. That combination is rare and valuable.

Package It as a Service

Services sell faster than products. Do not build a SaaS on day one. Package your expertise + AI into a done-for-you service first: audits, reports, content pipelines, outreach campaigns, whatever deliverable your niche already pays for.

Price on Value, Not Hours

If Claude Code lets you deliver in 2 hours what used to take 20, do not charge for 2 hours. Charge based on the value the client receives. A competitive analysis that takes you 90 seconds to generate (but would take a consultant a full day) is worth what the consultant charges — or more, because you deliver faster.

Build Proof Fast

  1. Do 3 projects at cost. Pick 3 businesses in your target niche. Deliver the service for free or at cost. Focus on producing great results.
  2. Collect testimonials. After delivery, ask for a written or video testimonial. Specific results ("doubled our social media engagement in 3 weeks") beat generic praise.
  3. Raise the price. With 3 case studies in hand, price at full value. The proof does the selling.

The Unfair Advantage

Your Edge
Your domain expertise + AI tools = something no pure techie can replicate. A developer can build the automation. But they do not know which problems are worth solving, which clients will pay, or how to position the offer. You do. That knowledge, combined with the execution speed Claude Code gives you, is the moat.

The people who win with AI are not the ones who learn the most tools. They are the ones who apply the right tool to a real problem, fast, and get paid for it. Everything in this guide exists to make that cycle — identify problem, build solution, deliver result, collect payment — as short as possible.

Want a Custom AI Implementation Plan?

Book a call with Donal. In 30 minutes, you will get a personalised roadmap for integrating Claude Code into your specific business — including which APIs to connect, which skills to build first, and how to hit ROI in your first month.

Book Your Custom Plan Call →

Start with the free guides:

From Zero to Dangerous (Startup Guide)  ·  ChatGPT Migration Calculator