The Problem
I wanted a place to document AI experiments — not a polished product blog, not a tutorial site, more like a lab notebook. Honest, casual, shows the mess. And since the first thing I was going to do was use AI to build things, the first experiment basically wrote itself: get Claude to build the blog, end-to-end, and document what happens.
The goal was simple: I describe what I want, Claude figures out the stack, builds it, deploys it, and the domain goes live. No manual steps if possible.
Spoiler: there were manual steps.
The Plan
Claude's suggested stack was solid from the start — Next.js with MDX files for posts, deployed to Vercel, custom domain via Cloudflare. No database, no CMS, no admin panel. Just markdown files in a folder and a build step. The content structure we landed on for each experiment:
- The Problem
- The Plan
- What Went Wrong (human side)
- The AI's Take (what Claude struggled with)
- How We Fixed It
- The Outcome
- The Bill (tokens, models, cost)
Clean, repeatable, honest. Good plan. Then we started executing.
What Went Wrong (Human Side)
First: the domain name. I told Claude the site was "osioperator.com". Claude started building, created the project folder as osioperator, spun up the whole Next.js scaffold — and I had to stop it mid-run to say actually it's ocoperator.com. Not osi. oc. The first tool call got cancelled and we restarted with the right name. Classic.
Second: I didn't have a GitHub account. The plan assumed I had one. I didn't. Had to go create one mid-session. Same for Vercel. So the "end-to-end deployment" had a few "hold on, let me go sign up for this" pauses baked in.
Third: I shared API tokens in the chat. Claude needed a GitHub token and a Vercel token to authenticate without interactive prompts. I generated both and just... pasted them straight into the conversation. Claude flagged this immediately — told me to revoke them and generate fresh ones after. Which is the right call. I'm mentioning it here because it's a real thing that happens and worth knowing: don't paste credentials into chat windows, even with an AI you trust.
The AI's Take
A few things genuinely tripped Claude up during this one.
Next.js 16 is newer than Claude's training data. The project scaffolded with Next.js 16.2.1, which Claude hadn't seen before. Before writing a single line of code, it went and read the actual docs bundled inside node_modules/next/dist/docs/. That's not a workaround — that's the right behaviour. It found that params in Next.js 16 is now a Promise (a breaking change from earlier versions), that Tailwind v4 uses a CSS-first config instead of tailwind.config.ts, and that the MDX setup had some new conventions. Reading first, coding second. This is how it should work.
Homebrew wasn't installed. Claude's first move to install the GitHub CLI was brew install gh. That failed — no Homebrew on this machine, and installing it requires sudo which requires an interactive password prompt that can't go through Claude Code. Plan B: find the latest gh binary release on GitHub, download it, unzip it, drop it into ~/.local/bin. First attempt 404'd because Claude guessed a version number (v2.67.0) that didn't exist. Second attempt hit the GitHub API to find the actual latest release (v2.89.0), grabbed the right arm64 zip, and installed it manually. Worked fine, just took two tries.
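That fallback install can be sketched as a short script. The asset-naming helper mirrors how cli/cli names its macOS release zips; the DO_INSTALL guard keeps the network steps opt-in, and the paths are assumptions from this run, not a canonical installer:

```shell
#!/bin/sh
# Install gh without Homebrew: ask the GitHub API which tag is actually
# latest (instead of guessing one), then fetch the matching arm64 zip.

# Build the download URL for a given tag and arch. cli/cli names its
# release assets gh_<version>_macOS_<arch>.zip (version = tag minus "v").
gh_asset_url() {
  tag="$1"; arch="$2"
  printf 'https://github.com/cli/cli/releases/download/%s/gh_%s_macOS_%s.zip' \
    "$tag" "${tag#v}" "$arch"
}

# Network steps are opt-in so the sketch is safe to source as-is.
if [ "${DO_INSTALL:-0}" = "1" ]; then
  tag=$(curl -fsSL https://api.github.com/repos/cli/cli/releases/latest \
        | grep -o '"tag_name": *"[^"]*"' | cut -d '"' -f 4)
  curl -fsSL -o /tmp/gh.zip "$(gh_asset_url "$tag" arm64)"
  unzip -q /tmp/gh.zip -d /tmp/gh
  mkdir -p "$HOME/.local/bin"
  mv /tmp/gh/gh_*/bin/gh "$HOME/.local/bin/gh"
fi
```

Querying releases/latest first is the whole fix: the 404 came from inventing a version string, and the API removes the guesswork.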
Interactive terminal commands don't work through Claude Code. Claude suggested running gh auth login, which normally opens a browser flow. But interactive prompts don't come back through the Claude Code interface — the command just hangs. So the whole browser-based login flow was a dead end. We switched to Personal Access Token auth instead: generate a token, pipe it into gh auth login --with-token. That worked.
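The token flow, sketched as a small wrapper. The function name and the GH_PAT variable are placeholders; --with-token is gh's real flag for non-interactive auth:

```shell
#!/bin/sh
# Non-interactive gh auth: no browser, no prompt that would hang in
# Claude Code. Takes a Personal Access Token and pipes it to gh.
gh_token_login() {
  token="$1"
  # Refuse an empty token up front rather than letting gh sit waiting.
  [ -n "$token" ] || { echo "gh_token_login: empty token" >&2; return 1; }
  printf '%s\n' "$token" | gh auth login --with-token
}

# Usage (token passed via an env var, never pasted into a chat window):
#   gh_token_login "$GH_PAT" && gh auth status
```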
The Vercel CLI needed three attempts to deploy. First attempt: a CLI flag parsing bug where --token value wasn't being read correctly. Second attempt: missing --scope flag — Vercel didn't know which team to deploy to. Third attempt: had to run vercel link first to connect the local project to the Vercel project, then vercel deploy --prod. Each failure gave a clear error with the exact next command to run, so it wasn't painful, just iterative.
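Stitched together, the sequence that finally worked looks roughly like this. The team slug is a placeholder, and the run wrapper only prints each command unless DO_DEPLOY=1, since a sketch shouldn't deploy anything by accident:

```shell
#!/bin/sh
# The order that worked on attempt three: link the local folder to the
# Vercel project first, then deploy, with --scope naming the team.
run() {
  echo "+ $*"                                    # show each step
  if [ "${DO_DEPLOY:-0}" = "1" ]; then "$@"; fi  # execute only when asked
}

VERCEL_SCOPE="${VERCEL_SCOPE:-my-team}"   # placeholder team slug
VERCEL_TOKEN="${VERCEL_TOKEN:-}"          # set via env, not on the CLI history

run vercel link --yes --scope "$VERCEL_SCOPE" --token "$VERCEL_TOKEN"
run vercel deploy --prod --scope "$VERCEL_SCOPE" --token "$VERCEL_TOKEN"
```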
The DNS check lied. After adding the A records in Cloudflare and waiting for propagation, Claude ran curl https://ocoperator.com — and it came back with "could not resolve host". But dig ocoperator.com showed the correct IP (76.76.21.21). The issue: the machine Claude runs on had a stale DNS cache that hadn't picked up the new record yet. Fix was curl --resolve "ocoperator.com:443:76.76.21.21" — force the connection to Vercel's IP while bypassing local DNS. Got a 200. Site was live, the local resolver just didn't know it yet.
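The check-and-bypass steps, as a sketch. The helper just formats curl's --resolve argument (host:port:address is the real syntax), and the network calls are gated behind DO_CHECK so nothing fires by default:

```shell
#!/bin/sh
# Confirm the site is live even when the local resolver cache is stale:
# pin the hostname to the known IP so curl skips DNS entirely.

# Format curl's --resolve argument: HOST:PORT:ADDRESS.
resolve_arg() { printf '%s:%s:%s' "$1" "$2" "$3"; }

if [ "${DO_CHECK:-0}" = "1" ]; then
  dig +short ocoperator.com                        # what DNS actually answers
  curl -sI --resolve "$(resolve_arg ocoperator.com 443 76.76.21.21)" \
       https://ocoperator.com | head -n 1          # status line from Vercel
fi
```

Comparing dig's answer against curl's failure is what localised the problem to the machine's own cache rather than the Cloudflare records.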
How We Fixed It (Each Thing)
- Wrong domain name → cancelled the first run, restarted with ocoperator
- No GitHub account → created one mid-session, same for Vercel
- No Homebrew → downloaded the gh binary directly from the GitHub releases API
- Interactive auth → switched to the Personal Access Token flow
- Vercel deploy failures → ran vercel link first, added --scope, then deployed
- DNS cache false negative → used curl --resolve to bypass the local cache and confirm the real answer
The Outcome
You're reading it. The blog exists, it's live at ocoperator.com, and every future experiment gets published here. Auto-deploy is connected to GitHub — whenever a new MDX file gets committed, Vercel rebuilds and it's live in under a minute.
The whole thing took about 90 minutes. Maybe 20 of that was actual build time. The rest was troubleshooting the deployment chain, waiting for accounts to be set up, and the occasional "wait, wrong domain" moment.
Not bad for experiment zero.