AI · April 6, 2026 · 9 min read

I Built My Entire Portfolio With AI

How I used Claude Opus 4.6 to build a production-ready portfolio site from scratch -- including GEO optimization, structured data, a blog engine, and a 20-section writing playbook. A behind-the-scenes look at AI-driven web development in 2026.


I didn't write a single line of code for this website. Not the portfolio pages, not the GEO optimizations, not the blog you're reading right now. Every commit, every component, every JSON-LD schema -- all of it was written by Claude Opus 4.6 in a single 90-minute session.

That's not a flex. It's an observation about where we are in 2026 -- and a blueprint you can follow to do the same thing.

TL;DR

This site was built end-to-end by AI using a structured workflow: brainstorming, design specs, implementation plans, and parallel subagent execution. The result is a Next.js 16 portfolio with full SEO, structured data, a complete MDX blog engine, and a GEO audit baseline of 36/100 with a clear path to 70+. The key insight: AI works best when you treat it as a junior engineer who needs clear specs, not a magic box you prompt once.

[Image: Development workflow from idea to ship]

Why I Let AI Build Everything

I've been a software engineer for 10 years. I've built 12 microservices, 100+ APIs, and two products from scratch. I know how to build websites.

But here's the thing: I didn't want to spend a weekend on my portfolio. I wanted to test a hypothesis -- can AI handle an entire project, from architecture decisions to production deployment, if you give it the right structure?

The answer is yes, with caveats. And those caveats are the interesting part.

The Workflow That Made It Work

I used three key tools working together:

  1. Claude Opus 4.6 (1M context) as the orchestrator and primary engineer
  2. Superpowers -- a skill system that enforces structured workflows (brainstorming before coding, specs before implementation, TDD)
  3. Subagent-driven development -- dispatching fresh AI agents per task, with code review between each

Here's how it actually played out.

Phase 1: The GEO Audit That Exposed Everything

Before adding the blog, I ran a full Generative Engine Optimization audit on my existing site. GEO measures how well AI systems (ChatGPT, Claude, Perplexity, Google AI Overviews) can discover, understand, and cite your content.

The results were brutal.

[Image: GEO audit results showing a score of 36/100]

Overall GEO Score: 36/100 (Poor). The breakdown:

Category | Score
--- | ---
AI Citability | 32/100
Brand Authority | 18/100
Content E-E-A-T | 44/100
Technical GEO | 66/100
Schema & Structured Data | 5/100
Platform Optimization | 12/100

The Schema score of 5 out of 100 was the wake-up call. My site had zero JSON-LD structured data. No Person schema, no WebSite schema, no ProfilePage. AI systems couldn't build an entity graph for "Alexandre El Khoury" -- I was effectively invisible to AI search.

💡 GEO (Generative Engine Optimization) is the practice of optimizing web content so AI systems can discover, understand, and cite it. Sites optimized for GEO see 30-115% more visibility in AI-generated responses.

Phase 2: Fixing What AI Found

Claude immediately fixed the critical issues it discovered:

  • Added Person + ProfilePage + WebSite JSON-LD to the homepage, linking my identity to LinkedIn and GitHub via sameAs
  • Created llms.txt -- the emerging standard for helping AI systems understand site structure
  • Fixed the broken OG image (it was returning 404 on every social share)
  • Added canonical tags to every page
  • Updated robots.txt with explicit AI crawler directives for GPTBot, ClaudeBot, PerplexityBot
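To make the first fix concrete, here is a minimal sketch of the kind of Person schema described above. The URLs and profile links are placeholder assumptions, not the site's actual values, and the real implementation also emits ProfilePage and WebSite schemas:

```typescript
// Minimal Person JSON-LD sketch. The domain and sameAs URLs below are
// illustrative assumptions -- substitute your real profiles.
const personSchema = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Alexandre El Khoury",
  jobTitle: "Senior Software Engineer",
  url: "https://example.com", // assumed domain
  sameAs: [
    "https://www.linkedin.com/in/example", // assumed profile URLs
    "https://github.com/example",
  ],
};

// In a Next.js page, this is typically embedded as:
// <script type="application/ld+json"
//   dangerouslySetInnerHTML={{ __html: JSON.stringify(personSchema) }} />
console.log(JSON.stringify(personSchema, null, 2));
```

The `sameAs` array is what lets AI systems connect the site to external profiles and build an entity graph for the name.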

These fixes took about 5 minutes. The Schema score alone would jump from 5 to roughly 40 just from the Person schema.
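The AI-crawler directives might look like the fragment below. The post does not show the actual file, so treat this as an illustrative shape (the sitemap URL is a placeholder):

```text
# robots.txt -- explicit directives for AI crawlers (illustrative)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```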

Phase 3: Designing the Blog

This is where the Superpowers workflow proved its value. Instead of jumping straight into code, the system enforced a structured brainstorming phase.

Claude asked me questions one at a time:

  • How do you want the writing experience to feel? Markdown files in the repo, commit and push.
  • What posting cadence? Weekly, with categories and tags.
  • Navigation structure? Top-level /blog route.
  • Category taxonomy? Fixed categories + freeform tags.
  • Rich content needs? MDX -- I want interactive components in posts.

Each answer narrowed the design space. After 5 questions, Claude proposed three architecture approaches with trade-offs. I picked next-mdx-remote + file system. Then it presented the full design section by section, getting my approval at each checkpoint.

The result was a complete spec document before a single line of code was written.

Phase 4: The 20-Section Writing Playbook

This is the part I'm most proud of, and I didn't write any of it.

I asked Claude to research blog writing best practices -- not casually, but deeply. It ran 16 parallel web searches across content strategy, writing psychology, SEO, and social media promotion, synthesizing insights from sources like Backlinko, Ahrefs, Copyhackers, Buffer, and the writing processes of developers like Josh Comeau and Shawn "swyx" Wang.

[Image: Preview of the RULES.md writing playbook]

The result is a RULES.md file that lives in the content/blog/ directory -- a 20-section playbook covering:

  • Hook formulas -- 6 types that work for technical content, with examples
  • The inverted pyramid + narrative hybrid structure for AI-era blog posts
  • SEO checklist -- title tags, meta descriptions, header hierarchy, featured snippet optimization
  • AI citability rules -- how to write "quotable paragraphs" that AI systems extract and cite
  • Platform-specific promotion -- LinkedIn (link in comments, not post body!), Reddit (90/10 rule), X threads, Hacker News
  • Content repurposing pipeline -- how one blog post becomes 10 pieces of social content over 30 days

Every blog post on this site -- including the one you're reading -- follows this playbook.

The RULES.md approach is something anyone can replicate. Put your writing standards in a file that AI can reference. Claude reads it before every post and follows the rules automatically.

Phase 5: Subagent-Driven Implementation

Here's where it got fast. The implementation plan had 13 tasks. Instead of executing them sequentially in one context, Claude dispatched fresh subagents for each task -- isolated AI agents that each received only the context they needed.

[Image: Parallel subagent execution for the blog implementation]

The subagent workflow:

  1. Orchestrator reads the plan, extracts task details, and dispatches an implementer subagent
  2. Implementer writes the code, runs tests, commits
  3. Orchestrator verifies the output and moves to the next task

Tasks 2-5 (utility module, MDX config, custom components, sample post) went to one subagent. Tasks 6-7 (the two main page components) each got their own. Tasks 8-12 (config, nav, sitemap, RSS, llms.txt) were batched into another.
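The orchestrator loop above can be sketched as follows. Every name here (`Task`, `dispatchSubagent`, `runPlan`) is invented for illustration; the real system dispatches Claude subagents with isolated context, not local function calls:

```typescript
// Hypothetical sketch of the orchestrator workflow: dispatch a fresh
// implementer per task, verify its output, then move to the next task.
interface Task {
  id: number;
  description: string;
}

type TaskResult = { taskId: number; ok: boolean };

// Stand-in for "dispatch an implementer subagent with only the context
// it needs" -- here it simply simulates a successful implementation.
function dispatchSubagent(task: Task): TaskResult {
  return { taskId: task.id, ok: true };
}

function runPlan(tasks: Task[]): TaskResult[] {
  const results: TaskResult[] = [];
  for (const task of tasks) {
    // 1. Extract task details and dispatch an implementer.
    const result = dispatchSubagent(task);
    // 2. Verify the output before moving to the next task.
    if (!result.ok) throw new Error(`Task ${task.id} failed review`);
    results.push(result);
  }
  return results;
}

const plan: Task[] = [
  { id: 1, description: "utility module" },
  { id: 2, description: "MDX config" },
];
console.log(runPlan(plan).length); // prints 2
```

The point of the structure is the review gate between tasks: a failed verification stops the run instead of compounding errors downstream.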

Total implementation time: about 20 minutes for 13 tasks producing 10 new files and 6 modified files.

Phase 6: The Build Fix

Not everything went smoothly. The first build failed with three issues:

  1. border-accent isn't a valid Tailwind class -- the site uses custom CSS variables, not Tailwind's color system for the accent color
  2. TypeScript strict mode caught a nullable parameter the subagent missed
  3. React version mismatch -- next-mdx-remote@6 requires React 19 for RSC support, but the project was on React 18

Claude diagnosed each issue, upgraded React to 19 and Next.js to 16, fixed the CSS to use raw rgb(var(--accent)) values, and got the build passing.
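The CSS fix can be illustrated with a minimal fragment. The selector is invented, and `rgb(var(--accent))` assumes `--accent` stores bare RGB channels (e.g. `59 130 246`):

```css
/* Instead of a nonexistent Tailwind class like `border-accent`,
   reference the site's custom CSS variable directly. */
.callout {
  border-color: rgb(var(--accent));
}
```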

This is the part that matters: AI-generated code will have bugs. The workflow handles this by treating build verification as an explicit task, not an afterthought.

The Blog Architecture

The blog engine that powers this post is straightforward but well-optimized:

[Image: MDX blog architecture showing the processing pipeline]

  • Content: .mdx files in content/blog/ with YAML frontmatter
  • Processing: gray-matter parses metadata, next-mdx-remote compiles MDX server-side, rehype-pretty-code handles syntax highlighting with theme-aware dark/light support
  • Custom components: <Callout> (the info/warning/tip boxes you see in this post) and <LinkCard> for rich link previews
  • SEO: Every post auto-generates Article JSON-LD, OG tags, Twitter cards, and canonical URLs from frontmatter alone
  • Discovery: Dynamic sitemap, RSS feed at /blog/feed.xml, and llms.txt all auto-include new posts

Writing a new post means creating one .mdx file and pushing. Everything else is automatic.
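The frontmatter-driven pipeline can be sketched in a few lines. The real site uses gray-matter and next-mdx-remote; this stdlib-only toy parser only illustrates the idea, and the frontmatter field names are assumptions:

```typescript
// Simplified sketch of the frontmatter -> metadata pipeline. gray-matter
// handles full YAML robustly; this toy parser covers simple key: value
// pairs only.
const post = `---
title: I Built My Entire Portfolio With AI
date: 2026-04-06
description: A behind-the-scenes look at AI-driven web development.
---

I didn't write a single line of code for this website...`;

function parseFrontmatter(src: string): {
  data: Record<string, string>;
  content: string;
} {
  const match = src.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { data: {}, content: src };
  const data: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, content: src.slice(match[0].length) };
}

const { data } = parseFrontmatter(post);

// Derive Article JSON-LD from the frontmatter alone, as described above.
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: data.title,
  datePublished: data.date,
  description: data.description,
};
console.log(articleSchema.headline);
```

Because every piece of metadata derives from the same frontmatter, adding a post really is just one file: OG tags, JSON-LD, sitemap, and RSS entries all follow from it.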

What I'd Do Differently

Three things:

1. Start with React 19 from the beginning. The React 18 -> 19 upgrade mid-build was the biggest time sink. If you're starting a new Next.js project in 2026, use React 19 from day one.

2. Run the GEO audit before designing the site. I ran it after the portfolio was already built. If I'd run it first, the structured data and AI-optimization would have been baked into the initial architecture instead of bolted on.

3. Write the RULES.md before the first post, not during. Having the writing playbook ready before content creation means every post starts from a strong baseline. We got lucky that this was part of the design phase, but on a different project it might not be.

The One Big Idea

AI doesn't replace engineering judgment. It amplifies it.

The reason this worked wasn't because Claude Opus is magic. It worked because the Superpowers workflow enforced discipline: brainstorm before building, write specs before code, review after implementation, verify before shipping. The AI followed a process that produces good software regardless of who -- or what -- is writing the code.

If you give AI a vague prompt, you get vague output. If you give it a spec, a plan, and a review process, you get production-quality code in a fraction of the time.

That's the real lesson from building this site. Not "AI can write code" -- we knew that. The lesson is: the engineering process matters more than ever when AI is doing the engineering.

Frequently Asked Questions

How much did it cost to build this site with AI?

The entire session -- GEO audit, blog design, RULES.md research, and full implementation -- cost approximately $21 in API usage for Claude Opus 4.6 and took about 90 minutes of wall clock time. That covers 3,500+ lines of code added across the session.

Can I use this same workflow for my own project?

Yes. The Superpowers skill system is available as a Claude Code plugin. The workflow (brainstorm -> spec -> plan -> subagent execution) works for any software project, not just portfolios. The key is enforcing the structure rather than jumping straight to "write me code."

Does the blog have good SEO out of the box?

Every post auto-generates complete metadata from frontmatter: title tags, meta descriptions, canonical URLs, Open Graph tags, Twitter cards, and Article JSON-LD schema. The site also has a dynamic sitemap, RSS feed, and llms.txt for AI discoverability. The RULES.md playbook covers on-page SEO best practices for every post.

Alexandre El Khoury

Senior Software Engineer

Senior Software Engineer with 9+ years building scalable backend systems. Expert in Go, TypeScript, and cloud infrastructure. Currently at Weaviate.