Building a Behavioral Interview Prep Site with an AI Coding Agent

Over the years I’d been through enough interview cycles to accumulate a solid stash of behavioral questions, notes, and frameworks. Problem was, they were scattered across files with zero structure. Every time prep season rolled around, I’d spend more time hunting for notes than actually using them. So I did what any engineer would do in 2026: handed the problem to an AI coding agent.

Here’s how it went.

The Starting Point

The raw material: a list of categories, a dump of behavioral questions for engineering manager and technical leadership interviews, and some partial notes on my own responses. My goal was to organize it properly and make it easy to recall when it actually mattered, and to share it with the community.

Iteration 1: Structured Markdown

The first thing I asked Cursor (with Claude under the hood) to do was organize the questions into categories and create individual markdown files for each one.

The agent produced 14 categories (Leadership & Management, Technical Leadership, Hiring & Team Building, and so on) with 117 questions spread across them. Each question got its own file with the following sections (a sketch of one file follows the list):

  • Why this is asked – the intent behind the question
  • Key points – what interviewers are actually looking for
  • STAR template – a fill-in-the-blank structure (Situation / Task / Action / Result)
  • Tips – common mistakes to avoid
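Concretely, a question file ends up looking roughly like this. The question wording and body text here are placeholders, but the heading structure and the qNN_snake_case file name follow the repo's convention:

  q01_prioritize_tasks.md

  # How do you prioritize competing tasks across your team?

  ## Why this is asked
  ...

  ## Key points
  ...

  ## STAR template
  Situation: ... Task: ... Action: ... Result: ...

  ## Tips
  ...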

A good start, but I wanted more.

Iteration 2: Example Responses and Company Context

I asked the agent to add two more sections to every question:

  1. An example response – a full, realistic STAR-method answer written as if by a senior EM
  2. Companies known to ask this – which companies ask this question or a variant, and what they’re looking for

The company research required actual web lookups. The agent used search tools to pull patterns from Glassdoor, Blind, and engineering blogs to populate these tables. 117 questions, all enriched. That would have taken me weeks manually.

Iteration 3: A Real Website

A markdown folder is useful. A website you can share is better.

I asked for a GitHub Pages site. The agent scaffolded an Astro project with Tailwind CSS (a combination I’d already landed on for this blog) and built out:

  • A landing page with a STAR method explainer and category grid
  • Dynamic category pages listing all questions
  • Individual question pages that parse the markdown into sections and render each with its own styling

The content pipeline was interesting: Astro reads the markdown files from the parent repo at build time, parses them into sections using a custom parseSections function, and renders each section with its own card style. No CMS, no database, just files.
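In sketch form, the pipeline is: glob the question files at build time, split each one on its ## headings, and hand the resulting sections to the card components. The paths and details below are illustrative (the real parseSections handles a few more cases), but the shape is this:

// Sketch of the build-time content pipeline (illustrative glob path).
// Load every question file from the parent repo as raw markdown.
const files = import.meta.glob('../../../**/q*.md', {
  query: '?raw',
  import: 'default',
  eager: true,
}) as Record<string, string>;

export interface Section {
  title: string;
  body: string;
}

// Split a question file on its "## " headings; each section becomes a card.
export function parseSections(markdown: string): Section[] {
  const sections: Section[] = [];
  let current: Section | null = null;
  for (const line of markdown.split('\n')) {
    if (line.startsWith('## ')) {
      if (current) sections.push(current);
      current = { title: line.slice(3).trim(), body: '' };
    } else if (current) {
      current.body += line + '\n';
    }
  }
  if (current) sections.push(current);
  return sections;
}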

One issue that came up: Astro’s import.meta.env.BASE_URL returns /interview-prep without a trailing slash, which caused all internal links to break on the deployed site (URLs like /interview-prepcategory/... instead of /interview-prep/category/...). The fix was a one-liner added to every page:

const base = import.meta.env.BASE_URL.replace(/\/?$/, '/'); // normalize to exactly one trailing slash

Small bug, easy fix once identified. The kind of thing you’d spend an hour debugging alone.
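For context, BASE_URL comes from the base option in the Astro config, which for a GitHub Pages project site looks roughly like this (the site URL is a placeholder):

// astro.config.mjs (sketch): the base option is what import.meta.env.BASE_URL reflects.
import { defineConfig } from 'astro/config';

export default defineConfig({
  site: 'https://example.github.io',  // placeholder: your GitHub Pages origin
  base: '/interview-prep',
});

With the normalized base, internal links are built as ${base}category/${slug}/ and resolve correctly both locally and under the GitHub Pages subpath.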

Iteration 4: Privacy, GDPR, and Analytics

Since I wanted this to be a properly built site, not just a quick demo, I added a full GDPR consent layer: a cookie banner with Accept / Reject / Manage options, a ConsentManager dialog using the native HTML <dialog> element (no library needed), a consent.ts library with versioned localStorage persistence, and Privacy Policy and Terms pages.
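The versioned-persistence part is small. Here is a condensed sketch of the idea (field and key names are illustrative, not the exact consent.ts):

// consent.ts (sketch): store the choice together with a version number so a
// future policy change can invalidate previously saved decisions.
const STORAGE_KEY = 'cookie-consent';
const CONSENT_VERSION = 1;

export interface Consent {
  version: number;
  necessary: true;      // strictly necessary, always on
  analytics: boolean;
  marketing: boolean;
}

export function loadConsent(): Consent | null {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    if (!raw) return null;
    const stored = JSON.parse(raw) as Consent;
    // A stale version means the policy changed since the user decided: ask again.
    return stored.version === CONSENT_VERSION ? stored : null;
  } catch {
    return null;
  }
}

export function saveConsent(analytics: boolean, marketing: boolean): Consent {
  const consent: Consent = { version: CONSENT_VERSION, necessary: true, analytics, marketing };
  localStorage.setItem(STORAGE_KEY, JSON.stringify(consent));
  return consent;
}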

One non-obvious detail: Google Fonts makes an external request on page load, which technically constitutes a data transfer without consent. The fix was to drop it entirely and self-host Inter using @fontsource-variable/inter. Zero third-party requests on first load.
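In practice that is roughly a one-line change in the base layout plus a font-family update (a sketch; Fontsource's variable builds register the family as 'Inter Variable'):

// Base layout frontmatter (sketch): bundle Inter locally instead of calling Google Fonts.
import '@fontsource-variable/inter';
// Global CSS then points at the self-hosted family:
// body { font-family: 'Inter Variable', system-ui, sans-serif; }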

GA4 tracking followed the same pattern: wired to the consent system so the script never loads until the user explicitly accepts analytics cookies. GA is injected dynamically, never present in the HTML, never loaded without consent.
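The loader itself is only a few lines. A sketch of the consent-gated injection (the measurement ID is a placeholder, the import path is hypothetical, and loadConsent is the helper sketched above):

// Sketch: inject GA4 only after the user has opted in to analytics.
import { loadConsent } from './consent';   // hypothetical path

const GA_ID = 'G-XXXXXXXXXX';              // placeholder measurement ID

export function loadAnalytics(): void {
  const consent = loadConsent();
  if (!consent?.analytics) return;         // no consent, no script

  const w = window as typeof window & { dataLayer: unknown[] };
  w.dataLayer = w.dataLayer || [];
  // Standard gtag bootstrap: GA expects the arguments object on the dataLayer.
  const gtag: (...args: unknown[]) => void = function () {
    w.dataLayer.push(arguments);
  };
  gtag('js', new Date());
  gtag('config', GA_ID);

  const script = document.createElement('script');
  script.async = true;
  script.src = `https://www.googletagmanager.com/gtag/js?id=${GA_ID}`;
  document.head.appendChild(script);
}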

Iteration 5: A Cursor Skill for Writing Your Own Responses

The question files give you structure and context. But the actual work of writing your own STAR stories from experience is still manual. I wanted to make that easier too.

I built an AI skill that ships with the repo. You give it rough notes, it writes the response. The same skill works in both Cursor and Claude Code; the SKILL.md format is identical between the two, so both are included in the repo under .cursor/skills/ and .claude/skills/ respectively.

The trigger is simple:

create STAR response for @q01_prioritize_tasks.md – my notes: [your rough notes here]

The skill reads the question file to understand what the interviewer is actually looking for (the intent, the key points, the pitfalls) and uses that context to shape your rough notes into a clean first-person STAR response. It writes directly to the _response.md file, which is gitignored and stays local to your machine. Run it again on the same question with a different story and it appends the new story as ## Example 2 rather than overwriting.

Here’s a real example. My rough notes were:

“I was balancing two competing AI projects: one in my home org, one in a cross-functional org with higher exec visibility. I had to carve out shared work since the codebase was the same, set resource commitments with both sets of stakeholders, and manage expectations on both sides.”

The skill turned that into a structured Situation / Task / Action / Result response, with coaching notes on what metrics to add and which follow-up angles to prepare for.

The skill file itself is under 100 lines of markdown: no code, no scripts, just clear instructions for the agent on what to read, how to derive the output path, and how to write the response. It’s a good example of how AI skills work across tools: you describe the workflow once, and both Cursor and Claude Code can execute it from the same file.
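Condensed and paraphrased (not the verbatim file, and the skill name here is made up), the shape is:

  ---
  name: star-response
  description: Turn rough notes into a STAR response for a behavioral question file
  ---

  When asked to create a STAR response for a question file:

  1. Read the referenced question file, paying attention to "Why this is asked" and "Key points".
  2. Derive the output path by replacing .md with _response.md (kept local via .gitignore).
  3. Shape the user's notes into a first-person Situation / Task / Action / Result draft.
  4. If a response already exists, append the new story as "## Example 2" instead of overwriting.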

AI vs Me

What the AI did

Speed. Scaffolding, content generation, and site setup that would have taken several weekends happened in a single session. The agent held the full project context (file naming conventions, markdown structure, Astro routing patterns) and applied it consistently across 117 files.

Research. For the company-specific sections, it used web search to pull real data. Not perfect, but directionally accurate and far faster than doing it manually.

What I did

The starting structure. I created the initial set of categories and gave the agent examples of questions I’d actually been asked; that seed shaped everything downstream.

Bug fixes. AI-generated code isn’t bug-free. I caught and fixed a fair number of issues along the way.

The actual content. The responses are personal β€” stories from my own experience. Those live in *_response.md files that are gitignored and never checked in. That part is still very much a manual effort.

The Setup

The site is open source and MIT licensed.

If you want to fork it for your own prep:

  1. Fork the repo
  2. Replace the question content with your own (or keep it as-is)
  3. Add your GA Measurement ID to site/src/components/GoogleAnalytics.astro
  4. Push – GitHub Actions handles the rest

The *_response.md files for personal answers are gitignored by default. They never leave your machine.

Closing Thought

The most useful thing the AI did wasn’t generate code; it was eliminate the gap between having an idea and having a working thing. That gap is usually where projects die.

The questions are still mine to answer.
