
Prompts Are the New Architecture Decisions

The 5-step framework for shipping production code with AI agents: context, constraints, code review, iteration, and CI

Faizan Ahmed

February 6, 2026

The engineers who are shipping the fastest right now aren't the ones writing the most code. They're the ones who know how to direct AI agents effectively. They know what context to give, how to write constraints, and how to review the output.

This isn't a prompting tips article. This is the exact workflow I use every day to ship production code with AI, and it's the same workflow we use at Headstarter.

1. Give the Agent the Right Context

Here's what most people get wrong: they tell the AI what they want but never show it where the new code needs to fit.

An AI agent generates code in a vacuum unless you give it context. The quality of what it produces is directly tied to what you show it. Your type definitions, your existing patterns, your folder structure: that's what the agent needs to write code that actually works in your codebase.

With Headstarter, you @ specific files to bring them into the conversation. You're not just “showing the AI your code”: you're giving it your types, your interfaces, your middleware chain, so it generates code that respects your architecture. Here's the kind of starting file you'd bring in:

headstarter/web-app/middleware.ts

```typescript
import { NextRequest, NextResponse } from "next/server";

type MiddlewareHandler<T = unknown> = (
  req: NextRequest,
  context: { params: Record<string, string> }
) => Promise<NextResponse<T>>;

// TODO: Add auth middleware
// TODO: Add rate limiting

export function withLogging<T>(handler: MiddlewareHandler<T>) {
  return async (req: NextRequest, context: { params: Record<string, string> }) => {
    const start = performance.now();
    const res = await handler(req, context);
    console.log(`${req.method} ${req.url} ${performance.now() - start}ms`);
    return res;
  };
}
```

Think of it this way: choosing which files to @ is an engineering decision. Pick the files that define the contract your new code needs to follow: your interfaces, schemas, middleware, and the modules right next to where the new code will live. The agent picks up naming conventions, error-handling patterns, and type hierarchies from whatever you give it.

Simple rule: if you'd link to a file in a PR description for a teammate, @ it for the agent.

2. Be Specific in Your Prompts

Vague prompts produce vague code. It's that simple.

When you leave gaps in your prompt, the agent fills them with assumptions. And those assumptions almost never match what your system actually needs. The fix is to be explicit about your constraints: what types it should use, how errors should be handled, what it needs to compose with, and what the edge cases are.

Don't

Add authentication to the API routes

No constraints on auth strategy, token format, error responses, or rate limiting. The agent will guess, and it'll probably guess wrong.

Do

Add a withAuth<T> higher-order middleware to api/middleware.ts that verifies JWT from the Authorization header, injects TokenPayload into the handler context, returns 401 with { error: string } on failure, and composes with the existing RateLimiter

You told it the exact signature, where the token comes from, what errors look like, and what it needs to work with. The agent generates code that slots right into your existing middleware.

When I write a prompt, I think about four types of constraints:

  • Type constraints: what generics, return types, and interfaces should the code use?
  • Composition constraints: how does this code plug into what already exists? (middleware, hooks, providers)
  • Error semantics: what should happen when things fail? Be explicit about status codes and error shapes
  • Boundary conditions: what happens with empty arrays, null fields, abort signals? Tell the agent, or it'll skip them

Here's another example. Say you have a basic fetch wrapper and you want to make it production-ready:

Don't

Make the API calls more robust

"Robust" means nothing. The agent might add a try-catch, or retry logic, or validation, or caching — you won't know what you get until you read the diff.

Do

Replace fetchUserData with a generic fetchWithValidation<T> that accepts a Zod schema, AbortSignal, retry count with exponential backoff, and returns { data: T; stale: boolean } using stale-while-revalidate headers

Every design decision is in the prompt. The agent can't guess wrong because you didn't leave room for guessing.

Every phrase in that prompt plays a distinct role. Remove one, and the agent starts guessing in that exact area:

  • “Add a withAuth<T> higher-order middleware”: type constraint
  • “to api/middleware.ts”: file target
  • “that verifies JWT from the Authorization header, injects TokenPayload into the handler context”: behavior spec
  • “returns 401 with { error: string } on failure”: error semantics
  • “and composes with the existing RateLimiter”: composition

3. Actually Review the Code

This is where most people mess up. They accept the AI's output without actually reading it.

AI-generated code can look clean and compile perfectly but still break things the model didn't know about. You need to review it the same way you'd review a pull request from a new engineer on your team: someone skilled but unfamiliar with your system's assumptions.

Here's a diff where the agent generated a typed middleware chain with JWT verification and rate limiting. Walk through it:

middleware.ts (+26 -0)

```diff
 import { NextRequest, NextResponse } from "next/server";
+import { RateLimiter } from "@/lib/rate-limit";
+import { verifyJWT, TokenPayload } from "@/lib/auth";
+import { z } from "zod";

+const limiter = new RateLimiter({
+  windowMs: 60_000,
+  max: 100,
+  keyGenerator: (req) => req.headers.get("x-forwarded-for") ?? "anonymous",
+});

 type MiddlewareHandler<T = unknown> = (
   req: NextRequest,
+  context: { params: Record<string, string>; user?: TokenPayload }
 ) => Promise<NextResponse<T>>;

+export function withAuth<T>(handler: MiddlewareHandler<T>): MiddlewareHandler<T> {
+  return async (req, context) => {
+    const { success } = await limiter.check(req);
+    if (!success) return NextResponse.json(
+      { error: "Rate limit exceeded" } as T, { status: 429 }
+    );
+
+    const token = req.headers.get("authorization")?.replace("Bearer ", "");
+    if (!token) return NextResponse.json(
+      { error: "Unauthorized" } as T, { status: 401 }
+    );
+
+    const payload = await verifyJWT(token);
+    if (!payload) return NextResponse.json(
+      { error: "Invalid token" } as T, { status: 401 }
+    );
+
+    return handler(req, { ...context, user: payload });
+  };
+}
```

Here's what I check every time I review AI-generated code:

  • Type safety: Does the generic T flow through correctly? Are the error responses typed right?
  • Order of operations: Rate limiting runs before JWT verification (cheaper check first: that's correct). The token gets stripped of the “Bearer ” prefix before verification. Good.
  • Error handling: Three failure modes (rate limit, missing token, invalid token), three different status codes, all with typed error bodies. Clean.
  • Composition: The withAuth wrapper keeps the handler signature intact and extends the context instead of replacing it. That's what you want.
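The composition check is worth internalizing on its own. Here's a framework-free sketch of the property (the `Handler` type and `withUser` wrapper are illustrative names, not from the diff above): a wrapper should extend the context it passes down, never replace it.

```typescript
type Handler<C> = (ctx: C) => string;

// withUser supplies a `user` field while preserving every field already in ctx.
// Replacing the context with `{ user: "alice" }` alone would silently drop
// whatever earlier middleware put there.
function withUser<C extends object>(
  handler: Handler<C & { user: string }>
): Handler<C> {
  return (ctx) => handler({ ...ctx, user: "alice" });
}

// The inner handler can rely on both the original context and the added field.
const handler = withUser<{ requestId: string }>(
  (ctx) => `${ctx.requestId}:${ctx.user}`
);
```

If a generated wrapper compiles but replaces the context object, the type checker won't always save you; this is exactly the kind of thing the review step exists to catch.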

Don't just scan the diff passively. Step through it one concern at a time: type safety, ordering, error handling, and composition.

Take type safety first: the generic T flows from the withAuth<T> signature through MiddlewareHandler<T> to NextResponse<T>. The error responses use `as T` casts, a known trade-off for middleware that returns error shapes different from the success type. The handler context type is properly extended with an optional user field.
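To see why the `as T` cast is a trade-off rather than a mistake, here's a hypothetical alternative (the `respond` helper and `ErrorBody` type are illustrative names) that widens the response type so the error shape is visible to callers instead of being cast away:

```typescript
type ErrorBody = { error: string };

// Widened: the return type admits both the success shape and the error shape,
// so callers must narrow before touching fields. No cast needed.
function respond<T>(
  body: T | ErrorBody,
  status: number
): { body: T | ErrorBody; status: number } {
  return { body, status };
}

const ok = respond<{ name: string }>({ name: "ada" }, 200);
const fail = respond<{ name: string }>({ error: "Unauthorized" }, 401);
```

The cost of widening is that every caller narrows; the cost of casting is that the type lies on the error path. Either choice is defensible, but you should notice which one the agent made.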

Now look at this one: the agent replaced a basic untyped fetch wrapper with a validated fetcher that has retry logic:

api.ts (+35 -4)

```diff
 import { z, ZodSchema } from "zod";

+interface FetcherConfig<T> {
+  schema: ZodSchema<T>;
+  signal?: AbortSignal;
+  retries?: number;
+  backoff?: (attempt: number) => number;
+}

+export async function fetchWithValidation<T>(
+  url: string,
+  { schema, signal, retries = 3, backoff = (n) => 2 ** n * 100 }: FetcherConfig<T>
+): Promise<{ data: T; stale: boolean }> {
+  for (let attempt = 0; attempt < retries; attempt++) {
+    try {
+      const res = await fetch(url, {
+        signal,
+        headers: { "Cache-Control": "stale-while-revalidate=60" },
+      });
+
+      if (!res.ok) throw new Error(`HTTP ${res.status}`);
+      const raw = await res.json();
+      const data = schema.parse(raw);
+      const stale = res.headers.get("age") !== null;
+      return { data, stale };
+    } catch (err) {
+      if (signal?.aborted) throw err;
+      if (attempt === retries - 1) throw err;
+      await new Promise(r => setTimeout(r, backoff(attempt)));
+    }
+  }
+  throw new Error("Unreachable");
+}

 // -- Usage --
-export async function fetchUserData(userId: string) {
-  const response = await fetch(`/api/users/${userId}`);
-  return response.json();
-}
+export async function fetchUser(userId: string, signal?: AbortSignal) {
+  return fetchWithValidation(`/api/users/${userId}`, {
+    schema: UserSchema,
+    signal,
+  });
+}
```

Look at what the agent did: Zod validation at the network boundary, exponential backoff that respects abort signals (the signal?.aborted check in the catch block stops retrying immediately if the request was cancelled), and stale-while-revalidate caching. Every design choice traces back to a constraint from the prompt. That's the point: when you're specific, the output is auditable.
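The abort-aware retry pattern is worth extracting on its own. Here's a framework-free sketch (retryWithBackoff is a hypothetical helper; the signal parameter is structurally typed to match AbortSignal's `aborted` flag so the sketch runs anywhere):

```typescript
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  backoff = (n: number) => 2 ** n * 100,
  signal?: { aborted: boolean }
): Promise<T> {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Never retry a cancelled request, and surface the final failure.
      if (signal?.aborted || attempt === retries - 1) throw err;
      // Exponential backoff: 100ms, 200ms, 400ms, ... by default.
      await new Promise((r) => setTimeout(r, backoff(attempt)));
    }
  }
  throw new Error("unreachable");
}
```

The ordering of the two checks in the catch block is the whole pattern: abort wins over retry, and the last attempt's error propagates instead of being swallowed.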

4. Iterate, Don't Start Over

The first output is almost never production-ready. And that's fine.

Think of the first response as a draft. It gives you the structure. Then you refine it: add edge cases, extend the functionality, write tests. Each follow-up prompt should build on what's there, not rewrite it.

Headstarter Agent

You: Add a withAuth middleware to middleware.ts that verifies JWT and injects TokenPayload into the handler context. Compose it with the existing rate limiter.

Agent: I've added a generic withAuth<T> HOF that checks rate limits first, then verifies the JWT from the Authorization header. It returns typed error responses (429, 401) and extends the handler context with the decoded payload.

You: Good structure. Add a withRole('admin' | 'user') that composes on top of withAuth and returns 403 if the role doesn't match. Make sure it's type-safe: the handler should know the user is non-null.

Agent: I've added withRole as a higher-order function that narrows the context type. The handler receives { user: TokenPayload } (non-optional) since withAuth already validated it. Role mismatch returns 403 with the same error shape.

You: Now add unit tests for the middleware chain. Test: valid token + correct role passes, expired token returns 401, wrong role returns 403, rate-limited returns 429. Use vitest and mock the JWT verify function.

Agent: I've created middleware.test.ts with four test cases using vi.mock for the JWT verifier. Each test asserts the correct status code and error body shape. The rate limiter is mocked with a configurable threshold.
Notice the pattern: first prompt gets the core abstraction right. Second prompt extends it. Third prompt writes the tests. This is how you'd work with any engineer: interface first, implementation second, tests third.
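You can sketch the shape of those tests without any framework. Here's a toy version (all names here are illustrative, not the post's actual middleware) where the JWT verifier is "mocked" simply by passing a stub function:

```typescript
type Ctx = { user: string };

// A dependency-injected withAuth: the verifier comes in as a parameter,
// so tests can swap it for a stub with no mocking library at all.
function withAuth(
  verify: (token: string | null) => string | null,
  handler: (ctx: Ctx) => { status: number }
) {
  return (token: string | null): { status: number } => {
    const user = verify(token);
    if (!user) return { status: 401 };
    return handler({ user });
  };
}

// "Mock" the verifier: only the literal token "valid" resolves to a user.
const protectedRoute = withAuth(
  (t) => (t === "valid" ? "alice" : null),
  () => ({ status: 200 })
);
```

vi.mock does the same substitution at the module level; the assertions (valid token passes, bad or missing token gets 401) are identical either way.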

Here's how the codebase grows across those turns. Each turn builds on the last without rewriting what already works. After the first turn, middleware.ts looks like this:

middleware.ts (turn 1 of 3, branch: feat/auth-middleware)

```typescript
import { NextRequest, NextResponse } from "next/server";
import { verifyJWT, TokenPayload } from "@/lib/auth";

export function withAuth<T>(handler: MiddlewareHandler<T>) {
  return async (req: NextRequest, context: HandlerContext) => {
    const token = req.headers.get("authorization")?.replace("Bearer ", "");
    if (!token) return NextResponse.json(
      { error: "Unauthorized" } as T, { status: 401 }
    );
    const payload = await verifyJWT(token);
    if (!payload) return NextResponse.json(
      { error: "Invalid token" } as T, { status: 401 }
    );
    return handler(req, { ...context, user: payload });
  };
}
```

Pro tip: “Good structure” isn't just being polite. It tells the agent to keep what it wrote and build on top of it instead of starting over.

5. Ship It Through CI Like Everything Else

AI-generated code goes through the same pipeline as everything else. Type-checking, linting, tests, code review. The agent makes you faster at writing code: it doesn't replace your CI pipeline.

With Headstarter, you push changes as a commit or open a PR directly from the same interface. The diff you reviewed is exactly what gets pushed. No copy-pasting, no switching between tools.

The full workflow looks like this:

  1. Give context: @ the files the agent needs to see
  2. Be specific: types, how it composes, error handling, edge cases
  3. Review the diff: types, ordering, error handling, composition
  4. Iterate: extend, add tests, handle edge cases in follow-ups
  5. Push to CI: let your pipeline validate it like any other code

Try It Yourself

Here's the full loop in one place: write a prompt, review the diff, accept or reject. This is exactly how it works in Headstarter.

web-app/dashboard.tsx

```typescript
import { useState, useEffect } from "react";
import { fetchUserData, type UserData } from "@/lib/api"; // UserData import path assumed
import { Card } from "@/components/ui/card";
import { Skeleton } from "@/components/ui/skeleton"; // path assumed, mirrors the Card import

interface DashboardProps {
  userId: string;
  onNavigate: (path: string) => void;
}

export function Dashboard({ userId, onNavigate }: DashboardProps) {
  const [data, setData] = useState<UserData | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    async function load() {
      const result = await fetchUserData(userId);
      setData(result);
      setLoading(false);
    }
    load();
  }, [userId]);

  if (loading) {
    return <Skeleton className="h-48" />;
  }

  return (
    <div className="grid grid-cols-2 gap-4">
      <Card>
        <h3 className="text-lg font-medium">
          {data?.name}
        </h3>
      </Card>
    </div>
  );
}
```
Headstarter Agent

You: Add error handling to the data fetching logic

Agent: I'll add a try-catch block around the fetch call and display an error state when the request fails.

The Bottom Line

AI agents don't replace engineering judgment. They amplify it. The engineers who understand their systems, write specific prompts, and review code critically are going to ship 10x faster than the ones who treat AI like a black box.

Here's the framework:

  1. Give the right context: the agent's output is only as good as what you show it
  2. Be specific, not vague: types, composition, error handling
  3. Review every diff: types, error handling, edge cases
  4. Iterate, don't rewrite: each follow-up should have one job
  5. Ship through CI: your pipeline is the final check, not the agent

The engineers who figure this out aren't just faster. They're building things that would've been impossible for a single person to ship before. That's the real unlock.