AI Usage

How AI was used in this project—workflows, trade-offs, and what actually happened

AI Tools Used

Cursor (Primary Development Interface)

Used Cursor as the primary development environment for this entire project.

Usage:

  • Codebase-aware conversations about system design and architecture
  • Direct code generation and edits with full file context
  • Rapid iteration on implementation without context switching
  • Documentation writing with awareness of existing project structure

ChatGPT / LLMs (Prompt Iteration & Reasoning)

Used ChatGPT for prompt refinement and strategic thinking before implementation.

Usage:

  • Iterating on complex prompts before giving them to Cursor
  • Reasoning through architectural decisions at a higher level
  • Simulating edge cases and failure modes
  • Challenging assumptions without committing to code

How AI Was Used in Practice

Codebase & System Analysis

Used AI to scan and analyze the project requirements and constraints.

Process: Fed the task document to Cursor and asked it to identify:

  • Unclear requirements that needed clarification
  • Performance bottlenecks to consider (200 RPS, sub-200ms p95)
  • Trade-offs between different architectural approaches
  • Risks in the implementation plan

Generated local _temp/*.md files with TODOs, risks, and decisions. These became the foundation for GitHub issues or implementation tasks.

Prompt Iteration as a First-Class Step

Before implementing complex features, used ChatGPT to refine prompts.

Example: When designing the personalization scoring algorithm:

  • Started with a vague prompt: "build a feed scoring system"
  • Iterated in ChatGPT to clarify: recency weights, category preferences, decay functions
  • Final prompt to Cursor included constraints, edge cases, and specific scoring formulas (see the sketch below)

Result: The first implementation was 80% correct, versus the typical 40% with rushed prompts.
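
To make that level of specificity concrete, here is a minimal sketch of the kind of scoring function the final prompt described: exponential recency decay blended with per-user category preference weights. The constants, field names, and helper names are illustrative assumptions, not the project's actual values.

```typescript
// Illustrative sketch only: exponential recency decay blended with a per-user
// category preference weight. Constants and field names are assumptions.

interface Story {
  id: string;
  categoryId: string;
  publishedAt: Date;
}

interface UserPreferences {
  // e.g. { sports: 0.8, politics: 0.2 }, normalized per user
  categoryWeights: Record<string, number>;
}

const HALF_LIFE_HOURS = 24;    // assumed recency half-life
const RECENCY_WEIGHT = 0.6;    // assumed blend between recency and preference
const PREFERENCE_WEIGHT = 0.4;

function scoreStory(story: Story, prefs: UserPreferences, now: Date = new Date()): number {
  const ageHours = (now.getTime() - story.publishedAt.getTime()) / 3_600_000;
  const recency = Math.pow(0.5, ageHours / HALF_LIFE_HOURS);         // 1.0 when brand new
  const preference = prefs.categoryWeights[story.categoryId] ?? 0.1; // fallback for unseen categories
  return RECENCY_WEIGHT * recency + PREFERENCE_WEIGHT * preference;
}
```

Iterating in ChatGPT was about pinning down exactly these details (half-lives, weight splits, fallback values for unseen categories) before asking Cursor to implement them.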

Visual Thinking & Documentation

Created structured documentation pages before implementation to reason about the system.

Workflow:

  • Asked AI to generate architecture flow diagrams (using React Flow)
  • Built data model ERD visualizations before writing schemas
  • Used these visuals to spot missing relationships and circular dependencies

Documentation wasn't an afterthought—it was a design tool. Writing it clarified thinking.

Pair Programming & Strategic Focus

AI acted as a pair programmer. Human focused on architecture; AI handled execution.

Division of labor:

Human Focus

  • Choosing PostgreSQL over NoSQL
  • Deciding on HTTP caching strategy
  • Defining personalization algorithm weights
  • Validating edge cases in scoring logic

AI Focus

  • Writing Express route handlers
  • Generating mock data structures
  • Implementing cache middleware (see the sketch after this section)
  • Creating Next.js page layouts

AI didn't make irreversible decisions. It executed on decisions that were already made.
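
As an example of the execution-heavy work handed off, here is a minimal sketch of what a cache middleware like the one listed above could look like in Express. The in-memory store, TTL, and route name are assumptions for illustration, not the project's actual implementation.

```typescript
import express, { Request, Response, NextFunction } from "express";

// Minimal in-memory TTL cache middleware, sketched as an example of the kind
// of mechanical code delegated to AI. Store, TTL, and keying are assumptions.
const cache = new Map<string, { body: unknown; expiresAt: number }>();

function cacheFor(ttlMs: number) {
  return (req: Request, res: Response, next: NextFunction) => {
    const key = req.originalUrl;
    const hit = cache.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      res.json(hit.body); // serve the cached body and skip the handler
      return;
    }
    // Wrap res.json so the response body is stored on the way out.
    const originalJson = res.json.bind(res);
    res.json = (body?: any) => {
      cache.set(key, { body, expiresAt: Date.now() + ttlMs });
      return originalJson(body);
    };
    next();
  };
}

const app = express();
app.get("/stories", cacheFor(60_000), (_req, res) => {
  res.json({ stories: [] }); // placeholder handler
});
```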

Challenging Assumptions

Actively asked the LLM to critique proposals and identify flaws.

Example conversation:

"I'm planning to cache entire feed responses at the HTTP layer. What am I missing?"

AI response: Pointed out that per-user personalization means feeds aren't cacheable across users. Suggested caching story data separately from personalization scores.

This feedback prevented a major architectural mistake before any code was written.
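
A rough sketch of the shape that suggestion points to: story data is cached once and shared across all users, while per-user scores are computed on each request and never cached across users. The cache key, TTL, and function names are assumptions.

```typescript
// Illustrative sketch of the split: story data is cached once for all users;
// personalization is applied per request on top of the cached copy.
// Cache key, TTL, and function names are assumptions.

interface Story {
  id: string;
  categoryId: string;
  publishedAt: Date;
}

const storyCache = new Map<string, { stories: Story[]; expiresAt: number }>();
const STORY_TTL_MS = 60_000; // assumed TTL for the shared story cache

async function getStories(fetchStories: () => Promise<Story[]>): Promise<Story[]> {
  const hit = storyCache.get("all-stories");
  if (hit && hit.expiresAt > Date.now()) return hit.stories; // shared across users
  const stories = await fetchStories();
  storyCache.set("all-stories", { stories, expiresAt: Date.now() + STORY_TTL_MS });
  return stories;
}

async function getPersonalizedFeed(
  userId: string,
  fetchStories: () => Promise<Story[]>,
  scoreForUser: (userId: string, story: Story) => number,
): Promise<Story[]> {
  const stories = await getStories(fetchStories); // cacheable layer
  // Per-user layer: computed on every request, never cached across users.
  return [...stories].sort((a, b) => scoreForUser(userId, b) - scoreForUser(userId, a));
}
```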

Edge Case Simulation

Used LLM to simulate failure modes and unusual scenarios.

Questions asked:

  • • "What happens if a user has no view history?"
  • • "How does scoring behave with 10,000 stories vs. 10 stories?"
  • • "What if all stories are from one publisher?"
  • • "What breaks if the cache is cold?"

Answers informed default values, fallback logic, and safeguards in the implementation.
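
For instance, the no-view-history question translated into explicit cold-start behavior. A minimal sketch of that kind of fallback, with hypothetical defaults and names:

```typescript
// Hypothetical sketch of the cold-start safeguard: a user with no view history
// gets neutral category weights instead of an empty profile, so the feed
// degrades to a recency-driven ranking rather than breaking.

interface ViewEvent {
  categoryId: string;
}

const DEFAULT_CATEGORY_WEIGHT = 0.5; // assumed neutral default

function buildCategoryWeights(history: ViewEvent[], categories: string[]): Record<string, number> {
  if (history.length === 0) {
    // Cold start: no signal, so every category is weighted equally.
    return Object.fromEntries(categories.map((c): [string, number] => [c, DEFAULT_CATEGORY_WEIGHT]));
  }
  const counts = new Map<string, number>();
  for (const event of history) {
    counts.set(event.categoryId, (counts.get(event.categoryId) ?? 0) + 1);
  }
  // Weight each category by its share of the user's view history.
  return Object.fromEntries(
    categories.map((c): [string, number] => [c, (counts.get(c) ?? 0) / history.length]),
  );
}
```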

Documentation & Commits

Let AI handle commit messages and documentation—low-risk, high-repetition work.

Rationale: Writing good commit messages takes mental energy. AI can do this competently without supervision. Frees human time for higher-leverage decisions.

Still reviewed outputs—occasionally AI missed context or wrote vague messages—but 90% were good enough as-is.

"Chat With the Codebase"

Used AI as a fast context-reconstruction layer when working across multiple files.

Example queries:

  • • "Where is the personalization scoring actually applied?"
  • • "What does the calculateScore function assume about input data?"
  • • "Which routes use the cache middleware?"

Faster than grepping through files manually when context is fragmented or comments are sparse.

Risk Assessment & Irreversibility

Before committing to architectural decisions, asked AI to identify risks and irreversible choices.

Example: When deciding on the data model:

"What are the irreversible decisions in this schema? What will be hard to change later?"

AI response: Pointed out that table partitioning strategy (by publisher) would be difficult to change with production data. Suggested documenting the trade-off and considering tenant_id indexing.

Risk analysis became part of development, not a post-mortem.

Parallelization of Tasks

Used multiple AI agents working on parallel tasks to avoid idle time waiting for sequential completion.

Approach: When tasks were independent, spun up parallel conversations with AI to work on multiple components simultaneously without overlap.

Examples:

  • One agent building the prototype API while another created documentation pages
  • One agent designing data schemas while another worked on the rollout plan
  • One agent implementing feed endpoints while another built UI components

Result: Tasks that would take 10+ hours sequentially were compressed into 4-5 hours of wall-clock time. Human coordinated the parallel streams and integrated outputs, but didn't sit idle waiting for one task to complete before starting the next.

Heavy Lifting

AI handled the majority of implementation work.

Honest assessment:

  • ~85% of code was AI-generated
  • ~10% was AI-generated then modified by human
  • ~5% was written entirely by human (specific edge cases, test validation logic)

Human focus was on steering strategic direction and creating tests to ensure everything worked as intended. The mechanical work—boilerplate, routes, components, schemas, documentation—was almost entirely AI.

This distribution worked because human effort went into high-leverage activities: architectural decisions, test coverage, and validation. AI executed the plan.

What Worked, What Didn't, and Team Usage

What Worked Well

Speed of Iteration

Could test architectural ideas in code within minutes. "What if we cache at this layer?" became a quick experiment instead of a 2-hour implementation commitment.

Better Upfront Decisions

Using AI to challenge assumptions before coding prevented at least two major refactors (caching strategy, data model design).

Reduced Cognitive Load

Didn't have to context-switch between "thinking about architecture" and "writing boilerplate." AI handled the mechanical work while human stayed focused on high-level decisions.

Cleaner Documentation

AI transformed rough notes into structured, readable documentation. Would normally rush this step—AI made it low-effort, so documentation quality improved.

Earlier Risk Detection

Explicitly asking "what could go wrong?" surfaced problems before they became costly. Risk assessment moved left in the development cycle.

What Didn't Work Well

Over-Trusting Early Outputs

First-pass AI code often looked good but missed edge cases. Had to learn to always validate logic, even when it "seemed right."

Needing Strong Prompts

Vague prompts produced shallow solutions. Good results required clear constraints, explicit edge cases, and concrete examples. Quality in = quality out.

Occasional Overconfidence

AI sometimes presented flawed approaches with high confidence. Had to develop a habit of questioning even confident-sounding responses.

Still Requiring Human Judgment

AI couldn't make strategic product decisions (which features to prioritize, which trade-offs to accept). Architecture still needed human ownership.

How I'd Think About AI on the Team

AI as Accelerator, Not Replacement

Position AI as a tool that makes engineers more productive, not a substitute for engineering judgment. Speed up execution; human owns decisions.

AI as Reasoning Partner

Use AI to externalize thinking—not as an authority. The value is in the conversation, not blind acceptance of outputs.

Standardize Workflows

Create shared prompt patterns (risk assessment, edge case simulation), documentation templates, and review checklists. Make good AI usage a team practice, not individual trial-and-error.

Human Ownership of Irreversible Decisions

Database schema design, API contracts, architectural patterns—anything hard to change later should have explicit human review. AI can inform, but human decides.

AI Ownership of Execution-Heavy Tasks

Boilerplate, mock data, route handlers, documentation formatting—let AI own these. Spot-check outputs, but don't micromanage.