
First AI Collaboration

In the Quick Start, you set up a project and generated a requirements document. Now let’s go all the way — from a one-sentence idea to a fully tested, production-ready codebase.

Project: Short-Link — a public URL shortening service (like bit.ly)
AI Tool: Claude Code (the same workflow applies to Cursor and OpenCode)
Time: ~60 minutes across all phases


OpenLogos follows a strict WHY → WHAT → HOW progression. Each phase reads the previous phase’s output, so context accumulates — no “vibe coding”, no guesswork.

| Phase | Name | Skill | Output |
|-------|------|-------|--------|
| 1 | Requirements | prd-writer | Scenarios + acceptance criteria |
| 2 | Product Design | product-designer | Feature specs + HTML prototypes |
| 3-0 | Architecture | architecture-designer | Tech stack + system diagram |
| 3-1 | Scenario Modeling | scenario-architect | Sequence diagrams per scenario |
| 3-2 | API + DB Design | api-designer + db-designer | OpenAPI spec + SQL schema |
| 3-3 | Test Case Design | test-writer | Unit tests + scenario tests |
| 3-4 | Code Generation | — | Business code + test code |
| 3-5 | Verification | openlogos verify | Gate 3.5 PASS / FAIL |

Let’s walk through each one.


Prompt: Help me write requirements

The AI loads the prd-writer Skill, reads the project config, then asks you a few key questions about the product — positioning, core features, scope decisions.

AI loads prd-writer Skill, asks questions, and generates a 239-line requirements document

After two rounds of Q&A (about 3 minutes of human input), the AI produces a 239-line requirements document with:

  • Product positioning — “A public URL shortening service for general users”
  • 4 pain points (P01–P04) — long URLs break in chat, no click visibility, etc.
  • 4 business scenarios with priorities:
| ID | Scenario | Priority |
|----|----------|----------|
| S01 | Create and share a short link | P0 |
| S02 | Visit a short link and be redirected | P0 |
| S03 | Check link click analytics | P0 |
| S04 | Manage existing links (edit/deactivate/delete) | P1 |

  • Key design decisions — anonymous links via management token, custom slugs, no expiration in Phase 1
  • Won’t-Do list — user login, custom domains, QR codes, bulk creation

The document is saved to logos/resources/prd/1-product-requirements/01-requirements.md.


Prompt: Help me create the product design based on requirements

The AI loads the product-designer Skill, reads the requirements, determines this is a Web Application with 4 scenarios, and produces two types of output:

AI loads product-designer Skill and starts writing feature specs and prototypes

A 303-line feature spec covering interaction flows, UI component specs, state transitions, and error handling for all 4 scenarios:

Feature specifications document in the document viewer

Each scenario has:

  • Interaction flow — step-by-step user journey
  • UI component specifications — type, behavior, validation rules
  • Acceptance criteria — normal + exception paths

Four clickable HTML prototypes — one per scenario — with working buttons, form validation, modals, and state switching:

Interactive prototype for S04: Manage Link

The prototypes are real HTML files you can open in any browser. They demonstrate the exact user experience before a single line of business code is written.

Phase 2 complete summary — 1 feature spec + 4 prototypes + information architecture

Phase 2 deliverables:

  • 1-feature-specs/01-feature-specs.md — 303-line feature specification
  • 2-page-design/01-homepage-prototype.html — S01 Create Short Link
  • 2-page-design/02-error-pages-prototype.html — S02 Error Pages
  • 2-page-design/03-analytics-prototype.html — S03 Analytics Dashboard
  • 2-page-design/04-manage-prototype.html — S04 Manage Link

Prompt: Help me design the technical architecture

The AI loads the architecture-designer Skill, reads both the requirements and the product design, asks one question about tech preferences, then produces the architecture:

AI designs architecture — Monolithic Full-Stack Next.js

Pattern: Monolithic Full-Stack (Next.js)

The rationale is documented: focused utility product, 4 scenarios, no complex background processing, small team (1–3 engineers) — a single deployable unit with zero operational complexity.

The architecture document also updates logos-project.yaml with the confirmed tech stack:

tech_stack:
  language: "TypeScript 5.x"
  framework: "Next.js 15 (App Router)"
  ui: "React 19 + Tailwind CSS 4"
  database: "PostgreSQL (Neon serverless)"
  orm: "Drizzle ORM"
  cache: "Upstash Redis"
  test: "Vitest"

Architecture diagram with Browser, Next.js Application, and Data Layer

The diagram shows the three-layer structure: Browser → Next.js (Page Components + Route Handlers + Edge Middleware) → Data Layer (PostgreSQL + Redis Cache).


Prompt: Help me model business scenarios

The AI loads the scenario-architect Skill and generates detailed sequence diagrams for all 4 scenarios. Each scenario becomes a step-by-step technical specification with exact API calls, data flows, cache strategies, and exception paths.

5 scenario documents generated with sequence diagrams and API surface

| File | Contents |
|------|----------|
| 00-scenario-overview.md | Scenario map, dependency graph, API surface index |
| S01-create-short-link.md | 14-step sequence diagram + 6 exception cases |
| S02-redirect.md | 2 sequence diagrams (cache hit + cache miss) + 4 exception cases |
| S03-analytics.md | 2 sequence diagrams (initial load + polling) + 4 exception cases |
| S04-manage-link.md | 4 sub-flow sequence diagrams (load/edit/toggle/delete) |

S02 sequence diagram — Visitor → Edge Middleware → Redis Cache → PostgreSQL

The sequence diagram shows 10 steps across 4 participants (Visitor/Browser, Edge Middleware, Redis Cache, PostgreSQL), with detailed step descriptions including cache strategy, async click recording via waitUntil(), and HTTP 302 redirect semantics.

  • S01 — Token pattern: crypto.randomBytes → bcrypt hash, plain token returned once and never stored
  • S02 — HTTP 302 not 301: destinations can change (S04 edit), so permanent-cache 301 is wrong
  • S02 — Async click recording: INSERT fires via waitUntil() after the redirect
  • S04 — Cache DEL on every mutation: explicit invalidation so the next visitor sees the correct state
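The S02 decisions above can be sketched as a single resolution function. This is an illustrative stand-in, not the generated code: `resolveRedirect` and `LinkRecord` are hypothetical names, and plain `Map`s play the roles of Redis and PostgreSQL so the sketch is self-contained.

```typescript
// Illustrative stand-ins for Redis, the links table, and the clicks table.
type LinkRecord = { destinationUrl: string; isActive: boolean };

const cache = new Map<string, LinkRecord>();   // "Redis"
const db = new Map<string, LinkRecord>();      // "PostgreSQL links table"
const clicks: string[] = [];                   // "clicks table"

type RedirectResult =
  | { status: 302; location: string }
  | { status: 404 }
  | { status: 410 };

function resolveRedirect(slug: string): RedirectResult {
  // 1. Cache hit: serve straight from the cache.
  let link = cache.get(slug);
  if (!link) {
    // 2. Cache miss: fall back to the database, then warm the cache.
    link = db.get(slug);
    if (!link) return { status: 404 };
    cache.set(slug, link);
  }
  // 3. Deactivated links return 410 Gone instead of redirecting.
  if (!link.isActive) return { status: 410 };
  // 4. Record the click after responding (standing in for waitUntil()).
  queueMicrotask(() => clicks.push(slug));
  // 5. 302, not 301 — the destination may change later via S04.
  return { status: 302, location: link.destinationUrl };
}
```

In the real project this logic lives in Edge Middleware, with the click INSERT deferred via `waitUntil()` so the visitor never waits on analytics.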

6 endpoints are identified and feed directly into Phase 3-2:

| Method | Endpoint | Scenario |
|--------|----------|----------|
| POST | /api/links | S01 |
| GET | /:slug (Edge Middleware) | S02 |
| GET | /api/links/:slug | S04 |
| PATCH | /api/links/:slug | S04 |
| DELETE | /api/links/:slug | S04 |
| GET | /api/links/:slug/analytics | S03 |
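The S01 token pattern (random token, hash stored, plain token returned once) can be sketched as below. This is a self-contained illustration, not the project's code: the project hashes with bcrypt, while this sketch substitutes Node's built-in scrypt so it runs without dependencies, and `generateToken`/`verifyToken` are hypothetical names.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Generate a management token at link creation (S01 Steps 7–8).
// The plain token is returned to the user exactly once; only the
// hash is ever persisted (scrypt here stands in for bcrypt).
function generateToken(): { plain: string; hash: string } {
  const plain = randomBytes(32).toString("base64url");
  const salt = randomBytes(16);
  const derived = scryptSync(plain, salt, 32);
  return { plain, hash: `${salt.toString("hex")}:${derived.toString("hex")}` };
}

// Verify a presented token against the stored hash on every
// management request (S03/S04).
function verifyToken(plain: string, storedHash: string): boolean {
  const [saltHex, derivedHex] = storedHash.split(":");
  const derived = scryptSync(plain, Buffer.from(saltHex, "hex"), 32);
  return timingSafeEqual(derived, Buffer.from(derivedHex, "hex"));
}
```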

Prompt: Help me design the API spec, then Help me design the database schema

The AI loads api-designer and db-designer Skills in sequence, reading the scenario sequence diagrams to produce a complete OpenAPI specification and database schema.

API documentation showing POST /api/links with full request/response schema

Every endpoint includes:

  • Request parameters and body schema
  • Response schemas with field descriptions and examples
  • Error response specifications
  • Source traceability back to scenario steps (e.g., “Source: S01 Step 3 → Step 13”)
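To make the endpoint structure concrete, here is a hypothetical fragment of what the `POST /api/links` entry might look like in OpenAPI form. The field names, constraints, and response codes are assumptions inferred from the schema table and scenarios, not a copy of the project's actual spec file.

```yaml
paths:
  /api/links:
    post:
      summary: Create a short link   # Source: S01
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [destination_url]
              properties:
                destination_url: { type: string, format: uri }
                slug: { type: string, minLength: 3, maxLength: 50 }
      responses:
        "201":
          description: Link created; management_token returned once
        "409":
          description: Slug already taken
```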

Database schema for the links table with 6 fields

The links table schema is derived directly from the API spec and scenario requirements:

| Field | Type | Notes |
|-------|------|-------|
| slug | VARCHAR(50) | PK, 3-50 chars [a-zA-Z0-9-] |
| destination_url | TEXT | Valid http/https URL |
| management_token | TEXT | bcrypt hash, plain token never stored |
| is_active | BOOLEAN | Default TRUE, controls 302 vs 410 |
| created_at | TIMESTAMPTZ | Default now() |
| updated_at | TIMESTAMPTZ | Default now() |

A separate clicks table tracks analytics with referrer, country, and user_agent fields.
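The `slug` and `destination_url` constraints in the table above translate directly into input validation. A minimal sketch, assuming hypothetical helper names (`isValidSlug`, `isValidDestination`) that are not from the generated code:

```typescript
// Slug constraint from the links schema: 3-50 chars of [a-zA-Z0-9-].
const SLUG_PATTERN = /^[a-zA-Z0-9-]{3,50}$/;

function isValidSlug(slug: string): boolean {
  return SLUG_PATTERN.test(slug);
}

// destination_url must be a well-formed http/https URL.
function isValidDestination(url: string): boolean {
  try {
    const parsed = new URL(url);
    return parsed.protocol === "http:" || parsed.protocol === "https:";
  } catch {
    return false;
  }
}
```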


Prompt: Help me design test cases

The AI loads the test-writer Skill and produces two levels of test specifications:

Unit test cases for S02 with source traceability

Each unit test case traces back to its source — the specific line in the OpenAPI spec or scenario document that requires this test:

  • UT-S02-01 — “Slug below minimum length” → Source: openapi.yaml → SlugPath: minLength: 3
  • UT-S02-04 — “Active link returns HTTP 302 (not 301)” → Source: S02-redirect.md → Step 10 note
  • UT-S02-06 — “Successful redirect inserts one row in clicks” → Source: S02-redirect.md → Step 9/5

Scenario test cases with happy paths, exception paths, and coverage validation

Scenario tests cover end-to-end flows:

  • Happy paths — cache hit and cache miss redirect flows
  • Exception paths — slug not found (404), inactive link (410), Redis unavailable (fallback to DB), click insert failure (redirect still delivered)
  • Coverage validation — every acceptance criterion, every exception case in the sequence diagrams is mapped to at least one test

The test spec ensures 100% coverage of design documents before any code is written.
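A traceable unit test in this style might look like the sketch below. It is illustrative only: the project uses Vitest, so a tiny inline `test` helper stands in for the runner, and `slugLengthOk` is a hypothetical validator, not a function from the generated code.

```typescript
// Minimal stand-in for a test runner (the project uses Vitest).
function test(name: string, fn: () => void): void {
  fn();
  console.log(`PASS ${name}`);
}

// Hypothetical validator under test.
function slugLengthOk(slug: string): boolean {
  return slug.length >= 3 && slug.length <= 50;
}

// UT-S02-01 — "Slug below minimum length"
// Source: openapi.yaml → SlugPath: minLength: 3
test("rejects slug below minimum length", () => {
  if (slugLengthOk("ab")) throw new Error("2-char slug must be rejected");
});
```

The point is the comment trail: every case names its ID and the exact spec line that requires it, so a failing test points straight back to the design document.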


Prompt: Help me implement S01

Now comes the moment where all the accumulated context pays off. The AI reads everything — requirements, design, architecture, sequence diagrams, API spec, DB schema, and test cases — then produces an implementation plan:

AI creates a step-by-step implementation plan with file tree

The AI doesn’t guess — it references specific documents as it writes each file:

AI reads DB schema and writes Drizzle ORM schema.ts matching it exactly

Notice how the AI explicitly states: “First, let me read the DB schema to match it exactly” — then generates schema.ts that mirrors logos/resources/database/schema.sql field by field.

Every source file includes traceability comments pointing back to the design documents:

Generated token.ts with source traceability comments

/**
 * Management token utilities.
 *
 * Sources:
 *  - S01 Steps 7–8 — token generation at link creation
 *  - S03/S04 — token verification on every management request
 */

This is the opposite of “vibe coding” — every function exists because a scenario step required it, and you can trace the chain: Scenario → Sequence Diagram → API Spec → Code.


Prompt: /openlogos:verify

The final phase runs all tests and validates coverage against the design documents:

openlogos verify shows Gate 3.5 PASS — 111 cases, 100% coverage

| Metric | Value |
|--------|-------|
| Total defined | 111 cases (79 UT + 32 ST) |
| Total executed | 111 / 111 |
| Passed | 86 |
| Failed | 0 |
| Skipped | 25 |
| Coverage | 100% |
| Pass rate | 77% (86/111) |

The 25 skipped cases are scenario tests (ST-S01 through ST-S04) that require a live Next.js server + database + Redis — they’re intentionally marked skip and covered by the orchestration JSON files in logos/resources/scenario/.

Gate 3.5: PASS — all test cases are covered.

All phases complete — every phase shows ✅

🎉 All Phases Complete!
Phase 1 · Requirements ✅ 01-requirements.md
Phase 2 · Product Design ✅ Feature specs + 4 HTML prototypes
Phase 3-0 · Architecture ✅ Architecture overview
Phase 3-1 · Scenario Modeling ✅ Sequence diagrams for S01–S04
Phase 3-2 · API + DB Design ✅ OpenAPI YAML + SQL schema
Phase 3-3a · Test Case Design ✅ Test specs for S01–S04
Phase 3-3b · Orchestration ✅ JSON orchestration for S01–S04
Phase 3-4 · Code + Test Code ✅ Business code + test code with reporter
Phase 3-5 · Test Acceptance ✅ 100% coverage, Gate 3.5 PASS

In about 60 minutes of human involvement (mostly answering questions), you went from “A URL shortening service” to:

  • 239-line requirements document with 4 scenarios and acceptance criteria
  • 303-line feature specification with interaction flows and UI components
  • 4 interactive HTML prototypes you can click through in a browser
  • 257-line architecture document with system diagram and tech stack rationale
  • 5 scenario documents with 22+ sequence diagrams and 18 exception cases
  • OpenAPI specification with 6 endpoints, 4 schemas, full examples
  • Database schema with 2 tables, indexes, and constraints
  • 111 test cases (79 unit + 32 scenario) with full source traceability
  • Production-ready code — scaffolded Next.js project with all route handlers, DB layer, and tests
  • Gate 3.5 PASS — 100% coverage of design documents

Every artifact is a plain text file in logos/resources/. Every decision is documented. Every line of code traces back to a scenario step. If a new developer joins tomorrow, they can read the full design chain and understand why every piece of code exists.


  1. Activate change management

    openlogos launch

    This enables the Delta workflow — any future modification goes through a structured change proposal in logos/changes/ before implementation. The AI performs impact analysis across all layers to ensure nothing falls out of sync.

  2. Run orchestration tests against a live server

    Deploy locally or to Vercel, then execute the S01–S04 JSON orchestration tests to cover the 25 currently-skipped scenario tests with real HTTP requests against a live DB + Redis.

  3. Explore real project tours

    See how this methodology plays out on real projects:

    • FlowTask — a Rust/Tauri desktop app built with Claude Code
    • Money-Log — an Electron app built with OpenCode
  4. Dive deeper into concepts