What This Template Is For
A technical specification translates product requirements into an engineering plan. It sits between the PRD and the first pull request. The PRD defines what to build and why. The tech spec defines how to build it, what trade-offs are involved, and what the system will look like when it ships.
Without a written tech spec, architecture decisions happen inside pull request reviews, dependencies surface mid-sprint, and testing coverage is an afterthought. The spec forces engineers and PMs to agree on scope, interfaces, and risks before code is written. This is not about process for the sake of process. It is about catching the expensive mistakes when they are still cheap to fix.
This template is designed for mid-to-large features that touch multiple services, require new data models, or introduce external dependencies. For smaller changes (a single endpoint, a UI tweak), a brief write-up in the ticket is sufficient. If you are evaluating whether a feature is worth building at all, start with the RICE framework to score it against alternatives. For the broader delivery process, the Technical PM Handbook covers how tech specs fit into planning and execution cycles.
How to Use This Template
- Start after the PRD is approved but before sprint planning begins. The author should be the engineering lead or the senior engineer who will own the implementation.
- Fill in the Context section by copying the problem statement and goals directly from the PRD. Do not rewrite them. The spec should reference the PRD, not replace it.
- Draft the Architecture section with a diagram or written description of the system changes. Keep it at the level of services and data flows, not classes and methods.
- Define API contracts with enough detail that frontend and backend engineers can work in parallel. Include request/response schemas, status codes, and authentication.
- Document data model changes as schema diffs. Flag any migrations that require downtime or backfill scripts.
- List all dependencies, both internal (other teams, shared services) and external (third-party APIs, infrastructure changes).
- Write the testing strategy before implementation starts. This prevents the common failure mode where tests are "planned" but never written because the sprint ran out of time.
- Share the draft with the full engineering team, the PM, and the tech lead for review. Use the Open Questions section to track unresolved decisions.
The Template
Context and Scope
| Field | Details |
|---|---|
| Feature Name | [Name] |
| PRD Link | [Link to approved PRD] |
| Author | [Engineer name] |
| Reviewers | [Names] |
| Date | [Date] |
| Status | Draft / In Review / Approved / Implemented |
Summary. [1-2 sentences describing what this spec covers. Reference the PRD for full product context.]
Goals.
- [Goal 1 from the PRD]
- [Goal 2 from the PRD]
Non-goals.
- [What this spec explicitly does not cover]
- [Adjacent work being handled separately]
Architecture Overview
Current state. [Describe how the system works today in the area this feature touches. Include a simple diagram if helpful.]
Proposed changes. [Describe the high-level architecture of the new system. What services are added, modified, or removed? How does data flow through the system?]
[ASCII diagram or link to architecture diagram]
Example:
Client --> API Gateway --> Notification Service --> Message Queue
                                                --> Database
                                                --> Push Provider (FCM/APNs)
Key design decisions.
| Decision | Chosen Approach | Rationale | Alternatives Considered |
|---|---|---|---|
| [Decision 1] | [What we chose] | [Why] | [What else we considered] |
| [Decision 2] | [What we chose] | [Why] | [What else we considered] |
API Contracts
Endpoint 1: [Method] [Path]
| Property | Value |
|---|---|
| Method | GET / POST / PUT / DELETE |
| Path | /api/v1/[resource] |
| Auth | Bearer token / API key / None |
| Rate Limit | [Requests per minute] |
Request body:
{
  "field_1": "string (required)",
  "field_2": 123,
  "field_3": true
}
Response (200):
{
  "id": "uuid",
  "field_1": "string",
  "created_at": "2026-03-04T00:00:00Z"
}
Error responses:
| Status | Code | Description |
|---|---|---|
| 400 | INVALID_REQUEST | [When this occurs] |
| 401 | UNAUTHORIZED | [When this occurs] |
| 404 | NOT_FOUND | [When this occurs] |
| 429 | RATE_LIMITED | [When this occurs] |
[Repeat for each endpoint]
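A contract written at this level of detail can also be checked mechanically. The sketch below validates a response payload against the documented shape using plain dictionaries; it is a minimal illustration, not a recommendation of any particular schema library, and the field names mirror the template placeholders above.

```python
# Minimal contract check for the 200 response documented above.
# The required-field map mirrors the template's placeholder schema.
def validate_response(payload: dict) -> list[str]:
    """Return a list of contract violations (empty means the payload conforms)."""
    errors = []
    required = {"id": str, "field_1": str, "created_at": str}
    for field, expected_type in required.items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field} should be {expected_type.__name__}")
    return errors

# A payload missing created_at fails the check.
violations = validate_response({"id": "uuid", "field_1": "hello"})
```

Frontend and backend teams can share a check like this in CI so that contract drift is caught before integration, which is the whole point of specifying the schemas up front.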
Data Model Changes
New tables:
CREATE TABLE [table_name] (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    [column_1] VARCHAR(255) NOT NULL,
    [column_2] INTEGER DEFAULT 0,
    [column_3] JSONB,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
CREATE INDEX idx_[table]_[column] ON [table_name]([column_1]);
Schema changes to existing tables:
| Table | Change | Migration Type | Downtime Required |
|---|---|---|---|
| [table] | Add column [name] | Additive | No |
| [table] | Add index on [column] | Background | No |
| [table] | Backfill [column] | Script | No (async) |
Data migration plan. [Describe any backfill scripts, their estimated runtime, and rollback strategy.]
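Backfill scripts are easiest to review when they follow a standard resumable-batch shape. The sketch below shows that shape: process rows in fixed-size chunks keyed on the last-seen id so the script can be stopped and restarted, and each transaction stays small. `fetch_batch` and `apply_batch` are hypothetical stand-ins for real database calls.

```python
# Sketch of a batched, resumable backfill loop. Progress is keyed on the
# last processed id rather than an offset, so a restart does not re-scan
# rows that were already handled.
def run_backfill(fetch_batch, apply_batch, batch_size=1000):
    """Repeatedly fetch rows after the last-seen id and apply the change.

    Returns the total number of rows processed.
    """
    last_id, processed = None, 0
    while True:
        rows = fetch_batch(after_id=last_id, limit=batch_size)
        if not rows:
            return processed
        apply_batch(rows)
        last_id = rows[-1]["id"]
        processed += len(rows)

# Example with an in-memory table standing in for the database:
table = [{"id": i} for i in range(2500)]

def fetch_batch(after_id, limit):
    start = 0 if after_id is None else after_id + 1
    return table[start:start + limit]

applied = []
count = run_backfill(fetch_batch, applied.extend, batch_size=1000)
```

The spec's migration plan should state the batch size, the estimated total runtime at that batch size, and what "rollback" means for rows the script has already touched.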
Dependencies
Internal dependencies:
| Team/Service | What We Need | Status | ETA |
|---|---|---|---|
| [Team/Service] | [Description] | Not started / In progress / Ready | [Date] |
External dependencies:
| Vendor/Service | What We Need | Fallback Plan |
|---|---|---|
| [Vendor] | [API, SDK, or infrastructure] | [What happens if unavailable] |
Testing Strategy
| Test Type | Scope | Owner | Estimated Effort |
|---|---|---|---|
| Unit tests | [What they cover] | [Name] | [Days] |
| Integration tests | [What they cover] | [Name] | [Days] |
| Load tests | [What they cover] | [Name] | [Days] |
| Manual QA | [What they cover] | [Name] | [Days] |
Critical test scenarios:
- ☐ [Scenario 1: Happy path]
- ☐ [Scenario 2: Error handling]
- ☐ [Scenario 3: Edge case]
- ☐ [Scenario 4: Performance under load]
- ☐ [Scenario 5: Rollback verification]
Rollout Plan
| Phase | Audience | Duration | Success Criteria | Rollback Trigger |
|---|---|---|---|---|
| Canary | 1% of traffic | 24 hours | [Metrics] | [Threshold] |
| Beta | 10% of traffic | 1 week | [Metrics] | [Threshold] |
| GA | 100% | Permanent | [Metrics] | N/A |
Feature flag. [Flag name, configuration, and who controls it]
Monitoring. [Dashboards, alerts, and on-call expectations during rollout]
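Percentage-based phases like the ones in the rollout table are usually implemented with stable hash bucketing: a user's bucket is derived from a hash of the flag name and user id, so the same user stays enrolled as the percentage increases. The sketch below is a generic illustration of that mechanism, assuming SHA-256; it is not the implementation of any particular flag vendor.

```python
import hashlib

# Stable percentage bucketing: hash(flag:user) -> bucket in 0..9999,
# giving 0.01% granularity. Because the bucket is deterministic, raising
# the rollout percentage only ever adds users, never swaps them out.
def in_rollout(flag: str, user_id: str, percent: float) -> bool:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000
    return bucket < percent * 100

# Moving from 1% to 10% keeps every 1% user enrolled.
canary = {u for u in map(str, range(10000)) if in_rollout("my_flag", u, 1)}
beta = {u for u in map(str, range(10000)) if in_rollout("my_flag", u, 10)}
```

Whatever flag system you use, the spec should record the property this sketch demonstrates: rollout membership must be sticky, or your canary metrics will be measuring a churning population.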
Open Questions
| # | Question | Owner | Status | Decision |
|---|---|---|---|---|
| 1 | [Question] | [Name] | Open | |
| 2 | [Question] | [Name] | Open | |
Timeline
| Milestone | Target Date | Dependencies |
|---|---|---|
| Spec approved | [Date] | Reviewer availability |
| Implementation start | [Date] | Spec approval |
| Integration testing | [Date] | All endpoints complete |
| Canary deploy | [Date] | Tests passing |
| GA release | [Date] | Canary success |
Filled Example: In-App Notification System
Context and Scope
| Field | Details |
|---|---|
| Feature Name | In-App Notification System |
| PRD Link | PRD-2026-018 |
| Author | Alex Rivera, Senior Backend Engineer |
| Reviewers | Sarah Kim (PM), Marcus Chen (Tech Lead), Jordan Lee (Frontend) |
| Date | March 2026 |
| Status | In Review |
Summary. Build a real-time notification system that delivers in-app, email, and push notifications for user-relevant events (mentions, task assignments, status changes, comments). See PRD-2026-018 for full product requirements.
Goals.
- Deliver notifications within 2 seconds of the triggering event
- Support 3 channels: in-app (bell icon), email digest, and mobile push
- Allow users to configure per-channel preferences for each notification type
Non-goals.
- SMS notifications (evaluated and deferred to Q4)
- Notification analytics dashboard (separate spec planned for Q3)
- Marketing/promotional notifications (handled by the marketing automation system)
Architecture Overview
Current state. The application has no centralized notification system. Email alerts are sent synchronously from 4 different services using direct SMTP calls. There is no in-app notification UI. Push notifications are not supported.
Proposed changes. Introduce a Notification Service that acts as the central hub for all notification delivery. Event producers publish to a message queue. The Notification Service consumes events, applies user preferences, renders templates, and dispatches to the appropriate channel.
Event Producers (5 services)
        |
        v
Message Queue (SQS)
        |
        v
Notification Service
  |---> In-App Store (PostgreSQL) --> WebSocket --> Client (bell icon)
  |---> Email Renderer --> SES --> User inbox
  |---> Push Dispatcher --> FCM/APNs --> Mobile device
Key design decisions.
| Decision | Chosen Approach | Rationale | Alternatives Considered |
|---|---|---|---|
| Message queue | SQS Standard | Already in our stack, sufficient throughput, lower ops cost | Kafka (overkill for our volume), Redis Streams (less durable) |
| Real-time delivery | WebSocket via existing Socket.io server | Reuse existing infra; client libraries already integrated | Server-Sent Events (simpler but no existing infra), polling (too slow) |
| Preference storage | JSONB column on users table | Flexible schema for adding new notification types without migrations | Separate preferences table (more rigid), user service API (extra hop) |
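The JSONB preference decision above implies a channel-resolution step inside the Notification Service: given an event type and the user's stored preferences, decide which channels to dispatch to. The sketch below illustrates one plausible shape for that step; the in-app-only fallback for unknown types is an assumption, not something the PRD specifies.

```python
# Per-user channel resolution using the JSONB preferences shape chosen
# above. Unknown notification types fall back to in-app only (assumed
# default, not specified in the PRD).
DEFAULT = {"in_app": True, "email": False, "push": False}

def channels_for(event_type: str, preferences: dict) -> list[str]:
    prefs = preferences.get(event_type, DEFAULT)
    return [channel for channel, enabled in prefs.items() if enabled]

prefs = {
    "task_assigned": {"in_app": True, "email": True, "push": True},
    "comment_added": {"in_app": True, "email": False, "push": False},
}
channels = channels_for("task_assigned", prefs)  # all three channels
```

Keeping this logic in one function (rather than scattered across channel dispatchers) is what makes the "preferences correctly filter delivery channels" test scenario below tractable to unit test.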
API Contracts
Endpoint 1: GET /api/v1/notifications
| Property | Value |
|---|---|
| Method | GET |
| Path | /api/v1/notifications |
| Auth | Bearer token |
| Rate Limit | 60/min |
Query parameters:
- status (optional): unread | read | all (default: all)
- limit (optional): 1-100 (default: 25)
- cursor (optional): pagination cursor
Response (200):
{
  "notifications": [
    {
      "id": "ntf_8a3b2c1d",
      "type": "task_assigned",
      "title": "New task assigned to you",
      "body": "Sarah Kim assigned 'Update pricing page' to you",
      "actor": { "id": "usr_123", "name": "Sarah Kim", "avatar_url": "..." },
      "resource": { "type": "task", "id": "tsk_456", "url": "/tasks/tsk_456" },
      "read_at": null,
      "created_at": "2026-03-04T14:30:00Z"
    }
  ],
  "cursor": "eyJpZCI6Im50Zl8...",
  "has_more": true,
  "unread_count": 7
}
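The cursor in this response is opaque to clients. One common implementation, consistent with the example value above, is base64-encoded JSON carrying the last returned id, which the server uses as the "fetch items older than this" boundary. The exact encoding here is an assumption for illustration; clients must treat the cursor as an opaque token either way.

```python
import base64
import json

# Opaque cursor as base64-encoded JSON of the last returned id. Compact
# separators keep the encoding stable; clients never parse this.
def encode_cursor(last_id: str) -> str:
    payload = json.dumps({"id": last_id}, separators=(",", ":"))
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(cursor: str) -> str:
    return json.loads(base64.urlsafe_b64decode(cursor))["id"]

cursor = encode_cursor("ntf_8a3b2c1d")
```

Cursor pagination keyed on id (rather than offset) is what lets the "pagination returns correct results with 500+ notifications" scenario below pass even while new notifications arrive between pages.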
Endpoint 2: PATCH /api/v1/notifications/:id/read
| Property | Value |
|---|---|
| Method | PATCH |
| Path | /api/v1/notifications/:id/read |
| Auth | Bearer token |
| Rate Limit | 120/min |
Response (200):
{
  "id": "ntf_8a3b2c1d",
  "read_at": "2026-03-04T14:35:00Z"
}
Endpoint 3: PUT /api/v1/notifications/preferences
| Property | Value |
|---|---|
| Method | PUT |
| Path | /api/v1/notifications/preferences |
| Auth | Bearer token |
| Rate Limit | 10/min |
Request body:
{
  "preferences": {
    "task_assigned": { "in_app": true, "email": true, "push": true },
    "comment_added": { "in_app": true, "email": false, "push": false },
    "status_changed": { "in_app": true, "email": false, "push": true },
    "mentioned": { "in_app": true, "email": true, "push": true }
  },
  "email_digest": "immediate"
}
Data Model Changes
New tables:
CREATE TABLE notifications (
    id VARCHAR(20) PRIMARY KEY,
    user_id UUID NOT NULL REFERENCES users(id),
    type VARCHAR(50) NOT NULL,
    title VARCHAR(255) NOT NULL,
    body TEXT NOT NULL,
    actor_id UUID REFERENCES users(id),
    resource_type VARCHAR(50),
    resource_id VARCHAR(50),
    read_at TIMESTAMP WITH TIME ZONE,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX idx_notifications_user_unread
    ON notifications(user_id, created_at DESC)
    WHERE read_at IS NULL;

CREATE INDEX idx_notifications_user_created
    ON notifications(user_id, created_at DESC);
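The partial index exists to serve the unread-badge query cheaply: it only has to cover unread rows, which stay few even as read notifications accumulate. The runnable miniature below demonstrates the access pattern using SQLite in memory (SQLite also supports partial indexes); the schema is deliberately simplified relative to the Postgres DDL above.

```python
import sqlite3

# Miniature of the unread-count access pattern. The partial index
# (WHERE read_at IS NULL) covers only unread rows, keeping the badge
# query cheap even when read notifications dominate the table.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE notifications (
    id TEXT PRIMARY KEY, user_id TEXT NOT NULL,
    read_at TEXT, created_at TEXT NOT NULL)""")
db.execute("""CREATE INDEX idx_notifications_user_unread
    ON notifications(user_id, created_at DESC)
    WHERE read_at IS NULL""")

# 100 notifications for one user; every 10th is still unread.
rows = [(f"ntf_{i}", "usr_123",
         None if i % 10 == 0 else "2026-03-04", "2026-03-04")
        for i in range(100)]
db.executemany("INSERT INTO notifications VALUES (?, ?, ?, ?)", rows)

(unread,) = db.execute(
    "SELECT COUNT(*) FROM notifications WHERE user_id = ? AND read_at IS NULL",
    ("usr_123",)).fetchone()
```

In production, verify with EXPLAIN that the unread query actually uses the partial index; the query's predicate must match the index's WHERE clause for the planner to pick it.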
Schema changes to existing tables:
| Table | Change | Migration Type | Downtime Required |
|---|---|---|---|
| users | Add column notification_preferences JSONB DEFAULT '{}' | Additive | No |
| users | Add column push_tokens JSONB DEFAULT '[]' | Additive | No |
Dependencies
Internal dependencies:
| Team/Service | What We Need | Status | ETA |
|---|---|---|---|
| Auth Service | WebSocket token validation endpoint | Ready | N/A |
| Mobile Team | Push token registration integration | Not started | March 20 |
| Frontend Team | Bell icon UI + notification panel | In progress | March 25 |
External dependencies:
| Vendor/Service | What We Need | Fallback Plan |
|---|---|---|
| AWS SQS | Message queue | Already provisioned; failover to direct DB writes |
| AWS SES | Email delivery | Already in use; no change |
| Firebase Cloud Messaging | Android push | Graceful degradation: in-app only |
| Apple Push Notification Service | iOS push | Graceful degradation: in-app only |
Testing Strategy
| Test Type | Scope | Owner | Estimated Effort |
|---|---|---|---|
| Unit tests | Notification service logic, preference filtering, template rendering | Alex | 2 days |
| Integration tests | End-to-end: event publish to notification delivery | Alex + QA | 2 days |
| Load tests | 1,000 notifications/second sustained for 10 minutes | Alex | 1 day |
| Manual QA | All notification types across in-app, email, and push channels | QA team | 2 days |
Critical test scenarios:
- ☑ User receives in-app notification within 2 seconds of triggering event
- ☐ User with push disabled receives only in-app and email
- ☐ Notification preferences correctly filter delivery channels
- ☐ Pagination returns correct results with 500+ notifications
- ☐ System handles 1,000 concurrent notifications without queue backup
- ☐ Failed email/push delivery does not block in-app delivery
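The last scenario above (a failed email or push must not block in-app delivery) dictates a specific dispatch shape: each channel send is attempted independently, and failures are recorded rather than raised. The sketch below illustrates that isolation; `send_in_app`-style senders are hypothetical stand-ins for the real channel dispatchers.

```python
# Per-channel dispatch isolation: each channel send is wrapped
# independently, so an email or push failure is recorded (for retry)
# rather than blocking in-app delivery.
def dispatch(notification: dict, senders: dict) -> dict:
    """Attempt every channel; return per-channel outcomes instead of raising."""
    results = {}
    for channel, send in senders.items():
        try:
            send(notification)
            results[channel] = "delivered"
        except Exception as exc:  # failed channels would go to a retry queue
            results[channel] = f"failed: {exc}"
    return results

def broken_push(_notification):
    raise RuntimeError("FCM timeout")

outcome = dispatch(
    {"id": "ntf_1", "type": "mentioned"},
    {"in_app": lambda n: None, "email": lambda n: None, "push": broken_push},
)
```

A unit test over this function is the cheapest way to pin down the scenario before implementation, which is exactly why the template asks for the testing strategy up front.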
Rollout Plan
| Phase | Audience | Duration | Success Criteria | Rollback Trigger |
|---|---|---|---|---|
| Canary | Internal team (30 users) | 3 days | Zero errors, <2s delivery | Any P0 bug |
| Beta | 5% of users | 1 week | Error rate <0.1%, latency P95 <2s | Error rate >1% or latency P95 >5s |
| GA | 100% of users | Permanent | All metrics green for 48h | N/A |
Feature flag. notifications_v2 in LaunchDarkly. PM controls the rollout percentage. Kill switch disables all notification processing and hides the bell icon.
Monitoring. New Datadog dashboard: Notifications Overview. Alerts on: queue depth >10K, delivery latency P95 >5s, error rate >1%. On-call: backend rotation for first 2 weeks post-GA.
Timeline
| Milestone | Target Date | Dependencies |
|---|---|---|
| Spec approved | March 8 | Reviewer feedback |
| Database migration deployed | March 12 | DBA review |
| Notification service MVP (in-app only) | March 22 | Migration complete |
| Email + push channels | March 29 | Mobile push token integration |
| Canary deploy | April 1 | All tests passing |
| GA release | April 15 | Canary success |
Common Mistakes to Avoid
- Writing the spec after coding has started. The spec exists to surface disagreements and dependencies early. If you are writing it after sprint 1, you have already committed to an architecture you may need to change.
- Over-specifying implementation details. The spec covers architecture, interfaces, and data models. It does not prescribe variable names, class hierarchies, or specific library versions. Leave room for the implementing engineer to make tactical decisions.
- Skipping the alternatives section. If you only document what you chose, reviewers cannot evaluate whether you chose well. Include at least 2 alternatives for every significant design decision.
- Ignoring the rollback plan. Every feature should be deployable behind a feature flag with a documented kill switch. If something goes wrong at 2am, the on-call engineer should not need to read the spec to figure out how to revert.
- Forgetting to update the spec. When implementation diverges from the spec (and it always does), update the document. A stale spec is worse than no spec because it creates false confidence.
Key Takeaways
- Write the tech spec after the PRD is approved and before sprint planning starts
- Focus on architecture, interfaces, and data models. Leave implementation tactics to the engineer
- Document at least 2 alternatives for every significant design decision
- Include a rollback plan and feature flag strategy for every production change
- Update the spec when implementation diverges from the plan
About This Template
Created by: Tim Adair
Last Updated: 3/4/2026
Version: 1.0.0
License: Free for personal and commercial use
