How to build a threat model
11 November 2025 • 9 min read

In today’s environment, every digital system—no matter how small—faces constant pressure from attackers, misconfigurations, and rapidly evolving technologies. To build anything secure, teams need a clear way to understand what they are protecting, who might target it, and how those attacks could realistically unfold. This is exactly what a threat model provides. It is a structured method that maps the system, highlights its weak points, and guides teams toward the right security measures before issues become expensive or damaging. In this article, we walk through a complete, practical guide to threat modeling: how it works, why it matters, and how to build one you can rely on for real-world systems.
Plain, practical, step-by-step. This guide gives you a repeatable process, templates, examples (including a sample web-app), and ready-to-use artifacts (DFD, JSON threat table, prioritization). Use it for applications, APIs, cloud workloads, or even physical systems.
Quick summary
Threat modeling = identify what you care about → map how it moves and where it can be attacked → enumerate threats → prioritize by impact/likelihood → design mitigations and verify them.
When to do threat modeling
- Early in design (best): reduces rework and expensive fixes.
- Before major deployments/architecture changes.
- After incidents, to root-cause and harden.
- Periodically for long-running systems or regulatory needs.
Core concepts & terms
- Asset: what you protect (data, keys, uptime, reputation).
- Actor: who might attack or interact (users, admins, external systems, attackers).
- Entry point / attack surface: where an attacker can interact (endpoints, UI, APIs).
- Trust boundaries: places where the trust level changes (client → server, public network → private network).
- Data Flow Diagram (DFD): visual map of components + flows.
- Threat: a potential adverse event (e.g., SQL injection).
- Risk = Likelihood × Impact: used to prioritize.
- Mitigation: technical or process control to reduce risk.
- Assumption: things you trust to be true — document them.
High-level process
- Define scope & goals — what system, assets, and questions are in scope.
- Create architecture/DFD — components, data flows, external actors, trust boundaries.
- Enumerate assets & security properties — confidentiality, integrity, availability, privacy.
- Identify threats — use STRIDE, attack trees, or targeted lists (privacy: LINDDUN).
- Assess & prioritize — estimate impact and likelihood, produce risk score & remediation order.
- Define mitigations & controls — design fixes, residual risk, acceptance criteria.
- Validate & test — code review, security tests, pen-test, and update the model as changes happen.
Step 1 — Define scope & goals
- Write a one-paragraph scope. Example: Target: myapp.example.com — a public web app with mobile clients, backend APIs, and a DB in AWS RDS. In scope: login, session handling, file upload, payments. Out of scope: third-party analytics provider.
- List top assets: user PII, payment tokens, session tokens, private keys, uptime of checkout.
Step 2 — Build the DFD
Create a DFD with processes (boxes), data stores (cylinders), external entities (people or third-party systems), and data flows (arrows). Mark trust boundaries.
Example (Mermaid for visualization):
```mermaid
flowchart TD
A[User Browser] -->|HTTPS| B(Web App)
B -->|REST /api/login| C[Auth Service]
C -->|validates| D[(User DB)]
B -->|POST /upload| E[File Service]
E -->|store| F[(Object Store)]
B -->|payments| G[Payment Gateway]
subgraph INTERNAL
C
E
F
D
end
```
Key DFD notes:
- Label data flows with the kind of data (passwords, tokens, files).
- Draw trust boundaries: browser (untrusted) vs backend (trusted), third-party payment (external).
- Keep DFDs at two levels: high-level architecture, and a detailed DFD for specific features (e.g., login flow).
Step 3 — Enumerate assets & security properties
For each asset, list required properties.
Example:
- User credentials — Confidentiality: high, Integrity: medium, Availability: low
- Session tokens — Confidentiality: high, Integrity: high, Availability: medium
- Uploaded files — Confidentiality: depends (user could upload PII), Integrity: high (malicious files), Availability: medium
Use a table (CSV/JSON) for automation. Example JSON template:
```json
[
{"asset":"user_credentials","owner":"Auth Service","confidentiality":"high","integrity":"high","availability":"low"},
{"asset":"session_token","owner":"Web App","confidentiality":"high","integrity":"high","availability":"medium"}
]
```
Step 4 — Identify threats
Methods to use
- STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) — great for system components.
- Attack trees — break a high-level goal into subgoals and leaf attacks.
- LINDDUN — privacy threats (Linkability, Identifiability, Non-repudiation, Detectability, Disclosure, Unawareness, Non-compliance).
- PASTA — process model for risk-centric threat modeling (if you need a formal methodology).
Example: apply STRIDE to “login flow”
- Spoofing: attacker impersonates user via credential stuffing.
- Tampering: attacker tampers with JWT token if no signature verification.
- Repudiation: no audit logs for admin actions → attacker denies action.
- Information disclosure: verbose error messages revealing DB structure.
- Denial of service: brute force against the login endpoint causing service unavailability.
- Elevation of privilege: insecure role checks allow normal user to access admin API.
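To run that kind of STRIDE pass systematically across a bigger system, it helps to drive it from the DFD itself. Here is a minimal Python sketch (the element names and the output format are purely illustrative, not a tool API) that walks DFD elements and prints one review question per STRIDE category:
```python
# Minimal sketch: walk DFD elements and emit one STRIDE prompt per category.
# Element names and types below are illustrative, taken from the login-flow DFD.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

dfd_elements = [
    {"name": "Web App", "type": "process"},
    {"name": "Auth Service", "type": "process"},
    {"name": "User DB", "type": "data store"},
    {"name": "User Browser", "type": "external entity"},
]

def stride_checklist(elements):
    """Yield one review question per (element, STRIDE category) pair."""
    for element in elements:
        for category in STRIDE:
            yield f"{element['name']} ({element['type']}): any {category} threat?"

for prompt in stride_checklist(dfd_elements):
    print(prompt)
```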
Attack tree example (abbreviated)
Goal: Get admin access
- Exploit known SQL injection → escalate to user with admin privileges
- Steal admin credentials via phishing → reuse session cookie
- Exploit vulnerable deserialization on admin API → run remote code
Step 5 — Assess & prioritize threats (practical scoring)
Pick a scoring method:
- Simple 1–5 Likelihood × 1–5 Impact → risk = L × I (range 1–25), or
- Use CVSS (if you can compute precisely), or
- Use business impact categories: Critical / High / Medium / Low.
Example risk table row:
| Threat ID | Component | Threat | Likelihood (1–5) | Impact (1–5) | Risk | Mitigation priority |
|---|---|---|---|---|---|---|
| T-001 | Auth Service | Credential stuffing | 4 | 5 | 20 (Critical) | P0 (immediate) |
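If you keep the threat table as structured data, the scoring and banding are trivial to automate. Below is a minimal Python sketch, assuming threats are stored as simple records like the JSON shown later, and using the risk bands from the prioritization matrix further down (16–25 Critical, 9–15 High, 4–8 Medium, 1–3 Low):
```python
# Minimal sketch: score and rank threats with risk = likelihood x impact.
# The two threat records are illustrative.
def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

def priority_band(score: int) -> str:
    """Map a 1-25 risk score to the bands used in the prioritization matrix."""
    if score >= 16:
        return "Critical (P0)"
    if score >= 9:
        return "High (P1)"
    if score >= 4:
        return "Medium (P2)"
    return "Low (P3)"

threats = [
    {"id": "T-001", "title": "Credential stuffing", "likelihood": 4, "impact": 5},
    {"id": "T-002", "title": "Verbose login errors", "likelihood": 3, "impact": 2},
]

ranked = sorted(threats, key=lambda t: risk_score(t["likelihood"], t["impact"]), reverse=True)
for threat in ranked:
    score = risk_score(threat["likelihood"], threat["impact"])
    print(f"{threat['id']} {threat['title']}: risk {score} -> {priority_band(score)}")
```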
Step 6 — Define mitigations
Mitigations should map to threats and include acceptance criteria and verification tests.
Common mitigation patterns:
- Authentication: rate limiting, progressive delays, MFA, password policy, credential stuffing detection.
- Authorization: deny-by-default, role-based checks, enforce server-side checks, test with fuzzers.
- Input validation & output encoding: parameterized queries, strict schemas, whitelist validation.
- Secrets management: use vaults (no hard-coded secrets), rotate keys, least privilege for keys.
- Encryption: TLS everywhere, encrypt sensitive data at rest with KMS.
- Logging & monitoring: structured audit logs, anomaly detection, retention policies.
- Isolation: network segmentation, container sandboxing, least privileged services.
- Supply chain: pin dependencies, SCA for vulnerable libs, reproducible builds.
- Availability: autoscaling, backpressure, rate limits, circuit breakers.
- Privacy controls: data minimization, retention policies, consent capture.
Example mitigation mapping:
| Threat ID | Mitigation | Acceptance criteria | Test |
|---|---|---|---|
| T-001 | Add rate limit + IP blocklist + MFA | No more than 5 failed logins/min per IP; MFA required for high-risk logins | Simulated credential stuffing; MFA challenge present |
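The Test column should become an automated check wherever possible. Here is a minimal pytest-style sketch of the verification for T-001; the staging URL is hypothetical and it assumes the rate limiter answers with HTTP 429 after five failed logins per minute from one IP, so adapt it to however your service actually signals throttling:
```python
# Minimal sketch of a verification test for T-001's acceptance criteria.
# The endpoint is hypothetical; adjust status codes to your service.
import requests

LOGIN_URL = "https://staging.example.com/api/login"  # hypothetical staging endpoint
BAD_CREDENTIALS = {"username": "victim@example.com", "password": "wrong-password"}

def test_failed_logins_are_rate_limited():
    responses = [
        requests.post(LOGIN_URL, json=BAD_CREDENTIALS, timeout=5)
        for _ in range(6)
    ]
    # Earlier attempts should fail as bad credentials, not as throttling.
    assert all(r.status_code in (401, 403) for r in responses[:5])
    # The sixth attempt inside the same minute should hit the rate limit.
    assert responses[5].status_code == 429
```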
Step 7 — Validate & test
- Threat validation: code review, threat re-walk, table-top exercise with devs + product owners.
- Security tests: dependency scans, SAST/DAST, fuzzing, authenticated pen tests.
- Automation: fail the build on high-severity dependency findings or failing security tests.
- Metrics: track number of high/critical threats open, mean time to remediate.
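As a concrete example of that automation, here is a minimal sketch of a CI gate that fails the build while Critical threats remain open. The file path and field names are illustrative; point it at wherever you version your threat model JSON:
```python
# Minimal sketch of a CI gate: exit non-zero while Critical threats stay open.
import json
import sys
from pathlib import Path

THREAT_MODEL = Path("security/threat-models/webapp.json")  # hypothetical path

def open_critical_threats(records):
    """Threats still open with a risk score in the Critical band (16-25)."""
    return [
        t for t in records
        if t.get("status", "open") == "open" and t.get("risk_score", 0) >= 16
    ]

def main() -> int:
    records = json.loads(THREAT_MODEL.read_text())
    blocking = open_critical_threats(records)
    for threat in blocking:
        print(f"BLOCKING: {threat['id']} {threat['title']} (risk {threat['risk_score']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main())
```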
Example: full mini threat model for a web login
Scope: web.example.com, login, session, password reset.
DFD: as shown earlier.
Threats (short list):
- T1: Credential stuffing → L4 × I5 = 20 (Critical) → Mitigation: blocklists, rate limit, MFA.
- T2: Password reset token leak via email → L3 × I4 = 12 (High) → Mitigation: short token TTL, single-use tokens, no token in the URL (use a POST-based reset flow).
- T3: Session fixation → L2 × I4 = 8 (Medium) → Mitigation: rotate session ID after login.
- T4: CSRF on account change → Medium → Mitigation: CSRF tokens, SameSite cookies.
- T5: Open redirect in login return_url → Medium → Mitigation: whitelist return URLs.
Represent each threat as JSON (useful to automate):
```json
{
"id": "T1",
"title": "Credential stuffing on login",
"component": "Web App / Auth",
"description": "Automated attempts using leaked passwords to take over accounts",
"likelihood": 4,
"impact": 5,
"risk_score": 20,
"mitigations": [
{"desc":"Rate limit failed login per IP and per account","owner":"Platform Team","due":"2025-11-30"},
{"desc":"Enable adaptive MFA on high risk logins","owner":"Auth Team","due":"2025-12-15"}
]
}
```
Prioritization templates & matrix
Use a simple matrix:
- Risk 16–25: Critical → Blocker/P0 — immediate action
- Risk 9–15: High → P1 — fix in next sprint
- Risk 4–8: Medium → P2 — plan
- Risk 1–3: Low → P3 — monitor or accept
For mapping to sprint backlog, include:
- Threat ID, Story/Task, Estimate, Owner, Definition of Done.
Attack trees
Goal: steal user PII
- Path 1: exfiltrate DB via SQLi → exploit SQLi endpoint → dump table
- Path 2: compromise admin credentials → phishing → reuse on admin UI → download CSV
- Path 3: access object store with public ACL → find backups → download
Use this to ensure mitigations cover leaf-level attacks.
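Encoding the tree as data makes that coverage check mechanical. A minimal Python sketch, with an illustrative structure and an intentionally incomplete mitigation set:
```python
# Minimal sketch: the "steal user PII" tree as nested data, plus a helper
# that lists leaf attacks so each one can be checked against a mitigation.
attack_tree = {
    "goal": "Steal user PII",
    "children": [
        {"step": "Exfiltrate DB via SQLi",
         "children": [{"step": "Exploit SQLi endpoint"}, {"step": "Dump users table"}]},
        {"step": "Compromise admin credentials",
         "children": [{"step": "Phishing"}, {"step": "Reuse session on admin UI"},
                      {"step": "Download PII export CSV"}]},
        {"step": "Access object store with public ACL",
         "children": [{"step": "Find backups"}, {"step": "Download backup"}]},
    ],
}

def leaves(node):
    """Recursively collect leaf nodes, i.e. the concrete attacks to mitigate."""
    children = node.get("children", [])
    if not children:
        return [node["step"]]
    return [leaf for child in children for leaf in leaves(child)]

mitigated = {"Dump users table", "Download backup"}  # illustrative coverage so far
for leaf in leaves(attack_tree):
    print(f"{leaf}: {'covered' if leaf in mitigated else 'NO MITIGATION YET'}")
```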
Common pitfalls & anti-patterns
- Treating threat modeling as a one-time checklist.
- Leaving assumptions undocumented (e.g., “internal network is trusted”).
- Over-optimistic likelihood estimates (assume determined attackers).
- Too many threats without prioritization → nothing gets fixed.
- Not involving product owners, developers, DevOps, and security ops together.
Tools & automation
(Choose according to your stack)
- Diagramming: draw.io, Mermaid, Lucidchart
- Threat modeling tools: Microsoft Threat Modeling Tool, OWASP Threat Dragon
- SAST/DAST: static analysis (ESLint/semgrep), dynamic scanners (ZAP), dependency scanners (Snyk, Dependabot)
- Secret scanning: git-secrets, truffleHog
- CI integration: fail build on critical security test results
- Pen test automation & management: bespoke scripts, Burp Suite for manual testing
Report structure — what to deliver
- Title page: scope, date, authors, session ID
- Executive summary: top 3 risks, residual risk, recommendations
- System overview & DFD (visual)
- Asset list & security properties
- Threat inventory (table) with risk scores
- Attack trees / examples
- Mitigation plan (who, when, verification)
- Test plan & validation results
- Residual risks & acceptance
- Appendix: logs, evidence, threat model JSON/CSV
Playbook: turning model into backlog
- Create issues for each P0/P1 with:
  - description linking to DFD and threat row
  - acceptance criteria (e.g., “failed login rate limited to 5/min per IP; verified by automated test”)
  - test cases and owner
- Schedule verification in CI / staging / pen test.
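If the threat model lives as JSON in the repo, generating those issues can be scripted. Here is a minimal sketch that renders a P0/P1 threat record (same fields as the earlier JSON example) into tracker-ready issue text; pushing it to your actual tracker (Jira, GitHub, etc.) is deliberately left out:
```python
# Minimal sketch: turn a P0/P1 threat record into tracker-ready issue text.
# Field names mirror the threat JSON example earlier in this guide.
def to_issue(threat):
    """Render one threat record as an issue dict (title + body)."""
    mitigations = "\n".join(
        f"- {m['desc']} (owner: {m['owner']}, due: {m['due']})"
        for m in threat["mitigations"]
    )
    return {
        "title": f"[SEC] {threat['id']}: {threat['title']}",
        "body": (
            f"Component: {threat['component']}\n"
            f"Risk score: {threat['risk_score']}\n"
            "Threat model: security/threat-models/webapp.md (link to DFD + threat row)\n"
            f"Acceptance criteria / mitigations:\n{mitigations}\n"
        ),
    }

threat = {
    "id": "T1", "title": "Credential stuffing on login",
    "component": "Web App / Auth", "risk_score": 20,
    "mitigations": [
        {"desc": "Rate limit failed login per IP and per account",
         "owner": "Platform Team", "due": "2025-11-30"},
    ],
}

issue = to_issue(threat)
print(issue["title"])
print(issue["body"])
```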
Checklist — quick actionable list
- Scope documented and approved
- DFD created (high & detailed)
- Assets and trust boundaries labeled
- STRIDE (or chosen method) run against each DFD element
- Threat table with likelihood & impact
- Mitigations mapped + acceptance criteria
- Issues created in tracker + owners assigned
- Tests added to CI for critical mitigations
- Pen test scheduled for high-risk areas
- Threat model stored (JSON/MD) and versioned with architecture changes
Example templates you can copy
Threat table CSV columns:
id,component,threat,description,likelihood,impact,risk_score,mitigation,owner,due,date_identified,status
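A minimal Python sketch that starts such a CSV with exactly those columns (the single row is illustrative):
```python
# Minimal sketch: write the threat table CSV with the columns listed above.
import csv

COLUMNS = [
    "id", "component", "threat", "description", "likelihood", "impact",
    "risk_score", "mitigation", "owner", "due", "date_identified", "status",
]

row = {
    "id": "T-001", "component": "Auth Service", "threat": "Credential stuffing",
    "description": "Automated login attempts using leaked passwords",
    "likelihood": 4, "impact": 5, "risk_score": 20,
    "mitigation": "Rate limit + MFA", "owner": "Auth Team",
    "due": "2025-11-30", "date_identified": "2025-11-11", "status": "open",
}

with open("threat-table.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerow(row)
```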
Mitigation task template:
- Title: [SEC][P0] Rate limit login to prevent credential stuffing
- Description: link to threat T1 + DFD snippet
- Acceptance: automated load test shows failed attempts blocked; MFA prompts on suspicious logins
- Estimate: 3d
- Owner: Auth Team
How to keep the model alive
- Treat model as living doc in repo (e.g., /security/threat-models/<service>.md).
- Review after each major feature, release, or dependency upgrade.
- Run a quarterly quick review with dev/product/security.
- Maintain versioning and changelog (who changed what and why).
Closing notes
- Involve non-security people (developers, product, ops, legal) — they surface real business context.
- Prioritize based on business impact, not just theoretical severity.
- Combine manual reasoning (attack trees, STRIDE) with automated tooling to scale.
- Document assumptions and explicitly call out residual risk for acceptance.