Resource

Security Requirements Universe

1004 sub-elements across 28 hybrid cloud security capabilities × 7 DoD Zero Trust pillars.

What this is

This file is a security requirements traceability matrix. It enumerates 1004 sub-element security requirements, each anchored to authoritative federal cybersecurity guidance: NIST SP 800-53 Rev 5, CSF 1.1 (with CSF 2.0 forward-compat annotations), DIAT CSSP Evaluator Scoring Metrics v11 (CSSP-ESM), DIAT CSSP Alignment Risk Validation V1 (CSSP-CARV), DoD Zero Trust Reference Architecture v2, CORA Grading Criteria, DoD Cyber Reference Architecture v5, DoDM 8530.01 and related DoD cyber doctrine, OMB M-21-31 / M-22-09, DoD Cloud SRG, DISA SCCA, and applicable Network/Boundary STIGs.

The 1004 sub-elements decompose 28 hybrid cloud security capabilities (SIEM, EDR, MDM, NAC, IDS, CASB, NetDLP, HostDLP, PKI, Secrets/KMS, CyberBackup, and 17 others) across the seven DoD Zero Trust pillars.

Why it exists

A practitioner's hybrid cloud security architecture analysis has to answer one question repeatedly: for capability X, what does authoritative guidance require, and which of those requirements discriminate between the candidate architectures (Courses of Action)? This matrix answers that question for 28 commonly-deployed security capabilities.

I built this universe during a Course of Action (COA) analysis for a federal government hybrid cloud architecture engagement. The decomposition stayed at capability-class (no vendor names, no client identifiers) because that's the abstraction that travels. A SIEM is a SIEM regardless of whose SIEM. The worked example here is one specific engagement's universe; the methodology and the matrix structure generalize.

How to use this file

Tab 2 (Sub-elements) is the raw data: every sub-element ID, its capability, R-section, primary citation, NIST controls, CSF subcategory, cross-cites, testable claim, and cloud-applicability prose. Filter by capability or pillar to scope down to what you care about.
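
If you work from the generated CSV instead of the workbook, a minimal pandas sketch covers the same filtering. The file name universe.csv and the column names Capability and Pillar are assumptions for illustration, not confirmed by the generators:

  # Minimal filtering sketch. File and column names are assumed;
  # adjust to the actual CSV header emitted by the generator.
  import pandas as pd

  df = pd.read_csv("universe.csv")
  siem = df[df["Capability"] == "SIEM"]    # one capability, all pillars
  device = df[df["Pillar"] == "Device"]    # one pillar, all capabilities
  print(len(siem), len(device))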

Tab 3 (Per-capability roll-up) is the executive view: 28 capabilities with R-bucket counts, AD percentages, transition-pattern classification, and threshold-exceeded flags.

Tab 4 (NIST 800-53 inventory) shows every 800-53 control cited in the universe with first-primary attribution and reuse counts. Use it as a coverage map.
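
As a coverage-map sketch, the reuse counts roll up by 800-53 control family. The sheet name and the NIST_control / Reuse_count column names are assumptions about the workbook layout:

  # Roll up citation reuse by 800-53 control family. Sheet and column
  # names are assumed; reading .xlsx requires openpyxl.
  import pandas as pd

  inv = pd.read_excel("universe.xlsx", sheet_name="NIST 800-53 inventory")
  inv["Family"] = inv["NIST_control"].str.extract(r"^([A-Z]{2})", expand=False)
  print(inv.groupby("Family")["Reuse_count"].sum().sort_values(ascending=False))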

Tab 5 (CSF subcategory inventory) does the same for NIST CSF 1.1 subcategories, with CSF 2.0 forward-compat mapping where the renumbering is non-trivial.

Tab 6 (Methodology-firsts catalog) catalogs 146 instances of project-wide first-primary anchors, trust-anchor LANDED dispositions, and recurring-pattern lineage chains (exception-authority pattern at R2.4-class rows, ZT-RA Capability 2.2.1 lineage at R6.x rows, PR.AT-5 lineage at R8.x rows, CM-8 / CP-2 / AC-17 use-count series). This is the methodology audit trail.

Tab 7 (Retain-archetype catalog) classifies every retained ANALYST-DERIVED row (124 of 1004) into one of seven legitimate retain archetypes. See the "What 'AD' means" section below for context.

Tab 8 (Audit baseline) records the state of six independent audits run against the universe: counts, NIST 800-53 inventory completeness, CSF coverage, AD-rationale completeness, R-ID-first-wins normalization, and forward-pointer closure. Five of six pass cleanly. The remaining audit reports informational inventory totals only, not actionable findings.
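
A couple of those audits are mechanical enough to sketch. The Anchor_slug and AD_rationale column names are assumptions for illustration:

  # Two of the six audit checks, sketched against the generated CSV.
  import pandas as pd

  df = pd.read_csv("universe.csv")
  assert len(df) == 1004, "row-count audit failed"
  ad = df[df["Anchor_slug"] == "ANALYST-DERIVED"]
  assert len(ad) == 124, "AD-count audit failed"
  # AD-rationale completeness: every AD row carries a non-empty rationale.
  assert (ad["AD_rationale"].fillna("").str.strip() != "").all(), \
      "AD-rationale completeness audit failed"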

What "AD" means

About 12.4% of the sub-elements (124 of 1004) are flagged ANALYST-DERIVED. These are requirements where no authoritative source in the 12-slug stack mandates the specific organizational decision being made.

Concrete examples: cost ceilings ("operate within an annual cost envelope of X for cross-CSP egress charges"); cross-CSP normalization mastership ("which CSP-class's policy is authoritative when cross-CSP normalization diverges"); deployment-mode choices ("inline-proxy CASB enforcement vs API-mode CASB enforcement"). NIST anchors the underlying activity: key management, system monitoring, access enforcement, configuration settings. It doesn't anchor which-entity-holds-mastership questions.

The methodology produces this 12-15% floor as a structural property. Before the resweep, the AD rate was 23.6% (237 of 1004 rows); a per-row resweep against the authoritative anchors promoted 113 rows over four days of work. The remaining 124 retains fall into seven archetypes:

  1. Cost ceilings and per-CSP cost governance
  2. Cross-CSP normalization mastership
  3. Strategic-constraint deployment-model choices (inline-vs-API, per-CSP-native-vs-unified)
  4. Trust-anchor NOT-LANDED dispositions (methodology-internal candidate evaluations)
  5. Explicit no-relationship dispositions (scope-boundary closures, no-C2C-relationship, no-EDR-overlap)
  6. Methodology-novel architectural patterns (three-plane / four-plane / five-plane defense-in-depth ratifications)
  7. Deployment-mode mastership decisions (which mode is canonical when NIST mandates an outcome but not the mode)

Tab 7 of this workbook classifies every retained AD row into one of these seven.
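
A classification check in the same spirit can be sketched as below. The archetype labels are shortened stand-ins for the Tab 7 vocabulary, and the Retain_archetype column name is assumed:

  # Every retained AD row must map to exactly one of the seven archetypes.
  import pandas as pd

  RETAIN_ARCHETYPES = {
      "cost-ceiling", "cross-csp-mastership", "deployment-model-choice",
      "trust-anchor-not-landed", "no-relationship-disposition",
      "novel-architectural-pattern", "deployment-mode-mastership",
  }
  ad = pd.read_csv("universe.csv").query("Anchor_slug == 'ANALYST-DERIVED'")
  assert ad["Retain_archetype"].isin(RETAIN_ARCHETYPES).all()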

Methodology summary

Three discipline rules drive the decomposition.

The 12-slug authoritative anchor stack constrains what counts as a valid citation. Every primary citation pulls from one of the 12 ratified sources listed in the Authoritative sources table below, or from ANALYST-DERIVED as the relief valve. When a sub-element resolves to ANALYST-DERIVED, the rationale must enumerate all 12 authoritative slugs as searched. No slug can be silently omitted.
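
A minimal sketch of that no-silent-omission check: the slug strings come from the Authoritative sources table below, but the rationale being a single free-text field is an assumption:

  # Check that an ANALYST-DERIVED rationale names all 12 slugs as searched.
  SLUGS = [
      "ZT-RA", "NIST-80053", "SRG", "SCCA", "STIG", "OMB-LOG",
      "DOD-CYBER", "DOD-IR", "DOD-CRA", "CORA", "CSSP-ESM", "CSSP-CARV",
  ]

  def missing_slugs(rationale: str) -> list[str]:
      """Return any of the 12 slugs the rationale fails to enumerate."""
      return [slug for slug in SLUGS if slug not in rationale]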

The R-ID-first-wins formula prevents citation proliferation. When the same NIST control anchors multiple sub-elements across the universe, the lowest R-ID at the earliest-authored capability retains first-primary; every subsequent use cites a reuse pointer ("reuse-from-X-RY.Z") back to that row rather than re-claiming first-primary. Tab 4 (NIST 800-53 inventory) and Tab 5 (CSF inventory) show the resulting first-primary attribution.
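
A sketch of the formula, assuming rows arrive in authoring order; the R-ID shape and citation text are illustrative, not the workbook's exact format:

  # R-ID-first-wins: the first row to cite a control keeps first-primary;
  # later rows cite a reuse pointer back to the owning R-ID.
  def attribute(rows: list[tuple[str, str]]) -> dict[str, str]:
      """rows: (r_id, nist_control) pairs, pre-sorted in authoring order."""
      first_primary: dict[str, str] = {}   # control -> owning R-ID
      citation: dict[str, str] = {}        # r_id -> citation text
      for r_id, control in rows:
          if control not in first_primary:
              first_primary[control] = r_id
              citation[r_id] = f"first-primary: {control}"
          else:
              citation[r_id] = f"reuse-from {first_primary[control]}: {control}"
      return citation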

The COA-discriminating-governance-mastership framework drives what gets decomposed at sub-element scope. Sub-elements aren't generic technical requirements; they're architectural decisions where alternative COAs would commit differently. This is why a SIEM capability surfaces 31 sub-elements rather than five: each sub-element is an explicit architectural-mastership choice rather than a hand-waved "logging is configured." The Testable_claim column on Tab 2 makes this explicit: every row asks "does the COA architecture define X?" with X being the specific architectural commitment.

Authoritative sources

Slug             Source
ZT-RA            DoD Zero Trust Reference Architecture v2 (7 pillars, 152 capabilities)
NIST-80053       NIST SP 800-53 Rev 5
SRG              DoD Cloud Computing Security Requirements Guide
SCCA             DISA Secure Cloud Computing Architecture
STIG             Applicable Network and Boundary STIGs and SRGs
OMB-LOG          OMB Memoranda M-21-31, M-22-09
DOD-CYBER        DoDM 8530.01, DoDI 8530.01 / 03, DTM-24-001, TASKORD 20-0020, C2C Step 2 / 3 doctrine bundle (Reporting Guide v1.0, Implementing Steps 2 & 3, Interrogation Guidance v1.1.0), DIAT Flash 23-01
DOD-IR           CJCSM 6510.01B
DOD-CRA          DoD Cyber Reference Architecture v5
CORA             CORA Grading Criteria Worksheet V1R11 + 6 topical xlsx checklists + KIOR bundle + DRSI Hybrid Inspection Guide
CSSP-ESM         DIAT CSSP Evaluator Scoring Metrics v11 (CSSP-side scoring)
CSSP-CARV        DIAT CSSP Alignment Risk Validation V1 (subscriber-side scoring)
ANALYST-DERIVED  Relief valve; used only when no authoritative anchor mandates a claim

ESM v11 and CARV V1 are both CSF 1.1-based (verified: neither contains the GV.* function or the PR.AA category). CSF 2.0 forward-compat annotations are inline in citations wherever the 2.0 renumbering is non-trivial.

Scope and caveats

The worked example reflects DoD federal hybrid cloud architecture. Some authoritative sources are DoD-specific (CORA, CSSP-ESM, CSSP-CARV, CJCSM 6510.01B, DoDM 8530.01). The methodology itself, the 12-slug anchor discipline, the R-ID-first-wins formula, and the seven retain archetypes are portable to non-DoD engagements; the slug stack would change but the structure holds.

The universe is built for 3-COA analysis: three alternative Courses of Action for a hybrid cloud migration. Sub-elements carry cloud-applicability prose framing what each COA must commit to. Single-COA engagements would compress this; engagements with more than three COAs would expand it.

Vendor names are absent throughout the universe. The decomposition is capability-class (SIEM, EDR, CASB), not vendor-specific; vendor selection is a downstream activity that sits below this matrix, not within it.

Client identifiers are absent. This is portfolio reference IP, not an engagement-scoped deliverable.

The 124 retained ANALYST-DERIVED rows are the methodology floor (see the "What 'AD' means" section). They aren't a defect; they're a property of the COA-discriminating-governance-mastership emphasis. A different methodology, one that doesn't push governance-mastership questions to the requirement level, would produce a lower AD-%. It would also produce architecturally less-discriminating requirements.

How to regenerate

Generated from the canonical universe at working/decomposition-step2.md via scripts/generate_universe_csv.py and scripts/generate_universe_xlsx.py. The methodology snapshot lives at docs/working-notes/methodology-snapshot.md (40 calibration items captured during authoring). The full audit baseline lives at working/j5-scratch/findings-*.md.

Run the two generator scripts to refresh both the CSV and the XLSX. The matrix updates as the underlying universe evolves through the project lifecycle.
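
A hypothetical refresh driver, assuming both generators read working/decomposition-step2.md themselves and take no arguments:

  # Refresh both artifacts from the canonical universe. The no-argument
  # invocation is an assumption; check each script's CLI before relying on it.
  import subprocess

  for script in ("scripts/generate_universe_csv.py",
                 "scripts/generate_universe_xlsx.py"):
      subprocess.run(["python", script], check=True)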


JAKSecurity portfolio · Hybrid Cloud Security Stack universe · Generated 2026-05-16