AI Hygiene Review — Parent Admin Guide
This guide covers the AI Hygiene Review feature for Parent admins (Portfolio
Principal in a PE-firm deployment, Group Admin in a conglomerate deployment).
It assumes you are already familiar with campaign management — see
docs/ADMIN_GUIDE.md for general campaign workflow.
What the AI Hygiene Review is
The AI Hygiene Review is a campaign-driven self-attestation that lets the parent fund verify tenant cybersecurity hygiene around AI features shipped to customers. It anchors on the open-source AI SAFE² Framework v1.0 (Cyber Strategy Institute, dual-licensed MIT + CC-BY-SA) with crosswalks rendered to NIST AI RMF, ISO/IEC 42001, EU AI Act, and OWASP LLM Top 10.
The assessment is scoped to AI in product — customer-facing AI shipped by tenants. Internal employee tooling and AI-assisted developer tooling are out of scope by design; the Q0 gate handles that boundary automatically.
Creating an AI Hygiene Campaign
The AI Hygiene Review is an add-on assessment module that can be layered onto any campaign or run standalone.
Standalone (AI-only campaign)
- Open Create Campaign.
- In Step 2 (Framework & Control Baseline), click “Running an add-on-only campaign? Skip framework selection.”
- Scroll to Add-on Assessment Modules and tick AI Hygiene Review.
- Continue through the wizard and submit.
The campaign is created with 30 SAFE² scoring questions and no framework controls. Each assigned tenant gets the Q0 scope gate, the Q1 third-party override, and (on Q0=Yes / Q1=No) the full 30-question questionnaire.
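The Q0/Q1 gate ordering can be sketched as plain decision logic. This is illustrative only; the function and argument names below are assumptions, not the actual service code:

```python
from typing import Optional

def next_step(q0_ships_ai: bool, q1_has_third_party_doc: Optional[bool]) -> str:
    """Illustrative sketch of the Q0/Q1 gate ordering described above.

    Q0 asks whether the tenant ships customer-facing AI at all; Q1 offers
    the third-party-assessment override. Names here are assumptions.
    """
    if not q0_ships_ai:
        # Q0 = No: tenant attests out of scope; no questionnaire is shown.
        return "not_applicable_attested"
    if q1_has_third_party_doc:
        # Q1 = Yes: upload an existing third-party assessment for review.
        return "submitted_via_third_party_pending"
    # Q0 = Yes / Q1 = No: the full 30-question SAFE² questionnaire.
    return "full_questionnaire"
```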
Combined with CIS or NIST
- Open Create Campaign.
- In Step 2, choose your primary framework (CIS / NIST / etc.) as you would today.
- Scroll to Add-on Assessment Modules and tick AI Hygiene Review.
- Continue through the wizard and submit.
The campaign now has both the framework’s controls AND the 30 SAFE² scoring questions. Tenants see the AI Hygiene flow alongside the framework’s standard assessment.
Required prerequisites
The AI Hygiene add-on requires two fixtures to be loaded on the deployment:
- `python manage.py load_ai_safe2` — registers the AI SAFE² v1.0 framework + its 30 controls
- `python manage.py load_ai_question_bank` — populates the 30 question-bank rows tied to those controls
If either is missing at campaign-creation time, the API returns 409 with a message naming the missing command. Once the campaign is live, tenants reach the assessment at `/assessments/<assignment_id>/ai-hygiene`, linked from their assignment detail page.
1. Launching an AI Hygiene Review campaign
Navigate to Compliance > Campaigns, then click New Campaign.
In the campaign wizard:
- Name and description — give the campaign a name (e.g., “AI Hygiene — Q2 2026”) and optional description.
- Framework — select the AI Hygiene Review preset from the framework
  dropdown. The preset pre-populates the question bank (30 SAFE² questions
  across 5 pillars), the AI Hygiene Officer attestation requirement
  (`CampaignPolicyAttestation.policy_type = 'ai_hygiene_officer'`,
  backend/apps/assessments/models.py:2611), and the default scoring weight profile.
- Control scope — for AI Hygiene campaigns the scope is automatically set to
  `ai_in_product` (backend/apps/assessments/services/ai_hygiene_constants.py:18).
  No manual scope adjustment is needed.
- Scoring config — the default pillar weights are loaded from
  `AI_HYGIENE_DEFAULT_WEIGHTS` (ai_hygiene_constants.py:31–37): audit_inventory 0.25,
  sanitize_isolate 0.20, fail_safe_recovery 0.15, engage_monitor 0.20,
  evolve_educate 0.20. Leave these at their defaults unless you have a
  fund-specific rationale — audit is weighted highest because procurement DDQs
  concentrate on that pillar.
- Document requirements — optional. You may require tenants to upload an AI Bill
  of Materials, model cards, or red-team reports as supplemental evidence. These
  are separate from Q1 third-party override documents.
- Assignments — select which tenants to assign. A tenant with
  `subsidiary_oversight_enabled = True` in your family is visible here if you
  have the correct permission class (`IsSubsidiaryOverseerOrPortfolioAdmin`,
  backend/apps/core/permissions.py:135–143).
- Activate — click Create & Activate to notify assigned tenants immediately, or
  Save Draft to review before activating.
2. Reading the rollup
Once tenants start responding, navigate to Compliance > Campaigns, open the campaign, and select the AI Hygiene tab (or the Scores tab, depending on your deployment version).
The rollup table has one row per assigned tenant. Columns:
| Column | What it shows |
|---|---|
| Tenant name | Rendered via the terminology dictionary — “Portfolio Company” (PE-firm deployment) or “Subsidiary” (conglomerate deployment). |
| AI Hygiene Score | 0–100 overall score, or “—” if the tenant has not yet submitted (status is assigned or in_progress). Null when the tenant took the Q0 scope-out path. |
| Pillar scores | Per-pillar breakdown on hover or in the detail pane. Each pillar 0–100. |
| Status badge | Current CampaignAssignment status (see Section 4 below). |
| Provenance | One of: Self-attested, Audited externally — accepted, or AI Out-of-Scope. Sourced from AIHygieneListItemSerializer.provenance_label (backend/apps/assessments/serializers/ai_hygiene_serializers.py:82–87). |
| Evidence-backed | Icon present when the tenant attached at least one optional evidence file to a questionnaire response, OR when status is submitted_via_third_party_accepted. |
The table is sortable by any column. Click a column header to sort; click again
to reverse. Use the status filter to show only a specific status (e.g., show all
submitted_via_third_party_pending to work the review queue).
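Because scope-out and not-yet-submitted tenants have no score, score-based sorts push them to the bottom rather than treating null as zero. A minimal sketch of that null-safe sort (the row shape here is a simplified assumption, not the actual serializer output):

```python
# Illustrative client-side sort of rollup rows by AI Hygiene Score,
# pushing unscored tenants (score is None) to the bottom.
rows = [
    {"tenant": "Acme Robotics", "score": 82},
    {"tenant": "Beta Health", "score": None},  # Q0 scope-out or not yet submitted
    {"tenant": "Cobalt Labs", "score": 97},
]

# Key is a tuple: (is-unscored, negated score) so scored tenants come
# first, highest score first, and None-score tenants sort last.
ranked = sorted(rows, key=lambda r: (r["score"] is None, -(r["score"] or 0)))
```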
3. Reviewing third-party uploads (the Q1 path)
When a tenant takes the Q1 override path — uploading an existing third-party AI
governance assessment — their assignment moves to
submitted_via_third_party_pending. A badge appears on the Third-Party
Review tab of the campaign.
Finding the review queue
Open the campaign and select Third-Party Review. Each row in the queue shows the tenant name, submission time, document filename, assessment type, and SHA-256 hash. The hash matches what was recorded at upload time and is re-verified on every download — chain of custody is intact even if the file is later re-retrieved.
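The record-then-re-verify hash check described above can be sketched with the standard library. This is a minimal sketch of the technique, not the platform's actual storage code:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a file's bytes, as recorded at upload time."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, recorded_hash: str) -> bool:
    """Re-verify a retrieved file against the hash stored at upload.

    A mismatch means the bytes changed after upload, i.e. the chain
    of custody is broken.
    """
    return hashlib.sha256(data).hexdigest() == recorded_hash
```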
What qualifies
The acceptable third-party assessment types are defined in
THIRD_PARTY_ASSESSMENT_TYPES (ai_hygiene_constants.py:61–69):
| Enum value | Assessment type |
|---|---|
| iso_42001_cert | ISO/IEC 42001 Certification |
| hitrust_ai_risk_mgmt | HITRUST AI Risk Management Assessment |
| hitrust_ai_security_cert | HITRUST AI Security Certification |
| nist_ai_rmf_audit | NIST AI RMF Audit (Big4 or accredited auditor) |
| big4_ai_audit | Big4 AI Audit Report |
| ai_red_team_report | Independent AI Red-Team Report (last 12 months) |
| other | Other — tenant must provide description |
Red-team reports have a 12-month recency constraint (enforced by Q1 question text). For all other types the constraint is scope and coverage: the assessment must cover the tenant’s customer-facing AI practices, not just internal AI governance.
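The recency check for red-team reports can be sketched as follows. Note this treats "last 12 months" as within 365 days, which is an assumption; the actual constraint is enforced only by the Q1 question text:

```python
from datetime import date

def is_acceptable_red_team_report(report_date: date, today: date) -> bool:
    """Sketch of the 12-month recency constraint for ai_red_team_report.

    The 365-day cutoff is an assumption; enforcement in the product is
    via the Q1 question text, not code.
    """
    return (today - report_date).days <= 365
```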
Accepting a submission
Click Review on the row. A PDF preview pane opens alongside the metadata (type, hash, submitter, submission timestamp). When you are satisfied the document covers the scope:
- Click Accept.
- The assignment transitions to `submitted_via_third_party_accepted` and the AI
  Hygiene Score for that tenant is set to 100 (this is the terminal accepted
  score — see Section 5 for how the score is computed for questionnaire paths).
- The provenance label changes to Audited externally — accepted.
- The tenant is notified.
Rejecting a submission
- Click Reject.
- Enter a `rejection_reason` (required — the serializer enforces this:
  `ThirdPartyReviewDecisionSerializer.validate()`, ai_hygiene_serializers.py:111–115).
- The assignment transitions to `submitted_via_third_party_rejected`, and then
  immediately back to `in_progress` so the tenant can either upload a different
  document or switch to the full questionnaire.
- Your rejection reason is visible to the tenant on their assessment dashboard.
4. Assignment status reference
All statuses are defined in CampaignAssignment.STATUS_CHOICES
(backend/apps/assessments/models.py:1300–1311). The four statuses added for
AI Hygiene Review are from NEW_ASSIGNMENT_STATES
(ai_hygiene_constants.py:86–91).
| Status | Meaning |
|---|---|
| assigned | Campaign has been activated and the tenant notified. No action taken yet. |
| in_progress | Tenant has opened the assessment and answered at least one question, OR has been sent back from a rejected third-party submission. |
| submitted | Tenant submitted the full questionnaire. Awaiting your review if your campaign requires review; auto-completes if not. |
| under_review | You have opened the submission for review. |
| completed | You approved the submission. Score is final. |
| overdue | Campaign due date passed before the tenant submitted. |
| not_applicable_attested | Tenant answered Q0 = No (does not ship AI features). Signed attestation text is stored. Score is null — this tenant is out of scope. |
| submitted_via_third_party_pending | Tenant uploaded a third-party assessment via Q1. Awaiting your accept/reject decision in the review queue. |
| submitted_via_third_party_accepted | You accepted the third-party submission. Score is 100 and provenance is Audited externally — accepted. Terminal state. |
| submitted_via_third_party_rejected | You rejected the third-party submission. Assignment returns to in_progress. |
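The status flow described in this guide can be sketched as a transition map. Only the transitions this guide documents are listed; the actual `CampaignAssignment` model may permit others, so treat this as an assumption:

```python
# Assignment transitions as described in this guide (a sketch, not the model).
TRANSITIONS = {
    "assigned": {"in_progress", "overdue", "not_applicable_attested"},
    "in_progress": {"submitted", "overdue", "not_applicable_attested",
                    "submitted_via_third_party_pending"},
    "submitted": {"under_review", "completed"},
    "under_review": {"completed"},
    "submitted_via_third_party_pending": {"submitted_via_third_party_accepted",
                                          "submitted_via_third_party_rejected"},
    # Rejection bounces the assignment straight back to in_progress.
    "submitted_via_third_party_rejected": {"in_progress"},
    # States this guide describes as terminal.
    "completed": set(),
    "submitted_via_third_party_accepted": set(),
    "not_applicable_attested": set(),
    "overdue": set(),
}

def can_transition(src: str, dst: str) -> bool:
    """True when the guide documents a move from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```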
5. How the AI Hygiene Score is computed
Note: The scoring service is `backend/apps/assessments/services/ai_hygiene_score.py` (ships with the p1-services branch). The algorithm below documents what that service implements.
Per-pillar score
For each of the five SAFE² pillars, collect all question responses from the
submission where the response is not N/A. Map:
| Response | Score |
|---|---|
| Yes | 1.0 |
| Partial | 0.5 |
| No | 0.0 |
| N/A | excluded from denominator |
The pillar score is the arithmetic mean of the in-scope responses, multiplied by 100. If every question in a pillar is answered N/A, that pillar is excluded from the overall score calculation (it does not count as zero).
Overall score
The five pillar scores are combined using the weight profile from
AI_HYGIENE_DEFAULT_WEIGHTS (ai_hygiene_constants.py:31–37):
- audit_inventory: 0.25
- sanitize_isolate: 0.20
- fail_safe_recovery: 0.15
- engage_monitor: 0.20
- evolve_educate: 0.20
When one or more pillars are excluded (all-N/A), the remaining pillar weights are renormalized so they sum to 1.0 before applying. The result is an overall score between 0 and 100.
Special cases
- Q0 = No (scope-out): Score is null. The tenant is recorded as
  `not_applicable_attested`; they do not appear in score-based sorts.
- Q1 accepted: Score is fixed at 100 regardless of questionnaire content.
  Provenance is Audited externally — accepted.
- Incomplete submission: The score is not computed until the tenant submits.
  The rollup shows “—” while status is `assigned` or `in_progress`.
Where this score shows up
The AI Hygiene Score appears in:
- The campaign rollup table (this guide).
- The tenant’s own campaign assignment detail view.
Integration of the AI Hygiene Score into the existing Exit Readiness Score
(backend/apps/core/services/exit_readiness.py:182) is planned for Phase 3 of
the AI Hygiene roadmap, when the apps/ai_governance/ module ships per-tenant
AIInventoryItem rows.
6. Subsidiary-overseer access
If your organization uses subsidiary-oversight (a parent admin with
Tenant.subsidiary_oversight_enabled = True), you can see AI Hygiene
assessments across your entire family of tenants in the same rollup view.
The permission gate on AI Hygiene endpoints is
IsSubsidiaryOverseerOrPortfolioAdmin (backend/apps/core/permissions.py:135–143),
which accepts both Portfolio Principals and subsidiary-overseer admins. Cross-tenant
queries use User.get_visible_tenants() (backend/apps/core/models.py:303–347).
To enable subsidiary oversight for a parent-tenant, navigate to
Administration > Organization, find the tenant, and toggle
subsidiary_oversight_enabled. See docs/KNOWN_BUGS.md for current limitations
with the subsidiary-overseer create path (the read-side rollup works; create
operations have a deferred bug).
Framework attribution
The AI Hygiene Review is anchored on the AI SAFE² Framework v1.0, an open-source taxonomy by Cyber Strategy Institute (https://github.com/CyberStrategyInstitute/ai-safe2-framework), dual-licensed MIT (code) + CC-BY-SA (taxonomy). Attribution is included in every fixture header per the CC-BY-SA license requirement.