AI Governance at Layer8
Layer8 Tech Group uses AI to power assessment products that inform major financial decisions — business valuations, acquisition risk, and franchise investments. We take that responsibility seriously. This page summarizes our AI governance framework and links to our full policy documentation.
Our governance framework is aligned with the NIST AI Risk Management Framework (AI RMF 1.0) and the NIST Generative AI Profile (NIST AI 600-1), the U.S. federal government's voluntary frameworks for responsible AI development and deployment.
We maintain an approved model allowlist. Only claude-sonnet-4-6, claude-haiku-4-5, claude-opus-4-6, and self-hosted Ollama are approved for use in Layer8 products. Any new model requires written approval before deployment.
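A minimal sketch of how an allowlist gate like this can be enforced in code. The function and set names are illustrative assumptions, not Layer8's actual implementation:

```python
# Hypothetical allowlist gate; model IDs and names here are illustrative.
APPROVED_MODELS = {
    "claude-sonnet-4-6",
    "claude-haiku-4-5",
    "claude-opus-4-6",
    "ollama-local",
}

def require_approved_model(model_id: str) -> str:
    """Reject any request before it leaves the service if the model is not allowlisted."""
    if model_id not in APPROVED_MODELS:
        raise PermissionError(f"Model {model_id!r} is not on the approved allowlist")
    return model_id
```

Calling this at the single choke point where requests are dispatched means a new model cannot reach production without a code change, which is what makes "written approval before deployment" auditable.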
Protected Health Information (PHI) is never sent to external AI model providers. Our HIPAA engagement policy requires local inference via isolated infrastructure for any healthcare client data until a BAA is in place with our model provider.
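The routing rule above reduces to a small decision: PHI goes to isolated local inference unless a BAA covers the external provider. A sketch under that assumption, with backend names invented for illustration:

```python
def select_inference_backend(contains_phi: bool, baa_signed: bool) -> str:
    """Route PHI to isolated local infrastructure unless a BAA is in place.

    Backend identifiers are hypothetical placeholders.
    """
    if contains_phi and not baa_signed:
        return "ollama-local"   # isolated, self-hosted inference
    return "anthropic-api"      # external provider, non-PHI or BAA-covered
```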
No AI-generated score is delivered to a client without human review by a Layer8 advisor. Our AI surfaces findings — our advisors validate them.
Our scoring rubrics may reflect assumptions that disadvantage certain business types or verticals. We document these bias vectors explicitly and validate rubrics against known outcomes as real client data becomes available.
We run 146 automated checks on every code commit, covering secrets management, token limits, observability tracing, permitted models, DNC suppression, and data access controls.
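As one example of the kind of check involved, a secrets-management gate can scan each diff for credential-shaped strings. The patterns below are a simplified illustration, not Layer8's actual ruleset (production scanners use vetted pattern libraries):

```python
import re

# Illustrative patterns only; real scanners use maintained rulesets.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # generic API-key-shaped strings
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
]

def scan_for_secrets(text: str) -> list[str]:
    """Return any secret-like strings found in a commit diff."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

A commit is blocked when the returned list is non-empty; the other check families (permitted models, token limits, and so on) follow the same block-on-match shape.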
We maintain a documented 5-step incident response protocol. If a client receives a materially incorrect AI-generated report, we contain, assess, notify, remediate, and document within 48 hours.
| System | Provider | Approved Use | Data Sensitivity |
|---|---|---|---|
| claude-sonnet-4-6 | Anthropic | Primary scoring and report generation | Non-PHI only |
| claude-haiku-4-5 | Anthropic | Classification and triage | Non-PHI only |
| claude-opus-4-6 | Anthropic | Complex analysis | Non-PHI only |
| Ollama (local) | Self-hosted | Local inference, healthcare use cases | Any (isolated) |
| Qdrant | Self-hosted | Vector storage and retrieval | Non-PHI only |
| LangFuse | Self-hosted | Observability and tracing | Non-PHI only |
Our complete 9-section governance policy covering permitted models, data handling rules, governance roles, bias acknowledgment, incident response, technical compliance controls, third-party supply chain risk, and review schedule.
View on GitHub ↗

Our 12-month roadmap to full NIST AI RMF alignment — 20 initiatives across 4 phases covering governance, mapping, measurement, and management.
View on GitHub ↗

If you have questions about our AI governance practices or data handling, or would like to request our full policy documentation for vendor review, contact us directly.