About CAIRN
A field guide for AI governance, made for small Canadian teams.
CAIRN (Canadian AI Risk & Navigation) is a free, lightweight tool for small and medium-sized organizations in Canada that want to govern their AI use.
A cairn is a stack of stones that marks the path through unfamiliar terrain. Walkers build them for each other. This tool doesn't decide your route. It helps you mark what you've seen, what you've decided, and where you're going.
What CAIRN does today
- Inventory. Write down every AI system in use — standalone tools, AI features baked into your everyday SaaS, custom integrations. The inventory comes pre-stocked with the most common “hidden AI” surfaces.
- Redlines. Define what AI must never do in your organization. Start from a preset list of common boundaries and edit the wording until it matches how your team actually talks.
- Policy. Pull your Inventory and Redlines into a Responsible AI policy template. Download a Word file to circulate and sign, or a plain-text version to upload to your AI assistant when you want to ask questions about your own policy.
What's coming next
- Reviews. Per-system check-in worksheets on a configurable cadence: bias spot-checks, drift indicators, data-leakage probes, PIPEDA exposure, redline conformance.
Frameworks this tool draws from
NIST AI Risk Management Framework (AI RMF 1.0)
Released by the U.S. National Institute of Standards and Technology in January 2023, the AI RMF is a free, public framework with four core functions: Govern, Map, Measure, and Manage. CAIRN's modules each align with one of these functions.
NIST AI Risk Management Framework · AI RMF 1.0 (PDF)
AIGN SME & Startup Framework
The AI Governance Network (AIGN) maintains an SME-focused framework with practical tools including the Agentic Risk & Goal Canvas, the Redline Canvas, and a Trust Scan. CAIRN's Redlines module is directly inspired by this work.
AIGN SME & Startup AI Governance Framework
OPC Principles for responsible generative AI (PIPEDA)
Canada's federal, provincial, and territorial privacy regulators jointly released Principles for responsible, trustworthy and privacy-protective generative AI technologies, interpreting PIPEDA and provincial privacy law in the context of generative AI. CAIRN's PIPEDA flags and several preset redlines reflect these principles.
How your data is handled
Everything you enter into CAIRN is stored in your browser's local storage on the device where you entered it. There's no server, no account, nothing uploaded. If you clear your browser data, you'll lose what you entered — so export to a JSON file regularly using the Export button on any page.
To move your governance record between devices or share it with a colleague, export the JSON file and import it on the other device. Treat the file like any other internal document (it describes sensitive aspects of how your organization runs).
What CAIRN is not
CAIRN doesn't monitor AI tool usage live, doesn't scan your network for shadow AI, and isn't legal advice. It's a structured workbook, not a security product. Treat its outputs as drafts that need human review — including review by a qualified lawyer before you rely on any of it for compliance purposes.
Reset everything
If you want to wipe everything stored in this browser and start over:
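Under the hood, a reset amounts to deleting CAIRN's entries from the browser's local storage. A minimal sketch, again assuming a hypothetical "cairn." key prefix and a plain-object stand-in for `window.localStorage`:

```javascript
// Remove every assumed "cairn."-prefixed entry, leaving unrelated
// local-storage keys (from other sites or settings) untouched.
function resetAll(storage) {
  for (const key of Object.keys(storage)) {
    if (key.startsWith("cairn.")) {
      delete storage[key];
    }
  }
}
```

Because the data never leaves the device, this is irreversible unless you exported a JSON file first.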