Why Guardrails Matter More Than Rules
For this edition, I did an enormous amount of research. The more I dug, the more I found, and not all of it was agreeable. I’ll admit this one struck a bit of fear in me. Why? Because AI is being taken up as a communications tool at remarkable speed, often without an understanding of where the guardrails should be.
There’s a lot of AI guidance right now that sounds like one of two things:
“Be careful — this is risky”
“Relax — the tools are getting better”
Both miss the point.
AI doesn’t introduce new risks into our work so much as it amplifies existing ones, especially when it comes to trust, credibility, and interpretation. As communicators, we already spend a lot of time managing those risks; we don’t need another unmanaged layer added to the mix.
That’s why this part of the toolkit is about using professional judgment.
Not all AI risk is technical
When people talk about AI risk, they often mean data protection, security, compliance, or vendor reliability. These are real concerns, and they usually sit with IT, privacy, or legal.
But communicators carry a different category of risk: how language lands, who appears to be speaking, and what people infer from tone, framing, or silence.
These are trust risks, not system risks.
And that’s the difference. As communicators, we see this distinction for what it is; most people don’t, and that’s a problem.
The real question isn’t “Can we use AI?”
It’s this:
Where should AI be allowed to shape meaning — and where should it stop?
That’s a communications question, not a tool question. And it’s why guardrails matter more than blanket rules.
A useful way to think about guardrails
Instead of asking “Is this allowed?”, ask three questions:
1. What’s the worst‑case outcome if this is wrong?
Confusion?
Mistrust?
Reputational damage?
The higher the consequence, the closer a human should be to the final message.
2. Is this message acknowledging, or answering?
AI works reasonably well when it:
acknowledges receipt
explains process
points to approved information
It becomes risky when it:
explains decisions
reassures emotionally
interprets intent
That line matters a lot.
3. Would people reasonably expect a human response here?
If the answer is yes, AI’s role should be:
preparation
synthesis
drafting for review
—not delivery.
Why “human‑in‑the‑loop” isn’t enough
You’ll often hear that AI is fine as long as there’s a human reviewing it.
That’s necessary, but not sufficient.
The harder problems show up before review:
Should this be automated at all?
Who is accountable for how this sounds?
Are we enforcing consistency where adaptation is needed?
Those are design decisions, not approval steps.
This is where communicators need a voice. Not ownership of the tools. Not control over systems.
But a clear role in defining:
acceptable use in communication contexts
tone and intent boundaries
when automation should stop
how transparency is handled internally and externally
Without this, standards get set by default, shaped by efficiency rather than trust.
And as communicators, we are the gatekeepers of trust, which makes us responsible for developing the guidance that’s needed.
Guardrails aren’t about slowing things down.
They’re about protecting professional judgment in an environment that increasingly rewards speed and polish.
My Quick Win
If you’re navigating AI use in your role, consider this as a baseline:
AI can help prepare communication.
Humans remain responsible for communication.
That one distinction solves more problems than most policies.
I originally planned to cover more of the policy and best‑practice work in this edition, but the material is far too extensive for a single newsletter, so I’ve intentionally pared the framework down. I’ve done much more work on AI policy development, standards, and organizational ownership, and I’ll be spending some time shaping that material into a clear, practical resource. The first pieces will appear on the AI Toolkit page, with the full set to follow.
If you’d like to be the first to know when the deeper guidance goes live, there’s a sign‑up for updates.
Next up: how to start building an AI toolkit that fits your role, context, and constraints — not someone else’s stack.