Compliance
ASIC and AI: what mortgage brokers need to know in 2026
ASIC's 2026 outlook flagged agentic AI as a systemic risk. Here's what that means for brokers using AI tools, what ASIC expects, and how to prepare before regulations finalise.
FinanceLocal · 8 May 2026 · 6 min read
ASIC’s 2026 Information Integrity outlook flagged agentic AI as a systemic risk. The regulator made it clear: it’s auditing operating models, not just outputs. For a mortgage broker using AI tools to generate content, that distinction is the entire story.
The pattern in the broker channel today is generally the same: ChatGPT in one tab, Canva in another, a social-media scheduler in a third, and no audit trail anywhere. The post goes live and the trail evaporates. That’s the gap ASIC has been pointing at across its 2024 AI guidance, its 2026 outlook, and the channel-specific notices that have flowed to MFAA and FBAA members.
The difference between “using AI” and “governing AI”
Using AI is the easy part. Drop a prompt into ChatGPT, get a draft back, post it. The broker channel has been doing this for two years. The trouble is that “using AI” is exactly the model ASIC has indicated isn’t enough.
Governing AI is a different motion. It means a documented process: who reviewed the output, when, what they changed, what was published, what was rejected and why. ASIC’s public position has been consistent — a licensee’s ability to demonstrate that process, on demand, is the load-bearing requirement. The output itself is downstream of the governance.
What ASIC has signalled, repeatedly
Across the documents and notices brokers have access to, four expectations recur:
- Human oversight — a licensed professional reviews AI-generated content before it reaches consumers. Not a delegated team member. Not an automated rule set.
- Clear accountability — a named licensed professional must approve outputs. The line of responsibility cannot end at an algorithm or a vendor.
- Audit trails — the licensee must be able to demonstrate what was generated, what was reviewed, what was changed, and what was published, on demand, for any post.
- Governance framework — a documented process, not ad-hoc usage. The expectation is operating-model evidence, not screenshots after the fact.
None of this is speculative — it’s the direction ASIC has set out. What’s speculative is the timing on a finalised agentic-AI rule set. That uncertainty is precisely why preparing now is cheaper than preparing later.
What most brokers are doing today
The honest picture: the majority of brokers using AI right now have no documented governance whatsoever. They’re generating posts in ChatGPT, designing them in Canva, publishing through a scheduler, and trusting that none of it will ever be audited. If a regulator turned up tomorrow and asked how the AI content is governed, the answer would be a shrug.
The CBA document fraud scandal sharpened the focus. ASIC’s scrutiny of the broker channel intensified through 2025, and the message coming back to the industry was the same: licensees need to demonstrate the governance behind what reaches consumers, not just the consumer-facing artefact.
What compliance-ready AI usage looks like
The shape of a compliant AI workflow is straightforward enough to describe in five steps:
- The AI generates a draft from a documented set of source feeds.
- The draft passes a deterministic compliance check (flagged terms, advice phrasing, undefined claims).
- A licensed broker reviews the draft, approves or rejects it, and adds a rejection reason where applicable.
- The approved post publishes; the rejection — if any — stays in the audit log.
- Every step is timestamped and exportable.
That’s the operating model ASIC has signalled it expects. It’s also the model a broker can demonstrate on a single screen if asked. The brokers who put this in place before the rules finalise will not need to scramble; the brokers who wait will.
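To make that concrete, here is a minimal sketch of the same five steps in Python. It is illustrative only: the flagged-term list, the field names, and the helper functions (compliance_check, broker_review, export_audit) are assumptions made for this example, not an ASIC-prescribed format and not any particular platform's implementation.

```python
# Illustrative sketch only. The flagged terms, field names and review flow are
# assumptions for this example, not an ASIC-prescribed schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

FLAGGED_TERMS = ["guaranteed approval", "best rate", "risk-free"]  # hypothetical list

@dataclass
class AuditRecord:
    draft: str
    source_feeds: list[str]
    flags: list[str] = field(default_factory=list)
    reviewer: str = ""
    decision: str = "pending"              # becomes "approved" or "rejected"
    rejection_reason: str = ""
    events: list[dict] = field(default_factory=list)  # timestamped step log

    def log(self, step: str) -> None:
        self.events.append({"step": step,
                            "at": datetime.now(timezone.utc).isoformat()})

def compliance_check(record: AuditRecord) -> None:
    """Step 2: deterministic check that flags known terms in the draft."""
    record.flags = [t for t in FLAGGED_TERMS if t in record.draft.lower()]
    record.log("compliance_check")

def broker_review(record: AuditRecord, reviewer: str,
                  approve: bool, reason: str = "") -> None:
    """Step 3: a licensed broker approves or rejects, with the reason on record."""
    record.reviewer = reviewer
    record.decision = "approved" if approve else "rejected"
    record.rejection_reason = reason
    record.log("broker_review")

def export_audit(record: AuditRecord) -> str:
    """Step 5: everything above, timestamped and exportable on demand."""
    return json.dumps(asdict(record), indent=2)

# Steps 1 and 4: generate a draft, then publish or reject based on the decision.
record = AuditRecord(draft="Fixed rates moved again this week...",
                     source_feeds=["rba-cash-rate-feed"])
record.log("draft_generated")
compliance_check(record)
broker_review(record, reviewer="J. Citizen (credit rep. 000000)", approve=True)
record.log("published" if record.decision == "approved" else "rejected_not_published")
print(export_audit(record))
```

The detail matters less than the property the sketch demonstrates: every step leaves a timestamped record that can be exported on demand, which is the evidence ASIC has signalled a licensee must be able to produce.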
How to prepare now
The cheapest preparation is the simple one: stop using ungoverned AI tools for anything that reaches consumers. If a draft is going to a TikTok or LinkedIn account, it needs to pass through a defensible review step before it publishes. The review step has to leave evidence — not a vibe, but a record.
For the broker who has been writing posts in ChatGPT and posting through Buffer, the gap to close is the audit layer that connects them. Either build the governance into the existing tools (which most brokers won’t do) or move to a platform where the governance is the architecture. Either path resolves the regulatory exposure; one of them is harder to execute in time.
The bottom line
ASIC has been clear: the gap between using AI and governing AI is where the regulator’s attention is focused. The brokers who treat that gap as a near-term operational problem will be ahead of where the rules eventually land. The brokers who treat it as a problem for later will pay the cost of catching up — in time, in advisory fees, and in the reputation cost of an audit that turns up gaps.
The good news is that the work to close the gap is finite. A documented, auditable, broker-approved AI workflow is a one-time setup. After that, it just runs.
Editorial post for educational and informational purposes only. FinanceLocal is a technology platform, not a credit licensee or financial-services provider. Nothing on this page constitutes financial advice. Brokers using FinanceLocal operate under their own credentials and aggregator.