AI Contracts and Liability: What Australian SMBs Need to Know
AI tools give confident-sounding advice. Sometimes that advice is wrong. When an Australian small business acts on bad AI-generated advice and suffers a loss (a tax error, a contract clause that doesn’t hold up, a compliance failure), the question of who’s liable is genuinely complex. Here’s what you need to know.
What the AI companies’ terms say
Every major AI provider’s terms of service include substantial disclaimers. OpenAI’s terms state that outputs “may not always be accurate” and that users should “not rely on ChatGPT for professional advice.” Anthropic’s terms are similar. Microsoft Copilot’s terms describe it as being “for entertainment purposes” in some contexts. The intended legal effect of these disclaimers is to shift responsibility for the consequences of AI output onto the user.
These disclaimers are enforceable in Australia only to the extent they don’t conflict with the Australian Consumer Law, which voids contract terms that attempt to exclude the consumer guarantees. Even so, the general principle stands: if you use AI output without appropriate verification and something goes wrong, the AI company’s liability is likely to be minimal or zero.
When you’re liable to your clients
The more interesting liability question for small businesses is what happens when you give AI-generated advice to a client and it turns out to be wrong. An accountant who uses AI to prepare tax advice and gets it wrong can’t point to the AI as a defence: the accountant has professional obligations to their client that exist regardless of the tools they use. The same applies to lawyers, financial advisers, real estate agents, and anyone else operating under a professional licence or duty of care.
For non-licensed businesses, the analysis under Australian Consumer Law turns on whether the advice was misleading or deceptive (section 18), or whether a consumer guarantee, such as the guarantee that services be provided with due care and skill, was breached. If you sell a service that includes AI-generated recommendations and those recommendations are wrong in a way that causes loss, you may be liable even if you didn’t know the AI was wrong.
Practical risk management
Several steps reduce your exposure:
- Always verify high-stakes AI output: legal, financial, medical, and compliance-related information should be checked against authoritative sources before you act on it or pass it to clients.
- Disclose AI use where it’s material: if AI is generating advice you’re providing to clients, telling them that is both ethically sound and legally protective.
- Keep human judgment in the loop: using AI as a drafting tool with human review is lower risk than using AI as the final decision-maker.
- Check your professional indemnity cover: some PI policies have exclusions or conditions related to AI use. Talk to your broker.
- Document your process: if a dispute arises, being able to show that you verified AI output and applied professional judgment will help your position (a minimal record-keeping sketch follows this list).
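For the verification and documentation steps above, even a lightweight audit trail is better than nothing. The Python sketch below is one hypothetical way a small business could record AI-assisted work; the field names, the JSON Lines log format, and the example details are all illustrative assumptions, not a prescribed or legally mandated record-keeping scheme.

```python
# A minimal, hypothetical audit-trail sketch for AI-assisted client work.
# Field names and the JSON Lines format are illustrative assumptions only,
# not a prescribed or legally required record-keeping scheme.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_review_log.jsonl")  # append-only JSON Lines log


@dataclass
class AIReviewRecord:
    client_matter: str     # which client or job the output relates to
    tool: str              # e.g. "ChatGPT", "Copilot"
    output_summary: str    # short description of what the AI produced
    verified_against: str  # authoritative source used to check it
    reviewer: str          # the human who applied professional judgment
    approved: bool         # whether the output was accepted after review
    notes: str = ""        # corrections made, caveats, client disclosure


def log_review(record: AIReviewRecord) -> None:
    """Append a timestamped review record to the audit log."""
    entry = asdict(record)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example usage with made-up details:
log_review(AIReviewRecord(
    client_matter="ACME-2024-031",
    tool="ChatGPT",
    output_summary="Draft summary of GST registration thresholds",
    verified_against="ATO guidance on GST registration",
    reviewer="J. Smith",
    approved=True,
    notes="Corrected threshold figure before sending; AI use disclosed to client.",
))
```

Even a simple log like this can show, months after the fact, that verification and human review happened before advice went out the door.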
What this means for your business
The liability risk from AI isn’t theoretical, but it’s manageable with the right processes. The businesses most exposed are those that use AI output without review and pass it to clients as their own professional advice. The businesses with low exposure are those that use AI as a productivity tool, apply human judgment to the output, and disclose AI involvement where it matters. Get legal advice if you’re building AI into a client-facing service that involves regulated advice.
Sources
- Australian Consumer Law (Competition and Consumer Act 2010, Schedule 2)
- OpenAI Terms of Use
- ACCC, Consumer Guarantees
Related: AI Disclosure Obligations for Australian Businesses: What to Tell Customers | ACCC and AI: Australian Consumer Law in the Age of Chatbots