AI Decipher File · November 2022 (chatbot interaction); February 2024 (ruling)
Air Canada Chatbot Ruling: When a Tribunal Decided AI Output Is Still Your Output
The Air Canada chatbot ruling is the Applied AI accountability case that ended the argument over whether a company can disclaim its own chatbot. In February 2024, the British Columbia Civil Resolution Tribunal held Air Canada liable for incorrect bereavement-fare information delivered by its website chatbot, rejecting the airline's defense that the chatbot was a separate legal entity.
Failure pattern
Consumer AI liability and enterprise accountability for AI outputs
Organizations involved
Air Canada, British Columbia Civil Resolution Tribunal, Jake Moffatt (claimant)
Incident summary
Jake Moffatt booked travel from Vancouver to Toronto in November 2022 after his grandmother died. According to the tribunal decision Moffatt v. Air Canada, 2024 BCCRT 149, Moffatt asked the chatbot on Air Canada's website how the airline's bereavement-fare policy worked. The chatbot replied that he could book at standard rates and apply for a bereavement discount up to 90 days after the flight. He purchased the ticket and later filed for the refund.
Air Canada denied the refund. The airline's actual published policy required passengers to apply for the bereavement rate before travel, not after. The chatbot's answer was wrong, and the tribunal found that relying on it cost Moffatt the difference between the regular fare he paid and the bereavement-rate fare he expected.
Air Canada argued, in a written response that became the most-quoted part of the decision, that the chatbot was a separate legal entity responsible for its own actions. Tribunal member Christopher Rivers rejected this defense in plain language: the airline owns the website, the airline put the chatbot on the website, and the airline is responsible for what the chatbot tells customers. Air Canada was ordered to pay $812.02 CAD in total, covering damages, pre-judgment interest, and tribunal fees.
Failure technique
The technical pattern is straightforward. A retrieval or generation system surfaced incorrect policy information, the customer relied on it, and the company tried to disclaim the answer after harm had occurred. There is no allegation of malicious behavior. The system worked as built. The build did not match the policy.
Several Applied AI engineering choices created the gap. The chatbot likely drew from an internal knowledge base that was either out of date or never aligned with the published bereavement policy. There is no evidence in the public record of an evaluation suite that would have caught the mismatch. There is no evidence of a fallback rule that would have routed bereavement-fare questions to the published policy page rather than letting the bot answer free-form.
From an accountability angle, Air Canada appeared to operate the chatbot under the assumption that customer-facing AI output was advisory rather than binding. The tribunal rejected that framing. Customer-facing AI output is a representation by the company. The same legal standards that apply to a call-center agent apply to a chatbot, and a company cannot put a chatbot on its homepage and then claim distance from its statements.
Impact and consequences
The direct financial harm to Air Canada was small: roughly $812 CAD in damages, interest, and fees. The reputational and precedent-setting harm was much larger. The decision produced widespread coverage in the Canadian and US legal press, and it became the working reference cited in enterprise AI governance discussions through 2024 and into 2025.
In the weeks after the decision, corporate legal departments circulated internal guidance instructing product teams to assume that AI-generated customer-facing content carried the same liability profile as employee-generated content. The Air Canada case did not change Canadian common law. It made the implication concrete: a company that ships a chatbot owns the answers.
Air Canada removed the chatbot from its website shortly after the ruling. The decision is now cited on AI governance panels and in graduate Applied AI seminars as the case study for why customer-facing AI requires the same review discipline as a published policy document.
Lessons for builders
Treat customer-facing AI output as a binding representation by your company. The tribunal applied the same negligent misrepresentation standard it would apply to a call-center agent. AI engineers should design systems on the assumption that whatever the system says is what the company said.
Build evaluation suites against the actual published policy, not against the AI knowledge base. The Air Canada gap was a sync gap. Continuous testing that asks the chatbot the same questions a customer would ask, then compares the answer to the canonical published source, would have caught this before a customer did.
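As a concrete illustration, a minimal regression harness in Python might look like the sketch below. The ask callable, the case data, and the phrase lists are hypothetical stand-ins; in practice the expectations would be derived directly from the canonical published policy page.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str                  # phrased the way a customer would ask it
    required_phrases: list[str]    # facts from the published policy the answer must state
    forbidden_phrases: list[str]   # claims that would contradict the published policy

# Expectations are written against the canonical published policy,
# not against whatever the chatbot's knowledge base happens to contain.
CASES = [
    EvalCase(
        question="Can I apply for the bereavement fare after my flight?",
        required_phrases=["before travel"],
        forbidden_phrases=["90 days after", "after the flight"],
    ),
]

def run_suite(cases: list[EvalCase], ask: Callable[[str], str]) -> list[str]:
    """Return human-readable failures; an empty list means the suite passed."""
    failures = []
    for case in cases:
        answer = ask(case.question).lower()
        for phrase in case.required_phrases:
            if phrase.lower() not in answer:
                failures.append(f"{case.question!r}: missing required fact {phrase!r}")
        for phrase in case.forbidden_phrases:
            if phrase.lower() in answer:
                failures.append(f"{case.question!r}: contradicts policy with {phrase!r}")
    return failures

if __name__ == "__main__":
    # A canned answer reproducing the kind of mismatch at issue, to show detection.
    faulty_bot = lambda q: ("You can book now and request the bereavement "
                            "fare up to 90 days after the flight.")
    for problem in run_suite(CASES, faulty_bot):
        print("FAIL:", problem)
```

Run on a schedule against the live bot, a harness like this surfaces a policy mismatch before a customer does.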
Route high-stakes questions to deterministic answers. Bereavement-fare policy is not a free-form question. It has a published, canonical answer. Mature consumer AI builds detect category and route to a templated response that links to the policy page rather than letting the model improvise. This pattern reduces both legal exposure and customer harm.
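A sketch of that routing layer follows; keyword matching stands in for a production intent classifier, and the categories, responses, and URLs are illustrative assumptions rather than any airline's actual content.

```python
# High-stakes topics get a pre-approved, policy-linked answer;
# only low-stakes questions ever reach free-form generation.
HIGH_STAKES_RESPONSES = {
    "bereavement": ("Bereavement fares must be requested before travel. "
                    "Full policy: https://example.com/policies/bereavement"),
    "refund": ("Refund eligibility depends on your fare type. "
               "Full policy: https://example.com/policies/refunds"),
}

HIGH_STAKES_KEYWORDS = {
    "bereavement": ["bereavement", "funeral", "death in the family"],
    "refund": ["refund", "money back", "reimbursement"],
}

def generate_free_form(question: str) -> str:
    """Stand-in for the model call used only for low-stakes questions."""
    return "Let me look into that for you."

def route(question: str) -> str:
    q = question.lower()
    for category, keywords in HIGH_STAKES_KEYWORDS.items():
        if any(keyword in q for keyword in keywords):
            # Deterministic, reviewed answer; the model never improvises here.
            return HIGH_STAKES_RESPONSES[category]
    return generate_free_form(question)

print(route("My grandmother died. Can I get the bereavement fare after my flight?"))
```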
Keep a policy-update sync loop. When the published policy changes, the AI knowledge source must change in the same change request. This is process, not technology. The Applied AI roles that own this loop are AI Product Manager and Responsible AI Engineer in collaboration with the legal team that owns the published policy.
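One way to enforce that coupling is a pre-merge check that fails whenever the published policy document and the AI knowledge index disagree. The file paths and manifest format below are assumptions made for illustration.

```python
# Fails the change request if the knowledge-index manifest was not rebuilt
# from the current version of the published policy document.
import hashlib
import json
import sys
from pathlib import Path

POLICY_PATH = Path("policies/bereavement.md")          # canonical published policy source
MANIFEST_PATH = Path("knowledge_index/manifest.json")  # what the chatbot retrieves from

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def main() -> int:
    manifest = json.loads(MANIFEST_PATH.read_text())
    indexed_hash = manifest.get("sources", {}).get(str(POLICY_PATH))
    current_hash = sha256_of(POLICY_PATH)
    if indexed_hash != current_hash:
        print(f"{POLICY_PATH} changed but the knowledge index was not rebuilt "
              f"(indexed {indexed_hash}, current {current_hash}). Blocking merge.")
        return 1
    print("Published policy and AI knowledge source are in sync.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```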
Mitigations
What builders should put in place to address the failure pattern. Each mitigation maps to operational practice the relevant Applied AI roles own.
- Run a daily evaluation suite that compares chatbot answers to the live published policy page for every question category the bot is allowed to answer.
- Implement category-aware routing so high-stakes questions (refunds, bereavement, medical accommodations, legal terms) hit a deterministic response, not free-form generation.
- Tie policy updates and AI knowledge-source updates to the same change request. The change cannot ship until both the published page and the AI knowledge index are updated and tested.
- Maintain a customer-facing audit log of chatbot answers tied to session ID (a minimal sketch follows this list). When a customer disputes an answer, the company can verify exactly what was said.
- Treat AI product launches as legal-review-required releases. The same review process used for new published terms should apply to new chatbot capabilities.
- Document the chatbot's scope in the user-visible interface. Where the bot cannot answer authoritatively, it should say so and link to the canonical source.
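For the audit-log mitigation above, a minimal append-only record keyed by session ID could look like the following; the JSON-lines storage and field names are illustrative, not a prescription.

```python
import json
import time
import uuid

AUDIT_LOG_PATH = "chatbot_audit.jsonl"

def log_exchange(session_id: str, question: str, answer: str, policy_version: str) -> str:
    """Append one chatbot exchange to the audit log and return its record ID."""
    record = {
        "record_id": str(uuid.uuid4()),
        "session_id": session_id,
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "question": question,
        "answer": answer,
        "policy_version": policy_version,  # which policy snapshot grounded the answer
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

def answers_for_session(session_id: str) -> list[dict]:
    """Retrieve every logged exchange for a session when a customer disputes an answer."""
    matches = []
    with open(AUDIT_LOG_PATH, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["session_id"] == session_id:
                matches.append(record)
    return matches
```

Tied to the session ID shown in the chat interface, a log like this lets the company reproduce exactly what a customer was told and which policy version the answer was grounded on.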
Related Applied AI roles
The Applied AI roles whose day-to-day work would have prevented, detected, or contained this incident.
- AI Product Manager: owns AI-powered product features and the roadmap that ships them, including which questions a customer-facing system answers free-form and which it routes to deterministic responses.
- Responsible AI Engineer: builds the evaluation suites that compare AI output to the canonical published policy.
- AI Governance Lead: designs the policy-sync process that keeps the published policy and the AI knowledge source aligned.
- AI Risk Analyst: documents the residual liability so legal and product teams can decide jointly what ships.
Frequently asked questions
What did the Air Canada chatbot get wrong?
The chatbot told Jake Moffatt he could book at standard rates and apply for a bereavement discount up to 90 days after his flight. The airline's published policy required passengers to apply for the bereavement rate before travel. The chatbot answer contradicted the actual policy, costing Moffatt the fare difference.
Why is the Air Canada chatbot ruling significant for Applied AI builders?
It established that a company cannot disclaim responsibility for its own customer-facing chatbot. The tribunal applied the same negligent-misrepresentation standard used for human agents. Applied AI product managers and responsible AI engineers now design customer-facing systems on the assumption that AI output carries the same liability as a published policy document.
What controls would have prevented the Air Canada chatbot incident?
Continuous evaluation against the canonical published policy would have caught the mismatch. Routing high-stakes questions such as bereavement fares to a deterministic, templated answer linked to the policy page would have reduced exposure. A policy-update sync loop tying the published policy to the AI knowledge source in a single change request would have prevented drift.
Did Air Canada appeal the ruling?
Air Canada did not pursue an appeal. It complied with the February 2024 order of the British Columbia Civil Resolution Tribunal, paying the $812.02 CAD award covering damages, interest, and tribunal fees, and removed the chatbot from its website shortly after. The decision stands as the working reference cited in enterprise AI accountability discussions.
Which Applied AI roles work on preventing Air Canada-style incidents?
AI Product Manager scopes which questions the system answers free-form and which it routes to deterministic responses. Responsible AI Engineer builds evaluation suites that compare AI output to canonical policy. AI Governance Lead designs the policy-sync process. AI Risk Analyst documents the residual liability so legal and product can decide jointly.
Sources
- Moffatt v. Air Canada, 2024 BCCRT 149 (Civil Resolution Tribunal of British Columbia, February 2024).