Consider a financial firm using an AI chatbot to handle client queries. A client persuades the bot to offer an unapproved discount, and the bot complies. The exchange wasn't malicious, but the system still made a binding representation on behalf of the company. Under UK law, that offer could be enforceable, and it could also raise compliance concerns under the FCA's Consumer Duty, which requires firms to avoid causing foreseeable harm through all customer interactions, including those generated by AI.
AI doesn’t understand regulation. It generates output based on patterns, not policy. Without properly scoped input data and governance, you’re effectively letting an intern draft customer communications with no oversight, and then publishing them live.
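The "intern with no oversight" analogy points to a concrete control: gate every generated reply behind a deterministic policy check before it reaches the client. Below is a minimal sketch in Python; the `APPROVED_DISCOUNTS` set, the regex rule, and the `review_reply` function are illustrative assumptions, not a real compliance framework or any particular vendor's API.

```python
import re
from dataclasses import dataclass

# Hypothetical policy data: the only discount rates the firm has approved.
APPROVED_DISCOUNTS = {0.05, 0.10}  # assumed example values

# Crude illustrative rule: flag any reply that mentions a percentage discount.
DISCOUNT_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*%\s*discount", re.IGNORECASE)


@dataclass
class ReviewResult:
    approved: bool
    reason: str


def review_reply(draft: str) -> ReviewResult:
    """Check a drafted chatbot reply against discount policy before sending."""
    for match in DISCOUNT_PATTERN.finditer(draft):
        rate = float(match.group(1)) / 100
        if rate not in APPROVED_DISCOUNTS:
            return ReviewResult(False, f"Unapproved discount: {match.group(0)}")
    return ReviewResult(True, "No policy violations detected")


if __name__ == "__main__":
    draft = "As a valued client, we can offer you a 25% discount on fees."
    result = review_reply(draft)
    if result.approved:
        print("SEND:", draft)
    else:
        # Escalate to a human reviewer instead of publishing live.
        print("HOLD FOR REVIEW:", result.reason)
```

In practice a check like this would sit alongside scoped retrieval, audit logging, and human escalation paths, but even a crude gate changes the failure mode from "published live" to "held for review".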