Is Redacting a Waste of Time?
The recent discovery of bugs in image editing tools is a reminder that even seemingly innocuous features can have significant implications for data protection.
Get the latest cyber news and updates straight to your inbox.
Even enterprise-grade AI tools like Microsoft Copilot or customer-facing chatbots can introduce legal and security risks if left ungoverned.
When we talk about AI risk, the spotlight usually lands on shadow AI. For example, staff might use free tools like ChatGPT or Claude to handle sensitive work without oversight. However CISOs and COOs often overlook the risks created by the tools you’ve already approved.
These are your Microsoft Copilots, your CRM-integrated AI, and your customer service chatbots. The tools that come pre-installed, board-approved, and are “secure by design.” But just because a tool is on the corporate whitelist doesn’t mean it’s risk-free. In fact, its very legitimacy can create a false sense of security.
Let’s unpack why.
A complete yet concise go-to guide for protecting your firm against AI-related security risks.
Consider a financial firm using an AI chatbot to handle client queries. The client persuades the bot to offer an unapproved discount, and the bot complies. It wasn’t malicious, but the system still made a binding representation on behalf of the company. Under UK law, that offer could be enforceable and could also raise compliance concerns under the FCA’s Consumer Duty, which requires firms to avoid causing foreseeable harm through all customer interactions, including those generated by AI.
AI doesn’t understand regulation. It generates output based on patterns, not policy. Without properly scoped input data and governance, you’re effectively letting an intern draft customer communications with no oversight, and then publishing them live.
Microsoft Copilot and other enterprise-grade AI tools like integrated assistants, document summarisers, or email generators often feel safe. They’re built into your stack, backed by well-known vendors, and rolled out under enterprise controls. But these tools are only as good as the data and permissions you feed them.
We’ve seen scenarios where approved tools:
In each case, the output appeared legitimate. That’s what makes it so risky. Errors from AI often look authoritative.
Another over-looked risk is that these tools evolve. Copilot today is not Copilot tomorrow. Model updates, retraining cycles, or vendor API changes can shift how your tool behaves.
This drift can have consequences:
If you’re not monitoring this, or if you don’t have transparency into changes from the vendor, you won’t see the risk until it becomes an incident.
Define how AI tools can be used internally and externally, by whom, and under what conditions. Ensure that everyone (from customer service to compliance) understands it.
Perform full threat modelling and DPIAs (Data Protection Impact Assessments) on tools like Copilot and chatbots, just as you would for a new cloud provider.
Conduct periodic “red teaming” exercises. Can your chatbot be manipulated into making offers or releasing sensitive data through clever prompting?
Use tools or audit logs to track what these systems are saying and doing. Unexpected behaviour is a warning sign.
Someone at the executive level (often the CISO or Head of Risk) must be explicitly accountable for the safe and compliant use of AI across the organisation.
AI tools embedded in your platforms aren’t just “set and forget.” Many are provided by third-party vendors, and their models evolve without warning. If you don’t have visibility into where those tools live, how they process data, or whether they’re training on your firm’s information, you can’t prepare for failure or recover from it.
This isn’t just a technical concern. It’s a compliance and continuity issue. As part of supply chain risk management, firms should treat AI integrations the same way they would any other critical vendor: know where it is, what it’s doing, and how to shut it down or replace it if needed.
AI isn’t going away. If anything, its footprint inside regulated firms will only grow. But approved doesn’t mean safe, and automation doesn’t mean abdication. Governance, not just approval, is the new bar.
Because when AI makes a promise – your firm is on the hook.
The recent discovery of bugs in image editing tools is a reminder that even seemingly innocuous features can have significant implications for data protection.
In the digital age, ransomware attacks have become one of the most insidious threats to organisations and individuals. These malicious software programs are designed to encrypt important data and render it inaccessible until a ransom
We live in an increasingly connected world and with this in mind, cyber threats are growing every day. Today, ransomware attacks are on the rise while phishing scams are becoming even more convincing. We are