Artificial Intelligence and Client Confidentiality
August 12, 2025

Artificial Intelligence (AI) holds the promise of introducing efficiency into organizations by providing a vehicle to routinize mundane tasks or to assist individuals in research, analysis, and reporting. Accompanying this promise is a potential landmine: preserving the confidentiality of client or customer data, which can be overlooked when leveraging AI tools. This blog discusses AI at a high level and provides some best practices to consider as you deploy it in your organization.

Public AI vs. Private AI

Public AI refers to artificial intelligence systems that are accessible to the general public. These are typically cloud-based systems that users can access via various services. Examples of Public AI tools include Gemini (Google), Copilot (Microsoft), ChatGPT (OpenAI), Llama (Meta), and Claude (Anthropic). By contrast, and as the name would suggest, Private AI refers to systems developed and deployed within an organization's infrastructure for internal use.

Benefits

Public AI tools are readily available and designed to be easy to implement and adopt. Since they are typically hosted, the investment in infrastructure is minimal, as these costs are generally borne by the provider. Private AI offers greater control over data and customization of the tool. This control, however, typically requires more resources to develop and maintain.

Risks to Client/Customer Confidentiality

Because of the way AI systems process requests, it's often inevitable that confidential information is input (either knowingly or unknowingly).
For example, suppose a Public AI tool is leveraged to synthesize a 500-page lease agreement into a couple of pages. This is obviously a huge efficiency gain, but it also introduces the potential for exposure of sensitive information. These risks could include a breach of the Public AI provider; data being utilized to "train" the AI, indirectly benefiting competitors; or employees of the AI provider reviewing the data provided.

Of course, the risks identified above would be significantly mitigated with a Private AI implementation, but, again, the costs of such a solution are often too significant for small to medium-sized businesses. Private AI is "trained" on an organization's data only (not public data), that data generally never leaves your control, and the models are never shared.

Public AI - What can I do?

The first step in introducing safeguards into your AI environment is to obtain a comprehensive understanding of how it is used, what tools are used, and who is using them. Once this is identified and documented, you can implement various controls as applicable. These controls could include:

Informing your customers/clients and obtaining their authorization

On the surface, this sounds like a recipe for disaster, but most contractual relationships between entities contain confidentiality and privacy clauses. Consider being direct with your clients/customers: inform them of the potential use of AI on their data, if applicable, and obtain their authorization to do so. Better to obtain this authorization ahead of time than to have to disclose a potential breach of their data related to the use of AI.

Data Minimization/Anonymization

Consider implementing processes where only the minimal amount of client data is utilized in the AI tool and/or anonymize the data before it is input into the AI tool.
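The anonymization step above can be sketched in a few lines. This is a minimal, illustrative example only: the regex patterns and placeholder labels are assumptions, not a vetted solution, and a real deployment should use a purpose-built PII-detection tool with human review before anything is sent to a Public AI service.

```python
import re

# Illustrative patterns only -- real PII detection needs a vetted library
# and human review, not a handful of regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace likely PII with labeled placeholders before AI submission."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Sanitize a clause before pasting it into a Public AI tool:
clause = "Contact the tenant at j.doe@example.com or 555-867-5309."
print(anonymize(clause))
```

The same idea extends to client names, account numbers, and addresses; the point is that redaction happens on your side, before any data crosses into a system you do not control.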
This could potentially eliminate the need to obtain client approval if you can sanitize the data to the point where confidentiality issues are eliminated.

Make sure you are appropriately trained in the tool

Building AI "prompts" is becoming a sophisticated art. Ensure that your resources are not only adequately trained in the tool they are using but also in your company's policies and procedures governing AI (e.g., Data Minimization/Anonymization procedures).

Restrict the use of AI in your organization

Most likely, not everyone needs access to AI. By performing the inventory of how it is to be used, you can restrict access to the appropriate individuals - just as you would restrict access to your General Ledger, Payroll, and other ERP systems.

Vendor Due Diligence

We've stressed numerous times in other blogs the importance of having a robust vendor management program to monitor those vendors that potentially have access to critical and confidential data. Take a deep dive into understanding the AI tools you are considering and assess each vendor's capabilities in addressing your security and confidentiality concerns, regardless of their size. Obtain their SOC reports, if applicable, and other relevant information so that you can make a decision you are comfortable with - and that your clients and customers will be comfortable with.

Ensuring that your organization selects and implements the appropriate AI tools is increasingly important in today's environment. Need help identifying and assessing the risks of implementing AI in your company? Reach out to a member of our Information Security Services Team.