
AI’s Impact on Compliance for Financial Services Firms

Published
Oct 20, 2025

Chief compliance officers (CCOs) at financial services firms need to take a handful of considerations into account when utilizing artificial intelligence (AI) in their policies and procedures.

EisnerAmper's Global Regulatory Compliance Solutions Team and Technology Enablement Team hosted a roundtable luncheon in New York City for CCOs and other financial services regulatory professionals to discuss how artificial intelligence tools and their usage affect operations and compliance programs at financial services firms, including trends in governance, the risks of using AI tools, and business use cases for making compliance more efficient. The roundtable discussion was moderated by Suzan Rose, a Senior Advisor to AIMA on Government and Regulatory Affairs.

An engaging conversation revealed several key themes about the impacts and challenges of AI:

  1. Compliance as a Catalyst for Innovation: Rather than positioning compliance as a barrier or enforcement tool, organizations can use it as a consultative lens to uncover innovation. By observing how employees are organically adopting AI (even outside formal policy), compliance teams can identify emerging use cases and guide responsible experimentation. This approach transforms compliance from a “stick” into a strategic enabler.
  2. Getting the Basics Right: Foundational training is essential to effective AI usage and to mitigating AI risks. This includes 101-level generative AI training for employees; clear, actionable AI usage policies; and an open, communicative culture between employees and leadership that supports informed decision-making and responsible AI usage.
  3. Good Governance Means More Than Just a Policy: Managing AI-related compliance risk requires more than a written policy: employees need to be adequately trained in how AI works and how to use it effectively. Many CCOs have also found it helpful to designate dedicated AI subject-matter experts (SMEs) and to establish committee or steering committee (SteerCo) oversight of AI usage as additional governance best practices.
  4. Desire for Guidance from Regulators on AI Usage: Regulatory and technology professionals alike are eager for guidance on AI usage from regulators but are wary of an "overcorrection" or "overreaction" to AI by these bodies. Both groups urge incremental guidance and regulation as the technology progresses and becomes even more widely adopted.
  5. Elevating Data Security and Privacy: As AI systems interact with sensitive data, privacy and security become paramount. A twofold strategy, combining automated safeguards with employee-driven controls, helps ensure robust protection. This dual approach reinforces trust and accountability across the organization.

AI is not one-size-fits-all, and every organization’s journey is unique. EisnerAmper supports firms with governance, compliance, strategy, and training solutions to address key AI challenges through a holistic approach that integrates AI as a trusted part of business operations and growth.


Andrew Yarnall

Andrew Yarnall is a Director in the Regulatory Compliance Group. With over 10 years of experience, he advises financial services firms on regulatory strategy, compliance program enhancements, and navigating complex rule changes across global and boutique institutions.

