Financial firms' AI use raises data compliance risks
Regulated financial data accounted for 59% of generative AI-related data policy violations in the financial services sector, according to Netskope Threat Labs. The findings highlight compliance risks as financial institutions expand their use of AI tools.
Netskope's latest Financial Services Report found that generative AI use is now widespread in the sector, with 70% of users engaging with such tools directly and 97% interacting indirectly with AI-enabled applications. It also said 94% of those applications rely on user data for training, increasing the risk that sensitive information could be exposed.
The figures come as banks, insurers and other financial firms adopt AI for customer service, fraud detection and automation, while facing tighter scrutiny over how they handle regulated data. The report noted that organisations have started shifting users to company-managed AI services, but overlap between personal and work usage remains a source of risk.
Use of enterprise-managed generative AI tools rose from 33% to 79%, while reliance on personal AI applications fell. Even so, 15% of users still switch between personal and corporate accounts, creating potential channels for data leakage.
Beyond AI
The research also highlighted broader security concerns tied to cloud and web use in financial services. In personal cloud applications, 65% of data policy violations involved regulated data.
LinkedIn and Google Drive were among the most widely used workplace platforms, with adoption rates of 92% and 84% respectively. GitHub was identified as the platform most frequently abused to deliver malware, affecting 11% of organisations in the dataset.
The findings were based on anonymised usage data from a subset of Netskope's financial services customers worldwide. While the report references India's financial sector, the methodology indicates the underlying customer sample was global rather than country-specific.
That distinction matters because India is one of the faster-growing markets for digital financial services, with lenders, insurers and payment groups increasing spending on automation and AI-assisted operations. Those shifts can expand the volume of sensitive customer and transaction data moving through external applications and cloud services.
For compliance teams, the concentration of regulated financial data in policy violations suggests that AI adoption is adding to existing concerns rather than replacing them. Personal cloud storage, social platforms and code-sharing services remain part of day-to-day work patterns, creating multiple routes through which data can leave controlled environments.
The report also points to a key limitation of managed AI services: they do not remove the risk created when staff alternate between company-approved and personal accounts. In regulated industries, that overlap can complicate auditing, data handling controls and internal governance.
"In financial services, organizations are actively shifting users away from personal AI tools toward managed, enterprise-ready platforms that offer better visibility and control, though some overlap shows there's still work to do. What really stands out is the data: regulated financial information continues to dominate policy violations, making this one of the highest-stakes environments for data protection. As AI becomes more deeply embedded through APIs and integrated platforms, strong governance and effective DLP controls are essential to keep innovation moving without putting sensitive data at risk," said Gianpietro Cutolo, cloud threat researcher at Netskope Threat Labs.
His comments reflect a broader shift in corporate security strategy as businesses try to channel employee demand for AI into tools that can be monitored and governed. The report argues that the challenge is no longer limited to staff typing prompts into public chatbots, but extends to AI functions embedded in business applications and connected through application programming interfaces.
Layered controls
Netskope recommends that financial institutions inspect web and cloud traffic more closely, restrict unnecessary applications and use data loss prevention tools to limit exposure. It also identified remote browser isolation as one way to manage access to riskier websites.
Those recommendations reflect a layered security model that has become more common as threats spread across email, cloud apps, web traffic and developer platforms. In practice, firms are being pushed to align cyber security controls with data governance and compliance policies, particularly as AI tools are introduced into front-office and back-office workflows.
"As financial institutions accelerate their adoption of generative AI, they are also expanding the number of pathways through which sensitive data can be exposed. While the shift towards organisation-managed tools is a positive step, our findings show that risks persist, particularly where personal and enterprise usage overlap. To reduce risk, organisations need a layered approach: inspecting all web and cloud traffic to stop malware, blocking non-essential applications, and using data loss prevention to protect sensitive information. Technologies like remote browser isolation also play a key role in enabling safe access to higher-risk websites," said Ray Canzanese, director of Netskope Threat Labs.