CFOtech India - Technology news for CFOs & financial decision-makers

Regulated data dominates AI breaches in finance sector

Thu, 9th Apr 2026

Regulated financial data accounted for 59% of generative AI-related data policy violations in the financial services sector, according to Netskope Threat Labs. The findings highlight mounting compliance pressure as financial institutions expand their use of AI tools.

The report found that use of enterprise-managed generative AI applications rose from 33% to 79%, while reliance on personal AI applications declined. Even so, 15% of users still switched between personal and corporate accounts, increasing the risk that sensitive data could be exposed.

AI use was widespread across the sector. Netskope found that 70% of users were actively using generative AI tools, while 97% were indirectly interacting with AI-powered applications. It said 94% of those applications rely on user data for training, raising concerns about how regulated information may be handled once entered into the tools.

For financial institutions, the issue is especially acute because of the volume of customer, transaction and regulated records they handle. The figures suggest that while firms are steering staff towards managed services with greater oversight, internal controls have not removed the risk that employees may still submit data through personal accounts or consumer-facing applications.

Beyond AI

The research also highlighted other workplace technology risks. It found that 65% of data policy violations in personal cloud applications involved regulated data. LinkedIn and Google Drive also remained widely used in work environments, with usage rates of 92% and 84% respectively.

GitHub was the platform most frequently abused for malware delivery in the dataset, affecting 11% of organisations. That points to a broader security challenge for financial groups, which face not only data leakage risks from AI adoption but also threats tied to common cloud and collaboration services.

The methodology was based on anonymised usage data drawn from a subset of Netskope customers in the global financial services sector. Netskope said the results show AI adoption is becoming embedded in everyday workflows, even where institutions are trying to apply tighter controls.

The figures carry particular weight in India, where banks, insurers and other financial groups have been investing in AI for customer service, fraud detection and operational automation. That growth has sharpened scrutiny of how firms govern systems that may process or retain highly sensitive information.

In the Indian market, regulated firms face strict obligations over the handling of customer and financial data. The findings suggest that a shift to managed AI tools may improve visibility for security teams, but does not remove the need for data loss prevention measures and monitoring across web and cloud services.
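To illustrate the kind of data loss prevention check the report alludes to, the sketch below scans outbound text, such as a prompt about to be sent to a generative AI tool, for patterns resembling regulated identifiers. The patterns (an Indian PAN format and a loose card-number match) and the `scan_outbound` function are illustrative assumptions, not part of Netskope's product or methodology; production DLP engines use far richer detection, including keyword dictionaries, exact-data matching and ML classifiers.

```python
import re

# Illustrative patterns only -- hypothetical, not drawn from any vendor's ruleset.
PATTERNS = {
    # Indian Permanent Account Number: 5 letters, 4 digits, 1 letter
    "indian_pan": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),
    # Loose payment-card match: 13-16 digits, optionally space/hyphen separated
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of regulated-data patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# Example: a prompt a user might paste into a consumer AI tool
prompt = "Customer AFZPK7190K disputes a charge on card 4111 1111 1111 1111"
violations = scan_outbound(prompt)
if violations:
    print(f"Blocked: matched {violations}")
```

In practice such checks run inline at a secure web gateway or cloud access broker, so the same policy applies whether the destination is a managed enterprise AI tool or a personal account.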

Gianpietro Cutolo, Cloud Threat Researcher at Netskope Threat Labs, commented on the shift in user behaviour and the persistence of policy breaches. "In financial services, organizations are actively shifting users away from personal AI tools toward managed, enterprise-ready platforms that offer better visibility and control, though some overlap shows there's still work to do. What really stands out is the data: regulated financial information continues to dominate policy violations, making this one of the highest-stakes environments for data protection. As AI becomes more deeply embedded through APIs and integrated platforms, strong governance and effective DLP controls are essential to keep innovation moving without putting sensitive data at risk," Cutolo said.

The report suggests the challenge is no longer limited to whether staff are using generative AI at all. It now centres on how organisations govern a mix of approved applications, embedded AI functions within broader software products, and the continued use of personal accounts alongside corporate tools.

Ray Canzanese, Director at Netskope Threat Labs, said the expansion of AI use had created more routes through which data could leave an organisation. "As financial institutions accelerate their adoption of generative AI, they are also expanding the number of pathways through which sensitive data can be exposed. While the shift towards organisation-managed tools is a positive step, our findings show that risks persist, particularly where personal and enterprise usage overlap. To reduce risk, organisations need a layered approach - inspecting all web and cloud traffic to stop malware, blocking non-essential applications, and using data loss prevention to protect sensitive information. Technologies like remote browser isolation also play a key role in enabling safe access to higher-risk websites," Canzanese said.