Banks struggle to move AI pilots beyond test stage
Banks are struggling to move artificial intelligence projects beyond the pilot stage. Industry analysis points to structural constraints rather than shortcomings in the underlying models.
Financial institutions have invested heavily in AI and automation, but many programmes remain confined to research teams or limited operational tests. The main barriers, according to the analysis, are poor data quality, ageing core systems, regulatory demands, shortages of specialist staff and weak internal governance.
Data problems
Data sits at the centre of the problem. Banks hold vast amounts of customer and transaction information, but much of it is split across business lines, stored in incompatible systems or managed without consistent controls.
That fragmentation makes it difficult to train and run reliable models. In many cases, institutions are trying to build AI tools on incomplete records or data that lacks the business context needed to produce accurate outputs.
A 2023 McKinsey report shows that data quality remains one of the most persistent obstacles to the wider use of AI in finance. Housing lenders ranked it as the biggest issue, while nearly 70% of banks said poor data quality and integration were major barriers to adoption.
The issue is not the amount of data available. Banks often have more than enough information, but it is trapped in legacy databases or spread across departments with different standards. As a result, AI systems draw on inconsistent inputs, increasing the risk of unreliable recommendations or misleading results.
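To give a sense of the reconciliation work this implies, the following sketch (in Python, using two invented departmental extracts with hypothetical column names) shows how records held to different standards might be normalised into a single consolidated view before any model is trained on them.

```python
import pandas as pd

# Hypothetical extracts from two departmental systems with different conventions.
retail = pd.DataFrame({
    "cust_id": ["C001", "C002"],
    "dob": ["1985-03-12", "1990-07-01"],            # ISO dates
    "country": ["AU", "NZ"],                        # ISO alpha-2 codes
})
lending = pd.DataFrame({
    "customer_ref": ["C001", "C003"],
    "date_of_birth": ["12/03/1985", "22/11/1978"],  # day/month/year
    "country": ["Australia", "Australia"],          # free-text names
})

# Normalise each source to a shared schema before any model sees the data.
COUNTRY_MAP = {"Australia": "AU", "New Zealand": "NZ"}

lending_norm = pd.DataFrame({
    "cust_id": lending["customer_ref"],
    "dob": pd.to_datetime(lending["date_of_birth"], dayfirst=True).dt.strftime("%Y-%m-%d"),
    "country": lending["country"].map(COUNTRY_MAP).fillna(lending["country"]),
})

# A single consolidated view, de-duplicated on the shared customer key.
consolidated = (
    pd.concat([retail, lending_norm], ignore_index=True)
      .drop_duplicates(subset="cust_id", keep="first")
)
print(consolidated)
```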
Analysts say institutions need a single, trusted data source and stronger governance for pilots to deliver practical value. Without that groundwork, AI projects can stall indefinitely.
Legacy burden
Older banking systems add another layer of difficulty. Many large lenders still rely on core platforms built decades ago, some of them written in COBOL, which were not designed to connect easily with modern AI tools.
These platforms often depend on batch processing and offer limited real-time access to data. AI applications, by contrast, typically require continuous data flows and application programming interfaces, making integration with old banking infrastructure slow and expensive.
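One interim pattern, sketched below in Python with an invented file path and field names, is to expose the output of a legacy system's nightly batch extract through a thin API so that newer tools can query it. The data is still only as fresh as the last batch run, which is precisely the limitation described above.

```python
from datetime import datetime
from pathlib import Path
import csv

from flask import Flask, jsonify, abort

app = Flask(__name__)

# Hypothetical location of the nightly extract produced by the core platform.
BATCH_FILE = Path("/data/extracts/accounts_latest.csv")

def load_batch():
    """Read the most recent batch extract into memory, keyed by account ID."""
    with BATCH_FILE.open(newline="") as f:
        return {row["account_id"]: row for row in csv.DictReader(f)}

@app.route("/accounts/<account_id>")
def get_account(account_id):
    accounts = load_batch()
    if account_id not in accounts:
        abort(404)
    record = accounts[account_id]
    # The caller can see how stale the data is: it reflects the last batch run,
    # not the current state of the core system.
    record["as_of"] = datetime.fromtimestamp(BATCH_FILE.stat().st_mtime).isoformat()
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8080)
```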
As a result, banks may need substantial IT modernisation before they can use AI in core operations. One global bank spent more than A$1.5 billion updating its technology estate to support services better suited to AI deployment.
Until then, many institutions restrict AI to lower-risk or peripheral tasks such as fraud alerts and customer service chatbots. Core activities, including underwriting and other regulated decision-making processes, still rely on human oversight.
Regulatory caution
Banking regulation also shapes how far AI can be used. Financial firms face strict obligations around fairness, accountability and explainability, particularly in lending and risk decisions.
That makes opaque models difficult to use in production. Regulators expect firms to explain why a customer was denied credit or how a risk judgment was reached, which is difficult when a system's internal logic is not readily understood.
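For simpler, interpretable models that explanation can be generated directly. The sketch below is a minimal Python illustration, with an invented two-feature credit model and made-up applicant data, of one way to turn a logistic regression's coefficients into the kind of reason codes a declined applicant might be given; opaque models need additional explainability tooling, which is part of why they are harder to put into production.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Invented training data: two features, label 1 = loan repaid, 0 = defaulted.
X = np.array([[0.20, 52_000], [0.65, 31_000], [0.10, 78_000],
              [0.80, 24_000], [0.35, 45_000], [0.90, 29_000]])
y = np.array([1, 0, 1, 0, 1, 0])
feature_names = ["credit_utilisation", "annual_income"]

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def reason_codes(applicant):
    """Rank features by how strongly they pushed the score towards decline."""
    z = scaler.transform([applicant])[0]
    contributions = model.coef_[0] * z   # per-feature contribution to the log-odds
    order = np.argsort(contributions)    # most negative = strongest push to decline
    return [(feature_names[i], round(contributions[i], 2)) for i in order]

# An applicant with high utilisation and modest income.
print(reason_codes([0.85, 30_000]))
```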
For that reason, banks have generally taken a cautious approach. AI is more commonly used to support staff in compliance, monitoring and advisory work than to make autonomous decisions in regulated areas.
Citigroup keeps AI in a human-in-the-loop phase in compliance functions, using the technology to assist staff rather than replace human judgment. That reflects a broader pattern across the sector, as executives weigh the potential benefits of automation against the risk of penalties, disputes or reputational damage if systems fail.
Skills gap
Staffing is another constraint. Financial institutions need people who understand both advanced AI techniques and the rules, processes and controls that govern banking.
That combination is in short supply. In a survey of CFOs conducted last year, outsourcing company Personiv found that 87% of financial executives reported shortages of data scientists and AI engineers.
The problem goes beyond hiring technical staff. Banks need teams that can adapt models to specific financial workflows, compliance requirements and operational settings. Programmes often lose momentum when teams understand machine learning but lack knowledge of how banking businesses actually work.
This gap has pushed some firms towards outsourcing or limited internal development. Others are being urged to train domain-specific AI teams or work with fintech partners that can bridge technical and sector expertise.
Governance gaps
Even where funding is available, unclear ownership can hold projects back. Many banks launched AI pilots without a clear strategy for moving them into daily operations or assigning accountability for outcomes.
Only 12.2% of the 148 financial institutions in Wolters Kluwer's survey had a well-defined and properly resourced AI strategy, despite more than half experimenting with some form of the technology. That leaves many initiatives stuck between innovation teams and frontline business units.
Weak governance also undermines trust. If a model performs poorly or produces an error, support for wider deployment can quickly fade. Analysts argue that banks need stronger policies on data use, model validation and accountability if they want AI efforts to scale.
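One concrete element of that validation work is monitoring for drift between the data a model was approved on and the data it now scores. The sketch below (Python, with made-up score distributions) computes a population stability index, a metric widely used in bank model governance; rule-of-thumb thresholds of roughly 0.1 and 0.25 are often treated as warning and action levels, though policies vary by institution.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (validation-time) sample and recent production scores."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Bin edges come from the reference distribution; widen the outer edges so
    # production scores outside the reference range still fall into a bin.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero and log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
reference = rng.beta(2, 5, 10_000)     # made-up scores at validation time
production = rng.beta(2.5, 4, 10_000)  # made-up, slightly shifted production scores
print(f"PSI = {population_stability_index(reference, production):.3f}")
```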
The assessment concludes that the main obstacles to AI in banking are organisational rather than theoretical. Progress depends less on launching new pilots than on fixing the systems, data structures and decision-making frameworks beneath them.
As one analyst put it, "it isn't the technology - it's the team" that aligns AI with the unique demands of banking.