The hands-off approach to regulating artificial intelligence (AI) in financial services could harm consumers, MPs say.
The Treasury Select Committee said today that the Financial Conduct Authority (FCA), the Bank of England and the Treasury are taking a ‘wait and see’ approach to AI and are ‘not doing enough’ to offset any risks.
The committee’s report states that 75% of financial firms in Britain now use AI, a higher proportion than in any other sector.
The report states that AI could deliver “significant benefits” to consumers, and that financial firms and the FCA should explore these opportunities.
But it added: “However, the FCA, the Bank of England and HM Treasury are not doing enough to manage the risks of AI. By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm.”
The Treasury Select Committee said it has found a significant amount of evidence that AI can harm consumers of financial services.
For example, AI can provide unregulated financial advice that may mislead consumers or be simply wrong, the committee’s report said.
Other harms noted by the committee included vulnerable customers losing access to financial services due to AI.
The UK does not have specific legislation or regulations to manage AI in financial services, the report said, but instead uses existing legislation and regulations.
However, the FCA and Bank of England say these pre-existing regulations and laws are sufficient to address any risk from AI to consumers of financial services.
Meg Hillier, chair of the Treasury Select Committee, said: “Businesses are understandably keen to try and gain an edge by embracing new technology, and this is especially true for our financial services sector, which needs to compete on the global stage.
“The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the government to ensure that the safety mechanisms within the system keep pace.
“Based on the evidence I have seen, I am not confident that our financial system is prepared if a major AI-related incident were to occur, and that is concerning. I want our public financial institutions to take a more proactive approach to protect against that risk.”
Financial services industry leader Preetham Peddanagari said: “Today’s report from the Treasury Select Committee on AI in financial services underlines the importance of a clearly defined regulatory approach.
“EY research shows strong industry adoption of advanced models – such as agentic AI – and has found that more than a third of UK financial services firms have fully embedded AI into their operations. The challenge now is governance, with many firms admitting they do not have sufficient controls in place to protect customers and ensure compliance with this new technology.
“As AI continues to evolve from experimentation to large-scale deployment, it is critical that companies have robust governance and accountability processes in place.”
Many mortgage brokers are already using AI. At a recent Mortgage Strategy MIT Live event in London, MAB said brokers’ accuracy with documentation was around 80% without AI, rising to 99% when the technology was used to assist.