Capital Markets: Putting up the guardrails: the need for accountability (NZ Herald)

Tim McCready

Artificial intelligence is rapidly reshaping the future of finance, and New Zealand's regulators are taking notice.

With its potential to cut costs, boost efficiency, and spark innovation, AI is increasingly critical to the financial services sector.

The New Zealand Financial Markets Authority (FMA) is making it clear it encourages innovation and believes New Zealanders should have access to the same technological advancements as those in other countries. At the same time, it wants to help facilitate the responsible adoption of AI, working with firms to ensure they have the appropriate oversight to mitigate risks and provide quality service to customers.

“The FMA recognises the transformative potential of AI in finance, and our focus will always be to ensure that adoption is safe and ultimately benefits both consumers and markets,” says the FMA’s executive director of strategy and design, Daniel Trinder.

The FMA’s latest research on AI, released last year, shows the New Zealand financial services sector is rapidly gearing up for an AI-powered future. Thirteen firms responded to the FMA’s survey, with representatives across asset management, banking, financial advice, and insurance.

The findings show a clear consensus: AI is a pivotal technology with enormous promise. All respondents said they have either integrated generative AI into their operations or plan to do so soon, driven by a desire for better customer outcomes, improved operational efficiency, and enhanced fraud detection. But companies remain cautious, taking a measured, risk-assessment-driven approach with a strong emphasis on responsible and controlled deployment.

From the FMA’s perspective, there are three categories of risk for industry and regulators to consider – market manipulation, systemic risk, and consumer protection and ethical concerns.

There is concern about AI amplifying herding behaviour, which could increase correlations across markets and make them more susceptible to sudden shocks. Poorly tuned models trained on biased or incomplete data can also pose real dangers, delivering flawed predictions and skewing decision-making. And with AI tools becoming widely embedded and complex, there is an increased exposure to sophisticated cyber threats.

One of the most significant threats is AI-generated fraud. The use of generative AI to create fake content or impersonate individuals (known as “deepfakes”) is a real concern in New Zealand. A recent example saw an AI-generated video of Prime Minister Christopher Luxon circulated as part of a scam targeting pensioners.

To date, the FMA has not opted to introduce standalone AI regulation. Instead, it is relying on existing legal frameworks, while signalling that governance and accountability will be the core expectations placed on financial services firms.

Trinder spoke on this topic at the annual EU-Asia Pacific Forum on Financial Regulation in Vietnam earlier this year.

“Clarifying our expectations will permit greater adoption of AI and other emerging technologies, but also in a way that minimises risks,” he said.

“This would help ensure good governance which is paramount for safe AI adoption. Governance is not a panacea on its own, but without good governance arrangements that keep pace with the application of AI, the risks increase substantially.”

What’s clear is that boards and senior management of financial institutions won’t be able to delegate their AI responsibilities.

Trinder says accountability will be required across the entire AI lifecycle, which may include specifying the role of human intervention to minimise harmful outcomes from the adoption of AI.

The FMA is collaborating with industry on guardrails for AI-generated financial advice and tools. It has conducted an industry round table to discuss the current and future applications of AI in financial services with key market players.

The challenge will be to unlock the benefits of AI while ensuring New Zealanders remain protected.

As the technology develops, so too will the expectations around risk, disclosure, and accountability.

That balancing act is only just getting started.