A couple of years after its initial boom, artificial intelligence (AI) remains a huge buzzword in the fintech industry, as every firm looks for new ways of integrating the tech into its infrastructure to gain a competitive edge. Exploring how they are going about this in 2025, The Fintech Times is spotlighting some of the biggest themes in AI this February.
Avoiding bias is key in financial decision-making. AI can massively help an organisation decide who should and shouldn’t be onboarded or offered a service. However, rejecting a worthy applicant because of poor habits learned by AI and machine learning systems completely negates the purpose of using the technology: ensuring that everyone who should receive a financial offering does so, quickly.
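To make that risk concrete, one routine pre-production check is to compare outcomes across applicant groups before a model is trusted with onboarding decisions. The Python sketch below is purely illustrative: the data, column names and the 0.8 ‘four-fifths’ threshold are assumptions for demonstration, not requirements drawn from any regulation discussed here.

```python
# A minimal sketch of one common bias check: comparing approval rates
# across groups. Column names, data and the 0.8 threshold (the
# "four-fifths rule" heuristic) are illustrative assumptions.
import pandas as pd

# Hypothetical decision log: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest approval rate vs. highest.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:  # flag for human review if the gap is large
    print("Warning: approval rates diverge; review the model for bias.")
```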
While firms have a duty to ensure that everyone entitled to a service gets it, regulations play a big part in making sure this priority does not slip down the list. In light of this, we hear from more industry experts about which regulations are impacting machine learning in financial decision-making, and how firms need to change their mindsets towards AI regulation.
Global oversight needed
For Dorian Selz, co-founder and CEO at Squirro, the enterprise GenAI platform provider, there are various ways in which organisations can get around regulations. He explores how abiding by a regulation in one country does not mean a firm meets the requirements in the other countries where it operates.
“The issue isn’t just the regulations affecting machine learning – it’s the lack of standardisation across countries in a globalised economy. A financial services company might rigorously apply the regulations in force at their HQ, but they might not meet the requirements in other countries where they operate. Despite this, there’s little preventing them from continuing and claiming that they followed ‘their’ rules. This lack of oversight around the use of ML in financial decision-making is dangerous.”
DORA is acting as a wake-up call
Simon Phillips, CTO of SecureAck, the automated security platform, notes that with DORA coming into force, firms will be under much stricter rules and will need to formalise their relationships with third-party providers far more than before if they are to avoid hefty fines.
“DORA is one of the newest regulations affecting financial services, and it has a direct impact on machine learning. However, most people won’t immediately associate the regulation with this.
“Machine learning algorithms are often a ‘black box’, meaning we don’t know why a decision or outcome was reached. When something goes wrong, as we have seen before with AI-based spam detection, it can result in legitimate activity being affected and a denial of service.
“However, in certain cases where a rogue algorithm causes a denial of service, this could fall within the scope of DORA, as it could threaten the availability of key banking services. Machine learning is also becoming increasingly reliant on third parties and cloud providers, yet many of these organisations have suffered large-scale outages.
“Considered in relation to DORA, such outages could turn these providers into critical third parties, which means they will have to sign contracts and adhere to certain standards to safeguard the availability of their services.”
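One practical way to read Phillips’s spam-detection example is as an argument for guardrails around automated blocking. The sketch below is a hypothetical kill switch, not anything DORA prescribes: it compares a model’s recent block rate to an assumed historical baseline and escalates to human review when the two diverge. The baseline, window size and multiplier are all illustrative assumptions.

```python
# A hypothetical guardrail for a 'rogue algorithm': if the share of
# blocked transactions in a recent window far exceeds the historical
# baseline, stop auto-blocking and escalate to human review.
# The baseline, window size and multiplier are illustrative assumptions.
import random
from collections import deque

BASELINE_BLOCK_RATE = 0.02   # assumed historical share of blocked activity
WINDOW = 1000                # number of recent decisions to watch
ALERT_MULTIPLIER = 5         # trip if the rate hits 5x the baseline

recent = deque(maxlen=WINDOW)

def record_decision(blocked: bool) -> bool:
    """Record one model decision; return True if the kill switch trips."""
    recent.append(1 if blocked else 0)
    if len(recent) < WINDOW:
        return False             # not enough history yet
    rate = sum(recent) / len(recent)
    return rate > BASELINE_BLOCK_RATE * ALERT_MULTIPLIER

# Simulate a drifting model that suddenly blocks ~15% of legitimate activity.
random.seed(0)
for _ in range(1500):
    if record_decision(random.random() < 0.15):
        print("Kill switch tripped: block rate anomalous, escalating to humans.")
        break
```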
Achieving responsible AI
According to Scott Zoldi, chief analytics officer at FICO, the analytics firm, two fundamental regulations impacting machine learning in financial decision-making are the General Data Protection Regulation (GDPR) and the EU AI Act.
Exploring why these two regulations are so important, he said: “GDPR asserts consumers’ rights over automated decisions made by an AI: one can contest the decision, validate the data used, and obtain a concrete and actionable explanation of how the AI reached it.
“The EU AI Act goes further, indicating which types of financial decisions are high risk and where AI may not be appropriate unless it is robust, interpretable, ethical, and auditable. These two regulations are acknowledged worldwide as standards for responsible AI.”
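The ‘concrete and actionable explanation’ GDPR demands is often delivered through reason codes: ranking each feature’s contribution to a score and reporting the biggest negative drivers back to the applicant. The sketch below assumes a simple linear scorecard with hypothetical weights and feature names; it illustrates the general technique, not FICO’s own method.

```python
# A minimal sketch of per-decision 'reason codes' from a linear credit
# model: rank each feature's contribution to the score and report the
# largest negative drivers. Weights and feature names are hypothetical.
import numpy as np

features = ["utilisation", "missed_payments", "account_age_years"]
weights  = np.array([-2.0, -1.5, 0.3])   # hypothetical trained weights
means    = np.array([0.35, 0.5, 6.0])    # assumed population averages

def reason_codes(x: np.ndarray, top_n: int = 2) -> list[str]:
    # Each feature's score contribution relative to the average applicant.
    contrib = weights * (x - means)
    order = np.argsort(contrib)          # most negative (most damaging) first
    return [features[i] for i in order[:top_n] if contrib[i] < 0]

applicant = np.array([0.9, 2.0, 1.0])    # high utilisation, young account
print(reason_codes(applicant))           # ['missed_payments', 'account_age_years']
```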
Accountability and explainability
Simon Thompson, head of AI, ML and data science at GFT, looks at machine learning and AI in the UK, identifying how firms must always put consumers at the heart of everything they do. When implementing technology like AI, firms must remember to consider how new services protect consumers.
“The UK has outlined principles for AI regulation for regulators in each sector. The FCA has reiterated that it applies regulatory principles in a technology-agnostic way, focusing on preventing harm to consumers and financial markets.
“For the finance industry, this means considering the impact of ML-based decisions on customers and the market generally – which makes sense, as these factors ultimately support our business.
“In terms of specifics, we need to demonstrate our ability to own, control and explain why ML systems behave as they do (accountability and explainability). We must show the principled construction and implementation of the system that generates the decisions (fairness, privacy, robustness and security).
“In the EU, specific technical prohibitions come into force this month, limiting the technology that can be used in ML, in particular around the use of biometrics and with respect to high-risk systems.”
Transparency is a top priority
New regulations are, at their heart, introduced to reduce risk. Andrew Henning, head of machine learning at Markerstudy, the insurance firm, explores how improving transparency around AI’s usage will, in turn, lower risk.
“Regulations that tend to be the most challenging often revolve around governance and transparency. Machine learning is more than just a suite of tools and techniques we use to assess risk and set competitive premiums; it allows us to learn from data so we can do this effectively. Delivering good customer outcomes is at the heart of our operations, so the onus is on us to anticipate issues before models hit production, with a team of highly trained experts investigating and testing all possibilities.
“Robust governance systems must also be established to support best practice and push us to keep operating at a level that minimises risk and yields the greatest protection for both the business and the customer.
“Our decisions must be explainable. Many machine learning techniques are notorious for being a ‘black box’, and it is not uncommon to develop high-performing models and systems only to lose the ability to, for instance, tell customers why their premium has increased. Other techniques, being extensions of traditional statistics, are more explainable.
“Good transparency in our systems builds trust and allows us to check that our models haven’t learned something wrong or become biased. This applies both to the decision to accept a policy and to ensuring a fair price is quoted.”
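The ‘extensions of traditional statistics’ Henning mentions typically means generalised linear models, whose log-link coefficients exponentiate into multiplicative ‘relativities’ that can be read off factor by factor when explaining a premium. The sketch below uses entirely hypothetical coefficients and rating factors, not Markerstudy’s model.

```python
# A minimal sketch of why a log-link GLM stays explainable: each
# exponentiated coefficient is a multiplicative 'relativity', so a
# premium can be broken down factor by factor. All coefficients and
# rating factors here are hypothetical.
import math

BASE_PREMIUM = 400.0
coefficients = {             # hypothetical fitted GLM coefficients
    "driver_under_25": 0.45,
    "urban_postcode":  0.20,
    "claim_last_year": 0.60,
}

def quote(risk: dict) -> tuple[float, dict]:
    """Return the premium and a per-factor breakdown of multipliers."""
    multipliers = {
        name: math.exp(beta)
        for name, beta in coefficients.items()
        if risk.get(name)
    }
    premium = BASE_PREMIUM
    for m in multipliers.values():
        premium *= m
    return premium, multipliers

premium, breakdown = quote({"driver_under_25": True, "claim_last_year": True})
print(f"£{premium:.2f}")     # ≈ £1143.06
print(breakdown)             # {'driver_under_25': 1.568..., 'claim_last_year': 1.822...}
```

Because every multiplier maps to a named rating factor, the ‘why has my premium increased’ question Henning raises has a direct, auditable answer.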