In 2022, an automated trading system was tested in an experiment in Saudi Arabia, designed to mimic a human trader's decision-making process while leveraging AI to evaluate its accuracy, speed, and consistency.
The trading system, which integrated various machine learning models, achieved remarkable investment returns of up to 86%, outperforming several hedge funds managed by top investment banks in the kingdom.
This underscores the efficacy of AI-driven decision-making in capturing market dynamics and optimizing trading strategies.
But what if the AI-driven system in another sandbox experiment were to make erroneous predictions, selling stocks that actually perform well or allocating too much capital to underperforming ones?
The result could be disastrous: substantial financial losses within minutes.
The contrasting scenarios presented in our analysis serve to underscore a critical and emerging concern within the financial industry and beyond: AI systems, despite their sophisticated capabilities, remain susceptible to significant failures that can result in devastating financial consequences.
This juxtaposition raises a vital question about liability and risk management, specifically, who will provide comprehensive insurance coverage against such potentially fatal mistakes?
As AI continues to permeate sectors from investment management to healthcare in the kingdom, the scope of risks expands exponentially.
These risks are not confined to financial losses but extend to reputational damage, legal liabilities, and even life-threatening outcomes in sectors like healthcare.
In our research, we found that even leading AI developers such as OpenAI are grappling with the absence of a clear, effective mechanism to insulate themselves from the enormous financial and legal liabilities arising from unintended AI failures.
Given that future lawsuits involving massive compensation claims are plausible in the AI world, how will the insurance industry in the Kingdom of Saudi Arabia grapple with the emerging dilemma of AI-related claims and liabilities?
Behind the premiums of the Saudi insurance industry
Before we analyse this dilemma, we briefly present the context: the insurance industry in the kingdom is expected to undergo a process of consolidation, in which smaller or less financially stable companies will likely merge with or be acquired by larger firms, according to a new report by Fitch.
This consolidation is expected to speed up due to new regulations demanding that insurance companies hold more capital to ensure financial stability.
Companies that cannot meet these new requirements may need to merge or restructure to remain compliant.
Several insurers are struggling to make a profit from their core insurance activities because competition has driven premium prices down.
As a result, some companies may find it unsustainable to operate independently.
The market is dominated by a few large insurers, particularly Tawuniya and Bupa Arabia, which had a combined market share of 52% in 2024, measured by revenue generated from all insurance policies.
Six of the ten largest insurers made an underwriting profit in Q1 2025, meaning the insurance premiums they collected (income) exceeded the claims and expenses associated with their policies.
Fitch-calculated combined ratios for these insurers were below 100% in 2024. (The combined ratio measures an insurer's underwriting profitability. It is calculated as the sum of claims paid and expenses divided by premiums earned, expressed as a percentage. A ratio below 100% means that the insurer's premiums are sufficient to cover claims and expenses, resulting in an underwriting profit.)
However, some of the six were only marginally profitable, and the other four made an underwriting loss.
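The combined-ratio arithmetic described above can be sketched in a few lines. Note that the premium, claims, and expense figures below are hypothetical illustrations, not numbers from the Fitch report.

```python
# Combined ratio = (claims paid + expenses) / premiums earned, as a percentage.
# All figures below are hypothetical, for illustration only.

def combined_ratio(claims_paid: float, expenses: float,
                   premiums_earned: float) -> float:
    """Return the combined ratio as a percentage.

    Below 100% -> underwriting profit; above 100% -> underwriting loss.
    """
    return (claims_paid + expenses) / premiums_earned * 100

# Hypothetical insurer (amounts in SAR millions):
ratio = combined_ratio(claims_paid=700, expenses=250, premiums_earned=1_000)
print(f"Combined ratio: {ratio:.1f}%")  # -> Combined ratio: 95.0%
print("Underwriting profit" if ratio < 100 else "Underwriting loss")
```

A marginally profitable insurer, as described above, would sit just under the 100% line, so even a small rise in claims would tip it into an underwriting loss.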
Who Pays When AI Goes Wrong?
The core dilemma of the AI risks in the insurance industry revolves around the question of payment: will insurance companies shoulder the burden of the expected AI-related failures, or will the cost be transferred to investors and end-users?
Current insurance models, designed primarily for more predictable risks, are ill-equipped to handle the novel, unpredictable, and systemic nature of AI failures.
Consider an example that could plausibly happen: a voice-cloning model is trained on a bank senior official's prior earnings calls to generate realistic speech.
Board members and other senior officials could believe they were listening to the senior official delivering strategic guidance.
However, it was actually a deepfake, an AI-generated simulation of the senior official’s voice for financial fraud or corporate sabotage.
For insurance companies, this presents a complex challenge, as their traditional claims and liability frameworks may not be adequately equipped to address damages caused by AI-generated deepfakes.
Insurers are naturally cautious, hesitant to assume such liabilities without comprehensive standards, validation processes, and regulatory oversight, which are still evolving in the kingdom and around the world.
Another level of risk management
Ultimately, this niche concern underscores a pressing need for the development of robust, specialized insurance frameworks capable of managing AI-related risks across sectors in the kingdom.
Because AI errors can be the result of complex algorithmic decisions or data issues, establishing who is liable, whether the insurer, the policyholder, or third-party developers, becomes more difficult.
Our analysis recommends that insurance companies use AI-driven liability assessment models, which work like a smart detective, examining all the available clues to figure out who is responsible when an AI system causes a problem.
In simple terms, these models gather and analyze various pieces of information, such as:
● How did the failure happen? For example, was the AI system making a mistake because it wasn’t trained properly, or did something unexpected occur?
● What role did the AI play? Was it an AI decision supervised by a human, or was it an automated process that went wrong?
● What has happened in previous similar cases? Looking at past claims helps reveal patterns and common causes, producing an analysis that human supervisors at the insurance company can use to decide.
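The three questions above could be operationalised as a simple scoring sketch. Everything here is hypothetical: the evidence fields, weights, and the resulting liability split are invented for illustration and do not represent an actual model used by any insurer.

```python
# A minimal sketch of an AI-driven liability assessment model, assuming a
# simple rule-based scoring scheme. All field names and weights below are
# hypothetical, chosen only to mirror the three questions in the text.

from dataclasses import dataclass

@dataclass
class IncidentEvidence:
    training_data_flawed: bool   # How did the failure happen?
    human_supervised: bool       # What role did the AI play?
    similar_past_claims: int     # What happened in previous similar cases?

def allocate_liability(ev: IncidentEvidence) -> dict:
    """Return a rough liability split between developer and policyholder.

    Scores are normalised to sum to 1.0 and are meant only as input for
    human claims adjusters, not as a final verdict.
    """
    developer = 1.0 if ev.training_data_flawed else 0.3
    policyholder = 1.0 if ev.human_supervised else 0.3
    # Repeated similar past claims shift weight toward the developer's model.
    developer += min(ev.similar_past_claims, 5) * 0.1

    total = developer + policyholder
    return {
        "developer": round(developer / total, 2),
        "policyholder": round(policyholder / total, 2),
    }

split = allocate_liability(
    IncidentEvidence(training_data_flawed=True,
                     human_supervised=False,
                     similar_past_claims=2)
)
print(split)
```

In this invented scenario, flawed training data with no human supervision and a history of similar claims pushes most of the estimated responsibility toward the developer; a human supervisor would review that suggestion before any claim decision is made.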
The complexity of AI systems, involving multiple stakeholders such as developers, insurers, and end-users, coupled with the ambiguity around fault in algorithmic decision-making, makes pinpointing responsibility a formidable task for an insurance company.
More This Weekend
What a SAR50 Investment Could Mean for the Saudi Economy
The traditional love affair with gold among Saudi families has long been a symbol of investment, wealth and financial security.
Yet what if we could gently introduce a different kind of investment, targeting low-income Saudis in particular, one rooted not in the shimmer of gold but in the potential of the kingdom's stock market?
The Sky’s the Limit for Riyadh Air’s Financial Future
Managing an air carrier, especially one as prominent as Riyadh Air, the second flag carrier of Saudi Arabia, is an intricate endeavor fraught with challenges.
The challenges span from operational costs and competitive pressures from regional carriers like Etihad Airways and Qatar Airways to ensuring long-term financial sustainability.