Taking the High Deductible Road: Using Data to Steer Business Success

All too often, inaccurate and incomplete data is used to determine high deductible (HD) insurance program structure for businesses. If your company is turning to HD programs as a solution to skyrocketing property and casualty insurance costs, it’s critical to ensure the quality and integrity of the data being used to evaluate your business so you can achieve the most beneficial HD program possible.

According to Chris Wyard, Head of Technical Data at Allianz Insurance and a modern data expert, “The greatest challenge facing our industry’s leaders is the quality and integrity of data.” When it comes to HD programs, this perspective is especially relevant. To ensure that your company’s data is of high quality, and to structure the most favorable HD program possible, it helps to take a two-step approach:

  1. First, understand how insurers use data to evaluate your company and determine your pricing.
  2. Next, learn how to ensure the integrity of your data and leverage it to structure the most beneficial HD program possible.

It can be a challenge to accurately determine the most cost-effective retention level for your business insurance policy, a decision that hinges on analysis of your loss history and premium information. To estimate the risk cost for high-deductible insurance policies, insurance carriers first collect this data and estimate the projected loss, also known as the “loss pick.” This involves utilizing historical data, coupled with external data, such as industry trends and economic conditions, to forecast the likelihood and severity of potential losses.
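
As a rough illustration of how a loss pick can be derived from historical data, the short Python sketch below trends five hypothetical years of incurred losses forward to the upcoming policy year and averages them. The loss figures and trend rate are illustrative assumptions, not any carrier’s actual methodology, which would also apply loss development factors and exposure adjustments.

```python
# A minimal sketch of a loss pick estimate, assuming five years of
# hypothetical incurred-loss history (oldest to newest). A real analysis
# would also apply loss development factors and exposure adjustments;
# the figures and trend rate below are illustrative assumptions.
historical_losses = [410_000, 455_000, 380_000, 520_000, 470_000]
annual_trend = 0.05  # assumed inflation / economic trend per year

# Trend each prior year's losses forward to the upcoming policy year.
years_forward = range(len(historical_losses), 0, -1)  # 5, 4, 3, 2, 1
trended = [loss * (1 + annual_trend) ** n
           for loss, n in zip(historical_losses, years_forward)]

loss_pick = sum(trended) / len(trended)  # projected (expected) loss
print(f"Projected loss pick: ${loss_pick:,.0f}")
```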

At the heart of determining a loss pick and HD program structure lies the concept of Total Cost of Risk (TCOR). The TCOR represents the overall cost an insurer anticipates incurring for a particular policy year, encompassing both the expected losses and the premiums collected.
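
Expressed as a simple calculation consistent with the definition above, TCOR is the sum of the projected losses and the premium for the policy year. The figures in this minimal sketch are hypothetical; in practice, TCOR calculations often fold in claims-handling and administrative expenses as well.

```python
# A minimal sketch of TCOR under the simple definition above: projected
# (expected) losses plus premium for the policy year. Figures are
# hypothetical; many TCOR calculations also include claims-handling
# and administrative costs.
def total_cost_of_risk(projected_losses: float, premium: float) -> float:
    return projected_losses + premium

print(f"TCOR: ${total_cost_of_risk(450_000, 275_000):,.0f}")  # TCOR: $725,000
```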

While the TCOR provides a valuable overall assessment of risk, it is necessary to further analyze data on the distribution of potential losses. This involves examining the projected losses at specific confidence levels, allowing insurers to quantify the likelihood of various loss scenarios. For example, an insurer might consider the 75th percentile loss, representing the loss amount that is exceeded 25% of the time. This figure provides a buffer against potential losses that are higher than the expected value.
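
To make the idea of a confidence level concrete, the sketch below simulates a hypothetical annual loss distribution (Poisson claim counts, lognormal claim sizes) and reads off the 75th percentile. The distributional choices and parameters are illustrative assumptions only; an insurer would calibrate such a model to your actual loss history.

```python
import numpy as np

# A minimal Monte Carlo sketch of an annual loss distribution, assuming
# Poisson claim counts and lognormal claim sizes. The parameters are
# illustrative assumptions, not values fitted to real loss history.
rng = np.random.default_rng(seed=42)
n_years = 100_000

claim_counts = rng.poisson(lam=12, size=n_years)        # claims per simulated year
annual_losses = np.array([
    rng.lognormal(mean=9.5, sigma=1.2, size=n).sum()    # total severity for the year
    for n in claim_counts
])

expected_loss = annual_losses.mean()
p75_loss = np.percentile(annual_losses, 75)             # exceeded 25% of the time
print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"75th percentile loss: ${p75_loss:,.0f}")
```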

However, to gain a comprehensive understanding of the risk landscape from an insurer’s perspective, we must also break down the potential loss data into two distinct components:

  1. the RETAINED loss, representing the portion of the loss that the insurer expects to cover directly, and
  2. the CEDED loss, representing the portion that will be transferred to a reinsurer.

This data distinction is critical for determining the insurer’s financial exposure, and for allocating risk appropriately. Likewise, it’s critical for your company to maintain data that accurately reflects and protects your business.
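
A simple way to see this split is to apply a per-occurrence retention (attachment point) to a set of claims: amounts up to the retention are retained, and amounts above it are ceded. The retention level and claim amounts in the sketch below are hypothetical.

```python
# A minimal sketch of splitting claims into retained and ceded portions at a
# per-occurrence retention (attachment point). The retention level and claim
# amounts are hypothetical.
PER_OCCURRENCE_RETENTION = 250_000

claims = [40_000, 310_000, 95_000, 600_000, 180_000]

retained = sum(min(c, PER_OCCURRENCE_RETENTION) for c in claims)   # kept below the retention
ceded = sum(max(c - PER_OCCURRENCE_RETENTION, 0) for c in claims)  # transferred above it

print(f"Retained loss: ${retained:,.0f}")
print(f"Ceded loss:    ${ceded:,.0f}")
```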

Using a calculator and guesstimates to determine your HD policy structure is becoming as antiquated as an abacus. Cutting-edge analytics tools are available that enable both your business and insurers to make informed decisions regarding potential and projected loss, as well as retention levels for HD policies. Yet many insurance companies are ill-equipped to conduct advanced data analytics and risk assessments, as they lack the technology, platforms, and experience required.

By harvesting and harnessing accurate and relevant data, we can better measure risks and costs using proven analytic methods. This process includes integrating the data used to refine projected losses, as well as pricing data.

Advanced modeling techniques, such as machine learning algorithms, have become a game-changer in the insurance world. These sophisticated tools can analyze vast amounts of data, identifying patterns and relationships that may influence loss severity. This leads to more accurate projected loss estimates, and improved retention decisions.
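
As an illustration of the kind of model involved, the sketch below fits a gradient-boosting regressor to synthetic claim data to predict loss severity from a few exposure features. The feature names, synthetic data, and scikit-learn model choice are assumptions for demonstration, not a description of any particular insurer’s approach.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# A minimal sketch of a machine-learning severity model on synthetic data.
# The features (payroll, class code, prior claims) and the data-generating
# assumptions are invented for illustration only.
rng = np.random.default_rng(seed=0)
n = 2_000
X = np.column_stack([
    rng.uniform(1e6, 5e7, n),   # annual payroll
    rng.integers(1, 20, n),     # industry class code
    rng.poisson(2, n),          # prior claim count
])
# Synthetic severities with some dependence on the features.
y = 5_000 + 0.001 * X[:, 0] + 3_000 * X[:, 2] + rng.normal(0, 2_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Holdout R^2: {model.score(X_test, y_test):.2f}")
```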

Data analytics can also inform premium pricing strategies, ensuring that premiums adequately reflect the risk profile of your business insurance policy. Insurers use this to achieve a balance between profitability and customer affordability.

The US Insurance Data Management Association contends that “… unreliable, incomplete, or poor-quality data cost organizations between 15% and 20% of their operating budgets.” More problematic still, the majority of insurance analysts and actuaries are trained to examine secondary data “ut provisum,” or as it is given to them, without a means of validating its source, accuracy, or integrity.

Data quality issues can manifest in various forms, including missing values, inconsistencies, inaccuracies, and duplication. These imperfections can distort risk assessments, leading to mispriced premiums, unwarranted claim payouts, and an overall inability to accurately gauge the true risk profile of policyholders.

While insurance providers routinely tout the use of software and processing platforms, they often overlook the impact of data quality in analytics methodologies used for risk assessment.

It stands to reason that the ability to accurately assess and manage risk is critical. This is why data analytics has emerged as an essential decision-making tool for both businesses and insurers. However, it bears repeating that the quality of the data underpinning such analytics is the main driver of their effectiveness. Subpar data quality can lead you down a treacherous road of erroneous risk assessments, jeopardizing an insurer’s financial stability as well as its relationship with you, the customer.

Numerous studies support the impact of data quality on business operations. For example, one IBM study found that poor data quality strips $3.1 trillion from the US economy annually, due to lower productivity, system outages, and higher maintenance costs.

Elsewhere, analysts found that the persistence of low-quality data throughout enterprise systems robs business leaders of productivity, as they must continuously vet data to ensure it remains accurate. Research also indicates that “less than 0.5% of all data is ever analyzed and used”—and that if the typical Fortune 1000 business were able to increase data accessibility by just 10%, it would generate more than $65 million in additional net income!

Achieving data quality excellence requires a comprehensive approach that includes the three building blocks of good data:

  1. Data Governance: Establishing a robust data governance framework is essential for ensuring data integrity and consistency. This framework should define clear data ownership, access controls, and data quality standards.
  2. Data Cleansing: Data cleansing involves identifying and correcting data errors and inconsistencies. This process may involve data scrubbing, de-duplication, and imputation techniques (see the sketch following this list).
  3. Data Monitoring: Continuous data monitoring is crucial for maintaining data quality over time. This involves identifying and addressing data quality issues as they arise.
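
As a small illustration of the second building block, the sketch below cleans a hypothetical claims extract with pandas: de-duplicating repeated claim records, imputing a missing loss amount, and flagging remaining gaps for monitoring. The column names and values are invented for demonstration.

```python
import pandas as pd

# A minimal data-cleansing sketch on a hypothetical claims extract; the
# column names and values are invented for demonstration.
claims = pd.DataFrame({
    "claim_id":    ["C001", "C002", "C002", "C003", "C004"],
    "loss_amount": [12_500, 48_000, 48_000, None, 7_250],
    "loss_date":   ["2023-01-04", "2023-02-17", "2023-02-17", "2023-03-02", None],
})

# De-duplication: keep one record per claim ID.
claims = claims.drop_duplicates(subset="claim_id", keep="first")

# Imputation: fill missing loss amounts with the median of known values.
claims["loss_amount"] = claims["loss_amount"].fillna(claims["loss_amount"].median())

# Monitoring-style check: count remaining missing fields for follow-up.
print(claims.isna().sum())
```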

At ICC, it’s a scenario we’ve seen time and again: businesses that prioritize data quality experience distinct advantages. High-integrity information fosters trust and transparency, allowing stronger relationships that benefit your business.

Accurate, data-based risk assessments and feasibility studies give you leverage with insurers, enabling you to make informed pricing decisions and optimize resource allocation. At ICC, we provide tailored safety and risk management programs and solutions that effectively reduce risk and strengthen your position. These include customizable risk analysis and mitigation tools that address cyber risk, proactive response plans and programs for natural disasters, injury and illness investigation, compliance tools, and more.

High deductible insurance policies present both opportunities and challenges for businesses. By embracing data analytics, we can more effectively negotiate with insurers and guide the decisions that optimize retention levels, balance risk and profitability, and ultimately enhance your growth.

SOURCES

Radley, S. Solving the Insurance Industry’s Data Quality Problem. Corinium Intelligence, February 2020. https://www.coriniumintelligence.com/insights/insurance-data-quality

Javanmardian, K., Ramezani, S., Srivastava, A., and Talischi, C. How Data and Analytics Are Redefining Excellence in P&C Underwriting. McKinsey & Company, Our Insights, September 2021. https://www.mckinsey.com/industries/financial-services/our-insights/how-data-and-analytics-are-redefining-excellence-in-p-and-c-underwriting

Wilkinson, B. Redefining Industries: Big Data for Financial Regulators. Oliver Wyman Risk Journal, Vol. 4. https://www.oliverwyman.com/our-expertise/insights/2014/dec/risk-journal-vol–4/risk-journal–redefining-industries/big-data-for-financial-regulators.html

Corinium Global Intelligence. The Future of Insurance Data. 2023. https://www.coriniumintelligence.com/insights/future-of-insurance-data

US Insurance Data Management Association. Data Report. 2023. https://www.healthlinkdimensions.com/

Bansal, M. Flying Blind: How Bad Data Undermines Business. Forbes Innovation, October 2021. https://www.forbes.com/sites/forbestechcouncil/2021/10/14/flying-blind-how-bad-data-undermines-business/?sh=4467644829e8
