Insurance Moratorium Management


A blog post by Anil Celik

The complexities of insurance underwriting have always demanded a high level of precision, especially when it comes to managing risks in areas prone to natural disasters. With the increasing frequency and severity of such events, the traditional methods of underwriting are no longer sufficient to keep up with the dynamic risk landscape. UrbanStat’s innovative approach to Underwriting and Moratorium Management offers a solution that not only addresses these challenges but also sets a new standard in the industry.

Understanding Moratorium Management

A moratorium in insurance is a temporary suspension of new policies or policy renewals in certain areas due to elevated risk, often triggered by imminent natural disasters such as floods, hurricanes, or wildfires. The goal is to mitigate the insurer’s risk exposure during high-risk periods. However, managing moratoriums effectively is a daunting task, requiring real-time data and sophisticated analysis to ensure that decisions are timely and accurate.

The Challenges of Traditional Moratorium Management

Historically, the process of managing moratoriums has been manual and reactive, relying heavily on historical data and static risk assessments. This approach often results in delayed responses, missed opportunities, and increased exposure to unanticipated risks. For example, traditional methods might not account for the latest weather patterns or changes in the built environment, leading to decisions based on outdated information.

Moreover, the lack of integration between different data sources and the inability to process large volumes of data quickly can hamper the effectiveness of moratorium decisions. This often leaves insurers vulnerable to significant financial losses and reputational damage in the wake of disasters.

UrbanStat’s Innovative Solution

UrbanStat revolutionizes moratorium management by leveraging cutting-edge technology to provide real-time insights and dynamic risk assessments. The platform integrates multiple data sources, including up-to-the-minute weather data, historical loss records, and socio-economic factors, to create a comprehensive risk profile for any given area.

One of the standout features of UrbanStat’s solution is its predictive analytics capability. By using advanced machine learning algorithms, the platform can forecast potential risk scenarios with high accuracy. This allows insurers to proactively manage their moratoriums, placing or lifting them based on predictive risk rather than merely reacting to current events. This predictive approach helps in maintaining a balanced portfolio and avoiding the pitfalls of over-exposure to high-risk areas.
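To make the idea concrete, here is a minimal, hypothetical sketch of how threshold-based moratorium logic driven by a predicted risk score could look. The thresholds, area identifiers, and function are illustrative assumptions, not UrbanStat’s actual implementation.

```python
# Hypothetical sketch: place or lift a moratorium per area based on a
# model's predicted risk score, with hysteresis so a moratorium is only
# lifted once risk falls clearly below the trigger level.

PLACE_THRESHOLD = 0.80  # assumed score that triggers a moratorium
LIFT_THRESHOLD = 0.60   # assumed lower score required before lifting it

def update_moratoriums(predicted_risk: dict, active: set) -> set:
    """predicted_risk maps area_id -> model score in [0, 1]."""
    updated = set(active)
    for area_id, score in predicted_risk.items():
        if score >= PLACE_THRESHOLD:
            updated.add(area_id)        # suspend new business in this area
        elif area_id in updated and score <= LIFT_THRESHOLD:
            updated.remove(area_id)     # risk has subsided; resume writing
    return updated

# Example run as fresh weather data updates the scores (made-up values)
active_moratoriums = update_moratoriums({"area_93023": 0.91, "area_90210": 0.22}, set())
```

The two-threshold (hysteresis) rule is one simple way to avoid flip-flopping a moratorium on and off as scores fluctuate around a single cutoff.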

Benefits of Advanced Moratorium Management

The benefits of adopting UrbanStat’s advanced moratorium management are manifold. First and foremost, it enables insurers to make faster, more informed decisions. By providing a real-time view of risk, the platform allows insurers to quickly implement or adjust moratoriums, minimizing their exposure to catastrophic losses.

Additionally, the integration of real-time data ensures that the risk assessments are always up-to-date, reflecting the most current information available. This reduces the chances of underestimating risks and provides a more accurate basis for decision-making.

Another significant advantage is the ability to streamline operations. UrbanStat’s platform automates many of the processes involved in moratorium management, reducing the need for manual intervention and freeing up valuable resources. This not only improves operational efficiency but also enhances the insurer’s ability to respond swiftly to changing risk conditions.

Looking Ahead

As the insurance industry continues to grapple with the impacts of climate change and other emerging risks, the need for advanced underwriting solutions will only grow. UrbanStat’s approach to moratorium management represents a significant step forward, providing insurers with the tools they need to navigate an increasingly uncertain landscape.

By embracing these innovative solutions, insurers can not only protect themselves from financial losses but also build stronger, more resilient relationships with their policyholders. In a world where risks are constantly evolving, having a robust and dynamic approach to moratorium management is no longer a luxury, but a necessity.

UrbanStat’s pioneering technology offers a glimpse into the future of underwriting, where real-time data and predictive analytics drive smarter, faster, and more effective decision-making. For insurers looking to stay ahead of the curve, investing in such advanced solutions is an investment in their future resilience and success.

Are you interested in hearing more about UrbanStat’s Wildfire Map? Contact me at [email protected] 

10 Cities in California That Might Experience Wildfire This Season


A blog post by Anil Celik

 

California has been experiencing its worst years for wildfires. Of the state’s 10 largest wildfires, 9 occurred after 2000 and 5 occurred after 2010. The Camp Fire (2018) was the deadliest (86 civilian deaths) and most destructive (~20,000 properties) wildfire in California history; this single fire has been estimated to have caused around $16.5 billion in damages. The average loss ratio for the California homeowners market was above 130% in 2018, and this single event forced insurance companies to increase their premiums. We already see multiple insurance companies attempting to exit the California homeowners market (or lower their share of it), and non-renewals have become a big problem for consumers looking for coverage.

Local governments, insurance companies, and technology vendors should do a better job of understanding wildfires, and we believe there is demand in the market for better data and models. Long before the Camp Fire, we started hearing questions about wildfire modeling and whether AI could be used to build better predictive models for wildfire risk. The first thing we did was look at the existing alternatives in the market. One common problem we saw in different models was the assumption that the relationship between wildfire events and the factors explaining those events was linear. Another was that the models relied on only a few variables, e.g., soil type, slope, aspect, and access to roads. According to the U.S. Department of the Interior, humans cause around 90% of wildfires. Clearly, slope, aspect, or access to roads cannot be causes of wildfires; they are merely factors that affect the severity of fires.

Our model used over 25 variables that can start or accelerate wildfires, but these three are perhaps the most interesting: (1) Glass bottles that shatter over time and are carried by the wind can act as magnifiers and start fires, so we had to feed this information into our model. Drivers who stop by the roadside tend to leave their trash there, which is why we used distance to intercity roads as a model factor. (2) Forgotten campfires in campgrounds are another important factor we had to include. (3) Poorly maintained power infrastructure started the Camp Fire, so distance to high-voltage power lines is another factor we included in our model.
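To illustrate the kind of distance-based inputs described above, here is a small, self-contained sketch that computes a property’s great-circle distance to the nearest road point, campground, and high-voltage line. The coordinates and feature names are made up for the example and do not reflect UrbanStat’s production pipeline.

```python
# Illustrative sketch of distance-based wildfire features (hypothetical data).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_distance_km(point, layer):
    """Distance from a property to the closest feature in a geospatial layer."""
    lat, lon = point
    return min(haversine_km(lat, lon, flat, flon) for flat, flon in layer)

# Placeholder coordinate lists; real layers would come from road, campground,
# and transmission-line datasets.
roads = [(34.45, -119.25), (34.40, -119.30)]
campgrounds = [(34.52, -119.20)]
power_lines = [(34.48, -119.27)]

prop = (34.44, -119.24)
features = {
    "dist_to_intercity_road_km": nearest_distance_km(prop, roads),
    "dist_to_campground_km": nearest_distance_km(prop, campgrounds),
    "dist_to_hv_power_line_km": nearest_distance_km(prop, power_lines),
}
```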

UrbanStat’s Wildfire Model has produced superior results compared to the U.S. Forestry map. When you overlay the last 20 years of California wildfires on the U.S. Forestry map, only 54.8% of the areas that burned are classified as “High” or “Very High” risk. The same analysis with UrbanStat’s map produces better results: the metric increases to 80.5%.
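As a rough illustration of how such an overlay comparison can be computed, the sketch below measures the share of historically burned cells that a risk map labels “High” or “Very High”. The grids are toy placeholders, not the actual rasters.

```python
# Toy sketch of the capture-rate metric: what fraction of burned area falls
# in the "High" or "Very High" risk classes of a given map?

def high_risk_capture_rate(risk_class_grid, burned_grid):
    """Both inputs are equally sized 2-D lists; burned_grid holds True/False."""
    burned_cells = high_risk_burned = 0
    for risk_row, burned_row in zip(risk_class_grid, burned_grid):
        for risk, burned in zip(risk_row, burned_row):
            if burned:
                burned_cells += 1
                if risk in ("High", "Very High"):
                    high_risk_burned += 1
    return high_risk_burned / burned_cells if burned_cells else 0.0

risk_map = [["Low", "High"], ["Very High", "Moderate"]]
burned = [[False, True], [True, True]]
print(f"{high_risk_capture_rate(risk_map, burned):.1%}")  # 66.7% for this toy grid
```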

We strongly believe in our model’s ability to successfully predict the areas with the highest wildfire risk. This is why we are publishing this analysis publicly. According to UrbanStat’s AI-based wildfire risk map, here are the cities most at risk of wildfire in California:

  • Ramona CCD, San Diego, CA
  • Moorpark CCD, Ventura, CA
  • Alpine CCD, San Diego, CA
  • Jamul CCD, San Diego, CA
  • Laguna-Pine Valley CCD, San Diego, CA
  • Fillmore CCD, Ventura, CA
  • Ojai-Mira Monte CCD, Ventura, CA
  • Simi Valley CCD, Ventura, CA
  • Santa Paula CCD, Ventura, CA
  • Palomar-Julian CCD, San Diego, CA

 

According to our model, 68.5% of the areas with characteristics similar to the cities listed above have already experienced a wildfire. According to the U.S. Forestry and UrbanStat risk maps, 35% of California is considered “High” or “Very High” risk; the cities above represent only 2.5%.

Are you interested in hearing more about UrbanStat’s Wildfire Map? Contact me at [email protected] 

Press Release: Kevin M. Doyle Joins UrbanStat as an Advisor to the Board


Chicago, IL – (September 27, 2018) – UrbanStat, a provider of an AI-based property underwriting platform, is excited to welcome Kevin M. Doyle to the team as an advisor to the board.

“Artificial Intelligence and machine learning have created the potential for massive transformational change within the insurance industry. UrbanStat is one of the most exciting companies I’ve come across in Insurtech. With its unique ability to offer a single solution that integrates automated underwriting powered by machine learning, data visualization, and strategic decision making, UrbanStat is well placed to revolutionize how carriers approach underwriting,” says Kevin M. Doyle.

“Kevin has extensive experience within the IT and insurance industries, having worked in leadership positions at companies such as Marsh ClearSight, SAP, and CCC Information Services. He has helped top-tier carriers transform their analytical processes for over 20 years. As a company, we are very excited to start working with him. I am sure that with his help, UrbanStat will continue to innovate and reach a broader audience in the North American insurance industry,” says Anil Celik, CEO of UrbanStat.

With this latest addition, UrbanStat’s Advisory Board now has 3 distinguished members: Kevin M. Doyle, Nauman Noor, and Cem S. Celen.

 

About UrbanStat

UrbanStat has been helping insurance companies such as Sompo Japan, Allianz, Ageas, Safety Insurance, and Gulf Insurance Group automate and improve their property underwriting processes since 2014, using geospatial data sets, statistics, and machine learning models.

 

About Kevin M. Doyle

Kevin M. Doyle has over 20 years of experience in the insurance industry serving top tier carriers in their data analytics and digitalization needs. He was previously the SVP, Client Management and Delivery Leader at Marsh ClearSight, and also worked as a Sales and Business Development Manager at SAP and CCC Information Services.

Is Machine Learning the Silver Bullet In Underwriting?


A blog post by Nilgun Celik, Tom Gubash & Anil Celik

Using machine learning to underwrite property insurance has been our key focus for the last 18 months. It started as a simple Minimum Viable Product (MVP); we have had our highs and lows, we have made many mistakes, and every time we see a new data set we are surprised by how much we are still learning.

For engineers, machine learning is a simple concept despite the complicated math behind it. For non-engineers, it is an abstract concept where you input your data and it generates magical results. Ergo, we get the question “… but how?” very often.

Machine learning is a powerful tool that helps a variety of industries in many great ways, but it always requires lengthy data preparation, and certain problems require deeper research or an understanding of an entire industry and its regulations. Insurance is unquestionably one of them. This blog post focuses on some of the obstacles we experience.

Predicting which policyholders will file a claim sounds like a textbook supervised learning problem at first. However, once you think about how the industry works, you start to see that there are no clean “true positives” or “true negatives”. Supervised learning requires a historical data set with known outcomes. Say you are trying to identify fraud; the algorithms require a historical dataset in which the actual fraudulent claims are labeled as such. When it comes to claim prediction, there is the problem of a lack of claims. Since insurance is a long-term game, a customer who filed no claims for five years could file a large claim in year 6. If you score this customer “High” in year 4, is your algorithm successful or not? If you measure performance in year 4, the algorithm fails; if you run a long-term performance measurement, the scoreboard tells an entirely different story.
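The toy snippet below makes the point concrete: with made-up data, the same policyholder is labeled a negative example when outcomes are cut off at year 4 and a positive example once year 6 is observed.

```python
# Hypothetical example of how the label depends on the observation window.
claims_by_year = {"policyholder_42": {6: 250_000}}  # first claim arrives in year 6

def label(policy_id, observation_window_years):
    """1 if any claim occurred within the observation window, else 0."""
    claims = claims_by_year.get(policy_id, {})
    return int(any(year <= observation_window_years for year in claims))

print(label("policyholder_42", 4))  # 0 -> a "High" score in year 4 looks like a miss
print(label("policyholder_42", 6))  # 1 -> the same score now looks like a hit
```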

There are also a few technical obstacles insurance carriers need to overcome. One of the most significant is the imbalance between customers who claim and those who don’t. Most insurance companies have a claim frequency of around 1-6%, meaning that out of every 100 policyholders, only 1 to 6 will file a claim. Our algorithms try to identify those policyholders so insurance carriers can offer fairer terms and pricing across their entire portfolio. It also means that only 1-6% of the data tells the story we want to learn. This is one of the very first things insurance carriers need to solve, and the good news is that there are a few solutions; one common approach is sketched below. If you are interested in reading more about this issue, we discussed it in great detail here.
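One common remedy (a generic technique, not necessarily the one we deploy) is to reweight the rare “claim” class so the model cannot simply ignore it. A minimal sketch with simulated data:

```python
# Generic illustration of class weighting for an imbalanced claims dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))             # placeholder policy features
y = (rng.random(10_000) < 0.03).astype(int)  # ~3% of policyholders file a claim

# class_weight="balanced" weights samples inversely to class frequency, so
# errors on the 3% minority class cost roughly as much as on the majority.
model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
```

Oversampling the minority class or using frequency-aware loss functions are alternative ways to address the same imbalance.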

Insurance is a highly regulated industry, and the way carriers can use these technologies may be constrained by regulation depending on the state or country in which they operate. The industry should not pass human biases on to the algorithms, and we need to be very careful about this. This is why we never use any of the following information during our modeling process:

  • We don’t use personal information: name, gender, age, ethnicity, or anything that would directly correlate with these features
  • We don’t use any financial data: credit scores, insurance scores, income, or anything that would directly correlate with these features

We always tell our clients that we don’t want any personal information that might be present in the policy or claim files. The only personal information we use is the address, and we use it only to understand location-based risks, not to build socio-economic segmentation. This makes things interesting, because we know almost nothing about the customer we are trying to score. Often the human underwriters have more information about the very same customer, since the algorithms lack the underwriter’s personal experience and knowledge. This is also why the algorithms don’t inherit those biases. We are not saying they are entirely free of bias, but that is a topic for another article.
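As a minimal sketch of this exclusion rule (with hypothetical column names), the snippet below drops personal and financial fields before modeling and keeps the address only as input for location-based features.

```python
# Hypothetical illustration: strip excluded personal/financial fields.
import pandas as pd

EXCLUDED_COLUMNS = ["name", "gender", "age", "ethnicity",
                    "credit_score", "insurance_score", "income"]

policies = pd.DataFrame([{
    "name": "Jane Doe", "gender": "F", "age": 41, "ethnicity": "n/a",
    "credit_score": 710, "insurance_score": 88, "income": 95_000,
    "address": "123 Main St, Ojai, CA", "roof_type": "asphalt", "year_built": 1987,
}])

model_input = policies.drop(columns=[c for c in EXCLUDED_COLUMNS if c in policies.columns])
# "address" would be geocoded into location-based risk features and then dropped too.
```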

Regardless of the barriers, this is a fascinating problem to work on. We genuinely believe that the research we are doing and the experience we are gaining are helping us create the right approach, and we know this work is already helping some of our clients improve their profitability. We know that machine learning will change how the industry underwrites. We don’t necessarily believe that underwriting will be replaced entirely by machines (although some people think otherwise); thus, we don’t think it is the silver bullet. The actual silver bullet is the combination of machine learning, traditional probabilistic modeling, and, more importantly, human intuition. We call this combination “the Three Pillars of Risk Analysis”.

 

To learn more and quickly leverage what we’ve already successfully deployed for our carrier partners, contact us at [email protected].