Perilfinder Blog: Solvency II – A proper geocoding test


If you work in or for the insurance industry, you have probably been affected in some way by the vast information requirements of the Solvency II directive. Primarily, Solvency II seeks to calculate the amount of capital that EU insurance companies must hold to reduce the risk of insolvency. This calculation involves extensive spatial analysis to identify accumulated risk on the ground. In essence, insurers need to identify locations where they are exposed to significant claims in the event of a catastrophe.

Of most interest and relevance to the location intelligence industry is the calculation of the maximum accumulated risk of risk addresses within 200m of each other, using Total Sum Insured (TSI) values. This is required to estimate the capital requirements for fire risk under the directive.

So, in non-Insurance speak, the location intelligence professional is required to identify where the largest aggregate claim would occur in a 200m zone, in the event of a fire or explosion that destroyed everything in the zone. Why 200m? I suppose some reasonable distance had to be chosen. But as I stand on Parliament Street in Dublin and realise that 200m stretches right across the River Liffey to the opposite quays, it’s a considerable blast radius.

So, from a spatial analysis perspective, it sounds easy. Right? Not if it’s done properly.

 

[Figure: Calculation of the accumulated maximum risk of risk addresses within 200m of a policy]

Firstly, and most importantly, the geocoding has to be of the highest quality. Ideally, all risk addresses should be geocoded to building level, but this is not yet possible in most EU jurisdictions.

Assuming that everything is matched to building level, a simple approach would do the job: create a 200m buffer around every risk address, select all other risk addresses falling within it, add up the TSI and off you go.
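As a minimal sketch of that fully-geocoded case, the logic below uses a toy book of risks with hypothetical coordinates in metres (assuming a projected CRS) and plain Euclidean distance in place of a real GIS buffer operation; all names and figures are illustrative, not from the article:

```python
import math

# Hypothetical sample book: fully geocoded risk addresses with
# coordinates in metres (projected CRS) and Total Sum Insured (TSI).
risks = [
    {"id": "A", "x": 0.0,   "y": 0.0,   "tsi": 1_000_000},
    {"id": "B", "x": 150.0, "y": 0.0,   "tsi": 2_000_000},
    {"id": "C", "x": 500.0, "y": 0.0,   "tsi": 3_000_000},
]

def accumulation(centre, risks, radius=200.0):
    """Sum the TSI of every risk within `radius` metres of `centre`
    (the point-in-buffer test, done with a simple distance check)."""
    cx, cy = centre["x"], centre["y"]
    return sum(
        r["tsi"]
        for r in risks
        if math.hypot(r["x"] - cx, r["y"] - cy) <= radius
    )

# Maximum 200m accumulation across the book: buffer each risk in turn.
max_acc = max(accumulation(r, risks) for r in risks)
print(max_acc)  # A and B fall in one 200m zone: 3000000
```

A production version would use a spatial index and true buffer geometry rather than an all-pairs distance scan, but the accumulation logic is the same.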

But the difficulties arise when we can't match everything to building level. Clearly, we should try to get the large risks to building level at worst, but in a large database there will inevitably be records located and geocoded only to street level, townland level or worse. What do we do with them? How do we include their TSI in the accumulation?

The easy way would be to simply select them on the basis of their "best-guess" geocode. So, if we can only find the street that the address refers to but not the building, the record is geocoded to the centre of the street. If that "centre" falls within 200m of another risk location, the two are accumulated. Easy? Yes, but fundamentally flawed and potentially very inaccurate.

The better way is to create a comprehensive process where each risk address that is not fully geocoded (found only to street level or building-group level, for example) is added to an accumulation if any building belonging to its street or building group falls within the 200m buffer. With a bit of clever optimisation, the worst-case scenario is identified. In other words, the process should find the worst possible combination of confirmed and unconfirmed geocodes to estimate a maximum accumulated TSI: if those risk addresses that we can't position to building level were in a building, what is the worst possible outcome (the maximum accumulated TSI) that could occur?

This is a complex process, but once created, the routine can be rerun and reused to accurately quantify and report on accumulations in your book on an ongoing basis.

© 2015 Perilfinder.com by Feargal O'Neill