Financial Services Authority

14 May 2012

Dear Firm

Feedback from our IMAP work to date

As part of our commitment to giving you feedback, we thought it would be useful to write to firms involved in the internal model approval process (IMAP), setting out some specific points we have observed during our review work to date. This feedback is given in the context of what we regard as a high level of commitment and engagement from firms, which overall is positive. We hope that this will help you to make a satisfactory submission to us during your allocated slot. It does not supersede individual feedback we have given firms to date, and we hope it will also be useful to firms where we have not yet started review and assessment work.

1) Methodology and assumptions

We have found that some firms have not framed their key methodology choices and parameter assumptions in terms of their materiality to the ultimate results and the uncertainties relating to the internal model. Firms did not do enough work upfront to check that their modelling was at a sufficient level of granularity to reflect their risk profile. Examples include corporate bonds with material special features being modelled in the same way as bonds without such features, or a longevity model that does not fully reflect the risk profile of the business. At the other extreme, some firms’ modelling approaches are too complex and so less likely to be properly understood or used.

We have seen some firms using the same assumptions across a range of regions or lines of business. Firms should consider the appropriateness of such assumptions by looking at whether they have a material impact on results.

Some firms have encountered difficulties with the platform that aggregates capital, such as being unable to deliver a model consistent with the requirements of the Directive, or having to perform calculations in a less secure environment and in a way that is less automated and more labour-intensive.
On the general insurance side, some firms with material catastrophe risk exposures have incorrectly concluded that catastrophe models are not part of the internal model and so are not subject to the same requirements.

As a result of one or more of the above, firms have had to fundamentally review key modelling components during pre-application. We are looking for evidence clearly conveying that the firm has properly evaluated, tested and communicated the relative materiality of its choices, including how it made its judgement regarding the elements it deems to be immaterial.

In several cases firms’ internal models are unable to produce capital numbers, or have only recently reached that stage. This poses great difficulties in the design, validation and supervisory review of these models, as issues of materiality and proportionality are crucial in all of these areas. It also poses problems for evidencing the use test.

2) Aggregation and dependency

Aggregation and dependency assumptions play a key role in the overall results of an internal model, particularly in terms of the level of diversification credit arising in the overall capital requirement. Furthermore, there are often high levels of uncertainty involved in predicting dependencies that might arise between different elements of a firm’s risk profile, with the result that strong validation would be expected from most firms where a material level of diversification credit is expected.

Some firms have made choices in their dependency modelling based on empirical analysis of data sets, without considering the limitations of that data or testing the impact of alternative methods. We expect firms to give careful consideration in their submission to the selection of the methods, assumptions and parameters being used, and to validating those choices. In particular, sensitivity testing should be used to understand the importance of the decisions being made.
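The sensitivity of the diversification credit to dependency assumptions can be illustrated with a simple variance-covariance aggregation. This is a minimal sketch, not a prescribed method: the risk categories, standalone capital figures, correlation assumptions and the +0.25 correlation stress are all invented for illustration.

```python
import numpy as np

# Invented standalone capital requirements for three illustrative risks.
standalone = np.array([100.0, 80.0, 60.0])  # e.g. market, underwriting, longevity

# Assumed correlation matrix between the risks (illustrative values only).
corr = np.array([
    [1.00, 0.25, 0.10],
    [0.25, 1.00, 0.00],
    [0.10, 0.00, 1.00],
])

# Variance-covariance aggregation: sqrt(s' C s).
aggregate = float(np.sqrt(standalone @ corr @ standalone))

# Diversification credit: sum of standalone capital less the aggregate.
credit = standalone.sum() - aggregate
print(f"aggregate capital:      {aggregate:.1f}")
print(f"diversification credit: {credit:.1f}")

# Sensitivity test: strengthen every off-diagonal correlation by 0.25
# (capped at 1.0) and observe the impact on the diversification credit.
stressed_corr = np.minimum(corr + 0.25 * (1.0 - np.eye(3)), 1.0)
stressed_aggregate = float(np.sqrt(standalone @ stressed_corr @ standalone))
print(f"aggregate under stressed correlations: {stressed_aggregate:.1f}")
```

A material swing in the credit under a stress of this kind would indicate that the dependency assumptions warrant the stronger validation described above.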
Also, separate stress and scenario testing should be used, with corresponding likelihood and return period assumptions, to test the reasonableness of the overall results.

3) Validation

We have reviewed a number of firms’ validation policies and have split our feedback into five sections.

a) Evidence of meeting the requirements

We have seen examples where there was no clear link between the validation work and the requirements of the Level 1 and draft Level 2 texts. For example, there were no references to validating the requirements of Level 1 Article 112(2) (e.g. that the partial internal model covers major business units) or Article 113(1) (e.g. that there is a justification for the scope of the partial internal model). This leaves the validation policy vague, so it was difficult to see how the firm’s validation team would be able to provide assurance that all of the requirements were met. It also means that we are not able to judge the firm’s compliance with the relevant validation requirements (e.g. those of Articles 229 and 230 of the draft Level 2 text).

b) Materiality and granularity

Examples of poor validation include cases where firms have decided that elements of a model are immaterial, but it is not immediately apparent that this is the case and they have not carried out analysis to support these decisions. One example was where dependencies between years of account for long- and short-tail business were assumed to be exactly the same, but simple validation tests assessing the impact of this assumption on the overall accuracy of the model were not carried out, including consideration of the model’s accuracy in foreseeable future states of the business and its risk environment. In general, such analysis should include considering alternatives and the impact of being wrong, say in the level of a volatility or dependency parameter, or in the choice of methodology. Again, sensitivity testing has a role to play in such validation work.
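The kind of impact analysis described above, varying a single parameter and measuring the effect on the capital number, can be sketched as follows. The toy lognormal loss model, the volatility values and the 10% materiality threshold are all hypothetical and purely illustrative.

```python
import numpy as np

def capital_995(sigma: float, n: int = 200_000, seed: int = 1) -> float:
    """99.5th percentile loss of a toy lognormal model (illustrative only).

    A fixed seed gives common random numbers across runs, so differences
    in the result reflect the parameter change rather than simulation noise.
    """
    rng = np.random.default_rng(seed)
    losses = rng.lognormal(mean=0.0, sigma=sigma, size=n)
    return float(np.percentile(losses, 99.5))

base = capital_995(0.5)

# Vary the volatility assumption and record the capital impact.
for sigma in (0.4, 0.5, 0.6):
    cap = capital_995(sigma)
    change = (cap - base) / base
    # Hypothetical 10% materiality threshold: breaches would warrant a
    # documented justification of the chosen parameter value.
    flag = "MATERIAL" if abs(change) > 0.10 else "immaterial"
    print(f"sigma={sigma:.1f}: capital={cap:.2f} ({change:+.1%}, {flag})")
```

Using common random numbers is a deliberate design choice here: it isolates the sensitivity of the result to the assumption being varied from Monte Carlo error.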
We also saw cases where there was no granular schedule of validation activities, but only a high-level description of the validation tools, together with a generic label regarding their frequency. In addition, we considered one firm’s plan to carry out its validation of methodology and assumptions for all risks combined to be inadequate. We would expect all firms going through an IMAP validation review to have, as a minimum, a detailed schedule of validation work covering all areas of the model. The level of detail should reflect the materiality of the elements of the model; we are not asking for a detailed justification where it is clearly evident that an element of the model is immaterial.

c) Independence and expert judgement

Some firms have considered work carried out by their internal ‘first or second lines of defence’ as providing independent validation. We have also seen firms where the validation team had only an advisory capacity regarding which validation tools to use, as opposed to a determinative capacity to require specific validation tests to be carried out. Independent validation should be carried out by parties who are separate from the development and parameterisation work. It is also critical that the parties carrying out independent validation are sufficiently competent to understand, and able to challenge, what has been produced.

Where expert judgement has been exercised in developing an internal model, such judgements should be properly recorded and attributed, particularly where they have a material impact on the result of the model, say where an empirically-derived volatility assumption is overwritten with a lower judgement-based figure.

d) Governance

We have seen an example where the documentation on the validation framework had been through the full governance process; however, some key areas had been missed (e.g. ad-hoc triggers for additional validation, the appropriateness of validation tools, the validation of partial internal model and standard formula integration techniques, and independence between the design and validation teams).

e) Tools

Some firms have made use of limited stress and scenario testing, or have selected these tests from the internal model itself. In these cases the range of circumstances considered is insufficient or the tests fail to provide an independent check of the results. Some firms have confused stress and scenario testing with sensitivity testing, which is carried out to understand the impact and relevance of particular assumptions within the model. Firms should therefore develop a range of stress and scenario tests, and carry out sensitivity testing on key parameters. Furthermore, some firms have not stated the likelihood or return period of their tests. We do not expect all scenarios to be at a 1 in 200 level, but it is more meaningful if the return period or likelihood of each test, as assessed by management, is quoted.

4) Use test

Some firms have developed models which either include elements of prudence, erring on the side of caution, or which include simplifying assumptions to allow quicker running and easier maintenance of the model. However, both of these features undermine the accuracy and relevance of the model for use in supporting business decision-making. During our work with some general insurance firms, we have seen different versions of the catastrophe model being used for pricing and for capital modelling, without a credible explanation for the divergence. We have also seen examples where firms have minimised the level of interaction between the internal model team and other teams, which raises the question of how the firm is able to demonstrate that the internal model is fully embedded in the business and drives key business decisions.
5) Documentation

There has been much focus on the documentation requirements of Solvency II. Firms should bear in mind that it is their responsibility to meet the requirements of Solvency II and that we will rely on their submitted documents and other documentation in judging whether the tests and standards have been met. In general, the standard of many firms’ documentation has been inadequate. Material provided to us has in some cases fallen into two categories.

a) Failed to evidence the firm’s methodology and assumptions, i.e. the material has:

· failed to explain why the modelling was suitable for the firm’s risk profile (see point 1 on methodology and assumptions);
· not set out the materiality of the methodology or the risks being calibrated, making it difficult to know where to focus our review and assessment effort and whether firms’ efforts have been reasonably prioritised (see point 1 on methodology and assumptions);
· been significantly incomplete or at too high a level of granularity for us to assess whether the requirements of the Directive are likely to be met, including in some cases firms relying on modelling tools as the source of documentation; and/or
· shown inconsistency within the documentation, possibly as a result of model development changes not being properly reflected in all parts of the documentation.

b) Not provided to the required quality or timetable, meaning that material has:

· not been available by the agreed date, or at all, for our review;
· not been provided before meetings, resulting in inefficient meetings and reviews;
· been produced very late (in some cases months) in response to our questions, resulting in further delays to our reviews; and/or
· not been the latest version, e.g. updates to the firm’s self-assessment template not shared with us in a timely way.

6) Model change policy

Overall, from our review to date we consider model change policies to be inadequate.
For example, some firms have set the threshold for materiality too high. When challenged, the firms have been unable to present a realistic scenario in which the threshold would be breached. Firms should perform some back-testing to demonstrate that the thresholds in their model change policy are reasonable.

7) Unmodelled risk (general insurance specific)

Recent trends in global economic development have seen the rising importance of emerging markets seeking insurance protection. Some firms have used a ‘rate on line’ methodology to capture their exposure to unmodelled risks that is not sufficiently developed, documented and validated, and does not currently meet the requirements of the Directive. Others have assumed that their catastrophe risks, beyond the traditional peril regions, can be modelled through attritional ‘loss ratio’ assumptions. In both cases, these approaches will need to be properly documented and validated to meet the requirements of Solvency II. Some firms intend to use a frequency-severity approach to incorporate part of the non-modelled risks into their internal model, but have not yet sufficiently formalised and developed it to meet the requirements of Solvency II.

In general, where unmodelled risks are material, and with growth plans are likely to become more material in future, their close monitoring and appropriate reflection in the model will be key to meeting the requirements of Solvency II. In addition, as noted in 3 c), all expert judgement assumptions involved in these approaches should be recorded, independently challenged and validated, given the high materiality of the non-modelled risks involved.

Firms should also review all outsourcing arrangements that relate to their internal models, and ensure that these are formally documented, including provision for on-site access and contingency plans.
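As an illustration of the frequency-severity approach mentioned above, the following sketch simulates annual losses for a single non-modelled peril. The Poisson frequency, the Pareto severity and every parameter value are invented assumptions; any real implementation would need the documentation, validation and expert-judgement controls set out above.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Invented parameters for one non-modelled peril region (illustration only).
FREQ = 0.8          # expected events per year (Poisson frequency)
PARETO_SHAPE = 2.5  # severity tail heaviness (Pareto)
PARETO_SCALE = 5.0  # minimum loss per event, in millions
N_YEARS = 100_000   # number of simulated years

# Frequency: number of events in each simulated year.
counts = rng.poisson(FREQ, size=N_YEARS)

# Severity: one Pareto-distributed loss per event, scale * (1 + X).
severities = PARETO_SCALE * (1.0 + rng.pareto(PARETO_SHAPE, size=counts.sum()))

# Sum each event's loss back into its year to get annual aggregate losses.
year_index = np.repeat(np.arange(N_YEARS), counts)
annual_losses = np.bincount(year_index, weights=severities, minlength=N_YEARS)

mean_annual = annual_losses.mean()
loss_995 = float(np.percentile(annual_losses, 99.5))  # 1-in-200-year loss
print(f"mean annual loss:     {mean_annual:.2f}m")
print(f"1-in-200 annual loss: {loss_995:.2f}m")
```

Validation of such a model would then compare the simulated frequency and severity against the firm's own experience and external data, and sensitivity-test the assumed parameters, in line with the expectations in section 3.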
Next steps

We will continue to keep under review our approach to implementing the Solvency II Directive, particularly our work with firms in IMAP, to ensure that it is operating effectively and efficiently. By way of reminder, adhering to the submission schedule and supplying us with the agreed information at the required level of quality will allow us to manage our work with you in the run-up to implementation. You should continue to have appropriate contingency plans that you can invoke in time for implementing the Solvency II regime if needed. We also ask you to consider, on an ongoing basis, whether your firm needs internal model approval for day one or whether another methodology to calculate your SCR would adequately reflect your firm’s risk profile.

We are committed to giving you feedback at an individual and thematic level. If you do not receive feedback from your usual supervisory contact, please raise this with them in the first instance. If you are unable to reach your usual supervisory contact, please email [email protected] and we will look into the matter and get back to you within five working days.

Yours sincerely

Julian Adams
Director, Insurance