Operational Risk

Efficient management of operational risk mainly relies on an effective combination of qualitative and quantitative measures of risk assessment, the robustness of these measures in the face of high-intensity, low-frequency events, the sensitivity of regulatory capital requirements to the diverse nature of operational risk, and reporting standards across different business activities.

The aim is to point out that although the set of statistical tools for operational risk modeling and the related literature are extensive, banks face numerous problems when implementing them and attempting to comply with the regulatory framework. Based on the results of secondary research, applying the methods of description, comparison, classification, analysis and synthesis, this paper provides a detailed critical review and highlights the very concrete problems banks face when trying to comply with the prescribed qualitative and quantitative requirements for modeling operational risk.

The growing complexity of the banking industry increases both the potential danger and the size of the damage due to operational risk. Measurement and control of this risk differ markedly from those of the other risks present in banks, as it mainly deals with events in "the tail" of the distribution – events whose probability is small but whose potential damage is of enormous proportions. The operational risk loss distribution is a "heavy-tailed" distribution, because extreme, high-intensity events are more likely than the asymptotic tail behavior of standard distributions would suggest.
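The "heavy tail" point can be made concrete with a small comparison. The sketch below matches a normal and a lognormal distribution to the same mean and standard deviation (arbitrary values, chosen only for illustration) and compares their tail probabilities:

```python
# Comparing tail probabilities of a light-tailed normal and a heavy-tailed
# lognormal matched to the same mean and standard deviation.
# The parameter values are arbitrary, chosen only to illustrate the point.
from math import log, sqrt
from statistics import NormalDist

mean, sd = 1.0, 2.0    # target mean and standard deviation (say, EUR millions)

# Lognormal parameters that reproduce the same mean and standard deviation:
sigma2 = log(1 + (sd / mean) ** 2)
mu, sigma = log(mean) - sigma2 / 2, sqrt(sigma2)

normal, std = NormalDist(mean, sd), NormalDist()
for x in (5, 10, 20, 50):   # loss thresholds far out in the tail
    p_norm = 1 - normal.cdf(x)
    p_logn = 1 - std.cdf((log(x) - mu) / sigma)
    print(f"P(loss > {x:>2}): normal {p_norm:.2e}   lognormal {p_logn:.2e}")
```

Already at a threshold of 10, the lognormal exceedance probability is orders of magnitude larger than the normal one, which is exactly the behavior described above.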

Such extreme events resulting in operational risk losses are usually events that happen once, with a huge economic impact and without historical precedent. The frequency of internal and external shocks due to operational risk also varies among banks, depending on the nature of each bank's business activities and the sophistication of its risk measurement standards and control mechanisms.

So, according to Basel II, banks should generate enough expected revenue to cover a margin that would absorb expected operational risk losses, and should also hold sufficient capital reserves to cover unexpected losses. Taking the above into account, effective operational risk management mainly relies on an effective combination of qualitative and quantitative measures for risk assessment, the robustness of these measures in the face of high-intensity, low-frequency events, the sensitivity of regulatory capital requirements to the diverse nature of operational risk, reporting standards across different business activities, and the risk of conflicting regulatory measures among different countries (Jobst, 2007).
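The split between expected and unexpected losses can be written out directly. A minimal formulation, assuming the annual loss L is a random sum of N individual losses X and that N and X are independent (the numbers below are illustrative, not from the source):

\[
\mathrm{EL} = \mathbb{E}[L] = \mathbb{E}[N]\cdot\mathbb{E}[X],
\qquad
K \approx \mathrm{VaR}_{99.9\%}(L) - \mathrm{EL}
\]

For instance, if a bank expects \(\mathbb{E}[N] = 20\) loss events per year averaging \(\mathbb{E}[X] = 0.1\) million each, then EL = 2 million is absorbed through margins, and if the 99.9th percentile of the annual loss distribution is 15 million, roughly 13 million of capital is held against unexpected losses.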

Table 1 An overview of available "Top-Down" methods
Table 2 An overview of available "Bottom-Up" methods
Source: author, adapted from Marshall (2001)

Therefore, a large number of operational risk researchers have focused on the quality of quantitative methods for measuring operational risk exposure.

1. METHODS FOR MEASURING OPERATIONAL RISK

The literature often mentions several ways in which operational risk can be measured.

These methods are generally classified as "top-down" or "bottom-up" methods (e.g. Whitewash, 2002). "Top-down" methods assess risk on the basis of macro data, without establishing specific loss events or causes of loss. Whitewash (2002) also gives an overview of "bottom-up" methods, which use individual events in order to identify the sources and amounts of operational risk.

Therefore, although there is an extensive set of statistical tools and related literature for operational risk modeling, banks face numerous problems when trying to implement them in practice and to comply with the regulatory framework. The problems related to compliance with the qualitative and quantitative requirements for operational risk modeling are presented in the remainder of this paper.

King (2001) gives a more detailed and complete description of a large number of these methods, which include statistical approaches to measurement in which operational risk is measured using data on individual events and their frequency, based on Monte Carlo simulation or on analytical solutions (e.g. Cornalba & Giudici, 2004).
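As a concrete illustration of the Monte Carlo approach just mentioned, the sketch below simulates annual aggregate losses from an assumed Poisson frequency and lognormal severity; every parameter value is an illustrative assumption, not a figure from the paper.

```python
# Minimal Monte Carlo sketch of the loss distribution approach: annual
# operational losses as a Poisson number of lognormal severities.
# All parameter values are illustrative assumptions.
import math
import random

random.seed(42)

LAMBDA = 20              # assumed mean number of loss events per year
MU, SIGMA = -2.0, 1.8    # assumed lognormal severity parameters (EUR millions)
N_YEARS = 100_000        # number of simulated years

def poisson(lam):
    """Poisson sample via Knuth's multiplication method (fine for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_year():
    """Aggregate loss for one simulated year: Poisson frequency, lognormal severities."""
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(poisson(LAMBDA)))

annual = sorted(simulate_year() for _ in range(N_YEARS))
expected_loss = sum(annual) / N_YEARS
var_999 = annual[int(0.999 * N_YEARS)]        # empirical 99.9th percentile
print(f"expected loss   : {expected_loss:8.3f}")
print(f"99.9% VaR       : {var_999:8.3f}")
print(f"unexpected loss : {var_999 - expected_loss:8.3f}")
```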

2. THE MODEL PARAMETERS

As already mentioned, a number of authors agree that, for the definition and implementation of models for measuring capital requirements for operational risk, it is necessary to meet certain qualitative and quantitative requirements. Among the most important quantitative parameters needed to define the model are: a high-quality loss database (internal and external); the creation of high-quality frequency and intensity distributions; and the definition of the economic dependency among the parameters of the model.

2.1. Problems regarding compliance with qualitative requirements

The qualitative requirements associated with designing a model for operational risk assessment are mainly related to business environment factors, the internal control system and expert opinions, which are used to enhance the quality of existing internal and external data.

Operational risk modeling involves the use of qualitative data and the opinions of experts, mostly in the following three cases: (1) when there is a small quantity of loss data, or no data at all, or when data are not available; (2) when it is necessary to assess long-term challenges in the risk exposure through risk indicators or other information not based on loss data; and (3) when information about the quality of the bank's control systems is available only in qualitative form, e.g. internal audit assessments (The Federal Reserve Bank of Boston, 2001).

The qualitative parameters important for the model are associated with decisions about the business environment factors that may affect the risk, and with the internal control system. As these factors are very difficult to measure, their impact is mostly captured through a process of risk assessment and scenario analysis based on the opinions of experts. The results obtained in this way are transformed into measurable data that can be incorporated into the model, as shown in the figure below.

Figure 1 Input data for creating an operational risk model (source: author, adapted from Operational Risk Management, 2011)
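One common way to operationalize such qualitative information is to map expert control-quality scores into a bounded multiplicative adjustment of a frequency parameter estimated from loss data. The sketch below uses invented factor names, weights and an invented adjustment band; it is an illustration, not a method prescribed in this paper.

```python
# Hypothetical scorecard transformation: expert ratings of business
# environment and internal control factors (1 = strong controls, 5 = weak)
# become a bounded multiplier on the modelled loss frequency.
# The factor names, weights and the +/-20% band are invented assumptions.
FACTOR_WEIGHTS = {"staff_turnover": 0.3, "system_stability": 0.4, "audit_findings": 0.3}

def frequency_multiplier(scores: dict[str, int], max_shift: float = 0.20) -> float:
    """Map the weighted average score (1..5) to a multiplier in [1 - max_shift, 1 + max_shift]."""
    avg = sum(FACTOR_WEIGHTS[name] * score for name, score in scores.items())
    # rescale from [1, 5] to [-1, +1], then into the adjustment band around 1
    return 1.0 + max_shift * (avg - 3.0) / 2.0

base_lambda = 20.0   # loss frequency estimated from historical loss data
scores = {"staff_turnover": 4, "system_stability": 2, "audit_findings": 3}
print(f"adjusted frequency: {base_lambda * frequency_multiplier(scores):.2f} events/year")
```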

Basel II provides that the qualitative adjustments stated in the Advanced Measurement Approach must include business environment and internal control factors, since these also affect the risk profile, with special focus on those factors that are considered to be meaningful risk drivers, and on the sensitivity of the bank's risk assessment to changes in these factors (Bank for International Settlements, 2002). Although it is possible to justify almost any business environment and internal control factor as a risk driver, it is generally only possible to identify the source of danger, not its absolute value. Therefore, incorporating these factors into the scenarios that estimate loss frequency and intensity is a complex process, which can only be performed by experienced professionals (Palmer, 2005).

Expert opinions on the frequency and intensity distributions and on the correlation of losses are often mentioned in the context of data from scenario analysis, because experts often provide scenarios that contain information about extreme losses that may occur with a certain frequency, or about extreme losses that may occur simultaneously in certain units (Chuddar, 2009). As expert opinions are usually gathered through workshops, this type of data is often referred to as workshop data. According to Basel II, these data should contribute significantly to the assessment of capital requirements for operational risk.

Internal loss data are objective and show the types of losses that could occur in a bank if history repeats itself. Similarly, external data are also objective and show what has happened to other banks. Some of those events may not apply to the observed bank, given the nature of its operational risk management and control mechanisms, while others could happen in the observed bank as well. However, it is also possible that there are events that seem possible in the future but that no bank has experienced in the past. It is enough to mention the crisis that rocked the banking industry in 2007, which imposed the need to rethink past experience and to begin making probability estimates that presume changeable business dynamics. Expert opinions raised in relatively recent workshops can help to fill the potential holes in the historical loss data, both internal and external.

The direct use of expert opinions is far more specific and far more challenging. The main issue relates to designing the workshop in such a way as to minimize the effect of behavioral and cognitive biases (Keenan and Frederick, 2006) and to obtain information that eases assimilation with other data (internal, external, and information from the business environment and internal control). Furthermore, the operational risk profile can be complex, and it is rarely possible to capture the loss distribution with well-known closed-form distributions, so asking experts directly about the loss distribution is not a very rewarding task. Likewise, the experts in question are usually middle or higher business management, and one cannot expect expertise in statistics from them.

It is possible to seek expert opinions on the basic frequency distributions. However, in that case a significant practical assimilation problem can occur if the bank already has adequate internal data and the frequency distribution estimated from these data differs significantly from the distribution obtained on the basis of expert opinions.
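One standard way to reconcile such conflicting estimates is a Bayesian (credibility) combination: the expert opinion is treated as a gamma prior on a Poisson event rate, so the combined estimate is a weighted average whose data weight grows with the number of observed years. The figures below are invented, and the technique is an illustrative assumption rather than one named in this paper.

```python
# Reconciling an expert frequency opinion with internal data via a
# gamma prior on the Poisson event rate (conjugate update).
# All numbers are invented for illustration.
expert_lambda = 35.0     # expert view: about 35 loss events per year
expert_weight = 5.0      # prior treated as worth 5 pseudo-years of data
alpha0 = expert_lambda * expert_weight
beta0 = expert_weight

yearly_counts = [18, 22, 25, 19, 21]      # internal data: events per observed year
n_years = len(yearly_counts)

# Posterior is Gamma(alpha0 + total events, beta0 + observed years);
# its mean is a credibility-weighted average of expert and data estimates.
posterior_mean = (alpha0 + sum(yearly_counts)) / (beta0 + n_years)
data_weight = n_years / (n_years + expert_weight)

print(f"data-only estimate : {sum(yearly_counts) / n_years:.1f} events/year")
print(f"combined estimate  : {posterior_mean:.1f} events/year (data weight {data_weight:.0%})")
```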

The intensity distribution usually describes the probability of losses of different sizes in the event that an operational risk loss occurs. Given the dependence of the intensity distribution on the assessment of the frequency, a demanding task in creating workshops is to determine the specific ways in which this mixed information about the frequency and intensity distributions gained from experts can be integrated.

The critical point is to determine the lowest period of time, i.e. the targeted right tail of the intensity distribution, for which expert opinions are required. As the emphasis of workshop data is on the prospect of extreme losses, the logical choice of time period is one year, which assesses the prospects for losses that are expected to happen once a year or less often. After the relevant part of the tail of the intensity distribution is defined, the next question is whether to seek expert opinions on key statistics or on quantiles of the intensity distribution. For example, if we are looking for key statistical indicators, the workshop participants will be asked to give their opinion about the expected losses that exceed a variety of default amounts.
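To show how such threshold answers can be turned into model parameters, the sketch below back-solves lognormal severity parameters from two hypothetical exceedance probabilities; both the thresholds and the probabilities are invented for illustration.

```python
# Back-solving lognormal severity parameters from expert exceedance answers.
# Hypothetical elicitation: given that a loss occurs, experts judge it
# exceeds EUR 1m with probability 5% and EUR 10m with probability 0.5%.
from math import log
from statistics import NormalDist

std = NormalDist()
(x1, p1), (x2, p2) = (1.0, 0.05), (10.0, 0.005)   # thresholds (EUR m), exceedance probs

# For a lognormal severity: ln(x) = mu + sigma * z_(1-p) at each threshold.
z1, z2 = std.inv_cdf(1 - p1), std.inv_cdf(1 - p2)
sigma = (log(x2) - log(x1)) / (z2 - z1)
mu = log(x1) - sigma * z1
print(f"implied lognormal parameters: mu = {mu:.3f}, sigma = {sigma:.3f}")

# Sanity check the implied tail: probability of a single loss above EUR 50m.
p50 = 1 - std.cdf((log(50.0) - mu) / sigma)
print(f"implied P(loss > EUR 50m) = {p50:.4%}")
```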

When it comes to workshop design, there is also the question of whether experts should be asked about the existence of an upper limit on the loss intensity for a particular unit, and if so, whether they should estimate the maximal loss amount. For example, in the case of "damage to physical assets", the upper limit can be obtained from the bookkeeping and market values, as well as from the asset replacement cost. However, in some other cases, such information may not exist or may not be knowable at the time of the inquiry.

So, on the one hand, the estimation of the maximal loss can sharpen the experts' evaluation of frequency and intensity, while on the other hand it can aggravate the already challenging problem of having experts rate the frequency and intensity distributions (Bank of Japan, 2006).

However, it seems reasonable that the workshop participants should be presented with a limited amount of information about internal and external losses, either in the form of comparative measures of the frequency and intensity of internal and external data, or in the form of historical cases, both internal and external, of specific losses due to events associated with operational risk. A balance has to be struck here, in the sense that the obtained responses must not be aimed at lowering the estimates of operational risk capital requirements (Chuddar, 2009).

The way in which the workshop is run is a key aspect of the workshop design; it is subject to many behavioral biases and can have a significant impact on the data generated at the workshop. For a start, it is not clear how many experts is the right number. With a larger number of participants, the influence of responses that deviate from the others is less significant, which can improve the estimated statistical distributions. However, in such cases it is more difficult to achieve a meaningful debate on the risk profile among the workshop participants.

The second question is whether the workshop should be designed so as to achieve the convergence of the participants to one answer for each question. The result of a convergent assessment is that the statistical distribution can exactly match the results of the workshop (Magdalene, 2009). When assimilating workshop data with internal and external loss data and data from the business environment, a key problem is whether the workshop participants should be familiar with these data, to what extent, and in what way.
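Where convergence is not enforced, one simple alternative (an illustrative assumption here, not a method this paper prescribes) is to pool the participants' answers by averaging their quantile estimates before any distribution is fitted:

```python
# Pooling non-convergent workshop answers by averaging quantile estimates
# across experts, instead of forcing a single consensus answer.
# All figures are invented for illustration.
expert_answers = {
    "expert_A": {0.90: 0.8, 0.99: 6.0},
    "expert_B": {0.90: 1.2, 0.99: 9.0},
    "expert_C": {0.90: 0.9, 0.99: 15.0},   # a far-tail outlier
}

pooled = {
    q: sum(answers[q] for answers in expert_answers.values()) / len(expert_answers)
    for q in (0.90, 0.99)
}
# The pooled quantiles can then be converted to distribution parameters
# exactly as in the earlier exceedance-probability sketch.
print(f"pooled quantile estimates: {pooled}")
```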

If the goal is to obtain an independent set of information about the probabilities, there is a possibility that the participants will try to align themselves with the presented information. On the other hand, information about the other data can also help professionals to set their marks in the right range. Professionals may also already be aware of such information, particularly when it comes to information regarding the business environment and internal control factors. In this case, banks can choose to use these data only indirectly, as a way of informing attendees about them. These factors are often multidimensional and qualitative, which makes them very difficult to transform into cumulative probability estimates. A further downside of using data about the business environment and internal control factors may be that the marginal impact of the workshop data on the bank's capital requirement for operational risk cannot be separated from the impact of these data.

Another question that arises is whether to design the workshop with one or more rounds of evaluation. Of course, if the goal is convergence, several rounds of evaluation are necessary. Debate over the first few rounds is useful for eliminating those responses that completely deviate from the majority, or at least for reducing them to a statistically acceptable level. However, as the debate progresses between rounds, there is no guarantee that the participants will not change their minds or, in a worse case, that the answers obtained in the later rounds will be more diverse and deviate more from the majority. Nevertheless, a more realistic outcome is the emergence of a leader, in the case when a sufficient number of participants is inclined to follow that leader.

Thus, if the participants gather around a leader who initially holds a differing opinion, assimilation with the internal and external data can become a problem, and the final capital requirement for operational risk can end up either too low or too high (Palmer, 2005).

2.2. Problems regarding compliance with quantitative requirements

When it comes to problems regarding compliance with the quantitative requirements, these usually concern the loss database, the frequency and intensity distributions and, finally, the modeling of economic dependence.

Burdened with this dilemma, if the bank still persists in adjusting to the second level of Basel II, it should meaningfully aggregate its distributions to the first level of distribution by event type. To avoid penalties, the bank should then model less-than-perfect correlation between the second-level units. This, of course, has to be justified without the help of any additional external records, taking into account the custom nature of the second-level units of measure. Furthermore, the event that caused a loss may contain important elements of more than one unit of measure. Since such an event must be classified into a single unit of measure, it is inevitable that this judgment may affect the quality of the internal loss data. And since each unit is expected to have a different loss distribution, the choice of classification can ultimately affect the amount of the capital requirements (Chuddar, 2009).
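The correlation point above can be made concrete with a small simulation. The sketch below assumes two units of measure with lognormal annual losses linked by a Gaussian copula; all parameters, including the correlation of 0.3, are invented. With perfect dependence the stand-alone 99.9% quantiles simply add up, while correlation below one produces a smaller combined figure.

```python
# Aggregating two units of measure with correlated lognormal annual losses
# (Gaussian copula). All parameters are illustrative assumptions.
import math
import random

random.seed(7)
RHO = 0.3                  # assumed correlation of the two units' risk drivers
N = 200_000                # simulated years
PARAMS = {"unit_1": (1.0, 0.9), "unit_2": (0.5, 1.2)}   # lognormal (mu, sigma) per unit

unit1, unit2, totals = [], [], []
for _ in range(N):
    # correlated standard normal drivers
    z1 = random.gauss(0.0, 1.0)
    z2 = RHO * z1 + math.sqrt(1.0 - RHO ** 2) * random.gauss(0.0, 1.0)
    l1 = math.exp(PARAMS["unit_1"][0] + PARAMS["unit_1"][1] * z1)
    l2 = math.exp(PARAMS["unit_2"][0] + PARAMS["unit_2"][1] * z2)
    unit1.append(l1)
    unit2.append(l2)
    totals.append(l1 + l2)

def q999(xs):
    """Empirical 99.9th percentile."""
    return sorted(xs)[int(0.999 * len(xs))]

print(f"sum of stand-alone 99.9% quantiles : {q999(unit1) + q999(unit2):7.1f}")
print(f"combined 99.9% quantile (rho = 0.3): {q999(totals):7.1f}")
```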