Tuesday, May 21, 2019

Business Continuity Planning

Though interruptions to business may be due to major natural disasters such as fires, floods, earthquakes and storms, or due to man-made disasters such as wars, terrorist attacks and riots, it is usually the more mundane and less sensational disasters such as power failure, equipment failure, theft and sabotage that are the causes behind disruptions to business.

A Business Continuity Plan, or Continuity of Business Planning (CoB Plan), defines the process of identifying the applications, customers (internal and external) and locations that a business plans to keep functioning in the event of such disruptive events, as well as the failover processes and the length of time for such support. This encompasses hardware, software, strategy, facilities, personnel, communication links and applications (MphasiS, 2003).

A Business Continuity Plan is formulated in order to enable the organization to recover from a disaster with the minimum loss of time and business by restoring its critical operations quickly and smoothly. The Business Continuity Plan should be devised in such a way that it involves not only the recovery, resumption and maintenance of the technology components but also of the entire business. Recovery of only the ICT systems and infrastructure may not always imply the full restoration of business operations.

Business Recovery Planning at XE therefore envisages the consideration of all hazards to business operations, which may include not only ICT applications and infrastructure but also direct impacts on other business processes. After conducting an extensive Business Impact Analysis (BIA), a Risk Assessment for XE was carried out by evaluating the assumptions made in the BIA under various threat scenarios.
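One simple way to carry out such an assessment is to score each threat as likelihood times impact and rank the results. The sketch below is purely illustrative; the 1-5 scales, scores and threat names are assumptions, not XE's actual method or data.

```python
# Hypothetical risk-scoring sketch: rank threats by likelihood x impact.
# The 1-5 scales and the scores below are illustrative assumptions.

threats = {
    "Flood":             {"likelihood": 2, "impact": 5},
    "Power failure":     {"likelihood": 4, "impact": 3},
    "Equipment failure": {"likelihood": 4, "impact": 4},
    "Terrorist attack":  {"likelihood": 1, "impact": 5},
}

def risk_score(threat):
    """Combine the two scales into a single priority number."""
    return threat["likelihood"] * threat["impact"]

# Highest-risk threats first: the "mundane" failures outrank the dramatic ones.
ranked = sorted(threats.items(), key=lambda kv: risk_score(kv[1]), reverse=True)
for name, threat in ranked:
    print(f"{name}: {risk_score(threat)}")
```

Note how, with these illustrative numbers, routine threats such as equipment and power failure rank above the rarer disasters, which matches the observation that mundane failures cause most disruptions.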
Threats were analyzed on the basis of their potential impact on the organization, its customers and the financial market it is associated with. The threats were then prioritized depending on their severity. The following threats were identified for XE:

1. Natural disasters such as floods, fires, storms, earthquakes, extreme weather, etc.
2. Man-made disasters such as terrorist attacks, wars and riots.
3. Routine threats that include:
a. Non-availability of critical personnel
b. Inaccessibility of critical buildings, facilities or geographic regions
c. Malfunctioning of equipment or hardware
d. Inaccessibility or corruption of software and data due to various reasons, including virus attacks
e. Non-availability of support services
f. Failure of communication links and other essential utilities such as power
g. Inability to meet financial liquidity requirements, and
h. Unavailability of essential records.

Organizing the BCP team

The first and most important step in developing a successful disaster recovery plan is to create management awareness. The top-level management will allocate the necessary resources and the time required from various areas of the organization only if they understand, realize and support the value of disaster recovery. The management also has to accord approval for the final implementation of the plan. The BCP team therefore has to have a member from the management who can not only provide inputs from the management but also apprise the management and get its feedback. Besides these, each core or priority area has to be represented by at least one member. Finally, there has to be an overall Business Continuity Plan coordinator who is responsible not only for coordination but also for all other aspects of BCP implementation such as training, updating, creating awareness, testing, etc.
The coordinator usually has his or her own support team. XE's Business Continuity Planning team would therefore comprise representatives from the management and each of the core or priority areas, and would be held together by the BCP coordinator. Even in the case of outsourcing of the BCP, it is necessary for the management and nominated members from the core or priority areas to be closely associated with each step of the planning process.

Crucial Decisions

The key decisions to be made in formulating the Business Continuity Plan for XE were associated with the individual steps undertaken in making the BCP. The first step, Business Impact Analysis (BIA), involved making a workflow analysis to assess and prioritize all business functions and processes, including their interdependencies. At this stage, the potential impact of business disruptions was identified along with all the legal and regulatory requirements for XE's business functions and processes. Based on these, decisions on allowable downtime and acceptable levels of loss were taken. Estimations were made of Recovery Time Objectives (RTOs), Recovery Point Objectives (RPOs) and recovery of the critical path.

The second step of Business Continuity Planning comprised risk assessment, during which business processes and the assumptions made in the course of the BIA were evaluated using various threat scenarios. The decisions made at this stage included which threat scenarios were to be adopted, the severity of the threats and, finally, the identification of the risks that were to be considered in the BCP based on the assessments made. The next step, Risk Management, involved drawing up the plan of action with respect to the various risks. This was the stage at which the actual Business Continuity Plan was drawn up, formulated and documented.
Crucial decisions, such as what specific steps should be taken during a disruption, the training programs that should be organized to train personnel in implementation of the BCP, and the frequency of updating and revision that would be required, were taken at this stage. Finally, in the Risk Monitoring and Testing stage, decisions regarding the suitability and effectiveness of the BCP were taken with reference to the initial objectives of the Business Continuity Plan.

Business Rules and System Back-ups

My friend works for the Motor Vehicles department that issues driving licenses for private and commercial vehicles. Applicants for either license initially come and deposit a fee. The particulars of the specific applicant, along with a photograph and biometrics in the form of fingerprints, are then entered into the database. Thereafter, the applicant undergoes a medical test, the results of which are again entered into the database of the system. If approved in the medical test, the applicant has to appear for an initial theoretical test on driving signs and rules and regulations. If the applicant passes the test, he or she is given a Learner's License. The applicant then comes back for the practical driving test after a month, and is awarded the driving license if he or she is able to pass the test. New additions are made to the database of the driving license system at every stage of this workflow. Though the tests for the learner's license and driving license are held three days a week, an individual can apply on any of the five working days of the department. People also come for renewal of driving licenses. Driving licenses are usually issued for a period of one to five years depending on the age and physical condition of the applicant.
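The stages described above form a simple linear workflow, with a database write at each transition. A minimal sketch, with stage names assumed from the narrative rather than taken from the department's actual system:

```python
# Illustrative sketch of the license-issuance workflow described above.
# Stage names and ordering are assumptions drawn from the narrative.
from typing import Optional

STAGES = [
    "fee_deposit",       # particulars, photograph and biometrics stored
    "medical_test",      # results entered into the database
    "theory_test",       # driving signs, rules and regulations
    "learners_license",  # issued on passing the theory test
    "practical_test",    # taken about a month later
    "driving_license",   # issued on passing the practical test
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None at the end.

    Each transition corresponds to a fresh write to the database,
    which is why the database changes at every stage of the workflow."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```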
In the case of commercial vehicles, an applicant first has to obtain a trainee driving license and work as an apprentice driver for two years before he or she becomes eligible for a driving license to drive a commercial vehicle. Moreover, a commercial driving license is issued only for a year at a time, and the driver has to come back for evaluation and medical tests every year. The number and frequency of transactions are therefore much higher for commercial vehicles.

As is evident from the business rules of the department, data is added and modified frequently for a specific applicant during the process of the initial application. Subsequently, data is again added to the database, or the database modified, after an interval of one month for the same applicant. Thereafter, fresh data is added or the database modified only after a period of five years, when the applicant comes back for renewal. However, there is always the possibility that someone loses or misplaces his or her license and comes back to have a duplicate issued. When the scenario of multiple applicants who can come in on any day for fresh licenses, duplicates or renewals is considered, it becomes evident that transactions are not periodic or time-bound but continuous. Transactions can happen at any time during working hours, resulting in changes to the database of the system. Taking only complete backups of the system would therefore not be the optimal backup solution under the given circumstances. Whatever frequency of complete backup is adopted, the chance of losing data will be very high in the case of database failure or any other disastrous event that results in system failure or corruption. Moreover, taking a complete backup of the system very frequently would be a laborious and cumbersome exercise.
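The alternative is to copy only what has changed since the last backup. A minimal sketch of that incremental idea, assuming file modification times are a reliable change signal; the function and directory names are hypothetical, not the department's actual tooling:

```python
# Hypothetical sketch of incremental backup: copy only files changed since
# the last backup, leaving a periodic full copy to a separate schedule.
import os
import shutil

def incremental_backup(src_dir: str, dst_dir: str, last_backup_time: float) -> list:
    """Copy files under src_dir modified after last_backup_time into dst_dir.

    Returns the list of relative paths that were copied. Assumes file
    modification times (mtime) are trustworthy change indicators."""
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_time:
                rel = os.path.relpath(src, src_dir)
                dst = os.path.join(dst_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves file metadata
                copied.append(rel)
    return copied
```

Run continuously (or on a short timer), this copies each change soon after it happens, while a full backup at the end of the day bounds how far back a restore ever has to reach.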
The ideal backup strategy in this case would be incremental backup, in which a backup is taken of only the data that is added or modified, the moment it is added or modified, and a complete backup is taken at a periodic frequency. Accordingly, the Motor Vehicles Department has opted for continuous incremental backup with a complete backup taken at the end of the day. As a Business Continuity Plan measure, the department uses a remote backup mirroring solution that provides host-based, real-time continuous replication to a disaster recovery site far away from their servers over standard IP networks. This mirroring process uses continuous, asynchronous, byte-level replication and captures changes as they occur. It copies only changed bytes, thereby decreasing network use, enabling quicker replication and reducing latency to a great extent. This remote mirroring solution integrates with the existing backup solutions, can replicate data to perform remote backups, and can take snapshots at any time without any impact on the performance of the production servers. It replicates over the available IP network, both in LAN and WAN, and has been deployed without any additional cost. This remote mirroring solution accords the department the maximum potential safeguard against data loss from failures and other disasters.

Database Processing Efficiency versus Database Storage Efficiency

Though computer storage costs as such have decreased dramatically over the years, the tension between database processing efficiency and database storage efficiency continues to be an issue, because the overall performance of a system is affected by the way data is stored and processed.
In other words, even though the volume of storage space available may no longer be a constraint financially or physically, the way this space is utilized has an impact on database processing efficiency, which in turn affects the overall performance of the application or the system. Under the present circumstances, though it is possible to compromise on database storage efficiency in order to derive greater database processing efficiency and thus improve the overall performance of the system, achieving optimization of the overall performance of a system requires striking a fine balance between database processing efficiency and database storage efficiency. There can be many tradeoffs between data processing speed and the efficient use of storage space for optimal performance of a system. There is no set rule on which tradeoffs to adopt; the choice differs according to the actual data creation, modification and flow of the system. Certain broad guidelines can, however, be followed in order to increase the overall utility of the database management system. Examples of such guidelines are to be found in the cases of derived fields, denormalization, primary key and indexing overheads, reloading of the database, and query optimization.

Derived fields

Derived fields are fields in which data is obtained by the manipulation of two or more original fields. The issue at stake is whether the data should be stored only in its original form, or as processed data in derived fields as well. When the data is stored only in the original form, the derived field is calculated as and when required. It is obvious that storing derived data will require greater storage space, but the processing time will be relatively less; i.e., storage efficiency will be low whereas processing efficiency becomes higher.
However, the decision on whether to store derived fields or not depends on other considerations, such as how often the calculated data is likely to change, and how often the calculated data will be required or used. An example will serve to make matters clearer. A university student's grade point standing is a perfect example of a derived field. For a specific class, a student's grade points are obtained by multiplying the points corresponding to the grade of the student by the number of credit hours associated with the course. The points for the grade and the number of credit hours are therefore the original data, by multiplying which we get the grade points, the derived field. The decision on whether to store the derived field or not will in this case depend on how often the grade points of a student are likely to change, and how often the student's grade points are actually required. The grades of a student who has already graduated are unlikely to undergo any more changes, whereas the grades of a student still studying at the university will change at regular intervals. In such a case, storing the grade points of an undergraduate would be more meaningful than storing the grade points of a student who has already graduated. Then again, if the undergraduate's grades are reported only once a term, it may not be worth storing grade points as derived fields. The significance of the matter is realized when we consider a database of thousands of students. The tradeoff in this case is between storing the grade points as derived fields, gaining on database processing efficiency but losing out on database storage efficiency, on the one hand, and not storing the derived fields, gaining on storage efficiency but losing out on processing efficiency, on the other.
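The grade-point example can be made concrete. A sketch comparing computing the derived value on demand with storing it alongside the original fields; the 4-point grade scale and sample records are illustrative assumptions:

```python
# Illustrative sketch of the derived-field tradeoff using grade points.
# The 4-point grade scale and the sample transcript are assumed data.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

# Original fields only: grade and credit hours per course.
transcript = [
    {"course": "CS101", "grade": "A", "credits": 3},
    {"course": "MA201", "grade": "B", "credits": 4},
]

def grade_points(record):
    """Derived field computed on demand: points for the grade x credit hours."""
    return GRADE_POINTS[record["grade"]] * record["credits"]

def gpa(records):
    """Grade point average, recomputed from the original fields each time.

    Cheap on storage, but the whole transcript is reprocessed per query."""
    total_points = sum(grade_points(r) for r in records)
    total_credits = sum(r["credits"] for r in records)
    return total_points / total_credits

# The alternative: store the derived field, trading storage space for
# processing efficiency. Sensible only while the grades can still change
# rarely relative to how often the value is read.
for r in transcript:
    r["grade_points"] = grade_points(r)  # computed once, stored thereafter
```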
Denormalization

Denormalization is a process by which the number of records in a fully normalized database can be considerably reduced, even while adhering to the rule of the First Normal Form, which states that the intersection of any row with any column should result in a single data value. The process of collapsing multiple records into a single record is applicable only in certain specific cases in which the number and frequency of transactions is known. Normalization of a database is required to maintain the accuracy and integrity of the data, which in turn leads to the validity of the reports generated by the system and the reliability of the decisions based on the system. Denormalization, if done indiscriminately, can upset the balance of the database while economizing on storage space.

The dilemma of indexing

Indiscriminate use of primary keys has a significant negative effect on database storage efficiency. Many database systems resort to setting an index on a field. When a field is indexed, the system sets a pointer to that particular field. The pointer helps in processing the data much faster. However, indexing fields also results in the system storing and maintaining not only the data but also information about the storage itself. The question therefore again boils down to deciding whether to achieve higher processing efficiency by compromising storage efficiency, or to enable higher storage efficiency at the cost of processing efficiency. Sorting the data periodically is one way of overcoming the dilemma of indexing. However, sorting itself is highly taxing on the resources of a system. Moreover, in large organizations with millions of records, a sort may take up to hours, during which all computer operations remain suspended.

Other factors

Storage efficiency and processing efficiency are also interdependent in other ways.
The deletion of data without reloading the database from time to time may result in the deleted data not actually being removed from the database. The data is simply hidden by setting a flag variable or marker. This results not only in low storage efficiency but also in low processing efficiency. Reloading a database removes the deleted data permanently, leading to a smaller amount of data and a more efficient use of resources, thereby boosting processing efficiency. Similarly, haphazard coding structures can impact negatively both the storage efficiency and the processing efficiency of a database.

Completely ignoring storage efficiency while prioritizing processing efficiency can never lead to database optimization. Conversely, optimization can also never be achieved by an overemphasis on storage efficiency. The objective is to strike the right balance. The interrelationships between database storage efficiency and database processing efficiency therefore keep the controversy between the two alive in spite of the dramatic decrease in storage costs over the years.

References

MphasiS Corporation, 2003, MphasiS Disaster Recovery/Business Continuity Plan, [Online]. Available: http://www.mphasis.com [Accessed June 27, 2008]
