09 Aug 17

Continuity Central – To BIA or not to BIA is not the question…

Continuity Central recently conducted a survey to seek the views of business continuity professionals on whether it is feasible to omit the business impact analysis (BIA) from the BC process. Mel Gosling, FBCI, explains why he believes this is the wrong question to ask…

The Big Picture

It’s always useful to step back and see the big picture, and with the question of ‘To BIA or not to BIA?’ the bigger picture is that the BIA is an integral part of the business continuity management (BCM) process specified in ISO 22301 and promoted by business continuity professional associations such as the BCI in its Good Practice Guidelines. Rather than looking closely at the detailed question, we should look at the bigger picture and ask ourselves whether we should use this specific BCM process at all.

More of the Continuity Central article


27 Jul 17

SearchDataCenter – Distributed data centers boost resiliency, but IT hurdles remain

Distributed data center architectures increase IT resiliency compared to traditional single-site models, with networking, data integrity and other factors all playing critical roles.

Architectures that span distributed data centers can reduce the risk of outages, but enterprises still must take necessary steps to ensure IT resiliency.

Major data center outages continue to affect organizations and users worldwide, most recently and prominently at Verizon, Amazon Web Services, Delta and United Airlines. Whether it’s an airline or cloud provider that suffers a technical breakdown, its bottom line and reputation can suffer.

More of the SearchDataCenter article from Tim Culverhouse


07 Jul 17

Continuity Central – Organizational risks that you should definitely be acting on

It is easy for organizations to feel overwhelmed by the number and scale of the risks that are faced; but often the perception of the potential harm engendered by various risks is exaggerated. In this article Chris Butler lists the real risks that every organization needs to consider.

Did you know the world’s most dangerous animal is not a shark or a bear, but in fact the mosquito? What’s certain is that human perception of risk is notoriously flawed: often, the events that concern and outrage us the most are the least likely to happen.

From political and economic tremors to cyber threats, 2017 represents another minefield of risks for businesses. For organizations, forging a deepened understanding of both threats and risk factors is crucial for remaining robust, resilient, and most of all, ahead of the competition. Part of this involves separating the myths from reality. So, what then are the real risks to business today?

More of the Continuity Central article


15 Jun 17

Continuity Central – Why business continuity managers need to trust ‘gut feel’

Agree or disagree?

Sometimes as a business continuity manager you have a feeling that a certain decision is the wrong one, despite qualitative and quantitative evidence pointing to the contrary. Dominic Irvine explains how research is starting to support the reliability of trusting your gut feeling…

Qualitative and quantitative evidence is sometimes used as a weapon to force decisions through when not everyone involved is convinced; in the face of charts, spreadsheets and PowerPoint decks, gut feel seems like a poor response. And yet what we are learning from research into exertion and fatigue is that it is one of the most useful tools in our armoury of tests.

After the First World War, much work was done to find a way to measure fatigue, but it was deemed such a subjective concept that no meaningful way of objectively measuring it could be developed. It was not possible to fathom the complex interaction between the emotional, physical and mental aspects of fatigue in a way that could be reliably and accurately counted. And yet we all know the feeling of being fatigued.

More of the Continuity Central post


02 Jun 17

Continuity Central – Revamping the business continuity profession: a response

Recently, Continuity Central published ‘Revamping the business continuity profession’, an article in which Charlie Maclean-Bristol looked at challenges faced by business continuity professionals and offered his suggestions for revamping the discipline. Here, David Lindstedt and Mark Armour, developers of the Adaptive Business Continuity methodology, offer their response to the article:

David Lindstedt: Naturally, most folks starting to embrace Adaptive Business Continuity will agree that traditional business continuity methods are not working and it’s time for a change. I totally agree that ‘resilience’ will not be the ‘savior’ of business continuity. As Charlie correctly points out, resilience is an inter-discipline, not a discipline on its own. A business continuity practitioner could run it, but so could anyone from any of the inter-disciplines such as ERM, EM, IT DR, etc. The chief concern with resilience will always be: what are the boundaries of what gets included (individual personal psychology? environmental sustainability? the entire content of an MBA program?) and how do you measure its effectiveness?

More of the Continuity Central article


15 May 17

CIO Insight – Why So Much of a CIO’s Day Is Devoted to Security

65% of network and systems admins struggle to determine whether app issues are caused by the network, systems or apps, while 53% run into difficulties measuring latency and delay problems when troubleshooting apps.

A growing number of CIOs, other technology leaders and IT professionals are spending a considerable amount of their time troubleshooting security-related issues, according to a recent survey from Viavi Solutions. The resulting report, “State of the Network Study,” reveals that a significant number of survey respondents spend a quarter of a standard work week on the detection and mitigation of threats. One of the trend-drivers is that email and browser-based malware has increased over the past 12 months, as has the overall sophistication of attack methods.

“Enterprise network teams are [devoting] more time and resources than ever before to battle security threats,” said Douglas Roberts, vice president and general manager of the enterprise and cloud business unit for Viavi Solutions. “Not only are they faced with a growing number of attacks, but hackers are becoming increasingly sophisticated in their methods and malware. Dealing with these types of advanced persistent security threats requires planning, resourcefulness and greater visibility throughout the network to ensure that threat intelligence information is always at hand.”

More of the CIO Insight slideshow from Dennis McCafferty


04 Apr 17

Continuity Central – IT disaster recovery failures: why aren’t we learning from them?

The news of an IT outage impacting a large company seems to appear in the headlines more and more frequently these days, and often the root cause is an out-of-date approach to IT disaster recovery and compliance. Common mistakes that businesses make include failing to test the recovery process on a recurring basis and relying on data backups instead of continuous replication. Businesses are also still putting all their data protection eggs in one basket: it is always better to keep your data safe in multiple locations.
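The practical difference between periodic backups and continuous replication is the recovery point objective (RPO): the worst-case window of data lost when a failure strikes. A minimal sketch of that comparison, using illustrative figures that are assumptions rather than numbers from the article:

```python
# Worst-case data loss (RPO) for two data-protection strategies.
# The backup interval and replication lag below are illustrative
# assumptions, not figures from the article.

def worst_case_data_loss_minutes(strategy: str) -> float:
    """Return the worst-case window of lost data, in minutes."""
    if strategy == "nightly_backup":
        # A failure just before the next nightly backup loses
        # everything written since the previous one: ~24 hours.
        return 24 * 60
    if strategy == "continuous_replication":
        # Asynchronous replication typically lags by seconds;
        # assume a 10-second lag for illustration.
        return 10 / 60
    raise ValueError(f"unknown strategy: {strategy}")

print(worst_case_data_loss_minutes("nightly_backup"))          # 1440 minutes
print(worst_case_data_loss_minutes("continuous_replication"))  # ~0.17 minutes
```

The gap of roughly four orders of magnitude is why replication, not backup frequency alone, tends to dominate the data-loss conversation.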

C-level leaders are now realising the need for IT resilience, whether they’re creating a disaster recovery strategy for the first time or updating an existing one. IT resilience enables businesses to power forward through any IT disaster, whether caused by human error, natural disasters, or criminal activities such as ransomware attacks. However, many organizations are over-confident in what they believe to be IT resilience; in reality they have not invested enough in disaster recovery planning and preparation. The resulting high-profile IT failures can be used as a lesson for business leaders to ensure their disaster recovery plan is tough, effective, and allows true recovery to take place.

If it ain’t broke… test it anyway

Virtualization and cloud-based advancements have actually made disaster recovery simpler and more affordable. But it doesn’t stop there: organizations need to commit to testing disaster recovery plans consistently, or else the entire strategy is useless.

More of the Continuity Central post


13 Feb 17

TheWHIR – Why Does It Seem Like Airline Computers Are Crashing More?

Another week, another major airline is crippled by some kind of software glitch.

If you feel as if you’re hearing about these incidents more often, you are—but not necessarily because they’re happening more frequently.

Delta Air Lines Inc. suffered an IT outage that led to widespread delays and 280 flight cancellations on Jan. 29 and 30, a problem the carrier said was caused by an electrical malfunction. A week earlier, United Continental Holdings Inc. issued a 2 1/2-hour ground stop for all its domestic flights following troubles with a communication system pilots use to receive data.

These two shutdowns were the latest in what’s been a series of computer crack-ups over the past few years, including major system blackouts that hobbled Southwest Airlines Co. as well as Delta for several days last summer—affecting tens of thousands of passengers.

More of the WHIR post from Bloomberg


03 Feb 17

Data Center Knowledge – This Server’s Uptime Puts Your SLA to Shame

An unusual and noteworthy retirement from the IT industry is scheduled to take place in April, Computerworld reports, when a fault-tolerant server from Stratus Technologies running continuously for 24 years in Dearborn, Michigan, is replaced in a system upgrade.

The server was set up in 1993 by Phil Hogan, an IT application architect for a steel product company now known as Great Lakes Works EGL.

Hogan’s server won a contest held by Stratus to identify its longest-running server in 2010, when Great Lakes Works was called Double Eagle Steel Coating Co. (DESCO). While various redundant hardware components have been replaced over the years, Hogan estimates close to 80 percent of the original system remains.

More of the Data Center Knowledge article from Chris Burt


13 Jan 17

ComputerWeekly – Disaster recovery testing: technology systems to test DR

In this concluding part of a two-part series, Computer Weekly looks at ways of testing disaster recovery (DR). In the first article, we discussed the need for disaster recovery and for developing a strategy to test the backup process.

We discussed four main items that need to be evaluated to ensure successful testing. These were:

Time – Evaluating the time since a test was last performed and measuring the time to complete recovery, from an RTO (recovery time objective) perspective.
Change – Testing after major changes occur, such as application upgrades or infrastructure changes (for example, a hypervisor change).
Impact – What is the impact of running a test? Can a test be run without impacting the production environment?
People – How do we consider the human factor from the perspective of taking human error out of the recovery process?
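The ‘Time’ item above reduces to two checks: how long since the last test, and whether the measured recovery time beat the RTO. A hypothetical sketch of such a check (function and field names, and the 90-day recency threshold, are assumptions, not from the article):

```python
from datetime import date

def dr_test_status(last_test: date, today: date,
                   measured_recovery_min: float, rto_min: float,
                   max_test_age_days: int = 90) -> list:
    """Flag problems with DR test recency and recovery time vs. RTO."""
    issues = []
    if (today - last_test).days > max_test_age_days:
        issues.append("test overdue")   # 'time since a test was last performed'
    if measured_recovery_min > rto_min:
        issues.append("RTO missed")     # 'time to complete recovery' vs. objective
    return issues

# Last tested 1 Sep 2016, recovered in 50 min against a 60 min RTO:
print(dr_test_status(date(2016, 9, 1), date(2017, 1, 13), 50, 60))
# more than 90 days since the last test -> ['test overdue']
```

An empty list from a check like this is the state a DR programme should be aiming for: recently tested, and recovered inside the objective.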

In a virtual environment, the options for recovery can be divided into four main sections.

More of the ComputerWeekly article from Chris Evans