07
Sep 17

Continuity Central – Crisis preparedness and its impact on shareholder value

All commercial organizations operating in the digital era exist within a challenging landscape. Underlying trust is weak; expectations of good, transparent governance are high; and acceptance of failure is low.

At the same time, communicating with stakeholders is becoming more complex as traditional addressable audiences fragment into ever-evolving, always-online, socially connected communities, guaranteeing that issues and crises play out very publicly and swiftly.

To navigate these challenges successfully and to protect value for shareholders as companies grow, it’s vital to enhance business resilience. Reducing risk and building trust should be as important as innovating and pursuing operational excellence.

What is a crisis?

The British Standard for Crisis Management (BS 11200:2014) defines a crisis as “An abnormal and unstable situation that threatens the organization’s strategic objectives, reputation or viability.” Understanding this definition is vital in helping an organization to prepare itself to deal with a crisis. Through worst-case scenario planning, organizations can identify what abnormal events they could be exposed to, the impact of abnormal events on the ability to execute strategic objectives, and the damage that could be caused to reputation and viability.

More of the Continuity Central post from Robert McAllister


09
Aug 17

Continuity Central – To BIA or not to BIA is not the question…

Continuity Central recently conducted a survey to seek the views of business continuity professionals on whether it is feasible to omit the business impact analysis (BIA) from the BC process. Mel Gosling, FBCI, explains why he believes this is the wrong question to ask…

The Big Picture

It’s always useful to step back and see the big picture, and with the question of ‘To BIA or not to BIA?’ this bigger picture is that the BIA is an integral part of the business continuity management (BCM) process specified in ISO 22301 and promoted by business continuity professional associations such as the BCI in its Good Practice Guidelines. Rather than looking closely at the detailed question, we should look at the bigger picture and ask ourselves whether or not we should use this specific BCM process at all.

More of the Continuity Central article


27
Jul 17

SearchDataCenter – Distributed data centers boost resiliency, but IT hurdles remain

Distributed data center architectures increase IT resiliency compared to traditional single-site models, with networking, data integrity and other factors all playing critical roles.

Architectures that span distributed data centers can reduce the risk of outages, but enterprises still must take necessary steps to ensure IT resiliency.

Major data center outages continue to affect organizations and users worldwide, most recently and prominently at Verizon, Amazon Web Services, Delta and United Airlines. Whether it’s an airline or a cloud provider that experiences a technical breakdown, its bottom line and reputation can suffer.
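As a rough illustration of the kind of resiliency such architectures aim for, the sketch below polls a health endpoint at each site and reports which ones are available, so traffic can be steered away from an unhealthy location. It is a minimal example only; the site names, URLs and the /healthz path are hypothetical placeholders, not details from the article.

# Minimal sketch of multi-site health checking for a distributed deployment.
# The site names and URLs below are hypothetical placeholders.
import urllib.request

SITES = {
    "us-east": "https://us-east.example.com/healthz",
    "us-west": "https://us-west.example.com/healthz",
    "eu-central": "https://eu-central.example.com/healthz",
}

def healthy_sites(timeout=2.0):
    """Return the names of sites whose health endpoint answers HTTP 200."""
    up = []
    for name, url in SITES.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    up.append(name)
        except OSError:
            pass  # treat timeouts and connection errors as "down"
    return up

if __name__ == "__main__":
    # A traffic manager could route requests only to sites reported as up,
    # failing over to the next healthy site when the preferred one is down.
    print("healthy sites:", healthy_sites() or "none")

In practice this logic usually lives in a load balancer, DNS traffic manager or service mesh rather than in application code, but the principle is the same: detect an unhealthy site quickly and route around it.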

More of the SearchDataCenter article from Tim Culverhouse


07
Jul 17

Continuity Central – Organizational risks that you should definitely be acting on

It is easy for organizations to feel overwhelmed by the number and scale of the risks that are faced; but often the perception of the potential harm engendered by various risks is exaggerated. In this article Chris Butler lists the real risks that every organization needs to consider.

Did you know the world’s most dangerous animal is not a shark or a bear but, in fact, the mosquito? What’s certain is that human perception of risk is notoriously flawed; often, the events that concern and outrage us the most are the least likely to happen.

From political and economic tremors to cyber threats, 2017 represents another minefield of risks for businesses. For organizations, forging a deeper understanding of both threats and risk factors is crucial for remaining robust, resilient, and most of all, ahead of the competition. Part of this involves separating the myths from reality. So what, then, are the real risks to business today?

More of the Continuity Central article


15
Jun 17

Continuity Central – Why business continuity managers need to trust ‘gut feel’

Agree or disagree?

Sometimes as a business continuity manager you have a feeling that a certain decision is the wrong one, despite qualitative and quantitative evidence pointing to the contrary. Dominic Irvine explains how research is starting to support the reliability of trusting your gut feeling…

Qualitative and quantitative evidence is sometimes used as a weapon to force decisions through when not everyone involved is convinced; in the face of charts, spreadsheets and PowerPoint decks, gut feel seems like a poor response. And yet what we are learning from research into exertion and fatigue is that it is one of the most useful tools in our armoury of tests.

After the First World War, much work was done to find a way to measure fatigue, but it was deemed too subjective a concept for any meaningful method of objective measurement to be developed. It was not possible to untangle the complex interaction between the emotional, physical and mental aspects of fatigue in a way that could be reliably and accurately counted. And yet we all know the feeling of being fatigued and how tired we are.

More of the Continuity Central post


02
Jun 17

Continuity Central – Revamping the business continuity profession: a response

Recently, Continuity Central published ‘Revamping the business continuity profession’; an article in which Charlie Maclean-Bristol looked at challenges faced by business continuity professionals and offered his suggestions for revamping the discipline. Here, David Lindstedt and Mark Armour, developers of the Adaptive Business Continuity methodology, offer their response to the article:

David Lindstedt: Naturally, most folks starting to embrace Adaptive Business Continuity will agree that traditional business continuity methods are not working and it’s time for a change. I totally agree that ‘resilience’ will not be the ‘savior’ of business continuity. As Charlie correctly points out, resilience is an inter-discipline, not a discipline on its own. A business continuity practitioner could run it, but so could anyone from any of the inter-disciplines such as ERM, EM, IT DR, etc. The chief concern with resilience will always be: what are the boundaries to what gets included (from individual psychology to environmental sustainability to the entire content of an MBA program?), and how do you measure its effectiveness?

More of the Continuity Central article


15
May 17

CIO Insight – Why So Much of a CIO’s Day Is Devoted to Security

65% of network and systems admins struggle to determine whether app issues are caused by the network, systems or apps, while 53% run into difficulties measuring latency and delay problems when troubleshooting apps.

A growing number of CIOs, other technology leaders and IT professionals are spending a considerable amount of their time troubleshooting security-related issues, according to a recent survey from Viavi Solutions. The resulting report, “State of the Network Study,” reveals that a significant number of respondents spend a quarter of a standard work week on the detection and mitigation of threats. One driver of this trend is that email and browser-based malware has increased over the past 12 months, as has the overall sophistication of attack methods.

“Enterprise network teams are [devoting] more time and resources than ever before to battle security threats,” said Douglas Roberts, vice president and general manager of the enterprise and cloud business unit for Viavi Solutions. “Not only are they faced with a growing number of attacks, but hackers are becoming increasingly sophisticated in their methods and malware. Dealing with these types of advanced persistent security threats requires planning, resourcefulness and greater visibility throughout the network to ensure that threat intelligence information is always at hand.”
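The survey finding that admins struggle to tell network problems from application problems can be illustrated with a simple timing comparison: TCP connection time is dominated by the network, while the full request time adds TLS setup and server-side work. The sketch below, using a hypothetical host name, is only a rough probe along those lines, not the methodology from the study.

# Rough sketch of separating network latency from application time for an
# HTTPS service. The host name is a hypothetical placeholder.
import http.client
import socket
import time

HOST = "app.example.com"

def probe(host, path="/", port=443):
    # TCP connect time approximates the network round-trip cost.
    t0 = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=5)
    connect_ms = (time.perf_counter() - t0) * 1000
    sock.close()

    # Full request time adds TLS setup plus server-side application work.
    t1 = time.perf_counter()
    conn = http.client.HTTPSConnection(host, port, timeout=10)
    conn.request("GET", path)
    conn.getresponse().read()
    total_ms = (time.perf_counter() - t1) * 1000
    conn.close()

    return connect_ms, total_ms

if __name__ == "__main__":
    connect_ms, total_ms = probe(HOST)
    print(f"network (TCP connect): {connect_ms:.1f} ms")
    print(f"request total:         {total_ms:.1f} ms")
    # A large gap between the two numbers points at the application or
    # back end rather than the network path.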

More of the CIO Insight slideshow from Dennis McCafferty


04
Apr 17

Continuity Central – IT disaster recovery failures: why aren’t we learning from them?

The news of an IT outage impacting a large company seems to appear in the headlines more and more frequently these days, and often the root cause seems to be out-of-date approaches and strategies for IT disaster recovery and compliance. Common mistakes that businesses make include not testing the recovery process on a recurring basis and relying on data backups instead of continuous replication. Businesses are also still putting all their data protection eggs in one basket: it is always better to keep your data safe in multiple locations.
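On the point about not keeping all the data protection eggs in one basket, a minimal sketch of the idea is to copy each backup artifact to more than one location and verify every copy against a checksum. The paths below are hypothetical examples; real environments would typically rely on storage- or database-level replication rather than simple file copies.

# Minimal sketch: copy a backup artifact to several destinations and verify
# each copy with a checksum. All paths here are hypothetical examples.
import hashlib
import shutil
from pathlib import Path

BACKUP = Path("/backups/nightly/app-db.bak")      # hypothetical source
DESTINATIONS = [
    Path("/mnt/secondary-site/app-db.bak"),       # hypothetical replica 1
    Path("/mnt/cloud-gateway/app-db.bak"),        # hypothetical replica 2
]

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(source: Path, destinations) -> bool:
    expected = sha256(source)
    ok = True
    for dest in destinations:
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest)
        if sha256(dest) != expected:
            print(f"checksum mismatch at {dest}")
            ok = False
    return ok

if __name__ == "__main__":
    print("all copies verified" if replicate(BACKUP, DESTINATIONS) else "verification failed")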

C-level leaders are now realising the need for IT resilience, whether they’re creating a disaster recovery strategy for the first time or updating an existing one. IT resilience enables businesses to power forward through any IT disaster, whether it stems from human error, natural disasters, or criminal activities such as ransomware attacks. However, many organizations are over-confident in what they believe to be IT resilience; in reality they have not invested enough in disaster recovery planning and preparation. The resulting high-profile IT failures can serve as a lesson for business leaders to ensure their disaster recovery plan is tough, effective, and allows true recovery to take place.

If it ain’t broke… test it anyway

Virtualization and cloud-based advancements have made disaster recovery simpler and more affordable. But it doesn’t stop there: organizations need to commit to testing disaster recovery plans consistently, or else the entire strategy is useless.
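A minimal sketch of such a recurring test is shown below: restore the newest backup into a scratch area, run a basic smoke check, and exit non-zero so a scheduler can alert when the drill fails. The directory layout and the smoke check are hypothetical stand-ins for a real restore procedure.

# Sketch of a recurring disaster recovery drill: restore the newest backup
# into a scratch directory and run a basic smoke check. The directory layout
# and the "smoke check" are hypothetical placeholders for a real restore.
import shutil
import sys
from pathlib import Path

BACKUP_DIR = Path("/backups/nightly")      # hypothetical backup location
SCRATCH = Path("/tmp/dr-drill")            # throwaway restore target

def latest_backup(backup_dir: Path) -> Path:
    backups = sorted(backup_dir.glob("*.bak"), key=lambda p: p.stat().st_mtime)
    if not backups:
        raise FileNotFoundError(f"no backups found in {backup_dir}")
    return backups[-1]

def restore_and_check() -> bool:
    src = latest_backup(BACKUP_DIR)
    SCRATCH.mkdir(parents=True, exist_ok=True)
    restored = SCRATCH / src.name
    shutil.copy2(src, restored)            # stand-in for a real restore step
    # Smoke check: the restored artifact exists and is not empty. A real
    # drill would start the application against it and run a test query.
    return restored.exists() and restored.stat().st_size > 0

if __name__ == "__main__":
    ok = restore_and_check()
    print("DR drill passed" if ok else "DR drill FAILED")
    sys.exit(0 if ok else 1)   # a scheduler can alert on a non-zero exit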

More of the Continuity Central post


13
Feb 17

TheWHIR – Why Does It Seem Like Airline Computers Are Crashing More?

Another week, another major airline is crippled by some kind of software glitch.

If you feel as if you’re hearing about these incidents more often, you are—but not necessarily because they’re happening more frequently.

Delta Air Lines Inc. suffered an IT outage that led to widespread delays and 280 flight cancellations on Jan. 29 and 30, a problem the carrier said was caused by an electrical malfunction. A week earlier, United Continental Holdings Inc. issued a 2 1/2-hour ground stop for all its domestic flights following troubles with a communication system pilots use to receive data.

These two shutdowns were the latest in what’s been a series of computer crack-ups over the past few years, including major system blackouts that hobbled Southwest Airlines Co. as well as Delta for several days last summer—affecting tens of thousands of passengers.

More of the WHIR post from Bloomberg


03
Feb 17

Data Center Knowledge – This Server’s Uptime Puts Your SLA to Shame

An unusual and noteworthy retirement from the IT industry is scheduled for April, Computerworld reports, when a fault-tolerant server from Stratus Technologies that has been running continuously for 24 years in Dearborn, Michigan, will be replaced in a system upgrade.

The server was set up in 1993 by Phil Hogan, an IT application architect for a steel product company now known as Great Lakes Works EGL.

Hogan’s server won a contest held by Stratus in 2010 to identify its longest-running server, back when Great Lakes Works was still called Double Eagle Steel Coating Co. (DESCO). While various redundant hardware components have been replaced over the years, Hogan estimates that close to 80 percent of the original system remains.

More of the Data Center Knowledge article from Chris Burt