02
Jun 17

Continuity Central – Revamping the business continuity profession: a response

Recently, Continuity Central published ‘Revamping the business continuity profession’, an article in which Charlie Maclean-Bristol looked at challenges faced by business continuity professionals and offered his suggestions for revamping the discipline. Here, David Lindstedt and Mark Armour, developers of the Adaptive Business Continuity methodology, offer their response to the article:

David Lindstedt: Naturally, most folks starting to embrace Adaptive Business Continuity will agree that traditional business continuity methods are not working and it’s time for a change. I totally agree that ‘resilience’ will not be the ‘savior’ of business continuity. As Charlie correctly points out, resilience is an inter-discipline, not a discipline on its own. A business continuity practitioner could run it, but so could anyone from any of the inter-disciplines like ERM, EM, IT DR, etc. The chief concern with resilience will always be: what are the boundaries to what gets included (from individual personal psychology to environmental sustainability to the entire content of an MBA program?) and how do you measure its effectiveness?

More of the Continuity Central article


03
May 17

ZDNet – Cloud v. Data Center: Key trends for IT decision-makers

Cloud-based compute, networking and storage infrastructure, and cloud-native applications are now firmly on the radar of CIOs — be they in startups, small businesses or large enterprises. So much so that, whereas a few years ago the question facing them was “Which workloads should I move to the cloud?”, it’s now becoming “Which, if any, workloads should I keep on-premises?”. While most organisations will probably end up pursuing a hybrid cloud strategy in the medium term, it’s worth examining this turnaround, and the reasons behind it.

The general background, as ZDNet has explored in recent special features, is the competitive pressure for organisations to undergo a digital transformation based on cloud-native applications and methods such as DevOps, in pursuit of improved IT and organisational performance.

More of the ZDNet article from Charles McLellan


01
May 17

Arthur Cole – The Reality of an Intelligent IoT

The Internet of Things (IoT) may be barely off the ground, but developers are already looking for ways to imbue the technology with high degrees of intelligence.

On one level, an intelligent IoT is an end unto itself, given that the scale and complexity of the data environment are beyond the capabilities of today’s management tools. But ultimately, the expectation is that much of the IoT will govern itself, and that includes the basic interactions between systems and users.

Zebra Technologies’ Tom Bianculli gave eWeek a good overview of all the ways in which intelligence is likely to affect the IoT. From the intelligent enterprise itself, capable of dynamic data streaming, real-time analytics and self-managing applications, to advances in health care, transportation, retail and virtually every other industry, the intelligent IoT has the potential to revolutionize the way we live, work and play.

More of the IT Business Edge article from Arthur Cole


26
Apr 17

Data Center Knowledge – Busy in the Data Center? Here’s How to Make Time for Learning

Continuing education should be a top priority for anyone involved with data centers, or the entire IT field for that matter. It’s especially important in industries such as healthcare. While new technologies and approaches don’t always save lives, they can certainly alter the landscape of the market. Times change, and so should you. While we may agree that continuing education is fundamental to both organizational success and the development of one’s career, it’s not always easy to fit it into a busy schedule.

Keep in mind, continuing education does not have to mean an additional university degree or even a new certification. Your CE approach can take a number of different forms.

More of the Data Center Knowledge article from Karen Riccio


24
Apr 17

Baseline – How Digital Strategy Gaps Hurt Customer Experience

The vast majority of companies recognize that digital customer experiences (CX) represent a make-or-break proposition in terms of competitive differentiation, but digital strategy shortcomings are limiting their ability to deliver, according to a recent survey from Dimension Data.

The resulting “Global Customer Experience Benchmarking Report” indicates that very few organizations are able to connect all CX channels. Most, in fact, still rely on dated resources such as telephone and email communications to support customers. Very few consider their company’s digital business strategy optimized. And, while most said customer analytics and connected customer journeys will greatly affect CX in the near future, the majority of businesses do not collect data to review and improve customer journey patterns.

“The digital dilemma is deepening, and organizations need to choose a path between digital crisis or redemption,” said Joe Manuele, Dimension Data’s group executive for CX and collaboration.

More of the Baseline slide show from Dennis McCafferty


11
Apr 17

Data Center Knowledge – Finding the Sweet Spot for Your Data Center

There’s certainly no shortage of options for expanding data center capacity these days. You can renovate an existing facility or add a modular unit onsite or offsite, build one from scratch, lease data center space, or move non-critical data and applications off your servers and into a cloud … and just about any combination of the above.

Which scenario is right for your company? Whatever makes the most sense for the business, said HPE’s Laura Cunningham during her Data Center World session, “Finding the Sweet Spot for Your Data Center.”

So, it’s imperative to know the future direction and financial preferences of your company before meeting face-to-face with a CIO, CEO or CFO to seek approval for any IT project.

More of the Data Center Knowledge post from Karen Riccio


17
Feb 17

Washington Post – Weather Service suffered ‘catastrophic’ outage; website stopped sending forecasts, warnings

On a day when a blizzard was pasting Maine and Northern California faced a dire flooding threat, several of the National Weather Service’s primary systems for sending out alerts to the public failed for nearly three hours.

Between 1:08 p.m. and 3:44 p.m. Eastern time Monday, the Weather Service stopped disseminating products over the Internet, including forecasts, warnings, radar and satellite imagery, and current conditions.

Updates to the Weather Service’s public-facing website, Weather.gov, also ceased.

In an email to staff on Tuesday, David Michaud, the director of the Weather Service’s Office of Central Processing, said a power failure had triggered the outage and characterized the impacts as “significant.” The cause was under review, a Weather Service spokesperson said.

“[I] want to ensure you that everyone involved is working hard to avoid these outages in the future and find ways to better communicate to employees across the agency in real time when outages occur,” Michaud’s email said.

More of the Washington Post article from Jason Samenow


13
Feb 17

TheWHIR – Why Does It Seem Like Airline Computers Are Crashing More?

Another week, another major airline is crippled by some kind of software glitch.

If you feel as if you’re hearing about these incidents more often, you are—but not necessarily because they’re happening more frequently.

Delta Air Lines Inc. suffered an IT outage that led to widespread delays and 280 flight cancellations on Jan. 29 and 30, a problem the carrier said was caused by an electrical malfunction. A week earlier, United Continental Holdings Inc. issued a 2 1/2-hour ground stop for all its domestic flights following troubles with a communication system pilots use to receive data.

These two shutdowns were the latest in what’s been a series of computer crack-ups over the past few years, including major system blackouts that hobbled Southwest Airlines Co. as well as Delta for several days last summer—affecting tens of thousands of passengers.

More of the WHIR post from Bloomberg


10
Feb 17

SearchCloudComputing – For enterprises, multicloud strategy remains a siloed approach

Although not mentioned in this article, enterprise cloud providers like Expedient are often key players in the multicloud mix. Enterprise clouds deliver VMware or Hyper-V environments that require little or no retraining for the infrastructure staff.

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but “multicloud” remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, gained a foothold inside enterprises, too. As a result, companies must balance management of the disparate environments with questions of how deep to go on a single platform, all while the notion of connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).

More of the SearchCloudComputing article


03
Feb 17

Data Center Knowledge – This Server’s Uptime Puts Your SLA to Shame

An unusual and noteworthy retirement from the IT industry is scheduled to take place in April, Computerworld reports, when a fault-tolerant server from Stratus Technologies that has been running continuously for 24 years in Dearborn, Michigan, is replaced in a system upgrade.

The server was set up in 1993 by Phil Hogan, an IT application architect for a steel product company now known as Great Lakes Works EGL.

Hogan’s server won a 2010 contest held by Stratus to identify its longest-running server, back when Great Lakes Works was called Double Eagle Steel Coating Co. (DESCO). While various redundant hardware components have been replaced over the years, Hogan estimates that close to 80 percent of the original system remains.

More of the Data Center Knowledge article from Chris Burt