13
Sep 17

Fast Company – RIP, Jerry Pournelle, a pioneer of tech journalism for the non-geeky

In 1980, anyone who used a PC was, by definition, something of a nerd. But Byte, the leading computer magazine of the time, saw a need for a column that emphasized the benefits of the machines rather than their innards. It found its author in celebrated science-fiction author Jerry Pournelle, whose Byte writings, best known by the name “Chaos Manor,” were not very technical; profoundly first-person and opinionated; focused on what you could do with a PC; and prone to going off on extended tangents that were as defining an aspect of the columns as the parts that more obviously belonged in a publication called Byte.

More of the Fast Company article


06
Sep 17

IT Business Edge – Clouds Vie for Critical Workloads

Editor’s note: Like the Skytap illustration in the article, Expedient clients are using public and private cloud services RIGHT NOW to improve application performance, reduce maintenance workloads, and improve uptime. These organizations don’t have the luxury of waiting for their development teams or primary software vendors to rewrite their mission-critical apps from the ground up.

It seems that cloud providers are no longer fooling around when it comes to getting enterprise workloads. With new migration packages and services optimized for mission-critical data and applications, CSPs large and small are eager for your business.

The question for most enterprises, however, is whether to stick with the hyperscale providers like Amazon and Microsoft, or go with a not-so-large firm that may have a bit more flexibility when it comes to matching infrastructure with customized user needs.

Skytap, for one, is hoping that the one-size-fits-all approach will not be enough for most enterprises as they embrace crucial service offerings like Big Data and the IoT. CEO Thor Culverhouse argues that the cloud giants are overlooking key market segments like the legions of mission-critical apps that are stuck on legacy systems but will have to move to hybrid infrastructure in order to keep up with the speed of business activity. His plan is to offer specialized infrastructure optimized for the 75 percent of the enterprise workload that is not likely to become cloud-native any time soon.

More of the IT Business Edge article from Arthur Cole


02
Aug 17

IT World – 7 things your IT disaster recovery plan should cover

Enterprise networks and data access can be knocked out without warning, thanks to natural and man-made disasters. You can’t stop them all from happening, of course, but with a good disaster recovery plan you can be better prepared for the unexpected.

Hurricanes. Tornadoes. Earthquakes. Fires. Floods. Terrorist attacks. Cyberattacks. You know any of these could happen to your business at any time. And you’ve probably got a disaster recovery (DR) plan in place to protect your enterprise’s data, employees and business.

But how thorough is your DR plan? When was it last updated and tested? Have you taken into account new technologies and services that can make it easier to recover from disaster? The following are 7 things your IT disaster recovery plan should include.

1. An analysis of all potential threats and possible reactions to them

Your DR plan should take into account the complete spectrum of “potential interrupters” to your business, advises Phil Goodwin, research director of data protection, availability and recovery for research firm IDC. (IDC is part of IDG, which publishes CSO.)

More of the IT World post from James A Martin


14
Jul 17

Continuity Central – Reasons to Eliminate the Business Impact Analysis

Adaptive BC, a website established to develop and promote a new approach to business continuity, has been calling for the elimination of the BIA. In this article David Lindstedt, one of the founders of Adaptive BC, explains why.

The business impact analysis (BIA) has been a staple of business continuity for decades. In that time, the BIA has grown, expanded, and become rather nebulous in its scope, objectives, and value. By exploring both its initial purpose and current implementation, we can conclude that early benefits gained from the BIA no longer outweigh the disadvantages, and that practitioners ought to eliminate the use of the BIA as much and as soon as feasible.

Part one: genesis

What was the BIA when it came into use? The original intent of the BIA was to estimate the impact that a significant incident would have on the business. More accurately, it was to estimate the different types of impact that a significant incident would have on different parts of the business. As the BCI DRJ Glossary states, even today the BIA is defined simply as the “Process of analyzing activities and the effect that a business disruption might have on them.”

More of the Continuity Central article


13
Jul 17

Continuity Central – DNS attacks are posing an increasing threat to businesses

EfficientIP has published the results of a survey that was conducted for its 2017 Global DNS Threat Survey Report. It explored the technical and behavioural causes for the rise in DNS threats and their potential effects on businesses across the world.

Major issues highlighted by the study, now in its third year, include a lack of awareness as to the variety of attacks; a failure to adapt security solutions to protect DNS; and poor responses to vulnerability notifications. These concerns will not only be subject to regulatory changes, but also create a higher risk of data loss, downtime or compromised reputation.

According to the report, carried out among 1,000 respondents across APAC, Europe and North America, 94 percent of respondents claim that DNS security is critical for their business. Yet 76 percent of organizations have been subjected to a DNS attack in the last 12 months, and 28 percent suffered data theft.

More of the Continuity Central post


07
Jun 17

TechTarget – Building an IT resiliency plan into an always-on world

The concepts of recovery point objectives and recovery time objectives are becoming increasingly obsolete. Today’s highly connected world has forced most organizations to ensure IT resiliency and make their resources continuously available. More importantly, the cost of downtime continues to increase and has become unacceptable and even unaffordable for many organizations.

A 2016 study by the Ponemon Institute estimated the total cost of a data center outage at about $740,357 per incident, slightly higher than a similar 2015 estimate by cloud backup and disaster-recovery-as-a-service provider Infrascale. Broken down further, the study calculated that data center outages cost businesses an average of $8,851 per minute.

For large companies, the losses can be staggering. One 2016 outage cost Delta Air Lines $150 million.

The study went on to state that it takes, on average, 18.5 hours for a business to recover from a disaster. Given the per-minute price of an outage, the cost of recovering from a disaster can be staggering. So it is hardly surprising that the IT industry is transitioning away from legacy backup and recovery planning in favor of disaster recovery and business continuity planning.
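Combining the two figures above gives a quick sense of scale. A back-of-the-envelope sketch (the $8,851-per-minute and 18.5-hour numbers are the article’s; the calculation itself is mine, for illustration only):

```python
# Rough downtime-cost estimate using the figures cited in the article.
COST_PER_MINUTE = 8_851   # Ponemon 2016 average cost per minute, USD
RECOVERY_HOURS = 18.5     # average time to recover from a disaster

recovery_minutes = RECOVERY_HOURS * 60
total_cost = COST_PER_MINUTE * recovery_minutes
print(f"{recovery_minutes:.0f} minutes of downtime -> ${total_cost:,.0f}")
# An average-length recovery works out to roughly $9.8 million.
```

At those rates, even shaving a few hours off recovery time pays for a great deal of DR investment.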

More of the TechTarget article from Brien Posey


31
May 17

IT Business Edge – Ensuring IT and Legal Are on the Same Page

As I’ve mentioned lately, cybersecurity is dependent on humans. Much of that revolves around human behavior and how cybercriminals prey on our mistakes, laziness, and dedication to multi-tasking. Yet, there are other areas where humans directly affect cybersecurity; one is communication.

I sat in on a session at the Enfuse 2017 conference called “Can I Get a Translation?” The discussion centered around the need for legal departments and the IT or security teams to speak the same language when talking about cybersecurity.

One of the problems is that IT and legal have different interests, the panel explained. Legal, for example, is looking for potential smoking guns in the data, but that’s not IT or the security team’s goal. But if that data isn’t stored or protected correctly, you know which department is going to get blamed, right?

More of the IT Business Edge article from Sue Marquette Poremba


08
May 17

ZDNet – Cloud and the New CIO

Cloud changes everything, and never more so than in the role of the CIO, as the recently released State of the CIO 2017 report reveals.

As the report points out, CIOs still perform the delicate balancing act “between crafting technology strategy and driving business innovation while overseeing routine IT functional tasks such as cost control, vendor negotiation, crisis management, and operational improvements.”

However, although not explicitly stated, it is implicit that cloud services will continue to play a large part in making the CIO more efficient. For example, cloud computing is now the default way for enterprises to deliver new services, whether or not they are officially sanctioned by and acquired through the IT department. This plays to the LOB manager’s need to ‘just get things done’ because convenience and speed will – as so many commentators have already pointed out – always trump security and process. We’ll return to this point a bit later.

More of the ZDNet article from Manek Dubash


04
May 17

Continuity Central – How personal biases can affect business continuity decisions

Managerial biases such as overconfidence and myopia can explain many failures in business decisions, but new research shows how personal biases can be used to improve decision making.

Conventional approaches to eliminating biases focus on ‘changing the mind’: if people can be trained to recognise their biases and think more logically, better outcomes are likely. However, increasing evidence suggests that such a de-biasing approach is not enough for effective decisions, because it only deals with our conscious half – what Daniel Kahneman famously called System 2. Our automatic half – Kahneman’s System 1 – also plays a role in determining a decision, and it is sensitive to our surrounding environment. Even contextual factors, such as the weather being sunny or cloudy, can significantly influence the decisions made.

More of the Continuity Central article


02
Mar 17

ITWorld – Why DRaaS is a better defense against ransomware

Recovering from a ransomware attack doesn’t have to take days

It’s one thing for a user’s files to get infected with ransomware; it’s quite another to have a production database or mission-critical application infected. But restoring these databases and apps from a traditional backup solution (appliance, cloud or tape) can take hours or even days, which can cost a business tens or hundreds of thousands of dollars. Dean Nicolls, vice president of marketing at Infrascale, shares some tangible ways disaster recovery as a service (DRaaS) can pay big dividends and quickly restore systems in the wake of a ransomware attack.

Quickly pinpointing the time of infection

With a cloud backup, it takes a while to determine if your application has been corrupted. Admins must download the application files from the cloud (based on your most recent backup), rebuild, and then compile the database or application.
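One reason pinpointing the infection time matters: once you know which snapshots are clean, you can roll back to the most recent one instead of rebuilding from scratch. A minimal sketch of that idea, assuming snapshots are ordered oldest-to-newest and that infection is monotonic (once a snapshot is infected, all later ones are too); the snapshot names and the `is_infected` check are hypothetical stand-ins, not any DRaaS product’s actual API:

```python
def find_last_clean(snapshots, is_infected):
    """Return the most recent snapshot that is NOT infected, or None.

    Binary search: checks O(log n) snapshots instead of scanning all of
    them, which matters when each check means mounting a backup image.
    """
    lo, hi = 0, len(snapshots) - 1
    last_clean = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_infected(snapshots[mid]):
            hi = mid - 1              # infection began at or before mid
        else:
            last_clean = snapshots[mid]
            lo = mid + 1              # a later snapshot may still be clean
    return last_clean

# Usage: hourly snapshots; ransomware struck during hour 5.
snaps = [f"2017-03-01T{h:02d}:00" for h in range(10)]
infected = set(snaps[5:])
print(find_last_clean(snaps, lambda s: s in infected))  # 2017-03-01T04:00
```

The payoff is the recovery-point decision: restore everything after the last clean snapshot, and only the files changed since then need forensic attention.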

More of the ITWorld post from Ryan Francis