01
Apr 16

The Register – SMBs? Are you big enough to have a serious backup strategy?

One of the TLAs (three-letter acronyms) we come across all the time in IT is CIA. It’s not, in this context, a shady American intelligence agency: as far as we’re concerned it stands for Confidentiality, Integrity and Availability – the three strands you need to consider as part of your security and data management policies and processes.

Most organisations tend to focus on confidentiality. And that’s understandable, because a guaranteed way for your company to become super-famous is for confidential data to be made publicly available and for the press to find out – just ask TalkTalk. Site outages, on the other hand, will often make the news (particularly if you’re a prominent company like Dropbox or Microsoft), but they’re generally forgotten the moment the owner puts out a convincing statement saying that their data centre fell into a sinkhole or that they were hit by a type of DDoS attack never previously seen – as long as that statement says: “… and there was never any risk of private data being exposed”.

Internally, though, you care about the integrity and availability of your data. By definition, the data you process needs to be available and correct – otherwise you wouldn’t need it to do your company’s work. And guaranteeing this is a pain in the butt – for companies of all sizes.

More of The Register post from Dave Cartwright


31
Mar 16

Data Center Knowledge – How to Avoid the Outage War Room

Most IT pros have experienced it: the dreaded war room meeting that starts immediately after an outage hits a critical application or service. So how do you avoid it? The only reliable way is to avoid the outage in the first place.

First, you need to build in redundancy. Most enterprises have already done much of this work; building redundancy and disaster recovery into systems has been a best practice for decades. Avoiding single points of failure (SPOF) is simply mandatory in mission-critical, performance-sensitive, highly distributed and dynamic environments.
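As a rough, purely illustrative sketch of the redundancy idea (not from the article), the snippet below retries a request against a second, redundant endpoint when the first one fails. The endpoint URLs and the timeout value are hypothetical placeholders.

```python
# Minimal sketch of client-side failover across redundant endpoints.
# The URLs and timeout below are hypothetical placeholders.
import urllib.request
import urllib.error

ENDPOINTS = [
    "https://app-primary.example.com/health",
    "https://app-secondary.example.com/health",
]

def fetch_with_failover(urls, timeout=2.0):
    """Try each redundant endpoint in turn; return the first successful response."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this endpoint is unavailable; try the next one
    raise RuntimeError(f"All redundant endpoints failed: {last_error}")

if __name__ == "__main__":
    print(fetch_with_failover(ENDPOINTS))
```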

Next, you need to assess spikes in load. Most organizations have put methods in place to “burst” capacity. This most often takes the form of a hybrid cloud, where the base system runs on premises and the extra capacity is rented as needed. It can also take the form of hosting the entire application in a public cloud such as Amazon, Google or Microsoft, but that carries many downsides, including the need to re-architect the applications to be stateless so they can run on an inherently unreliable infrastructure.
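As a loose sketch of that burst decision (not from the article), the snippet below works out how many rented instances are needed to cover demand above the on-premises ceiling. The capacity numbers and the provisioning call are made-up placeholders, not a real provider API.

```python
# Sketch of a hybrid-cloud "burst": serve the base load on premises and rent
# extra capacity only when demand exceeds what on-prem can handle.
# ON_PREM_CAPACITY, INSTANCE_CAPACITY and provision_cloud_instance() are
# hypothetical placeholders.

ON_PREM_CAPACITY = 400      # requests/sec the on-prem cluster can serve
INSTANCE_CAPACITY = 50      # requests/sec one rented cloud instance adds

def instances_to_burst(demand_rps: int) -> int:
    """Return how many cloud instances are needed to cover the overflow."""
    overflow = max(0, demand_rps - ON_PREM_CAPACITY)
    return -(-overflow // INSTANCE_CAPACITY)   # ceiling division

def provision_cloud_instance() -> None:
    print("provisioning one burst instance (placeholder call)")

def scale_for(demand_rps: int) -> None:
    for _ in range(instances_to_burst(demand_rps)):
        provision_cloud_instance()

if __name__ == "__main__":
    scale_for(620)   # 220 rps of overflow -> 5 burst instances
```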

More of the Data Center Knowledge article from Bernd Harzog


30
Mar 16

CIO Insight – Do You Know Where Your Critical Data Lives?

Engage with others to assess needs from differing perspectives: business operations, customers, regulators/auditors and shareholders. Keep this list updated because it evolves.

In an era of continuous business operations, being offline has become unacceptable. Yet this drive for high availability, although exciting, also poses serious risks to the security of your data. Your data may be among the most important assets to your business. Any form of downtime can be detrimental to the livelihood of your business because it affects reputation and revenue, said Derek Brost, director of engineering at Bluelock.

“Don’t wait until a disaster strikes to take action,” he warned. “If you’re experiencing pressure to improve your current IT program, don’t fret. [These tips] should set you on the right path to a secure business environment, one with optimized recovery.”

Bluelock provides Disaster Recovery-as-a-Service for complex environments and sensitive data to help companies mitigate risk with confidence. Confidence begins with a plan that works, Brost said. These tips should help the always-on business to proceed with confidence in the face of an intrusion.

More of the CIO Insight post from Karen A. Frenkel


29
Mar 16

Baseline – Data Center Outages Result in Shocking Expenses

The average cost of data center outages has increased by tens of thousands of dollars in recent years, according to recent research published by the Ponemon Institute and Emerson Network Power. The accompanying report, “2016 Cost of Data Center Outages,” reveals that unplanned outages usually last longer than a typical two-hour movie and cost organizations thousands of dollars for every minute of downtime. Uninterruptible power supply (UPS) system failures and, of course, hackers account for most of these incidents, causing business disruption, lost revenue and a slowdown in productivity. With continued growth in cloud computing and the Internet of Things (IoT), a market expected to grow to $1.7 trillion by 2020 from about $656 billion in 2014, the data center will continue to be crucial in leveraging business-benefiting opportunities. So IT departments are under pressure to reduce these outages. “As organizations … invest millions in data center development, they are exploring new approaches to data center design and management to both increase agility and reduce the cost of downtime,” according to the report.
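To make the per-minute math concrete, here is a back-of-the-envelope calculation. The per-minute cost and the outage length below are illustrative assumptions, not figures taken from the Ponemon/Emerson report.

```python
# Back-of-the-envelope outage cost: total cost = duration x cost per minute.
# Both inputs are illustrative assumptions, not values quoted from the
# "2016 Cost of Data Center Outages" report.

COST_PER_MINUTE_USD = 8_000   # assumed blended cost of downtime per minute
OUTAGE_MINUTES = 130          # assumed outage a bit longer than a two-hour movie

def outage_cost(minutes: float, per_minute_usd: float) -> float:
    return minutes * per_minute_usd

if __name__ == "__main__":
    total = outage_cost(OUTAGE_MINUTES, COST_PER_MINUTE_USD)
    print(f"Estimated cost of a {OUTAGE_MINUTES}-minute outage: ${total:,.0f}")
```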

More of the Baseline article from Dennis McCafferty


23
Mar 16

SearchDataCenter – The right infrastructure for fast and big data architectures

The newer fast data architectures differ significantly from big data architectures and from the tried-and-true online transaction processing tools that fast data supplements. Understanding how big data and fast data change your requirements will inform your hardware and software choices.

Big data architectures

Big data is about analyzing and gaining deeper insights from much larger pools of data than enterprises typically gathered in the past. Much of the data (e.g., social-media data about customers) is accessible in public clouds. Working with this data, in turn, emphasizes speedy access and deemphasizes consistency, which has led to a wide array of Hadoop big data tools. Thus, the following changes in architecture and emphasis are common:

More of the SearchDataCenter article from Wayne Kernochan


18
Mar 16

SearchCloudComputing – Verizon Cloud joins casualty list amid public IaaS exodus

Why do YOU think the big guys are shutting down their cloud operations?

Verizon is the latest large-scale IT vendor to quietly shutter its public cloud after its splashy entry to the market several years ago.

Customers this week received a letter informing them that Verizon’s public cloud, reserved performance and marketplace services will be closed on April 12. Any virtual machines running on the public Verizon Cloud will be shut down and no content on those servers will be retained.

The move isn’t particularly surprising. Despite once-lofty ambitions, Verizon acknowledges its public cloud offering is not a big part of its cloud portfolio; a year ago the firm began to emphasize its private cloud services, even before its public cloud became generally available. Other large vendors, such as Dell and Hewlett Packard Enterprise, have similarly shut down their public clouds.

More of the SearchCloudComputing article from Trevor Jones


11
Mar 16

Data Center Knowledge – The Life Cycle of a Data Center

Your data center is alive.

It is a living, breathing, and sometimes even growing entity that constantly must adapt to change. The length of its life depends on use, design, build, and operation.

Equipment will be replaced, changed and modified to meet your data center’s individual specifications, balancing total cost of ownership against risk and redundancy measures.

Just as with a human being, the individual care and love you show your data center can lengthen the life of your partnership.

How best to utilize and tailor your data center to extend its life cycle is the subject of Morrison Hershfield Critical Facilities Practice Lead Steven Shapiro’s upcoming Data Center World presentation, “The Life Cycle of a Data Center”.

More of the Data Center Knowledge post from Karen Riccio


17
Feb 16

VMTurbo – What’s the Promise of Orchestration?

In my conversations over 2015, I found that one of the top-of-mind goals for many directors and CIOs this year is fully automating the orchestration of their environments. The lack of agility and automation when it comes to provisioning new workloads is a common pain felt across IT staff.

Whether the plan is to expand the VMware suite through vRealize Automation, pursue a third-party technology like Chef, Puppet or CloudForms, or move into a full IaaS or PaaS environment through OpenStack or Cloud Foundry, the objective is to speed up the data center’s auto-provisioning capabilities to meet the rapidly growing demand for faster, more responsive applications delivered more quickly. The benefits of moving to automated orchestration, however, also create new challenges.
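As a rough sketch of the auto-provisioning idea (not the API of vRealize Automation, Chef, Puppet, CloudForms, OpenStack or Cloud Foundry), the snippet below takes a declarative VM request and hands it to a placeholder provisioning call. Every name in it is hypothetical.

```python
# Sketch of automated provisioning: a declarative request arrives and an
# orchestration layer turns it into a running workload with no human in the
# loop. The request fields and provision_vm() are hypothetical placeholders,
# not the interface of any of the tools named above.

VM_REQUEST = {
    "name": "test-app-01",
    "cpus": 2,
    "memory_gb": 8,
    "environment": "test",
}

def validate(request: dict) -> None:
    """Reject requests that are missing required fields."""
    required = {"name", "cpus", "memory_gb", "environment"}
    missing = required - request.keys()
    if missing:
        raise ValueError(f"request missing fields: {sorted(missing)}")

def provision_vm(request: dict) -> str:
    """Placeholder for the call an orchestrator would make to the platform."""
    return (f"{request['name']} ({request['cpus']} vCPU / "
            f"{request['memory_gb']} GB) provisioned in {request['environment']}")

if __name__ == "__main__":
    validate(VM_REQUEST)
    print(provision_vm(VM_REQUEST))
```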

Why Orchestrate?

To answer this question, let me throw out a scenario that many can probably relate to today. An administrator logs into his Outlook first thing Friday morning, and at the top of his inbox is a request for a new VM from a coworker, who plans to begin testing a new application in the next couple of weeks per the CIO’s initiative.

More of the VMTurbo post from Matt Vetter


16
Feb 16

Data Center Knowledge – How Data Center Trends Are Forcing a Revisit of the Database

Ravi Mayuram is Senior Vice President of Products and Engineering at Couchbase.

Data centers are like people: no two are alike, especially now. A decade of separating compute, storage, and even networking services from the hardware that runs them has left us with x86 pizza boxes stacked next to, or connected with, 30-year-old mainframes. And why not? Much of the tough work is done by software tools that define precisely how and when hardware is to be used.

From virtual machines to software-defined storage and network functions virtualization, these layers of abstraction fuse hardware components into something greater and easier to control.

More of the Data Center Knowledge post from Ravi Mayuram


15
Feb 16

ZDNet – A call for more cloud computing transparency

In a recent research note, Gartner argued that the revenue claims of cloud vendors are increasingly hard to digest. Gartner said enterprises shouldn’t take vendor cloud revenue claims at face value and should instead evaluate providers based on strategy and services (naturally, using tools from the research firm).

A week ago, I argued that Google should provide some kind of cloud run rate just so customers can get a feel for scale and how it compares to Amazon Web Services, Microsoft’s Azure and IBM. Oh well. Unlike Gartner, I think the revenue figures matter somewhat, but are far from the deciding factor.

But debating revenue run rates and nuances between the private and public cloud variations misses the point. What’s missing from the cloud equation today is better transparency.

With that issue in mind, here’s where I think we need to go in terms of cloud transparency:

PUBLIC FACING
Revenue reporting from cloud vendors. Amazon Web Services breaks out its results with straightforward revenue and earnings figures. IBM has an “as-a-service” run rate. Microsoft has a commercial cloud run rate. And Oracle, to its credit, has line-by-line breakdowns of the various as-a-service flavors: infrastructure, platform and software.

More of the ZDNet post from Larry Dignan