10
Feb 17

SearchCloudComputing – For enterprises, multicloud strategy remains a siloed approach

Although not mentioned in this article, enterprise cloud providers like Expedient are often key players in the multicloud mix. Enterprise clouds deliver VMware or Hyper-V environments that require little or no retraining for infrastructure staff.

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but “multicloud” remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, have gained a foothold inside enterprises, too. As a result, companies must balance management of these disparate environments against questions of how deep to go on a single platform, all while connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).

More of the SearchCloudComputing article


03
Feb 17

Data Center Knowledge – This Server’s Uptime Puts Your SLA to Shame

An unusual and noteworthy retirement from the IT industry is scheduled to take place in April, Computerworld reports, when a fault-tolerant server from Stratus Technologies running continuously for 24 years in Dearborn, Michigan, is replaced in a system upgrade.

The server was set up in 1993 by Phil Hogan, an IT application architect for a steel product company now known as Great Lakes Works EGL.

In 2010, Hogan’s server won a contest held by Stratus to identify its longest-running server; at the time, the company was called Double Eagle Steel Coating Co. (DESCO). While various redundant hardware components have been replaced over the years, Hogan estimates that close to 80 percent of the original system remains.

More of the Data Center Knowledge article from Chris Burt


31
Jan 17

The Register – Suffered a breach? Expect to lose cash, opportunities, and customers – report

More than a third of organisations that experienced a breach last year reported substantial customer, opportunity and revenue loss.

The finding is one of the key takeaways from the latest edition of Cisco’s annual cybersecurity report, which also suggests that defenders are struggling to improve defences against a growing range of threats.

The vast majority (90 per cent) of breached organisations are improving threat defence technologies and processes following attacks by separating IT and security functions (38 per cent), increasing security awareness training for employees (38 per cent), and implementing risk mitigation techniques (37 per cent). The report surveyed nearly 3,000 chief security officers (CSOs) and security operations leaders from 13 countries. CSOs cite budget constraints, poor compatibility of systems, and a lack of trained talent as the biggest barriers to advancing their security policies.

More than half of organisations faced public scrutiny after a security breach. Operations and finance systems were the most affected, followed by brand reputation and customer retention. For organisations that experienced an attack, the effect can be substantial: 22 per cent of breached organisations lost customers and 29 per cent lost revenue, with 38 per cent of that group losing more than 20 per cent of revenue. A third (33 per cent) of breached organisations lost business opportunities.
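To make the compounding of those figures explicit, here is a quick back-of-the-envelope sketch (Python, illustrative arithmetic only, using the percentages as reported):

    # 29% of breached organisations lost revenue, and 38% of that
    # group lost more than 20% of it.
    lost_revenue = 0.29
    lost_over_20_pct_of_group = 0.38
    share_of_all_breached = lost_revenue * lost_over_20_pct_of_group
    print(f"{share_of_all_breached:.1%}")  # ~11.0%

In other words, roughly one in nine breached organisations lost more than a fifth of its revenue.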

More of The Register article from John Leyden


30
Dec 16

GigaOM – The enterprise CIO is moving to a consumption-first paradigm

Take yourself back a couple of decades and the IT industry looked very different than it does today. Back then, the number of solution choices was relatively limited and available only to those with the finances to afford them. Many core services had to be built from the ground up. Why? The IT marketplace simply lacked the volume and maturity to deliver core services off the shelf. Today, that picture is very different!

For example, consider email. Back in 1995, Microsoft Exchange was just a fledgling product that was less than two years old. The dominant email solutions were cc:Mail (acquired by Lotus in 1991), Lotus Notes (acquired by IBM in 1995) along with a myriad of mainframe, mini and UNIX-based mail servers.

Every enterprise had to set up and manage its own email environment. Solutions like Google Apps and Office 365 simply did not exist. There was no real alternative…except for outsourcing.

More of the GigaOM post from Tim Crawford


15
Dec 16

ComputerWeekly – IT Priorities 2017: What will IT decision-makers be focusing on?

Each year, TechTarget looks at how CIOs and senior IT decision-makers will be investing in IT in the 12 months ahead.

Budgets for staff and on-premises servers are falling as CIOs focus on cloud computing, according to TechTarget’s IT Priorities 2017 survey.

Most of the 353 people surveyed said their IT budgets would remain the same. Only 17% said their budget would increase by more than 10%, 16% said their budget would increase by 5% to 10%, and 9% said their budget would decrease.

The survey found that most of the budget increases would be invested in cloud services (43%), software (43%) and disaster recovery (30%).

More of the ComputerWeekly post from Cliff Saran


09
Dec 16

Continuity Central – C-Level and IT pros disagree on organizations’ ability to recover from a disaster: Evolve IP survey

When it comes to assessing an organization’s ability to recover from a disaster, a significant disconnect exists between C-level executives and IT professionals. While nearly 7 in 10 CEOs, CFOs and COOs feel their organization is very prepared to recover from a disaster, less than half of IT pros (44.5 percent) are as confident, according to a technology survey conducted by Evolve IP. The survey of more than 500 executives and IT professionals uncovered factors, including compliance requirements and the use of hosted solutions, that contribute to an organization’s overall disaster recovery confidence.

Disaster recovery compliance was a clear driver of confidence in the ability to recover IT and related assets in the event of an incident. In fact, 67 percent of respondents in banking, 58 percent in government and 55 percent at technology companies feel very prepared; within these groups, disaster recovery compliance was noted as a requirement by 97 percent, 73.5 percent and 71 percent, respectively.

More of the Continuity Central post


07
Dec 16

Baseline – Why IT Pros Feel Unprepared for Disasters

While most C-level executives feel their organization is “very prepared” for a potential systems-crashing disaster, IT professionals sharply disagree, according to a recent survey from Evolve IP. The “2016 Evolve IP Disaster Recovery and Business Continuity Survey” report indicates that a significant number of companies have suffered from a major incident that required disaster recovery (DR) over the past year—sometimes resulting in six-figure losses. Many tech employees indicate that a lack of DR budgeting leaves them unprepared for disruptions caused by hardware failures, server issues, power outages, environmental events, human error and targeted cyber-attacks. And a great many organizations still rely on old-school recovery methods such as backup tapes, instead of newer cloud-based solutions.

There is, however, notable interest in Disaster Recovery as a Service (DRaaS), despite the fact that only about half of C-level executives have heard of the term. “The lack of DR education at the executive level—and the likely related lack of budget—poses a real risk to today’s businesses,” according to the report. “These factors are further exacerbated by a dramatic increase in targeted attacks, continued reliance on aging tape backups, as well as internal hardware that remains highly susceptible to failure.”

More of the Baseline slideshow from Dennis McCafferty


02
Dec 16

Data Center Knowledge – The Mission Critical Cloud: Designing an Enterprise Cloud

Today, many organizations are looking at cloud through a new lens. Specifically, they are looking to cloud to enable a service-driven architecture capable of keeping up with enterprise demands. With that in mind, we’re seeing businesses leverage more cloud services to help them stay agile and competitive. However, the challenge revolves around uptime and resiliency, and it is compounded by often-complex enterprise environments.

When working with cloud and data center providers, it’s critical to understand just how costly an outage can be. Consider this – only 27% of companies received a passing grade for disaster readiness, according to a 2014 survey by the Disaster Recovery Preparedness Council. At the same time, increased dependency on data center and cloud providers means that outages and downtime are growing costlier over time. Ponemon Institute and Emerson Network Power have just released the results of the latest Cost of Data Center Outages study. The study, previously published in 2010 and 2013, continues to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (a 38 percent net change, per the study).
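For reference, the percent change implied by the two endpoint averages works out as follows (a minimal sketch; it yields roughly 46 percent, so the study’s quoted 38 percent net change is presumably calculated on a different basis):

    # Percent change between the study's two reported endpoint averages.
    cost_2010 = 505_502  # USD, average cost of an outage in 2010
    cost_2016 = 740_357  # USD, average cost of an outage today
    change = (cost_2016 - cost_2010) / cost_2010
    print(f"{change:.1%}")  # ~46.5%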

More of the Data Center Knowledge post from Bill Kleyman


30
Nov 16

ComputerWeekly – Disaster recovery testing: A vital part of the DR plan

Disaster recovery provision is worthless unless you test out your plans. In this two-part series, Computer Weekly looks at disaster recovery testing in virtualised datacentres.

IT has become critical to the operation of almost every company that offers goods and services to businesses and consumers.

We all depend on email to communicate, productivity software (such as Microsoft Word and Excel) for our documents and data, plus a range of applications that manage internal operations and customer-facing platforms such as websites and mobile apps.

Disaster recovery – which describes the continuation of operations when a major IT problem hits – is a key business IT process that has to be implemented in every organisation.

First of all, let’s put the impact of not having effective disaster recovery in perspective.

Estimates of the cost of application and IT outages vary widely, with some figures quoting around $9,000 per minute.
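At that rate, even a short outage gets expensive quickly. A rough extrapolation (illustrative only, assuming the quoted $9,000-per-minute figure):

    # Rough outage-cost extrapolation from the quoted per-minute figure.
    cost_per_minute = 9_000  # USD
    for label, minutes in [("10 minutes", 10), ("1 hour", 60), ("8 hours", 480)]:
        print(f"{label}: ${cost_per_minute * minutes:,}")
    # 10 minutes: $90,000 / 1 hour: $540,000 / 8 hours: $4,320,000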

More of the ComputerWeekly post from Chris Evans


16
Nov 16

ZDNet – Cloud will account for 92 percent of datacenter traffic by 2020

Businesses are migrating to cloud architectures at a rapid clip and by 2020, cloud traffic will take up 92 percent of total data center traffic globally, according to Cisco’s Global Cloud Index report.

The networking giant predicts that cloud traffic will rise 3.7-fold, from 3.9 zettabytes (ZB) per year in 2015 to 14.1 ZB per year by 2020.
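That forecast implies a compound annual growth rate of roughly 29 to 30 percent, as a quick sketch from the article’s rounded figures shows:

    # Implied compound annual growth rate (CAGR), 3.9 ZB (2015) -> 14.1 ZB (2020).
    start_zb, end_zb, years = 3.9, 14.1, 5
    cagr = (end_zb / start_zb) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # ~29.3% per year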

“The IT industry has taken cloud computing from an emerging technology to an essential scalable and flexible networking solution. With large global cloud deployments, operators are optimizing their data center strategies to meet the growing needs of businesses and consumers,” said Doug Webster, VP of service provider marketing for Cisco, in a press release. “We anticipate all types of data center operators continuing to invest in cloud-based innovations that streamline infrastructures and help them more profitably deliver web-based services to a wide range of end users.”

Breaking things down, Cisco expects business workloads to continue to dominate data center applications in 2020, though their share of overall workloads will decrease from 79 percent to 72 percent.

More of the ZDNet article from Natalie Gagliordi