30
Dec 16

GigaOM – The enterprise CIO is moving to a consumption-first paradigm

Take yourself back a couple of decades and the IT industry looked very different than it does today. Back then, the number of solution choices was relatively limited, and those choices were available only to organizations with the finances to afford them. Many of the core services had to be built from the ground up. Why? The IT marketplace for core services simply didn’t have the volume or maturity it has now. Today, that picture is very different!

For example, consider email. Back in 1995, Microsoft Exchange was just a fledgling product that was less than two years old. The dominant email solutions were cc:Mail (acquired by Lotus in 1991), Lotus Notes (acquired by IBM in 1995) along with a myriad of mainframe, mini and UNIX-based mail servers.

Every enterprise had to set up and manage its own email environment. Solutions like Google Apps and Microsoft 365 simply did not exist. There was no real alternative…except for outsourcing.

More of the GigaOM post from Tim Crawford


15
Dec 16

ComputerWeekly – IT Priorities 2017: What will IT decision-makers be focusing on?

Each year, TechTarget looks at how CIOs and senior IT decision-makers will be investing in IT in the 12 months ahead

Budgets for staff and on-premises servers are falling as CIOs focus on cloud computing, according to TechTarget’s IT Priorities 2017 survey.

Most of the 353 people surveyed said their IT budgets would remain the same. Only 17% said their budget would increase by more than 10%, 16% said their budget would increase by 5% to 10%, and 9% said their budget would decrease.

The survey found that most of the budget increases would be invested in cloud services (43%), software (43%) and disaster recovery (30%).

More of the ComputerWeekly post from Cliff Saran


09
Dec 16

Continuity Central – C-Level and IT pros disagree on organizations’ ability to recover from a disaster: Evolve IP survey

When it comes to assessing an organization’s ability to recover from a disaster, a significant disconnect exists between C-Level executives and IT professionals. While nearly 7 in 10 CEOs, CFOs or COOs feel their organization is very prepared to recover from a disaster, less than half of IT pros (44.5 percent) are as confident, a technology survey conducted by Evolve IP reports. The survey of more than 500 executives and IT professionals uncovered factors, including compliance requirements and use of hosted solutions, that contribute to an organization’s overall disaster recovery confidence.

Disaster recovery compliance was a clear driver of confidence in the ability to recover IT and related assets in the event of an incident. In fact, 67 percent of respondents in banking, 58 percent of respondents in the government sector and 55 percent of respondents at technology companies feel very prepared; of these, disaster recovery compliance was noted as a requirement by 97 percent, 73.5 percent and 71 percent, respectively.

More of the Continuity Central post


07
Dec 16

Baseline – Why IT Pros Feel Unprepared for Disasters

While most C-level executives feel their organization is “very prepared” for a potential systems-crashing disaster, IT professionals sharply disagree, according to a recent survey from Evolve IP. The “2016 Evolve IP Disaster Recovery and Business Continuity Survey” report indicates that a significant number of companies have suffered from a major incident that required disaster recovery (DR) over the past year—sometimes resulting in six-figure losses. Many tech employees indicate that a lack of DR budgeting leaves them unprepared for disruptions caused by hardware failures, server issues, power outages, environmental events, human error and targeted cyber-attacks. And a great many organizations still rely on old-school recovery methods such as backup tapes, instead of newer cloud-based solutions.

There is, however, notable interest in Disaster Recovery as a Service (DRaaS), despite the fact that only about half of C-level executives have heard of this term. “The lack of DR education at the executive level—and the likely related lack of budget—poses a real risk to today’s businesses,” according to the report. “These factors are further exacerbated by a dramatic increase in targeted attacks, continued reliance on aging tape backups, as well as internal hardware that remains highly susceptible to failure.”

More of the Baseline slideshow from Dennis McCafferty


02
Dec 16

Data Center Knowledge – The Mission Critical Cloud: Designing an Enterprise Cloud

Today, many organizations are looking at cloud through a new lens. Specifically, organizations are looking to cloud to enable a service-driven architecture capable of keeping up with enterprise demands. With that in mind, we’re seeing businesses leverage more cloud services to help them stay agile and competitive. However, the challenge revolves around uptime and resiliency. This is compounded by often complex enterprise environments.

When working with cloud and data center providers, it’s critical to see just how costly an outage could be. Consider this – only 27% of companies received a passing grade for disaster readiness, according to a 2014 survey by the Disaster Recovery Preparedness Council. At the same time, increased dependency on the data center and cloud providers means that overall outages and downtime are growing costlier over time. Ponemon Institute and Emerson Network Power have just released the results of the latest Cost of Data Center Outages study. Previously published in 2010 and 2013, the purpose of this third study is to continue to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (or a 38 percent net change).

More of the Data Center Knowledge post from Bill Kleyman


30
Nov 16

ComputerWeekly – Disaster recovery testing: A vital part of the DR plan

Disaster recovery provision is worthless unless you test out your plans. In this two-part series, Computer Weekly looks at disaster recovery testing in virtualised datacentres

IT has become critical to the operation of almost every company that offers goods and services to businesses and consumers.

We all depend on email to communicate, productivity software (such as Microsoft Word and Excel) for our documents and data, plus a range of applications that manage internal operations and customer-facing platforms such as websites and mobile apps.

Disaster recovery – which describes the continuation of operations when a major IT problem hits – is a key business IT process that has to be implemented in every organisation.

First of all, let’s put in perspective the impact of not doing effective disaster recovery.

Estimates on the cost of application and IT outages vary widely, with some figures quoting around $9,000 per minute.
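To put that per-minute estimate in context, here is a quick back-of-the-envelope calculation. The $9,000/minute rate is the figure quoted above; the outage durations are illustrative assumptions, not figures from the article:

```python
# Rough outage cost estimate at the quoted rate of ~$9,000 per minute.
# The durations below are illustrative assumptions.
COST_PER_MINUTE = 9_000  # USD, from the estimate quoted above

for minutes in (15, 60, 240):
    cost = minutes * COST_PER_MINUTE
    print(f"{minutes:>3}-minute outage ≈ ${cost:,}")
```

Even a single hour at that rate lands well into six figures, which is why untested DR plans are such an expensive gamble.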

More of the ComputerWeekly post from Chris Evans


16
Nov 16

ZDNet – Cloud will account for 92 percent of datacenter traffic by 2020

Businesses are migrating to cloud architectures at a rapid clip and by 2020, cloud traffic will take up 92 percent of total data center traffic globally, according to Cisco’s Global Cloud Index report.

The networking giant predicts that cloud traffic will rise 3.7-fold, from 3.9 zettabytes (ZB) per year in 2015 to 14.1ZB per year by 2020.
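As a quick sanity check on the quoted endpoints (a sketch using only the figures above, not data from the Cisco report itself), the 2015 and 2020 values imply roughly the stated multiple and a compound annual growth rate near 30 percent:

```python
# Sanity-check the Cisco Global Cloud Index figures quoted above.
start_zb = 3.9   # ZB per year in 2015
end_zb = 14.1    # ZB per year in 2020
years = 5

multiple = end_zb / start_zb                    # ~3.6x, which Cisco rounds to 3.7
cagr = (end_zb / start_zb) ** (1 / years) - 1   # compound annual growth rate

print(f"growth multiple: {multiple:.2f}x, CAGR: {cagr:.1%}")
```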

“The IT industry has taken cloud computing from an emerging technology to an essential scalable and flexible networking solution. With large global cloud deployments, operators are optimizing their data center strategies to meet the growing needs of businesses and consumers,” said Doug Webster, VP of service provider marketing for Cisco, in a press release. “We anticipate all types of data center operators continuing to invest in cloud-based innovations that streamline infrastructures and help them more profitably deliver web-based services to a wide range of end users.”

Breaking things down, Cisco expects business workloads to dominate data center applications through 2020, though their overall workload share will decrease from 79 percent to 72 percent.

More of the ZDNet article from Natalie Gagliordi


15
Nov 16

Continuity Central – Enterprises struggle with increasing complexity of IT systems

Enterprises today are employing hybrid IT as they struggle to keep up with digital transformation, according to the recently released Harvard Business Review Analytic Services report ‘Hybrid IT Takes Center Stage’.

Sponsored by Verizon Enterprise Solutions, the report presents the results of a survey of 310 business and IT executives worldwide, which found that most say their organizations are struggling to keep up with the pace of change in business today while working to ensure the complexity of their IT systems does not jeopardize performance, agility or security.

In fact, 63 percent of respondents indicated they are pursuing a hybrid IT approach; their existing infrastructure consists of a mix of private clouds, public clouds and legacy data centers, either on-premises or managed by service providers.

To enable hybrid IT, the report singles out the need for a secure, high-performance network architecture that can deliver the kind of security, flexibility and responsiveness required to stitch all these systems together.

“The vast majority of CIOs and line of business owners are working within the constraints of legacy apps, networks and investments,” said Chris Yousey, vice president of managed services for Verizon Enterprise Solutions. “And while the move to hybrid IT is about protecting their investments, it’s really more about improving performance, availability and above all, agility in today’s business climate.”

More of the Continuity Central article


04
Nov 16

IT Business Edge – Digital Transformation Starts with Infrastructure

Business models around the world are rapidly shifting from selling products to monetizing services. It doesn’t matter what industry you are in, if you are not generating revenue by digitally connecting to consumers, the future of your enterprise is in doubt.

While this digital transformation requires new approaches to organizational structures, workforce skillsets, business processes and customer relationships, it all starts with infrastructure. Static, silo-laden data systems are out; agile, software-defined architectures are in.

But how, exactly, are traditional enterprises supposed to implement such a radical upgrade in time to ward off competition from digitally driven upstarts who are unburdened by legacy infrastructure? To be sure, it will take a concerted effort, and a clearly defined strategy as to how digital transformation can be optimized for the enterprise’s unique market strengths.

More of the IT Business Edge post from Arthur Cole


31
Oct 16

Data Center Knowledge – “Right-Sizing” The Data Center: A Fool’s Errand?

Overprovisioned. Undersubscribed. Those are some of the most common adjectives people apply when speaking about IT architecture or data centers. Both conditions can cause operational problems that result in outages, or milder reliability issues, for mechanical and electrical infrastructure. The simple solution to this problem is to “right-size your data center.”

Unfortunately, that is easier to say than to actually do. For many, the quest to right-size turns into an exercise akin to a dog chasing its tail. So, we constantly ask ourselves the question: Is right-sizing a fool’s errand? From my perspective, the process of right-sizing is invaluable; the process provides the critical data necessary to build (and sustain) a successful data center strategy.

When it comes to right-sizing, the crux of the issue always comes down to what IT assets are being supported and what applications are required to operate the organization.

More of the Data Center Knowledge article from Tim Kittila