19 Jul 16

Baseline – Cloud-First—Except When Performance Matters

Many companies have a cloud-first policy that calls for hosting as many applications as possible in the cloud, but latency-sensitive apps are staying on premises.

In the name of achieving increased IT agility, many organizations have implemented a cloud-first policy that requires as many application workloads as possible to be hosted in the cloud. The thinking is that it will be faster to deploy and provision IT resources in the cloud.

While that’s true for most classes of workloads, latency-sensitive applications are staying home to run on premises.

Speaking at a recent Hybrid Cloud Summit in New York, Tom Koukourdelis, senior director for cloud architecture at Nasdaq, said there are still whole classes of high-performance applications that need to run in real time. Trying to access those applications across a wide area network (WAN) simply isn’t feasible. The fact of the matter, he added, is that there is no such thing as a one-size-fits-all cloud computing environment.
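To make the latency argument concrete, here is a minimal sketch (an editorial illustration, not code from the article) that compares TCP connect round-trip times for a local endpoint and a cloud endpoint against a real-time latency budget; the hostnames, port and 10 ms budget are placeholders.

```python
# Minimal sketch: compare TCP connect round-trip times to a local host and a
# remote cloud endpoint. Hostnames, port and the 10 ms budget are illustrative
# placeholders, not figures from the article.
import socket
import time

def connect_rtt_ms(host, port=443, timeout=2.0):
    """Return the time in milliseconds to open a TCP connection to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    latency_budget_ms = 10  # example budget for a latency-sensitive workload
    for endpoint in ("app.internal.example", "app.cloud.example"):
        try:
            rtt = connect_rtt_ms(endpoint)
            verdict = "within budget" if rtt <= latency_budget_ms else "too slow for real time"
            print(f"{endpoint}: {rtt:.1f} ms ({verdict})")
        except OSError as err:
            print(f"{endpoint}: unreachable ({err})")
```

Numbers like these, gathered per application, are what typically decide which workloads stay home.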

More of the Baseline article from Mike Vizard


11 Jul 16

CloudExpo Journal – The End Goal of Digital Transformation

Although we write about and discuss digital transformation constantly, we often fail to identify the end goal we are really trying to achieve. We talk at great length about data, analytics, speed, information logistics systems and personalized user experiences, but none of these are the end goal. We must digitally transform so we can remove the “fog of war” and gain clear visibility and insight into our businesses and the needs of our customers. Visibility alone is not enough, though: the end goal of digital transformation is the ability to act and react to changing data, competitive conditions and strategies fast enough to succeed.

Knowledge is worth nothing unless it is tied to action. In a recent survey, 500 managers reported that the number one mistake companies make in digital transformation is moving too slowly. A company may have all the necessary information and strategies, but if it is incapable of acting or reacting fast enough to matter, that knowledge is wasted. True digital transformation includes information logistics systems capable of collecting, analyzing and reporting data fast enough to be useful, plus the ability to act and react in response.

More of the CloudExpo Journal from Kevin Benedict


30 Jun 16

Baseline – IT Struggles to Meet Network Capacity Demands

An insatiable need for access to data and digital technologies is causing organizations to expand their network capacity to staggeringly high levels, according to a recent survey from Viavi Solutions. The resulting “Ninth Annual State of the Network Global Study” indicates that most companies will soon be running the majority of their apps in the cloud, seeking to lower expenses while provisioning network resources more effectively. In addition, most enterprises are deploying some form of software-defined networking (SDN). At the same time, they’re investing in state-of-the-art unified communications (UC) tools, including VoIP and Web collaboration apps—all of which are contributing to a need for more bandwidth. “Data networks of all types around the globe are being strained by an explosion of traffic, from bandwidth-hungry video today to the Internet of Things tomorrow,” said Oleg Khaykin, president and CEO at Viavi Solutions.

More of the Baseline slideshow from Dennis McCafferty


21 Jun 16

Data Center Knowledge – FedRAMP’s Lack of Transparency Irks Government IT Decision Makers

Four out of five federal cloud decision makers are frustrated with FedRAMP, according to a new report from MeriTalk, a government IT public-private partnership. The federal IT professionals surveyed cited a lack of transparency into the process as the chief source of that frustration.

MeriTalk surveyed 150 federal IT decision makers in April for the FedRAMP Fault Lines report, and found that 65 percent of respondents at defense agencies, and 55 percent overall, do not believe that FedRAMP has increased security. Perhaps even worse, 41 percent are unfamiliar with the General Services Administration’s (GSA) plans to fix FedRAMP. The GSA announced FedRAMP Accelerated in March.

“Despite efforts to improve, FedRAMP remains cracked at the foundation,” said MeriTalk founder Steve O’Keeffe. “We need a FedRAMP fix – the PMO must improve guidance, simplify the process, and increase transparency.”

More of the Data Center Knowledge article from Chris Burt


11 May 16

CIO Dashboard – 3 Strategies to Decrease IT Costs and Increase Business Impact

Guest post by Suheb Siddiqui and Chetan Shetty

A veteran CIO recently said, “The last few months felt like I time traveled back to the 1980s. Business stakeholders are demanding the applications they want, designed the way they like, and at a speed dictated by their priorities.” He wasn’t talking about an AS/400-based COBOL program; he was talking about custom apps written on industry-standard platforms provided by numerous Platform as a Service (PaaS) providers such as Salesforce, Oracle and ServiceNow.

Our industry has undergone numerous transitions. In the 1980s and part of the 1990s, business users were in control. They could pick their favorite “best of breed” applications, and design and customize them how they wanted. Integration and governance were expensive and difficult. Then, Y2K fueled the growth of Megasuite ERPs. Starting in the late 1990s, IT started controlling the agenda, and strong governance led to cost efficiencies, albeit at the expense of user satisfaction.

Fast forward to 2016: Software as a Service (SaaS) and PaaS solutions are empowering users to be in control again. As a result, we are witnessing a growing gap between the total IT spend of an organization, which is increasing as users buy their own SaaS solutions, and the IT budget controlled by the CIO, which is under constant cost pressure. Successful CIOs have to find new strategies to bridge this growing “Digital Divide.”

More of the CIO Dashboard article


09 May 16

Continuity Central – Expanded NIST disaster and failure data repository aims to improve resilience

NIST has announced that data from the February 27th 2010 Chile earthquake has now been added to the NIST Disaster and Failure Studies Data Repository, providing a great deal of useful information for regional and global resilience planning.

The repository was established in 2011 to provide a place where data collected during and after a major disaster or structural failure, as well as data generated from related research, could be organized and maintained to facilitate study, analysis and comparison with future events. Eventually, NIST hopes that the repository will serve as a national archival database where other organizations can store the research, findings and outcomes of their disaster and failure studies.

Initially, the NIST Disaster and Failure Studies Data Repository was established to house data from the agency’s six-year investigation of the collapses of three buildings at New York City’s World Trade Center (WTC 1, 2 and 7) as a result of the terrorist attacks on Sept. 11, 2001. With the addition of the 2010 Chile earthquake dataset, NIST is broadening the scope of the repository to begin making it a larger collection of information on hazard events such as earthquakes, hurricanes, tornadoes, windstorms, community-scale fires in the wildland urban interface, storm surges and man-made disasters (accidental, criminal or terrorist).

More of the Continuity Central article


28 Apr 16

Continuity Central – The benefits of moving business-critical applications to the cloud


Core enterprise applications such as ERP are not as readily moved off-site as other applications – but they’re propelling a new wave of cloud adoption. Andres Richter explains why organizations should consider making the switch.

Modern enterprise management software has come a long way from its industrial roots in providing procurement and manufacturing functionality. Responding to changes in the technology landscape such as mobility, big data analytics and cloud computing, the software has had no choice but to evolve. Employees now require instant information at their fingertips, wherever they are, from any device. Unsurprisingly, core business functions of modern enterprise resource planning (ERP) such as financials, operations, HR and analytics require the same consumerized flexibility offered by a plethora of non-business-critical cloud-based applications. But it’s only the CIOs committed to future-proofing their IT who have spotted this opportunity and have made the move from on-premises to a cloud-only or an integrated approach.

While vendors look at ways to disrupt the market, the challenge remains of convincing ‘stick-in-the-mud’ IT decision makers that business continuity can be maintained during the transition to cloud ERP and beyond. Even so, adoption is rising: Panorama Consulting’s ERP Report 2016 puts cloud ERP adoption at 27 percent of businesses, up from 11 percent the previous year. In our experience, more than 20 percent of current customers at Priority Software are already in the cloud. Take-up is particularly high in industries such as digital media, professional services and business services.

More of the Continuity Central post


19 Apr 16

Continuity Central – Dealing with the risk of DDoS ransom attacks

We are all familiar with the disruptive consequences of a distributed denial of service (DDoS) attack when a website is forced offline because it has been swamped with massive levels of traffic from multiple sources. The cost in terms of lost business to companies while their website is offline can be significant.

Cyber criminals are now taking the process a step further by tying ransom demands to their DDoS attacks, threatening to keep company websites offline until they pay up. In effect, DDoS attacks are coming with an invoice attached.

What are DDoS ransom attacks?

Given the stakes, it makes sense for organizations to try and learn as much as they can about DDoS ransom demands: what do they look like, how can businesses work out if their site is at genuine risk and how can they protect their online presence?

DDoS ransom attacks, usually carried out by criminal groups, start with a test attack on a website or service. The preferred method is to send increasing levels of traffic to the site to ascertain whether it is vulnerable. Sometimes the site can be knocked out with a small attack (1-2 Gbps of traffic); other times it requires a much larger-scale onslaught (10-100 Gbps), depending on the robustness of the security technology the service provider hosting the site has in place.
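As an illustration of the escalation pattern described above (an editorial sketch, not from the article), the snippet below flags the kind of sudden step-up in inbound traffic a probing attack produces; the baseline window, multiplier and sample data are hypothetical.

```python
# Minimal sketch: flag a sudden step-up in inbound traffic of the kind a DDoS
# "test attack" produces. The window size, 5x multiplier and sample data are
# hypothetical, not figures from the article.
from collections import deque

def spike_alerts(samples_gbps, window=12, multiplier=5.0):
    """Yield (index, rate) for samples exceeding multiplier x the recent baseline."""
    recent = deque(maxlen=window)
    for i, rate in enumerate(samples_gbps):
        if len(recent) == window:
            baseline = sum(recent) / window
            if rate > baseline * multiplier:
                yield i, rate
        recent.append(rate)

if __name__ == "__main__":
    # Simulated per-minute inbound traffic (Gbps): steady load, then a probe.
    traffic = [0.2] * 15 + [1.5, 2.0, 0.2, 0.2, 12.0]
    for minute, rate in spike_alerts(traffic):
        print(f"minute {minute}: {rate} Gbps inbound -- possible DDoS probe")
```

In practice this kind of detection sits in the hosting provider’s or mitigation service’s monitoring rather than in the application itself.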

More of the Continuity Central post from Jake Madders


06 Apr 16

The Register – Successful DevOps? You’ll need some new numbers for that

Dark launches, feature flags and canary launches: They sound like something from science fiction or some new computer game franchise bearing the name of Tom Clancy.

What they are is the face of DevOps – processes that enable projects to run successfully.

And their presence is set to be felt by a good many as numerous industry surveys can attest.

With DevOps on the rise, then, the question becomes one of not just how to implement DevOps but also how to measure the success of that implementation.

Before I get to the measurement, what about how to roll out DevOps? That brings us back to that Tom Clancy trio.

Let’s start with dark launches. This is a technique to which a new generation of enterprises have turned and which is relatively commonplace among startups and giants like Facebook alike.

It’s the practice of releasing new features to a particular section of users to test how the software will behave in production conditions. Key to this process is that the software is released without any UI features.

Canary releases (really another name for dark launches) and feature flags (or feature toggles) work by building conditional “switches” into the code using Boolean logic, so different users see different code with different features. The principle is the same as with dark launches: companies can get an idea of how the implementation behaves without rolling it out to full production.
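As a concrete illustration (a minimal Python sketch, not code from the article), a feature flag is exactly the Boolean switch described here: a flag definition decides, per user, which code path runs, and the same switch can be flipped off instantly if the canary misbehaves. The flag name, rollout percentage and user IDs below are made up.

```python
# Minimal feature-flag sketch: a Boolean switch decides per user which code
# path runs. The flag name, rollout percentage and user IDs are illustrative.
import hashlib

FLAGS = {
    # Dark-launch the new checkout flow for 10 percent of users.
    "new_checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name, user_id):
    """Return True if the flag is on for this user, using stable hash bucketing."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

def checkout(user_id):
    if is_enabled("new_checkout", user_id):
        return f"user {user_id}: new checkout flow"   # canary path
    return f"user {user_id}: existing checkout flow"  # default path

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol", "dave"):
        print(checkout(uid))
```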

More of The Register article from Maxwell Cooter


05 Apr 16

IT Business Edge – Diverse Infrastructure Requires Diverse Efficiency Metrics

Achieving data center efficiency is challenging not only on a technology level but also as a matter of perspective. There is no clear definition of “efficient” to begin with, and matters are only made worse by the lack of consensus on how to measure efficiency and place it into some kind of quantifiable construct.

At best, we can say that one technology or architecture is more efficient than another and that placing efficiency as a high priority within emerging infrastructural and architectural solutions at least puts the data industry on the path toward more responsible energy consumption.

The much-vaunted PUE (Power Usage Effectiveness) metric is an unfortunate casualty of this process. The Green Grid most certainly overreached when it designated PUE as the defining characteristic of an efficient data center, but this was understandable given that it is a simple ratio between total energy consumed and the portion devoted to data resources rather than ancillary functions like cooling and lighting. And when implemented correctly, it does in fact provide a good measure of energy efficiency. The problem is that it is easy to game, and it takes into account neither the productivity of the data that low-PUE facilities provide nor the need for some facilities to shift loads between resources and implement other practices that could drive up their ratings.
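For reference, PUE really is just that ratio, and a quick worked example (with made-up kWh figures, not numbers from the article) shows both how it is computed and why it says nothing about how productive the IT load actually is.

```python
# Worked example of the PUE ratio: total facility energy divided by the energy
# delivered to IT equipment. The kWh figures are made up for illustration.
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    it_load_kwh = 1_000  # servers, storage, network gear
    overhead_kwh = 500   # cooling, lighting, power distribution
    print(f"PUE = {pue(it_load_kwh + overhead_kwh, it_load_kwh):.2f}")  # 1.50
    # A facility could lower this number while running wasteful workloads,
    # which is the gaming problem the article points to.
```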

More of the IT Business Edge article from Arthur Cole