One of the reasons virtualization (the precursor to cloud computing) gained popularity in the early 2000s is that companies had too many servers running at low utilization. The prevailing wisdom was that every box needed a backup and under-utilization was better than maxing out compute capacity and risking overload.
The vast amounts of energy and money wasted on maintaining all this hardware finally led businesses to datacenter consolidation via virtual machines, and those virtual machines began migrating off-premises to various clouds.
The problem is, old habits die hard. And the same kinds of server sprawl that plagued physical datacenters 15 years ago are now appearing in cloud deployments, too.
More of the ZDNet article from Michael Steinhart
This month, anything that doesn’t have me looking up to see if North Korea has lobbed a missile at the West Coast is a positive event. But this week, Intel responded to AMD’s Epyc launch with an epic launch of its own: the Purley version of its Xeon processor architecture. It has clearly come to play hardball. Years ago, because things tended to be more generic, the processor played a far bigger role in servers and workstations. Today, a server can rely more heavily on the GPU than the CPU, more often bottlenecks on memory, storage, or internal transport than on the processor, and just as often must be purpose-built for whatever task it is being positioned for.
More of the IT Business Edge post from Rob Enderle
Digital transformation has stalled due to misalignment over what the term actually means, delayed ROI, complexity, and resistance to new ways of working.
A new survey finds a “widespread stall” in digital transformation efforts, suggesting that its leadership is in crisis. Half of senior executives polled said their company is not successfully executing 50 percent of its strategies, according to the new report from Wipro Digital, “A Crisis in Digital Transformation.” While most executives believe their company is clear on the definition of digital transformation, an obstacle to success is the lack of alignment on what exactly digital transformation means. “Digital transformation efforts are coming up short on intended ROI, in part because digital transformation is as much a leadership issue as it is a strategy, technology, culture and talent issue,” said Rajan Kohli, senior vice president and global head, Wipro Digital.
More of the CIO Insight slideshow from Karen Frenkel
With the ever-increasing interest in technology solutions, IT’s stakeholders are making two competing demands of the IT organization:
1. Produce new innovative, strategic technology-based capabilities.
2. Do so with reduced resources.
How can IT leaders step up and juggle these seemingly competing agendas: meeting the business’s demands for increased innovation, including new digital systems and services, all while cutting costs and slashing budgets?
One popular solution has emerged within IT thought leadership. Often called “two-speed IT,” this idea proposes that the IT organization does not attempt to resolve the tension between these two ideas. Instead, IT lumps all of its technology into one of two broad buckets: operational technology and innovative technology. Do this, and operations won’t slow down innovation, and expensive innovation investments won’t inflate operations’ budgets.
More of the CIO Insight article from Lee Reese
When the term digital transformation was first bandied about by consultants and business publications, its implications were more about keeping up and catching up than true transformation. Additionally, at first it was only applied to large, traditional organizations struggling, or experimenting, in an increasingly digital economy. But true digital transformation requires so much more. As evidenced by the recent Amazon acquisition of Whole Foods, we’re living in a new world.
Early transformation efforts were focused on initiatives: e-commerce, sensors/internet of things, applications, client and customer experience, and so on. Increasingly, our clients are coming to us as they realize that in order for these disparate initiatives to thrive, they need to undergo an end-to-end transformation, the success of which demands dramatic operational, structural, and cultural shifts.
More of the HBR post from Tuck Rickards and Rhys Grossman
Could Microsoft’s not-so-secret weapon get the edge on AWS?
The promise of the cloud is based on offloading processing and other data management tasks to offsite locations, but businesses of all sizes are gradually coming to realise that they’re better off running certain applications internally.
Microsoft are hoping Azure Stack will bridge this gap, giving Azure users the ability to run Azure-consistent services in their own data centres. “Azure Stack will be a game changer in terms of how we run our data centres,” predicts Mark Skelton, Head of Consultancy at OCSL, a Microsoft-friendly IT service provider. He adds:
It effectively creates Nirvana; one place to code, one place to develop and one platform to build upon.
Here are a few reasons why Azure Stack is in huge demand before its release.
More of the Technative post
Having a multi-cloud strategy these days is like having a multi-server strategy in ages past: Why trust your workloads to a single point of failure when you can move them about at will?
But while distributing resources over multiple points fosters redundancy and eliminates vendor lock-in on one level, the enterprise should be aware that this invariably pushes those same risks up to another.
It’s no surprise that upwards of 85 percent of organizations have implemented a multi-cloud strategy by now, says Datacenter Journal’s Kevin Liebl. Following major outages at AWS and Azure earlier this year, the vulnerabilities of placing all data in one basket have become clear. Using multiple clouds provides clear advantages for disaster recovery, data migration, workload optimization, and a host of other functions.
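The redundancy argument above can be sketched in code: a minimal, hypothetical failover wrapper that tries a primary cloud endpoint and, on error, falls back to the next one. The `primary`/`secondary` callables are illustrative stand-ins, not any provider’s actual SDK calls.

```python
from typing import Callable, Sequence


def fetch_with_failover(fetchers: Sequence[Callable[[], bytes]]) -> bytes:
    """Try each cloud endpoint in order; return the first successful response.

    `fetchers` is an ordered list of zero-argument callables, e.g. one per
    cloud provider (hypothetical stand-ins for real client calls).
    """
    errors = []
    for fetch in fetchers:
        try:
            return fetch()
        except Exception as exc:  # a real client would catch narrower errors
            errors.append(exc)
    raise RuntimeError(f"all {len(fetchers)} endpoints failed: {errors}")


# Example: primary raises (simulated outage), secondary succeeds.
def primary() -> bytes:
    raise ConnectionError("simulated region outage")


def secondary() -> bytes:
    return b"payload from secondary cloud"


result = fetch_with_failover([primary, secondary])
```

Note that this also illustrates the article’s caveat: the failover logic itself becomes a new coordination point whose correctness, credentials, and data consistency across providers now have to be managed somewhere.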
More of the IT Business Edge post from Arthur Cole
Adaptive BC, a website established to develop and promote a new approach to business continuity, has been calling for the elimination of the BIA. In this article David Lindstedt, one of the founders of Adaptive BC, explains why.
The business impact analysis (BIA) has been a staple of business continuity for decades. In that time, the BIA has grown, expanded, and become rather nebulous in its scope, objectives, and value. By exploring both its initial purpose and current implementation, we can conclude that early benefits gained from the BIA no longer outweigh the disadvantages, and that practitioners ought to eliminate the use of the BIA as much and as soon as feasible.
Part one: genesis
What was the BIA when it came into use? The original intent of the BIA was to estimate the impact that a significant incident would have on the business. More accurately, it was to estimate the different types of impact that a significant incident would have on different parts of the business. As the BCI DRJ Glossary states, even today the BIA is defined simply as the “process of analyzing activities and the effect that a business disruption might have on them.”
More of the Continuity Central article
EfficientIP has published the results of a survey that was conducted for its 2017 Global DNS Threat Survey Report. It explored the technical and behavioural causes for the rise in DNS threats and their potential effects on businesses across the world.
Major issues highlighted by the study, now in its third year, include a lack of awareness as to the variety of attacks; a failure to adapt security solutions to protect DNS; and poor responses to vulnerability notifications. These shortcomings not only leave organizations exposed to regulatory changes, but also create a higher risk of data loss, downtime or a compromised reputation.
According to the report, carried out among 1,000 respondents across APAC, Europe and North America, 94 percent of respondents claim that DNS security is critical for their business. Yet 76 percent of organizations have been subjected to a DNS attack in the last 12 months and 28 percent suffered data theft.
More of the Continuity Central post
Strange as it may seem, the cloud only holds about a fifth of the total enterprise workload, which means there is still time for the enterprise to suddenly decide that the risks are not worth the rewards and start pulling data and applications back to legacy infrastructure.
Unlikely as this is, it nonetheless highlights the fact that there are still many unknowns when it comes to the cloud, particularly its ability to provide the lion’s share of data infrastructure in ways that are both cheaper and more amenable to enterprise objectives.
According to Morgan Stanley’s Brian Nowak, the cloud is nearing an inflection point at which it should start to show accelerated growth into the next decade.
More of the IT Business Edge post from Arthur Cole