The stocks of all seven US data center REITs (now six, following a merger that closed Thursday) slid in unison this week, after a well-known venture capitalist and hedge-fund founder said at an investor conference that advances in processor technology will eventually lead to the demise of the data center provider industry.
But industry insiders say his views are overly simplistic, and that history has shown that advances in computing technology only create more hunger for data center capacity, not less.
Because server chips are getting smaller and more powerful than ever, companies in the future will not need anywhere near the amount of data center space they need today, said Chamath Palihapitiya, founder and CEO of the VC firm Social Capital, who also launched a hedge fund last year. He made the remarks Tuesday afternoon, according to Seeking Alpha, which cited Bloomberg as its source.
More of TheWHIR post from Yevgeniy Sverdlik
Druva has published the results of its 2017 VMware Cloud Migration Survey, which looked at how enterprises working in a VMware environment are approaching cloud migration. The results show a powerful trend toward moving virtual workloads to the cloud for its lower cost, with Amazon Web Services (AWS) the preferred destination for workload migrations. Disaster recovery, workload mobility, and archival automation were all strong adoption drivers, with many organizations looking to save money and to advance IT initiatives focused on simplifying their infrastructure.
Key findings of the Druva 2017 VMware Cloud Migration Survey:
There is a major shift in the VMware market toward migrating data centres to the cloud: 90 percent of companies aim to migrate their workloads by 2018, with a clear preference for AWS (47 percent), followed by Microsoft Azure (25 percent).
More of the Continuity Central article
James Kelly is the Lead Cloud and SDN Expert at Juniper Networks.
So you’re doing cloud, and there is no sign of slowing down. Maybe your IT strategies are measured, maybe you’re following the wisdom of the crowd, maybe you’re under the gun, impetuous or oblivious. Maybe all of the above apply. In any case, like all businesses, you’ve realized that cloud is the vehicle for your newly dubbed software-defined enterprise: a definition that carries what I call onerous ‘daft pressures’ for harder, better, faster, stronger IT.
You may as well be solving the climate-change crisis because to have a fighting chance today, it feels like you have to do everything all at once.
More of the Data Center Knowledge post from James Kelly
Public cloud providers go to great lengths to secure their infrastructure, but organizations are still responsible for protecting their own apps and data. We look at Amazon Web Services and Microsoft Azure.
As we discussed in an earlier post, it’s a little late in the game to be wholly suspicious of cloud computing. However, there’s still a lot to talk about in terms of securing the cloud.
The security features offered by public cloud providers are only one part of the shared responsibility model; the other part falls to your organization. For example, your public cloud provider may offer identity and access management (IAM) alongside security groups: firewalls that filter traffic on specific ports and to and from specific IP addresses.
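To make the customer side of that split concrete, here is a minimal sketch (ours, not from the article) of creating such a security group on AWS with the boto3 library; the group name, VPC ID, and CIDR range are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group: a stateful firewall attached to instances in a VPC.
resp = ec2.create_security_group(
    GroupName="app-web-sg",                # hypothetical name
    Description="Allow HTTPS from the corporate network only",
    VpcId="vpc-0123456789abcdef0",         # hypothetical VPC ID
)
sg_id = resp["GroupId"]

# Allow inbound TCP 443 from a single CIDR block; all other inbound
# traffic is denied by default.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",       # hypothetical range
                      "Description": "Corporate egress range"}],
    }],
)

The provider secures the underlying network fabric; rules like these, and the IAM policies governing who may change them, remain your organization’s job.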
More of the ZDNet article from Larry Seltzer
In 1980, anyone who used a PC was, by definition, something of a nerd. But Byte, the leading computer magazine of the time, saw a need for a column that emphasized the benefits of the machines rather than their innards. It found its author in celebrated science-fiction writer Jerry Pournelle, whose Byte writings–best known by the name “Chaos Manor”–were not very technical; profoundly first person-y and opinionated; focused on what you could do with a PC; and prone to going off on extended tangents that were as defining an aspect of the columns as the parts that more obviously belonged in a publication called Byte.
More of the Fast Company article
Since its inception in the mid-20th century, artificial intelligence has been a growing topic of conversation in both science fiction and intellectual debate. To cut a long story short, AI has turned out to be one of the most disruptive and pervasive technologies of the current digital revolution. From automobiles to health care, home automation, aerospace engineering, materials science, and sports, the technology has been applied creatively in hitherto unheard-of sectors, and it has the potential to profoundly affect how we interact across the globe. As a result, the tech industry’s interest is stronger than ever.
According to the Oxford dictionary, artificial intelligence is “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
More of the Customer Think article from Nishtha Singh
The Gartner Hype Cycle for Cloud Security aims to help security professionals understand which emerging technologies are ready for mainstream use, and which are still years away from productive deployments for most organizations. The 2017 edition of the Hype Cycle for Cloud Security is now available and a summary is below.
“Security continues to be the most commonly cited reason for avoiding the use of public cloud,” said Jay Heiser, research vice president at Gartner. “Yet paradoxically, the organizations already using the public cloud consider security to be one of the primary benefits.”
So far, the attack resistance of the majority of cloud service providers has not proven to be a major weakness; the greater risk is that customers of these services may not know how to use them securely.
“The Hype Cycle can help cybersecurity professionals identify the most important new mechanisms to help their organizations make controlled, compliant and economical use of the public cloud,” said Mr. Heiser.
More of the Continuity Central post
In recent weeks I didn’t write stories about Packet.net splashing down in 15 new nations to start an edge compute service, or the plans that Tata Telecoms shared with me to expand its data centre footprint by targeting partnerships with users of its submarine cables.
I skipped them both because the companies concerned are minor players in the big, big drama that is the shift from on-premises computing to the cloud. Even if we’d loosed the crack Reg Punning Squad to work some headlining magic, I couldn’t imagine many of you would click on news of either company.
But a conversation with Zerto’s president Paul Zeiter has me thinking perhaps we all need to spend more time looking at small clouds.
Zeiter pointed out to me that in almost every enterprise IT category, there are a couple of leaders, a few followers and then a big pool of “other” that often accounts for 40 per cent or more of the market. And sometimes the “others” are more interesting than the mainstream: for example, The Register has often had good responses to our coverage of niche PC vendor Eurocom because the company makes stuff like server-class laptops whose batteries, it insists, are actually an uninterruptible power supply.
More of The Register post from Simon Sharwood
All commercial organizations operating in the digital era exist within a challenging landscape. Underlying trust is weak; expectations of good, transparent governance are high; and acceptance of failure is low.
At the same time, communicating with stakeholders is becoming more complex as traditional addressable audiences fragment into ever-evolving, always-online socially-connected communities, guaranteeing that issues and crises play out very publicly and swiftly.
To navigate these challenges successfully and to protect value for shareholders as companies grow, it’s vital to enhance business resilience. Reducing risk and building trust should be as important as innovating and pursuing operational excellence.
What is a crisis?
The British Standard for Crisis Management (BS 11200:2014) defines a crisis as “an abnormal and unstable situation that threatens the organization’s strategic objectives, reputation or viability.” Understanding this definition is vital in helping an organization prepare to deal with a crisis. Through worst-case scenario planning, organizations can identify the abnormal events they could be exposed to, the impact those events would have on their ability to execute strategic objectives, and the damage that could be done to their reputation and viability.
More of the Continuity Central post from Robert McAllister
Editor’s note: Like the Skytap illustration in the article, Expedient clients are using public and private cloud services RIGHT NOW to improve application performance, reduce maintenance workloads, and improve uptime. These organizations don’t have the luxury of waiting for their development teams or primary software vendors to rewrite their mission-critical apps from the ground up.
It seems that cloud providers are no longer fooling around when it comes to getting enterprise workloads. With new migration packages and services optimized for mission-critical data and applications, CSPs large and small are eager for your business.
The question for most enterprises, however, is whether to stick with the hyperscale providers like Amazon and Microsoft, or go with a not-so-large firm that may have a bit more flexibility when it comes to matching infrastructure with customized user needs.
Skytap, for one, is hoping that the one-size-fits-all approach will not be enough for most enterprises as they embrace crucial service offerings like Big Data and the IoT. CEO Thor Culverhouse argues that the cloud giants are overlooking key market segments like the legions of mission-critical apps that are stuck on legacy systems but will have to move to hybrid infrastructure in order to keep up with the speed of business activity. His plan is to offer specialized infrastructure optimized for the 75 percent of the enterprise workload that is not likely to become cloud-native any time soon.
More of the IT Business Edge article from Arthur Cole