02 Mar 16

CloudExpo – Hybrid Cloud Versus Hybrid IT: What’s the Hype?

Once again, the boardroom is in a bitter battle over what edict its members will now levy on their hapless IT organization. On one hand, hybrid cloud is all the rage. Adopting this option promises all the cost savings of public cloud with the security and comfort of private cloud. This environment would not only check the box for meeting the cloud computing mandate, but also position the organization as innovative and industry-leading. Why wouldn’t a forward-leaning management team go all in with cloud?

On the other hand, hybrid IT appears to be the sensible choice for leveraging traditional data center investments. Data center investment business models always promise significant ROI within a fairly short time frame; if not, they wouldn’t have been approved. Shutting down such an expensive initiative early would be an untenable decision. Is this a better option than the hybrid cloud?

Hybrid Cloud Versus Hybrid IT
The difference between hybrid cloud and hybrid IT is more than just semantics. The hybrid cloud model is embraced by entities and startups that don’t need to worry about past capital investments. These newer companies have more flexibility to explore new operational options.

More of the CloudExpo blog post from Kevin Jackson


24 Feb 16

Baseline – What Worries IT Organizations the Most?

IT employees and leaders have a lot to worry about these days, according to a recent survey from NetEnrich. For starters, they’re spending too much money on technology that either doesn’t get used or fails to deliver on its promises, findings show. They devote too many hours to “keeping the lights on” rather than innovating. And the increase in tech acquisition decisions being made outside the IT department (shadow IT) heightens existing concerns about cyber-security and business app performance. Meanwhile, tech departments are still struggling with a lack of available talent to support agility and business advances. “Corporate IT departments are in a real bind,” said Raju Chekuri, CEO at NetEnrich.

More of the Baseline slideshow from Dennis McCafferty


17 Feb 16

VMTurbo – What’s the Promise of Orchestration?

In my conversations over 2015, I found that one of the top-of-mind goals for many directors and CIOs this year is fully automating the orchestration of their environments. The lack of agility and automation when provisioning new workloads is a pain felt across the IT staff.

Whether the plan is to expand the VMware suite through vRealize Automation, pursue a third-party technology like Chef, Puppet, or CloudForms, or move into a full IaaS or PaaS environment through OpenStack or Cloud Foundry, the objective is to speed up the data center’s auto-provisioning capabilities to meet the rapidly growing demand for faster, more responsive applications delivered more quickly. However, the benefits of moving to automated orchestration also create new challenges.
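
To make the auto-provisioning goal concrete, here is a minimal sketch of scripted VM provisioning, assuming the OpenStack path mentioned above and the openstacksdk Python library; the cloud name, image, flavor, and network values are hypothetical placeholders, not anything prescribed by the post.

```python
import openstack

# Connect using a named cloud from clouds.yaml (the name is a placeholder).
conn = openstack.connect(cloud="my-datacenter")

# Look up the image, flavor, and network requested for the new workload
# (these names are hypothetical examples).
image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("app-test-net")

# Provision the VM without a ticket or a manual hand-off.
server = conn.compute.create_server(
    name="app-test-vm-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report its state.
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
```

The other tools named above expose equivalent APIs or DSLs; the orchestration layer is essentially this kind of call wrapped in policy, approval, and placement logic.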

Why Orchestrate?

To answer this question, let me throw out a scenario that many can probably relate to today. An administrator logs into Outlook first thing Friday morning, and at the top of his inbox is a request for a new VM from a coworker who plans to begin testing a new application in the next couple of weeks per the CIO’s initiative.

More of the VMTurbo post from Matt Vetter


22 Jan 16

About Virtualization – The Network is Agile, but Slower with SDN and Microservices

Have you ever moved something in your kitchen because it fits better in a new spot, only to find that you spend more time retrieving it than when it was close at hand? It is a simple analogy, but it does relate to some of the confusion happening around SDN and microservices implementations.

As new methodologies and technologies come into an organization, we assess what they are meant to achieve. You work out a list of requirements you want to see and, from that wish list, check off which ones the product of choice attains. As we look toward microservices architectures, which I fully agree we should, we have one checklist for the applications. As we look at the challenges that SDN solves, which I also fully agree we should, we have another checklist.

Let’s first approach this by dealing with a couple of myths about SDN and microservices architectures:

More of the About Virtualization post


06 Jan 16

Data Center Knowledge – Getting to the True Data Center Cost

Will it be cheaper to run a particular application in the cloud than keeping it in the corporate data center? Would a colo be cheaper? Which servers in the data center are running at low utilization? Are there servers that have been forgotten about by the data center manager? Does it make sense to replace old servers with new ones? If it does, which ones would be best for my specific applications?

Those are examples of the essential questions every data center manager should be asking themselves and their team every day if they aren’t already. Together, they can be distilled down to a single ever-relevant question: How much does it cost to run an application?

Answering it is incredibly complex, which is why startups like TSO Logic, Romonet, and Coolan, among others, have sprung up in recent years. Answer it correctly and the pay-off can be substantial, because almost no data center runs as efficiently as it could, and there is always room for optimization and savings.
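
As a rough illustration of how that per-application figure can be distilled, here is a minimal sketch of a back-of-the-envelope cost model; the amortization period, power draw, PUE, electricity rate, and utilization numbers are hypothetical assumptions, not figures from the post.

```python
# Hypothetical back-of-the-envelope model for per-application cost.
# All inputs are illustrative assumptions, not measured values.

SERVER_CAPEX = 8000.0       # purchase price per server, USD
AMORTIZATION_YEARS = 4      # straight-line depreciation period
SERVER_POWER_KW = 0.4       # average draw per server at typical load
PUE = 1.6                   # facility power usage effectiveness
ELECTRICITY_RATE = 0.10     # USD per kWh
HOURS_PER_YEAR = 24 * 365

def yearly_server_cost() -> float:
    """Amortized hardware cost plus facility-burdened power cost."""
    capex = SERVER_CAPEX / AMORTIZATION_YEARS
    power = SERVER_POWER_KW * PUE * HOURS_PER_YEAR * ELECTRICITY_RATE
    return capex + power

def app_cost(servers_used: float, avg_utilization: float) -> float:
    """Allocate server cost to an app by the capacity it actually uses.
    Low utilization means the app effectively pays for idle capacity too."""
    return servers_used * yearly_server_cost() / max(avg_utilization, 0.01)

# Example: an app spread across 3 servers running at 20% utilization
# costs far more per unit of useful work than the raw hardware suggests.
print(f"Yearly cost per server: ${yearly_server_cost():,.2f}")
print(f"Effective yearly cost of the app: ${app_cost(3, 0.20):,.2f}")
```

A real model would also fold in colo or facility charges, licensing, network, and staff time, which is exactly the bookkeeping the startups above sell.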

More of the Data Center Knowledge post


07 May 14

The Virtualization Practice – Will Scale-Out Architectures Kill Enterprise Storage?

In 1980 I was working for Datapoint, a vendor with proprietary client hardware, proprietary server hardware, a proprietary LAN, and proprietary systems software. In 1981 IBM introduced the PC, and in 1983 it introduced the PC-XT with a hard disk. 3Com brought Ethernet adapters to market, and Novell created a network operating system. All of a sudden, Datapoint was on the wrong side of history in the computer business. In five short years, Datapoint went from six thousand employees to sixty.

Is Shared Enterprise Storage on the Wrong Side of History?

To date, shared enterprise storage has been the beneficiary of two huge trends in the computer industry: data center virtualization (which basically needs shared storage for certain highly valuable features, such as vMotion, to work) and big data, which produces so much data that large-scale arrays are needed. Shared enterprise storage arrays also sport a variety of highly attractive enterprise-class features like redundancy (no single point of failure), deduplication (saving on disk space and cost), and the ability to move bits around as business needs dictate. These arrays are also backed by a huge amount of hardware engineering designed to deliver excellent performance at scale to their customers.
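
As a side note on how one of those features works, below is a minimal sketch of block-level deduplication, the general technique behind the disk-space savings mentioned above; it is an illustrative toy in Python, not any vendor's actual implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; real arrays choose their own sizes

def deduplicate(data: bytes):
    """Split data into fixed-size blocks and store each unique block once.
    Returns the unique block store plus an ordered list of references
    that reconstructs the original stream."""
    store = {}       # fingerprint -> block contents
    references = []  # ordered fingerprints describing the stream
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)  # duplicates stored only once
        references.append(fingerprint)
    return store, references

# Example: a stream with heavy repetition shrinks to a handful of blocks.
data = b"A" * 16384 + b"B" * 8192 + b"A" * 16384
store, refs = deduplicate(data)
print(f"Logical blocks: {len(refs)}, unique blocks stored: {len(store)}")

# Reassembly shows no information was lost.
assert b"".join(store[f] for f in refs) == data
```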

More of The Virtualization Practice post