05
Jun 17

The Register – So your client’s under-spent on IT for decades and lives in fear of an audit

Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous “agility” to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to “cover your ass”, and have discovered it’s not quite so difficult as we might think.
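The "cover your ass" value of infrastructure as code comes from its declarative core: desired state lives in version control, and a reconciler reports (or repairs) drift, so there is always an auditable record of what should exist. A toy sketch of that idea, with entirely hypothetical host names and specs:

```python
# Toy sketch of the declarative idea behind infrastructure as code.
# Desired state is data under version control; a reconciler compares it
# against reality and reports drift. All names here are made up.

desired = {
    "web-01": {"cpus": 4, "ram_gb": 16},
    "db-01": {"cpus": 8, "ram_gb": 64},
}

actual = {
    "web-01": {"cpus": 4, "ram_gb": 16},
    "db-01": {"cpus": 8, "ram_gb": 32},   # drifted from the declared spec
    "test-99": {"cpus": 2, "ram_gb": 4},  # running, but nobody declared it
}

def drift_report(desired, actual):
    """Return human-readable drift findings: drifted, missing, undeclared."""
    findings = []
    for host, spec in desired.items():
        if host not in actual:
            findings.append(f"{host}: declared but missing")
        elif actual[host] != spec:
            findings.append(f"{host}: drifted {actual[host]} != {spec}")
    for host in actual:
        if host not in desired:
            findings.append(f"{host}: running but undeclared")
    return findings

for finding in drift_report(desired, actual):
    print(finding)
```

When the auditors arrive, a report like this (plus the commit history behind `desired`) is exactly the paper trail the techies in the story were scrambling to produce by hand.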

Recently, a client of mine went through an ownership change. The new owners, appalled at how much was being spent on IT, decided that the best path forward was an external audit. The client in question, of course, is an SMB that had been massively under-spending on IT for 15 years, and there was no way they were ready for – or would pass – an audit.

Trying to cram eight months’ worth of migrations, consolidations, R&D, application replacement and so forth into four frantic, sleepless nights of panic ended how you might imagine it ending. The techies focused on making sure their asses were covered when the audit landed. Overall network performance slowed to a crawl and everyone went home angry.

More of The Register article from Trevor Pott


01
Jun 17

TechTarget – Enlightened shadow IT policy collaborates with users

A cloud-era shadow IT policy still needs to manage risk, but the era of “no way” is giving way to allowing users quick access to the productivity apps they need.

Most IT departments have spent time rooting out the shadow, or non-IT-sanctioned, applications and systems in use within their organizations. Today, cloud-based services not necessarily approved by IT let users quickly subscribe to applications and platforms that improve their collaboration and productivity. That advantage is prompting IT organizations to rethink how to work with users, rather than maintain a shadow IT policy of all-out combat against apps that haven’t been fully blessed by the enterprise and could introduce security risks.

More of the TechTarget article from Sandra Gittlen


03
May 17

ZDNet – Cloud v. Data Center: Key trends for IT decision-makers

Cloud-based compute, networking and storage infrastructure, and cloud-native applications, are now firmly on the radar of CIOs — be they in startups, small businesses or large enterprises. So much so that, whereas a few years ago the question facing them was “Which workloads should I move to the cloud?”, it’s now becoming “Which, if any, workloads should I keep on-premises?”. While most organisations will probably end up pursuing a hybrid cloud strategy in the medium term, it’s worth examining this turnaround, and the reasons behind it.

The general background, as ZDNet has explored in recent special features, is the competitive pressure for organisations to undergo a digital transformation based on cloud-native applications and methods such as DevOps, in pursuit of improved IT and organisational performance.

More of the ZDNet article from Charles McLellan


13
Apr 17

Arthur Cole – The New Cloud and the Old Data Center

What do your business requirements tell you about your best data center or cloud solution?

The more things change, the more they stay the same. It’s a trite saying but appropriate for today’s cloud infrastructure market, which seems to be evolving along much the same vendor-defined trajectory as the data center before it.

According to new data from Synergy Research Group, the top three vendors duking it out for cloud dominance are … wait for it … Dell EMC, Cisco and HPE. This may come as a surprise to some, considering commodity manufacturers in the APAC region are supposed to be taking over. But according to the company’s research, the new Big Three each hold about 11.5 percent of the market, while an equal share went to multiple ODMs in the Pacific Rim. Microsoft and IBM each held smaller shares, which means that more than a third of the market is divvied up between numerous small to medium-sized vendors.

More of the IT Business Edge post from Arthur Cole


28
Feb 17

TheWHIR – 3 Steps to Ensure Cloud Stability in 2017

We’re reaching a point of maturity when it comes to cloud computing. Organizations are solidifying their cloud use cases, understanding how cloud impacts their business, and building entire IT models around the capabilities of cloud.

Cloud growth will only continue; Gartner recently said that more than $1 trillion in IT spending will, directly or indirectly, be affected by the shift to cloud during the next five years.

“Cloud-first strategies are the foundation for staying relevant in a fast-paced world,” said Ed Anderson, research vice president at Gartner. “The market for cloud services has grown to such an extent that it is now a notable percentage of total IT spending, helping to create a new generation of start-ups and ‘born in the cloud’ providers.”

More of TheWHIR post from Bill Kleyman


10
Feb 17

SearchCloudComputing – For enterprises, multicloud strategy remains a siloed approach

Although not mentioned in this article, enterprise cloud providers like Expedient are often a key player in the multicloud mix. Enterprise clouds deliver VMware or Hyper-V environments that require little or no retraining for the infrastructure staff.

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but “multicloud” remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, gained a foothold inside enterprises, too. As a result, companies must balance management of the disparate environments with questions of how deep to go on a single platform, all while the notion of connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).

More of the SearchCloudComputing article


19
Dec 16

Data Center Knowledge – TSO Logic: Cloud Migration Offers Instant Savings

Need help doing the math to see if your in-house virtual machines would be cheaper to operate in the cloud? If so, contact me.

Nearly half (45 percent) of on-premises virtualized operating system instances could run more economically in the cloud, for a 43 percent annual savings, according to research released this week by infrastructure optimization company TSO Logic. The research makes starkly clear the cost of legacy hardware, and the savings potential of cloud migration.

More than one in four OS instances are over-provisioned, the company says, and migrating them to an appropriate sized cloud instance would reduce their cost by 36 percent.

Drawn from an algorithmic analysis of anonymized data from TSO Logic’s North American customers, the research also showed that of 10,000 physical servers, 25 percent are at least three years old. The same workload handled by Generation-5 servers could now be done on 30 percent fewer Generation-9 servers, based only on processor gains, the company says.
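The right-sizing math behind figures like these is straightforward: compare the cost of a cloud instance sized to what a VM was given against one sized to what monitoring says it actually uses. A minimal sketch, with entirely hypothetical prices and instance sizes (not TSO Logic’s model or data):

```python
# Hypothetical right-sizing comparison: the annual cost of a cloud instance
# matching an over-provisioned on-prem VM vs. one sized to actual peak usage.
# All prices and sizes below are invented for illustration.

HOURS_PER_YEAR = 24 * 365

def annual_cloud_cost(vcpus, rate_per_vcpu_hour):
    """Annual cost of a cloud instance billed per vCPU-hour."""
    return vcpus * rate_per_vcpu_hour * HOURS_PER_YEAR

provisioned_vcpus = 8   # what the on-prem VM was allocated
peak_used_vcpus = 2     # what monitoring says it actually needs
rate = 0.05             # assumed $/vCPU-hour

lift_and_shift = annual_cloud_cost(provisioned_vcpus, rate)
right_sized = annual_cloud_cost(peak_used_vcpus, rate)
savings_pct = 100 * (lift_and_shift - right_sized) / lift_and_shift

print(f"lift-and-shift: ${lift_and_shift:,.0f}/yr")
print(f"right-sized:    ${right_sized:,.0f}/yr")
print(f"savings:        {savings_pct:.0f}%")
```

The same structure extends to the hardware-generation point: if newer processors do the same work with 30 percent fewer servers, that factor multiplies straight into the annual cost.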

More of the Data Center Knowledge post from Chris Burt


15
Dec 16

ComputerWeekly – IT Priorities 2017: What will IT decision-makers be focusing on?

Each year, TechTarget looks at how CIOs and senior IT decision-makers will be investing in IT in the 12 months ahead

Budgets for staff and on-premises servers are falling as CIOs focus on cloud computing, according to TechTarget’s IT Priorities 2017 survey.

Most of the 353 people surveyed said their IT budgets would remain the same. Only 17% said their budget would increase by more than 10%, 16% said their budget would increase by 5% to 10%, and 9% said their budget would decrease.

The survey found that most of the budget increases would be invested in cloud services (43%), software (43%) and disaster recovery (30%).

More of the ComputerWeekly post from Cliff Saran


07
Dec 16

Baseline – Why IT Pros Feel Unprepared for Disasters

While most C-level executives feel their organization is “very prepared” for a potential systems-crashing disaster, IT professionals sharply disagree, according to a recent survey from Evolve IP. The “2016 Evolve IP Disaster Recovery and Business Continuity Survey” report indicates that a significant number of companies have suffered from a major incident that required disaster recovery (DR) over the past year—sometimes resulting in six-figure losses. Many tech employees indicate that a lack of DR budgeting leaves them unprepared for disruptions caused by hardware failures, server issues, power outages, environmental events, human error and targeted cyber-attacks. And a great many organizations still rely on old-school recovery methods such as backup tapes, instead of newer cloud-based solutions.

There is, however, notable interest in Disaster Recovery as a Service (DRaaS), despite the fact that only about half of C-level executives have heard of this term. “The lack of DR education at the executive level—and the likely related lack of budget—poses a real risk to today’s businesses,” according to the report. “These factors are further exacerbated by a dramatic increase in targeted attacks, continued reliance on aging tape backups, as well as internal hardware that remains highly susceptible to failure.”

More of the Baseline slideshow from Dennis McCafferty


02
Dec 16

Data Center Knowledge – The Mission Critical Cloud: Designing an Enterprise Cloud

Today, many organizations are looking at cloud through a new lens. Specifically, organizations are looking to cloud to enable a service-driven architecture capable of keeping up with enterprise demands. With that in mind, we’re seeing businesses leverage more cloud services to help them stay agile and competitive. However, the challenge revolves around uptime and resiliency, compounded by often-complex enterprise environments.

When working with cloud and data center providers, it’s critical to see just how costly an outage could be. Consider this: only 27% of companies received a passing grade for disaster readiness, according to a 2014 survey by the Disaster Recovery Preparedness Council. At the same time, increased dependency on data center and cloud providers means that outages and downtime are growing costlier over time. Ponemon Institute and Emerson Network Power have just released the results of the latest Cost of Data Center Outages study. Previously published in 2010 and 2013, this third study continues to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (a 38 percent net change).

More of the Data Center Knowledge post from Bill Kleyman