When the term digital transformation was first bandied about by consultants and business publications, its implications were more about keeping up and catching up than true transformation. Additionally, at first it was only applied to large, traditional organizations struggling, or experimenting, in an increasingly digital economy. But true digital transformation requires so much more. As evidenced by the recent Amazon acquisition of Whole Foods, we’re living in a new world.
Early transformation efforts were focused on initiatives: e-commerce, sensors/internet of things, applications, client and customer experience, and so on. Increasingly, our clients are coming to us as they realize that in order for these disparate initiatives to thrive, they need to undergo an end-to-end transformation, the success of which demands dramatic operational, structural, and cultural shifts.
More of the HBR post from Tuck Rickards and Rhys Grossman
Could Microsoft’s not-so-secret weapon get the edge on AWS?
The promise of the cloud is based on offloading processing and other data management tasks to offsite locations, but businesses of all sizes are gradually coming to realise that they’re better off running certain applications internally.
Microsoft is hoping Azure Stack will bridge this gap, giving Azure users the ability to run Azure-consistent services in their own data centers. “Azure Stack will be a game changer in terms of how we run our data centres,” predicts Mark Skelton, Head of Consultancy at OCSL, a Microsoft-friendly IT service provider. He adds:
It effectively creates Nirvana; one place to code, one place to develop and one platform to build upon.
Here are a few reasons why Azure Stack is in huge demand before its release.
More of the Technative post
Having a multi-cloud strategy these days is like having a multi-server strategy in ages past: Why trust your workloads to a single point of failure when you can move them about at will?
But while distributing resources over multiple points fosters redundancy and eliminates vendor lock-in on one level, the enterprise should be aware that this invariably pushes these same risks to another.
It’s no surprise that upwards of 85 percent of organizations have implemented a multi-cloud strategy by now, says Datacenter Journal’s Kevin Liebl. Following major outages at AWS and Azure earlier this year, the vulnerabilities of placing all data in one basket have become clear. Using multiple clouds provides clear advantages for disaster recovery, data migration, workload optimization, and a host of other functions.
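The disaster-recovery advantage mentioned above boils down to routing traffic away from a failed provider. A minimal sketch of that decision logic, assuming hypothetical provider names, URLs, and health data (none of which come from the report):

```python
# Minimal sketch of multi-cloud failover routing (illustrative only).
# Provider names, URLs, and health results below are hypothetical.

def pick_endpoint(endpoints, health):
    """Return the first healthy endpoint in priority order, or None."""
    for name, url in endpoints:
        if health.get(name, False):
            return url
    return None

# Priority order: primary cloud first, then fallbacks.
ENDPOINTS = [
    ("aws", "https://app.example.com"),
    ("azure", "https://app-dr.example.net"),
]

# In practice this dict would be populated by periodic health probes;
# here it is hard-coded to simulate an AWS outage.
health = {"aws": False, "azure": True}
print(pick_endpoint(ENDPOINTS, health))  # falls over to the Azure endpoint
```

Real deployments typically implement this at the DNS or global load-balancer layer rather than in application code, but the priority-ordered health check is the same idea.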
More of the IT Business Edge post from Arthur Cole
Adaptive BC, a website established to develop and promote a new approach to business continuity, has been calling for the elimination of the BIA. In this article David Lindstedt, one of the founders of Adaptive BC, explains why.
The business impact analysis (BIA) has been a staple of business continuity for decades. In that time, the BIA has grown, expanded, and become rather nebulous in its scope, objectives, and value. By exploring both its initial purpose and current implementation, we can conclude that early benefits gained from the BIA no longer outweigh the disadvantages, and that practitioners ought to eliminate the use of the BIA as much and as soon as feasible.
Part one: genesis
What was the BIA when it came into use? The original intent of the BIA was to estimate the impact that a significant incident would have on the business. More accurately, it was to estimate the different types of impact that a significant incident would have on different parts of the business. As the BCI DRJ Glossary states, even today the BIA is defined simply as the “Process of analyzing activities and the effect that a business disruption might have on them.”
More of the Continuity Central article
EfficientIP has published the results of a survey that was conducted for its 2017 Global DNS Threat Survey Report. It explored the technical and behavioural causes for the rise in DNS threats and their potential effects on businesses across the world.
Major issues highlighted by the study, now in its third year, include a lack of awareness as to the variety of attacks; a failure to adapt security solutions to protect DNS; and poor responses to vulnerability notifications. These concerns will not only be subject to regulatory changes, but also create a higher risk of data loss, downtime or compromised reputation.
According to the report, carried out among 1,000 respondents across APAC, Europe and North America, 94 percent of respondents claim that DNS security is critical for their business. Yet 76 percent of organizations have been subjected to a DNS attack in the last 12 months, and 28 percent suffered data theft.
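One of the simplest DNS-monitoring checks the report's findings argue for is comparing the answers a resolver actually returns against the addresses you published, flagging anything unexpected as a possible hijack or cache-poisoning signal. A minimal sketch, with a placeholder domain and illustrative IPs (not from the survey):

```python
# Crude DNS anomaly check: flag resolved addresses outside an allow-list.
# The addresses below are illustrative placeholders.

def dns_anomalies(expected, observed):
    """Return resolved IPs not in the expected set, sorted for stable output."""
    return sorted(set(observed) - set(expected))

expected = {"93.184.216.34"}                  # addresses we published
observed = ["93.184.216.34", "203.0.113.9"]   # answers seen by a resolver probe
print(dns_anomalies(expected, observed))      # ['203.0.113.9']
```

Production DNS security tooling goes far beyond this (rate analysis, tunneling detection, DNSSEC validation), but an allow-list diff like this is a cheap first signal.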
More of the Continuity Central post
Strange as it may seem, the cloud only holds about a fifth of the total enterprise workload, which means there is still time for the enterprise to suddenly decide that the risks are not worth the rewards and start pulling data and applications back to legacy infrastructure.
Unlikely as this is, it nonetheless points out the fact that there are still many unknowns when it comes to the cloud, particularly its ability to provide the lion’s share of data infrastructure in ways that are both cheaper and more amenable to enterprise objectives.
According to Morgan Stanley’s Brian Nowak, the cloud is nearing an inflection point at which it should start to show accelerated growth into the next decade.
More of the IT Business Edge post from Arthur Cole
In so many ways IT operations has developed a military-style culture. If IT ops teams are not fighting fires, they’re triaging application casualties. Tech engineers are the troubleshooters and problem solvers who hunker down in command centers and war rooms.
For the battle-weary on-call staff who are regularly dragged out of bed in the middle of the night, having to constantly deal with flaky infrastructure and poorly designed applications carries a heavy personal toll. So, what are the signs that an IT organization is engaged in bad on-call practices? Three obvious ones to consider include:
Support teams are overloaded – Any talk of continuous delivery counts for squat if systems are badly designed, hurriedly released and poorly tested.
More of the Data Center Knowledge post from Peter Waterhouse
It is easy for organizations to feel overwhelmed by the number and scale of the risks that are faced; but often the perception of the potential harm engendered by various risks is exaggerated. In this article Chris Butler lists the real risks that every organization needs to consider.
Did you know the world’s most dangerous animal is not a shark or a bear, but in fact a mosquito? What’s certain is that human perception of risk is notoriously flawed; often, the events that concern and outrage us the most are the least likely to happen.
From political and economic tremors to cyber threats, 2017 represents another minefield of risks for businesses. For organizations, forging a deepened understanding of both threats and risk factors is crucial for remaining robust, resilient, and most of all, ahead of the competition. Part of this involves separating the myths from reality. So, what then are the real risks to business today?
More of the Continuity Central article
While a VPN is useful for multicloud networks, IT teams still need to be careful to avoid high traffic charges as applications move from one provider to another.
One of the most important — and most complex — concepts in multicloud is network integration between public cloud providers. This model facilitates cross-cloud load balancing and failover but, without careful planning, can also lead to hefty network integration costs.
Nearly all enterprises have a virtual private network (VPN) that connects their sites, users, applications and data center resources. When they adopt cloud computing, they often expect to use that VPN to connect their public cloud resources as well. Many cloud providers have features to facilitate this, and even when they don’t, it’s usually possible to build VPN support into application images hosted in the cloud.
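The "hefty network integration costs" mentioned above come largely from per-GB egress charges whenever traffic crosses a provider boundary. A rough sketch of estimating that monthly cost under a hypothetical tiered price list (the tier sizes and rates are placeholders, not any provider's actual pricing):

```python
# Back-of-envelope cross-cloud egress cost under hypothetical tiered pricing.

def tiered_egress_cost(gb, tiers):
    """tiers: ordered list of (tier_size_gb, rate_per_gb); size None = unbounded."""
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        take = remaining if size is None else min(remaining, size)
        cost += take * rate
        remaining -= take
        if remaining <= 0:
            break
    return cost

# Placeholder price list: first 10,000 GB at $0.09/GB, the rest at $0.085/GB.
tiers = [(10_000, 0.09), (None, 0.085)]
print(round(tiered_egress_cost(12_000, tiers), 2))  # 1070.0
```

Running this kind of estimate per application flow, before wiring two clouds together over the VPN, is what "careful planning" amounts to in practice.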
More of the TechTarget article from Tom Nolle
Cisco is generally credited with driving the concept of the Internet of Things (IoT), even though it was Carnegie Mellon back in 1982 that first conceptualized the idea. I’m still at Cisco’s big analyst event this week and was fascinated by a survey shared on stage. Apparently, 74 percent of the survey’s respondents indicated that their IoT efforts have gone badly, either failing to finish or finishing outside expectations. These same folks also report that about half their time is spent troubleshooting problems, which Cisco attributes to complexity. Given that there really is nothing as complex as a typical IoT effort, I see the two stats as related and suggest that IoT efforts are poorly planned, which is why they aren’t completing as expected and are likely adding to the complexity problems overwhelming IT organizations.
Now, Cisco is positioning its Network Intuitive efforts at this problem and certainly massive automation can reduce the amount of work, particularly with regard to often repetitive troubleshooting efforts. However, with the IoT in particular, really understanding the problem you are trying to solve and simplifying the effort at the front end would likely have an even bigger initial positive impact.
More of the IT Business Edge article from Rob Enderle