I think we can all agree that corporate functions tend to be a locus of frustration for pretty much all employees, except, of course, those in the function that is the object of the frustration. If you have ever thought to yourself “Doesn’t legal understand that we are going to lose this deal if they don’t sign off soon?” or “Why is HR’s answer always ‘Our rules don’t allow that’?”, you are not alone.
Recent McKinsey research showed that senior executives are broadly dissatisfied with their corporate functions, reporting an average satisfaction level of only 30%. McKinsey’s recommendations are all sensible, such as: “Create incentives for functional leaders to contain costs, instead of allocating costs that business units can’t change.” This issue has long been a bugbear for me: despite chronically low satisfaction and plenty of intelligent prescriptions like this, the problem by all accounts seems to be getting worse, not better.
More of the Harvard Business Review article from Roger L. Martin
Agree or disagree?
Sometimes as a business continuity manager you have a feeling that a certain decision is the wrong one, despite qualitative and quantitative evidence pointing to the contrary. Dominic Irvine explains how research is starting to support the reliability of trusting your gut feeling…
Qualitative and quantitative evidence is sometimes used as a weapon to force decisions through when not everyone involved is convinced. In the face of charts, spreadsheets and PowerPoint decks, gut feel seems like a poor response; yet what we are learning from research into exertion and fatigue is that it is one of the most useful tools in our armoury.
After the First World War, much work went into finding a way to measure fatigue, but it was deemed so subjective a concept that no meaningful, objective measure could be developed. The complex interaction between the emotional, physical and mental aspects of fatigue could not be untangled in a way that could be reliably and accurately counted. And yet we all know the feeling of being fatigued, and just how tired we are.
More of the Continuity Central post
Applications that were once simple to manage are now rolled out across thousands of physical and virtual machines.
These sprawling applications include multiple components, with the potential points of integration spread across the enterprise and out into the wider cloud.
So, what are the key challenges CIOs will face as they overhaul their IT departments in readiness for the next stage of enterprise computing? Here are some key lessons for CIOs.
1. Build a platform for business change
Successful companies in the digital age are characterised by their ability to absorb technology into everyday processes and to erase the division between what might previously have been classed as IT professionals and business professionals.
More of the ZDNet article from Mark Samuels
Does devops lead to agile, or does agile lead to devops? Or perhaps they move in tandem as the enterprise gropes its way through digital transition. And if that’s the case, is optimized, automated infrastructure the cause or the effect of this new IT model?
The answers to these questions could be crucial for the enterprise over the next few years because they speak directly to how technology decisions will be made. For instance, if the right infrastructure is required for devops, then what technologies are needed to deliver the appropriate outcomes? But if devops evolves naturally, then how does the enterprise foster an integrated IT environment rather than simply another collection of disjointed point solutions?
According to a recent survey by BMC Software, the top three priorities for IT investment over the next two years are containers, workload automation/scheduling, and devops.
More of the IT Business Edge post from Arthur Cole
Determining the ROI for any cybersecurity investment, from staff training to AI-enabled authentication managers, can best be described as an enigma shrouded in mystery. The digital threat landscape changes constantly, and it’s very difficult to know the probability of any given attack succeeding — or how big the potential losses might be. Even the known costs, such as penalties for data breaches in highly regulated industries like health care, are a small piece of the ROI calculation. In the absence of good data, decision makers must use something less than perfect to weigh the options: their judgment.
But insights from behavioral economics and psychology show that human judgment is often biased in predictably problematic ways. In the case of cybersecurity, some decision makers use the wrong mental models to help them determine how much investment is necessary and where to invest. For example, they may think about cyber defense as a fortification process — if you build strong firewalls, with well-manned turrets, you’ll be able to see the attacker from a mile away.
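One common way to put numbers on that judgment is the annualized loss expectancy (ALE) model from classical risk analysis: expected yearly loss equals the annual rate of occurrence times the single-loss amount. A minimal sketch, with entirely made-up figures for a hypothetical phishing scenario (the article itself prescribes no such model):

```python
# Annualized Loss Expectancy (ALE) sketch with illustrative, made-up numbers.
# ALE = ARO (annual rate of occurrence) x SLE (single loss expectancy).

def ale(annual_rate: float, single_loss: float) -> float:
    """Expected yearly loss from one threat scenario, in dollars."""
    return annual_rate * single_loss

# Hypothetical scenario: phishing-driven breach costing $500k per incident.
baseline = ale(annual_rate=0.30, single_loss=500_000)       # no extra controls
with_training = ale(annual_rate=0.12, single_loss=500_000)  # training cuts the rate

control_cost = 40_000  # hypothetical annual cost of the training program
net_benefit = (baseline - with_training) - control_cost

print(f"Baseline ALE:      ${baseline:,.0f}")
print(f"ALE with training: ${with_training:,.0f}")
print(f"Net benefit:       ${net_benefit:,.0f}")
```

The point of the article stands even with this model in hand: the rate and loss inputs are exactly the quantities that are hardest to estimate, so biased judgment sneaks in through the numbers themselves.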
More of the Harvard Business Review post from Alex Blau
The concepts of recovery point objectives and recovery time objectives are becoming increasingly obsolete. Today’s highly connected world has forced most organizations to ensure IT resiliency and make their resources continuously available. More importantly, the cost of downtime continues to increase and has become unacceptable and even unaffordable for many organizations.
A 2016 study by the Ponemon Institute estimated the average total cost of a data center outage at about $740,357, a little higher than a similar 2015 estimate from cloud backup and disaster recovery-as-a-service provider Infrascale. The same study calculated that data center outages cost businesses an average of $8,851 per minute.
For large companies, the losses can be staggering. One 2016 outage cost Delta Air Lines $150 million.
The study went on to state that it takes, on average, 18.5 hours for a business to recover from a disaster. Given the per-minute price of an outage, the cost of recovering from a disaster can be staggering. So it is hardly surprising that the IT industry is moving away from legacy backup and recovery planning in favor of disaster recovery and business continuity planning.
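Multiplying those quoted figures out shows just how fast the bill grows; a back-of-the-envelope sketch using the study's numbers:

```python
# Rough downtime-cost arithmetic using the figures quoted above.
COST_PER_MINUTE = 8_851    # Ponemon 2016 average cost of downtime, USD/minute
AVG_RECOVERY_HOURS = 18.5  # average time to recover from a disaster

def downtime_cost(hours: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Total cost of an outage of the given duration, in dollars."""
    return hours * 60 * cost_per_minute

print(f"One hour of downtime:      ${downtime_cost(1):,.0f}")        # $531,060
print(f"Average recovery (18.5 h): ${downtime_cost(AVG_RECOVERY_HOURS):,.0f}")
```

At that rate, an average-length recovery runs to roughly $9.8 million, which makes the economics of continuous availability easy to understand.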
More of the TechTarget article from Brien Posey
Many critical industries such as nuclear energy, commercial and military airlines—even drivers’ education—invest significant time and resources in developing processes. The data center industry … not so much.
That can be problematic, considering that two-thirds of data center outages are related to processes, not infrastructure systems, says David Boston, director of facility operations solutions for TiePoint-bkm Engineering.
“Most are quite aware that processes cause most of the downtime, but few have taken the initiative to comprehensively address them. This is somewhat unique to our industry.”
Boston is scheduled to speak about strategies to prevent data center outages at the Data Center World local conference at the Art Institute of Chicago on July 12. More about the event here.
More of the Data Center Knowledge article from Karen Riccio
You, dear readers, continually tell us in surveys how hard it is to get the investment needed to help you do your jobs effectively. Regardless of the topic – core infrastructure, middleware, management tools, etc – it’s common to hear stories of execs not “getting it”, while expecting IT to muddle through as more pressure is piled onto already stretched teams.
But it has been a while since we have run a survey specifically focused on the pain and practicality of IT-related procurement, so let’s put this right.
Our latest study includes questions like:
“How often do procurement or finance get involved, then skew decisions towards cost, regardless of value?”
There’s then the old chestnut:
“How often are you forced to buy from an incumbent supplier, regardless of whether it’s the right choice?”
Of course the level of pain or pleasure often depends on the environment you work in and how well business and IT people communicate and understand each other, so we touch on that too, including the problem of senior management often having inflated or unrealistic expectations.
More of The Register article and survey link from Dale Vile
Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous “agility” to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to “cover your ass”, and have discovered it’s not quite so difficult as we might think.
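The “cover your ass” benefit comes largely from having the desired state written down, version-controlled, and diffable against reality. A minimal sketch of that idea follows; the hosts, packages, and `plan` function are all hypothetical, and real tools such as Terraform or Ansible do this with far more rigor:

```python
# Minimal illustration of the infrastructure-as-code idea: desired state lives
# in version-controlled data, and an idempotent "plan" step computes the diff
# between reality and that state. The diff itself becomes your audit trail.

DESIRED_STATE = {
    "web-01": {"packages": {"nginx"}, "services": {"nginx"}},
    "db-01":  {"packages": {"postgresql"}, "services": {"postgresql"}},
}

def plan(current: dict, desired: dict) -> list[str]:
    """Compute the actions needed to move `current` toward `desired`."""
    actions = []
    for host, want in desired.items():
        have = current.get(host, {"packages": set(), "services": set()})
        for pkg in sorted(want["packages"] - have["packages"]):
            actions.append(f"{host}: install {pkg}")
        for svc in sorted(want["services"] - have["services"]):
            actions.append(f"{host}: start {svc}")
    return actions

# Against an empty estate, plan() lists everything to be done; against a
# compliant estate it yields no actions (idempotence) - and an empty plan
# is exactly what you want to show an auditor.
print(plan({}, DESIRED_STATE))
print(plan(DESIRED_STATE, DESIRED_STATE))  # prints []
```

Nothing about the sketch is clever; the value is organizational, not algorithmic: the spec in version control is evidence of what your infrastructure is supposed to be.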
Recently, a client of mine went through an ownership change. The new owners, appalled at how much was being spent on IT, decided that the best path forward was an external audit. The client in question, of course, is an SMB that had been massively under-spending on IT for 15 years, and there was no way they were ready for – or would pass – an audit.
Trying to cram eight months’ worth of migrations, consolidations, R&D, application replacement and so forth into four frantic, sleepless nights of panic ended how you might imagine it ending. The techies focused on making sure their asses were covered when the audit landed. Overall network performance slowed to a crawl and everyone went home angry.
More of The Register article from Trevor Pott
Recently, Continuity Central published ‘Revamping the business continuity profession’; an article in which Charlie Maclean-Bristol looked at challenges faced by business continuity professionals and offered his suggestions for revamping the discipline. Here, David Lindstedt and Mark Armour, developers of the Adaptive Business Continuity methodology, offer their response to the article:
David Lindstedt: Naturally, most folks starting to embrace Adaptive Business Continuity will agree that traditional business continuity methods are not working and it’s time for a change. I totally agree that ‘resilience’ will not be the ‘savior’ of business continuity. As Charlie correctly points out, resilience is an inter-discipline, not a discipline on its own. A business continuity practitioner could run it, but so could anyone from any of the inter-disciplines like ERM, EM, IT DR, etc. The chief concern with resilience will always be: what are the boundaries of what gets included (individual personal psychology? environmental sustainability? the entire content of an MBA program?) and how do you measure its effectiveness?
More of the Continuity Central article