Rogue IT is the foundation upon which innovation can be built. Rather than being restricted by traditional application and product development processes, non-IT teams can rapidly deploy solutions that match business requirements, accelerating cost savings and resource efficiencies.
You might as well embrace rogue IT, or shadow IT, which will continue to grow in importance, and its impact will be felt globally, according to Tim Kelleher, vice president of IT Security Services at CenturyLink. Rogue IT might just lead to innovation and competitive advantage, he says. Employees increasingly will bypass corporate IT by subscribing to new collaboration, analytics or other cloud services to get work done; others will build homegrown applications via the cloud and other development platforms. This shift of power away from corporate IT is enough to strike fear in any CIO, because security risks and bandwidth restrictions can accompany each new project. On the other hand, “while the natural tendency is to limit unauthorized usage,” says Kelleher, “rogue IT can prove very useful to organizations today, driving new levels of innovation and productivity.”
More of the CIO Insight slide show
We asked chief information officers how they expect their role to change in 2016 and beyond. They said the “seat at the table” discussion is over, and that the CIO exerts greater influence inside the C-suite as technology permeates every line of business.
Many CIOs said they now shape corporate strategy, not just support it. While they still have a mandate to improve operating performance, keep costs down and drive productivity using technology, they also guide product development and user experience design.
“Regardless of industry, CIOs will have more responsibility directly to the customer,” said Bill Bradley, CIO at CenturyLink Inc.
While in the past viewed as mostly a technical position, “the CIO…is now considered very valuable in the ability to bridge the gap between IT and internal and external customer needs,” said Erika Lance, CIO at Nationwide Title Clearing Inc.
More of the Wall Street Journal article
Many of you have been asking when I would start publishing a data center newsletter again. The answer is now! The Expedient Data Center News Digest is a monthly email newsletter about colocation, cloud computing, disaster recovery and CIO strategy. Click here to subscribe and have a look.
Max Schrems has a lot to answer for. The Austrian is single-handedly responsible for bringing down a key transnational data agreement that has left cloud service providers scrabbling for legal counsel. This is either a good thing, if you’re a privacy activist concerned about intrusive US surveillance policies, or a confusing and worrying one, if you’re a provider or customer of cloud services.
Worried by the Edward Snowden revelations, Schrems complained to the Irish Data Protection Commissioner on the grounds that Facebook was collecting his data in Ireland and then moving it to the US for processing. The Irish DPC simply pointed to the Safe Harbour agreement and said that its hands were tied.
The case was bumped up to the Court of Justice of the European Union (CJEU), which on October 6 ruled that Safe Harbour was invalid. Its rationale was that the agreement allowed US national security requirements to override its privacy protections without establishing that those protections were adequate.
More of The Register article from Danny Bradbury
Few front-line technology workers give their companies high marks for adapting to new, transformative tech, according to a recent survey from Business Performance Innovation (BPI) and Dimension Data. The resulting report, “Bringing Dexterity to IT Complexity: What’s Helping or Hindering IT Tech Professionals,” indicates that most organizations haven’t even begun to transform IT—or are just getting started. A major sore spot: A lack of collaboration and/or alignment with the business side, as most tech staffers said business teams wait too long to bring IT into critical planning processes. This, combined with a lack of funding and other resources, results in tech departments spending too much time on legacy maintenance and far too little on essential advances that bring value to the business. “Instead of ushering their companies into a new age of highly agile innovation, IT workers are hindered by a growing list of maintenance tasks, staff cutbacks and aging infrastructure,” according to the report.
More of the Baseline Magazine article from Dennis McCafferty
Data infrastructure built on commodity hardware has a lot going for it: lower costs, higher flexibility, and the ability to rapidly scale to meet fluctuating workloads. But simply swapping out proprietary platforms for customizable software architectures is not the end of the story. A lot more must be considered before we even get close to the open, dynamic data environments that most organizations are striving for.
The leading example of commodity infrastructure is Facebook, which recently unveiled plans for yet another massive data center in Europe – this time in Clonee, Ireland. The facility will utilize the company’s Open Compute Project framework, which relies on advanced software architectures atop low-cost commodity hardware and is now available to the enterprise community at large in the form of a series of reference architectures that are free for the asking. The idea is that garden variety enterprises and cloud providers will build their own modular infrastructure to support the kinds of abstract, software-defined environments needed for Big Data, the Internet of Things and other emerging initiatives.
More of the IT Business Edge post from Arthur Cole
Andrew Stuart offers some IT-focused, experience-based business continuity tips:
1. Understand the threat landscape
Storms, ransomware and fires are only some of the many real threats for which all businesses should proactively prepare. Your IT department needs a full understanding of all of the threats likely to hit your building, communications room or servers in order to prepare for the worst. This can be done by assessing risks based on the location and accessibility of your data centres, as well as any malicious attacks that could occur. When planning to mitigate a disaster, treat every incident as unique: a local fire may affect one machine, whereas human error may lead to the deletion of entire servers.
2. Set goals for recovery
While some companies assume that duplicating their data protects them in the wake of a disaster, many learn the hard way that their backup stopped functioning during a disaster or that their data is inaccessible afterwards. The IT team needs to define recovery time objectives (RTO), meaning how long the business can afford to operate without access to its data before systems must be restored, and recovery point objectives (RPO), meaning the maximum acceptable age of the data restored from backup, or in other words how much recent data you can afford to lose. The IT team will also need to identify critical systems and prioritise recovery tasks.
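To make those objectives concrete, here is a minimal sketch of how per-system RTO/RPO targets could be recorded and checked against backup timestamps. The system names, hour values and the source of the backup timestamps are hypothetical illustrations, not recommendations from the article.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical recovery objectives per critical system (hours).
# The values are placeholders; real ones come from business-impact analysis.
OBJECTIVES = {
    "billing-db":  {"rto_hours": 4,  "rpo_hours": 1},
    "file-server": {"rto_hours": 24, "rpo_hours": 12},
}

def rpo_violations(last_backup_times):
    """Flag systems whose newest backup is older than the agreed RPO.

    last_backup_times maps system name -> datetime of its latest backup.
    """
    now = datetime.now(timezone.utc)
    violations = []
    for system, objective in OBJECTIVES.items():
        last_backup = last_backup_times.get(system)
        if last_backup is None:
            violations.append((system, "no backup recorded"))
            continue
        age = now - last_backup
        if age > timedelta(hours=objective["rpo_hours"]):
            violations.append((system, f"latest backup is {age} old"))
    return violations

if __name__ == "__main__":
    sample = {"billing-db": datetime.now(timezone.utc) - timedelta(hours=3)}
    for system, reason in rpo_violations(sample):
        print(f"RPO at risk for {system}: {reason}")
```

In practice the hour values would come from the kind of business-impact analysis described above, and the check would be pointed at whatever backup catalogue or monitoring system the organisation already runs.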
More of the Continuity Central article from Andrew Stuart
Does inertia matter more to you than delivering better services to your clients?
Something happened the other day that reminded me of a time when I was still new in this business. No, I didn’t have the 25+ years of experience I have now, but I still had enough under my belt to know what was going on and how to evaluate a department and its staff.
I had just started working as a banquet manager at another hotel and found that most of the waiters had been working there for around 7 years, some for up to 15. After a few days of observation, I made a list of the things I knew we could do better and planned the steps needed to make them happen. No big deal; I’d done this many times before.
On the following week’s schedule I listed a date for a mandatory meeting/training class and prepared the topics I would discuss. The meeting day arrived and we all sat around a series of round tables and enjoyed the coffee, soda and bottled water I had prepared for them. Hey, if I force you to come in for a meeting, the least I can do is have some beverages prepared for you…right?
More of the CustomerThink post from Steve DiGioia
You can’t seem to have a conversation about cloud technology and its impact on the business without the topic of Shadow IT coming up. The two concepts at times seem so tightly intertwined that one would think there is a certain inevitability, almost a causal linkage, between them. Shadow IT tends to be an emotional topic for many, dividing people into one of two camps. One camp sees Shadow IT as a great evil that puts companies, their data and their systems at risk by implementing solutions without oversight or governance. The other camp sees Shadow IT practitioners as great innovators who help the company succeed by letting the business bypass a slow and stagnant IT organization. Does going to the cloud inherently mean there will be Shadow IT? And if it does, is that necessarily a bad thing or a good one?
More of the CloudExpo blog post by Ed Featherston
Have you ever moved something in your kitchen because it fits better, only to find that you now spend more time fetching it than when it was close at hand? It’s a simple analogy, but it relates to some of the confusion surrounding SDN and microservices implementations.
As new methodologies and technologies come into an organization, we assess what they are meant to achieve. You work out a list of requirements you want to see, and from that wish list you check off which ones the product of choice satisfies. As we look toward microservices architectures, which I fully agree we should, we have one checklist for the applications. As we look at the challenges that SDN solves, which I also agree we should, we have another checklist.
Let’s first approach this by dealing with a couple of myths about SDN and microservices architectures:
More of the About Virtualization post