Think back to the last time you encountered a difficult challenge at work: one of those problems that requires long, hard thought, and perhaps some focused drudgery, to break through. What did you do?
If you work in the knowledge economy, chances are you interrupted yourself several times along the way: checked your email, went on Facebook, got up and chatted with a coworker.
On average, employees who do the majority of their work on computers are distracted once every 10 and a half minutes. Twenty-three percent of those interruptions come from email, but the biggest source of interruptions by far is…ourselves. Voluntarily switching from one task to the next without finishing the original task first accounts for a full 44 percent of work interruptions.
More of the Fast Company article from Becky Kane
Ask line-of-business (LOB) executives what IT people really do, and you might get a blank stare. Sure, they troubleshoot email and website problems, but herein lies the rub: tech workers only seem to jump into action when their technology breaks. We’ve always known about this perception problem, but it’s bigger than most people think.
Here’s an interesting stat from a recent McKinsey & Company survey: 51 percent of IT respondents reported undergoing a major transformation in the past two years, yet just 36 percent of their business peers reported the same. This means a good chunk of business people probably had no idea the tech people were stressed with a big project, nor did they grasp the project’s business benefits.
More of the Data Center Knowledge article from Tom Kaneshige
Continuity Central recently conducted a survey to seek the views of business continuity professionals on whether it is feasible to omit the business impact analysis (BIA) from the BC process. Mel Gosling, FBCI, explains why he believes this is the wrong question to ask…
The Big Picture
It’s always useful to step back and see the big picture, and with the question of ‘To BIA or not to BIA?’ the bigger picture is that the BIA is an integral part of the business continuity management (BCM) process specified in ISO 22301 and promoted by business continuity professional associations such as the BCI in its Good Practice Guidelines. Rather than looking closely at the detailed question, we should look at the bigger picture and ask ourselves whether or not we should use this specific BCM process at all.
More of the Continuity Central article
What is your DDoS strategy?
What would you do if your company were hit with a DDoS attack that lasted 11 days? Perhaps a large organization could withstand that kind of outage, but it could be devastating to an SMB, especially one that relies on web traffic for business transactions.
That 11-day attack – 277 hours, to be exact – did happen in the second quarter of 2017. Kaspersky Lab said it was the longest attack of the year, and 131 percent longer than the longest attack in the first quarter. And unfortunately, the company’s latest DDoS intelligence report said we should expect to see these long attacks more frequently, as they are coming back into fashion. This is not the news businesses want to hear.
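As a quick back-of-the-envelope check on those figures (the durations come from the Kaspersky report; the arithmetic below is just illustrative):

```python
# Sanity-check the reported DDoS attack durations.
# The 277-hour figure and the "131 percent longer" comparison are from the
# article; everything else here is simple arithmetic.

q2_longest_hours = 277

# 277 hours expressed in days (roughly the "11-day" attack).
q2_longest_days = q2_longest_hours / 24
print(f"Q2 longest attack: {q2_longest_days:.1f} days")  # ~11.5 days

# "131 percent longer" than Q1's longest implies Q1's longest attack was
# 277 / 2.31 hours, i.e. about 120 hours (roughly five days).
q1_longest_hours = q2_longest_hours / 2.31
print(f"Implied Q1 longest attack: {q1_longest_hours:.0f} hours")
```

In other words, even the previous quarter’s record attack already ran for about five days; the Q2 attack more than doubled it.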
More of the IT Business Edge post from Sue Marquette Poremba
In part one of this series, we explored a pair of competing requests many modern IT leaders receive from their stakeholders:
1. Produce new, innovative, strategic technology-based capabilities.
2. Do so with reduced resources.
We investigated one “buzzwordy” solution—two-speed IT—and how implementing this solution often creates more problems than it solves. We proposed an alternate five-step framework for handling these requests. In steps one and two of this framework, we revealed how the above two competing requests are old problems, best solved with an old, proven solution—and not buzzwords.
In part two of this series, we will walk you through the remaining steps in our practical framework and lead you down a path toward implementing this proven solution: the technology lifecycle.
Step 3: Think technology lifecycle, not “innovation” vs. “operations.”
To better understand why the good-on-paper “two-speed IT” approach often produces problems when implemented in the real world, look at the two speeds (or modes) into which Gartner shoehorns all technology systems and services:
Mode 1: Development projects related to core system maintenance, stability or efficiency. These require highly specialized programmers and traditional, slow-moving development cycles. There is little need for business involvement.
Mode 2: Development projects that help innovate or differentiate the business. These require a high degree of business involvement, fast turnaround and frequent updates. Mode 2 requires a rapid path (or IT fast lane) to transform business ideas into applications.
More of the CIO Insight post from Lee Reese
How can IT leaders juggle seemingly competing agendas: to meet the business’ demands for increased innovation, while cutting costs and slashing budgets?
With the ever-increasing interest in technology solutions, IT’s stakeholders are handing IT two competing demands:
1. Produce new, innovative, strategic technology-based capabilities.
2. Do so with reduced resources.
How can IT leaders step up to the plate and juggle these seemingly competing agendas: to meet the business’ demands for increased innovation, including new digital systems and services, all while cutting costs and slashing budgets?
One popular solution has emerged within IT thought leadership. Often called “two-speed IT,” this idea proposes that the IT organization does not attempt to resolve the tension between these two ideas. Instead, IT lumps all of its technology into one of two broad buckets: operational technology and innovative technology. Do this, and operations won’t slow down innovation, and expensive innovation investments won’t inflate operations’ budgets.
More of the CIO Insight post from Lee Reese
Enterprise networks and data access can be knocked out without warning, thanks to natural and man-made disasters. You can’t stop them all from happening, of course, but with a good disaster recovery plan you can be better prepared for the unexpected.
Hurricanes. Tornadoes. Earthquakes. Fires. Floods. Terrorist attacks. Cyberattacks. You know any of these could happen to your business at any time. And you’ve probably got a disaster recovery (DR) plan in place to protect your enterprise’s data, employees and business.
But how thorough is your DR plan? When was it last updated and tested? Have you taken into account new technologies and services that can make it easier to recover from disaster? The following are 7 things your IT disaster recovery plan should include.
1. An analysis of all potential threats and possible reactions to them
Your DR plan should take into account the complete spectrum of “potential interrupters” to your business, advises Phil Goodwin, research director of data protection, availability and recovery for research firm IDC. (IDC is part of IDG, which publishes CSO.)
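That threat analysis step can be sketched as a simple, prioritized threat register. Everything in this snippet (the threat names, the 1-5 scoring scales, and the likelihood-times-impact formula) is a hypothetical illustration, not something prescribed by the article:

```python
# Illustrative sketch of a minimal DR "threat register": enumerate the
# potential interrupters, score each one, and rank them so the plan
# addresses the highest-risk threats first. Names and scores are made up.
threats = [
    {"threat": "hurricane",    "likelihood": 2, "impact": 5},
    {"threat": "cyberattack",  "likelihood": 4, "impact": 4},
    {"threat": "power outage", "likelihood": 3, "impact": 3},
]

def risk_score(t):
    # Simple likelihood x impact scoring, each on a 1-5 scale.
    return t["likelihood"] * t["impact"]

# Prioritize planning effort by descending risk.
prioritized = sorted(threats, key=risk_score, reverse=True)
for t in prioritized:
    print(f'{t["threat"]}: risk {risk_score(t)}')
```

A real DR plan would attach a documented response, recovery time objective (RTO), and recovery point objective (RPO) to each entry, but the ranking discipline is the same.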
More of the IT World post from James A Martin
Since the 2016 U.S. presidential election, concerns over the circulation of “fake” news and other unverified digital content have intensified. As people have grown to rely on social media as a news source, there has been considerable debate about its role in aiding the spread of misinformation. Much recent attention has centered on putting fact-checking filters in place, as false claims often persist in the public consciousness even after they are corrected.
We set out to test how the context in which we process information affects our willingness to verify ambiguous claims. Results across eight experiments reveal that people fact-check less often when they evaluate statements in a collective setting (e.g., in a group or on social media) than when they do so alone. Simply perceiving that others are present appeared to reduce participants’ vigilance when processing information, resulting in lower levels of fact-checking.
Our experiments surveyed over 2,200 U.S. adults via Amazon Mechanical Turk. The general paradigm went as follows: As part of a study about “modes of communication on the internet,” respondents logged onto a simulated website and evaluated a series of statements.
More of the Harvard Business Review article from Rachel Meng, Youjung Jun, and Gita V. Johar
A health records software company will have to pay $155m to the US government to settle accusations that it lied about the data protection its products offered.
The Department of Justice said that eClinicalWorks (eCW), a Massachusetts-based software company specializing in electronic health records (EHR) management, lied to government regulators when applying to be certified for use by the US Department of Health and Human Services (HHS).
According to the DoJ, eCW and its executives lied to the HHS about the data protections its products use. At one point, it is alleged, the company specifically configured the software to beat testing tools and trick the HHS into believing the products were far more robust and secure than they actually were.
More of The Register article from Shaun Nichols
Distributed data center architectures increase IT resiliency compared to traditional single-site models, with networking, data integrity and other factors all playing critical roles.
Architectures that span distributed data centers can reduce the risk of outages, but enterprises still must take necessary steps to ensure IT resiliency.
Major data center outages continue to affect organizations and users worldwide, most recently and prominently at Verizon, Amazon Web Services, Delta and United Airlines. Whether it’s an airline or cloud provider that suffers a technical breakdown, its bottom line and reputation can suffer.
More of the SearchDataCenter article from Tim Culverhouse