17
Feb 17

Washington Post – Weather Service suffered ‘catastrophic’ outage; website stopped sending forecasts, warnings

On a day when a blizzard was pasting Maine and Northern California faced a dire flooding threat, several of the National Weather Service’s primary systems for sending out alerts to the public failed for nearly three hours.

Between 1:08 p.m. and 3:44 p.m. Eastern time Monday, products from the Weather Service stopped disseminating over the Internet, including forecasts, warnings, radar and satellite imagery, and current conditions.

The Weather Service’s public-facing website, Weather.gov, stopped publishing updates.

In an email to staff on Tuesday, David Michaud, the director of the Weather Service’s Office of Central Processing, said a power outage had triggered the failure and characterized the impacts as “significant.” The cause of the outage was under review, a Weather Service spokesperson said.

“[I] want to ensure you that everyone involved is working hard to avoid these outages in the future and find ways to better communicate to employees across the agency in real time when outages occur,” Michaud’s email said.

More of the Washington Post article from Jason Samenow


13
Feb 17

TheWHIR – Why Does It Seem Like Airline Computers Are Crashing More?

Another week, another major airline is crippled by some kind of software glitch.

If you feel as if you’re hearing about these incidents more often, you are—but not necessarily because they’re happening more frequently.

Delta Air Lines Inc. suffered an IT outage that led to widespread delays and 280 flight cancellations on Jan. 29 and 30, a problem the carrier said was caused by an electrical malfunction. A week earlier, United Continental Holdings Inc. issued a 2 1/2-hour ground stop for all its domestic flights following troubles with a communication system pilots use to receive data.

These two shutdowns were the latest in what’s been a series of computer crack-ups over the past few years, including major system blackouts that hobbled Southwest Airlines Co. as well as Delta for several days last summer—affecting tens of thousands of passengers.

More of the WHIR post from Bloomberg


10
Feb 17

SearchCloudComputing – For enterprises, multicloud strategy remains a siloed approach

Although not mentioned in this article, enterprise cloud providers like Expedient are often key players in the multicloud mix. Enterprise clouds deliver VMware or Hyper-V environments that require little or no retraining for the infrastructure staff.

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but “multicloud” remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, gained a foothold inside enterprises, too. As a result, companies must balance management of the disparate environments with questions of how deep to go on a single platform, all while the notion of connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).
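
The siloing is visible even at the code level. As a rough sketch (the SDK calls are real Python client libraries, but the bucket and container names and the wrapper itself are illustrative assumptions, not anything from the article), here is what a “portable” object upload actually looks like once AWS, Azure and Google Cloud Platform are all in play:

```python
# Sketch only: a thin "multicloud" wrapper over object-storage uploads.
# Assumes the standard Python SDKs (boto3, azure-storage-blob, google-cloud-storage)
# and illustrative bucket/container names; credentials come from each provider's
# usual environment configuration.
import os

import boto3                                       # AWS SDK
from azure.storage.blob import BlobServiceClient   # Azure SDK
from google.cloud import storage as gcs            # Google Cloud SDK


def upload(provider: str, name: str, data: bytes) -> None:
    """Upload a small object to whichever cloud is requested."""
    if provider == "aws":
        boto3.client("s3").put_object(Bucket="example-bucket", Key=name, Body=data)
    elif provider == "azure":
        svc = BlobServiceClient.from_connection_string(
            os.environ["AZURE_STORAGE_CONNECTION_STRING"])
        svc.get_blob_client(container="example-container", blob=name).upload_blob(data)
    elif provider == "gcp":
        gcs.Client().bucket("example-bucket").blob(name).upload_from_string(data)
    else:
        raise ValueError(f"unknown provider: {provider}")
```

Each branch still carries provider-specific identity, configuration and failure modes, which is why multicloud in practice tends to mean operating parallel silos rather than moving workloads freely between clouds.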

More of the SearchCloudComputing article


03
Feb 17

Data Center Knowledge – This Server’s Uptime Puts Your SLA to Shame

An unusual and noteworthy retirement from the IT industry is scheduled to take place in April, Computerworld reports, when a fault-tolerant server from Stratus Technologies running continuously for 24 years in Dearborn, Michigan, is replaced in a system upgrade.

The server was set up in 1993 by Phil Hogan, an IT application architect for a steel product company now known as Great Lakes Works EGL.

Hogan’s server won a contest held by Stratus to identify its longest-running server in 2010, when Great Lakes Works was called Double Eagle Steel Coating Co. (DESCO). While various redundant hardware components have been replaced over the years, Hogan estimates close to 80 percent of the original system remains.
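
To put the headline in perspective, a quick back-of-the-envelope comparison (the availability tiers below are generic SLA examples, not figures from the article) shows how much downtime even aggressive SLAs budget for, versus roughly 24 years with none:

```python
# Rough arithmetic: downtime allowed per year under common availability SLAs,
# for comparison with ~24 years of continuous uptime. Tiers are generic examples.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    allowed = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} SLA allows ~{allowed:,.1f} minutes of downtime per year")

# Even a "five nines" SLA would tolerate roughly two hours of downtime over 24 years.
print(f"24 years at 99.999%: ~{(1 - 0.99999) * MINUTES_PER_YEAR * 24:,.0f} minutes allowed")
```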

More of the Data Center Knowledge article from Chris Burt


31
Jan 17

The Register: Suffered a breach? Expect to lose cash, opportunities, and customers – report

More than a third of organisations that experienced a breach last year reported substantial customer, opportunity and revenue loss.

The finding is one of the key takeaways from the latest edition of Cisco’s annual cybersecurity report, which also suggests that defenders are struggling to improve defences against a growing range of threats.

The vast majority (90 per cent) of breached organisations are improving threat defence technologies and processes following attacks by separating IT and security functions (38 per cent), increasing security awareness training for employees (38 per cent), and implementing risk mitigation techniques (37 per cent). The report surveyed nearly 3,000 chief security officers (CSOs) and security operations leaders from 13 countries. CSOs cite budget constraints, poor compatibility of systems, and a lack of trained talent as the biggest barriers to advancing their security policies.

More than half of organisations faced public scrutiny after a security breach. Operations and finance systems were the most affected, followed by brand reputation and customer retention. For organisations that experienced an attack, the effect can be substantial: 22 per cent of breached organisations lost customers and 29 per cent lost revenue, with 38 per cent of that group losing more than 20 per cent of revenue. A third (33 per cent) of breached organisations lost business opportunities.
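
Reading those figures together (and assuming “that group” refers to the 29 per cent that lost revenue, as the sentence above indicates), a quick back-of-the-envelope calculation per 1,000 breached organisations looks like this:

```python
# Back-of-the-envelope compounding of the Cisco survey figures quoted above,
# scaled to a hypothetical 1,000 breached organisations.
breached = 1_000

lost_customers     = breached * 0.22        # 22% lost customers
lost_revenue       = breached * 0.29        # 29% lost revenue
lost_over_20pc_rev = lost_revenue * 0.38    # 38% of *that group* lost >20% of revenue
lost_opportunities = breached * 0.33        # 33% lost business opportunities

print(f"Lost customers:              {lost_customers:.0f}")
print(f"Lost revenue:                {lost_revenue:.0f}")
print(f"Lost >20% of revenue:        {lost_over_20pc_rev:.0f} "
      f"(~{lost_over_20pc_rev / breached:.0%} of all breached orgs)")
print(f"Lost business opportunities: {lost_opportunities:.0f}")
```

In other words, roughly one breached organisation in nine lost more than a fifth of its revenue.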

More of The Register article from John Leyden


30
Dec 16

GigaOM – The enterprise CIO is moving to a consumption-first paradigm

Take yourself back a couple of decades and the IT industry looked very different than it does today. Back then the number of solution choices was relatively limited and only available to those with the finances to afford it. Many of the core services had to be built from the ground up. Why? There simply wasn’t the volume or maturity of the IT marketplace for core services. Today, that picture is very different!

For example, consider email. Back in 1995, Microsoft Exchange was just a fledgling product that was less than two years old. The dominant email solutions were cc:Mail (acquired by Lotus in 1991), Lotus Notes (acquired by IBM in 1995) along with a myriad of mainframe, mini and UNIX-based mail servers.

Every enterprise had to set up and manage its own email environment. Solutions like Google Apps and Microsoft Office 365 simply did not exist. There was no real alternative…except for outsourcing.

More of the GigaOM post from Tim Crawford


15
Dec 16

ComputerWeekly – IT Priorities 2017: What will IT decision-makers be focusing on?

Each year, TechTarget looks at how CIOs and senior IT decision-makers will be investing in IT in the 12 months ahead

Budgets for staff and on-premises servers are falling as CIOs focus on cloud computing, according to TechTarget’s IT Priorities 2017 survey.

Most of the 353 people surveyed said their IT budgets would remain the same. Only 17% said their budget would increase by more than 10%, 16% said their budget would increase by 5% to 10%, and 9% said their budget would decrease.

The survey found that most of the budget increases would be invested in cloud services (43%), software (43%) and disaster recovery (30%).

More of the ComputerWeekly post from Cliff Saran


09
Dec 16

Continuity Central – C-Level and IT pros disagree on organizations’ ability to recover from a disaster: Evolve IP survey

When it comes to assessing an organization’s ability to recover from a disaster, a significant disconnect exists between C-level executives and IT professionals. While nearly 7 in 10 CEOs, CFOs or COOs feel their organization is very prepared to recover from a disaster, fewer than half of IT pros (44.5 percent) are as confident, a technology survey conducted by Evolve IP reports. The survey of more than 500 executives and IT professionals uncovered factors, including compliance requirements and use of hosted solutions, that contribute to an organization’s overall disaster recovery confidence.

Disaster recovery compliance was a clear driver of confidence in the ability to recover IT and related assets in the event of an incident. In fact, 67 percent of respondents in banking, 58 percent of respondents in the government sector and 55 percent of respondents at technology companies feel very prepared; of these, disaster recovery compliance was noted as a requirement by 97 percent, 73.5 percent and 71 percent, respectively.

More of the Continuity Central post


07
Dec 16

Baseline – Why IT Pros Feel Unprepared for Disasters

While most C-level executives feel their organization is “very prepared” for a potential systems-crashing disaster, IT professionals sharply disagree, according to a recent survey from Evolve IP. The “2016 Evolve IP Disaster Recovery and Business Continuity Survey” report indicates that a significant number of companies have suffered from a major incident that required disaster recovery (DR) over the past year—sometimes resulting in six-figure losses. Many tech employees indicate that a lack of DR budgeting leaves them unprepared for disruptions caused by hardware failures, server issues, power outages, environmental events, human error and targeted cyber-attacks. And a great many organizations still rely on old-school recovery methods such as backup tapes, instead of newer cloud-based solutions.

There is, however, notable interest in Disaster Recovery as a Service (DRaaS), despite the fact that only about half of C-level executives have heard of the term. “The lack of DR education at the executive level—and the likely related lack of budget—poses a real risk to today’s businesses,” according to the report. “These factors are further exacerbated by a dramatic increase in targeted attacks, continued reliance on aging tape backups, as well as internal hardware that remains highly susceptible to failure.”

More of the Baseline slideshow from Dennis McCafferty


02
Dec 16

Data Center Knowledge – The Mission Critical Cloud: Designing an Enterprise Cloud

Today, many organizations are looking at cloud through a new lens. Specifically, organizations are looking to cloud to enable a service-driven architecture capable of keeping up with enterprise demands. With that in mind, we’re seeing businesses leverage more cloud services to help them stay agile and competitive. However, the challenge revolves around uptime and resiliency, and it is compounded by often complex enterprise environments.

When working with cloud and data center providers, it’s critical to see just how costly an outage could be. Consider this – only 27% of companies received a passing grade for disaster readiness, according to a 2014 survey by the Disaster Recovery Preparedness Council. At the same time, increased dependency on data center and cloud providers means that outages and downtime are growing costlier over time. Ponemon Institute and Emerson Network Power have just released the results of the latest Cost of Data Center Outages study. Like the 2010 and 2013 editions before it, this third study continues to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (or a 38 percent net change).
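
To make that figure concrete, here is a minimal cost sketch; the per-minute cost and the 90-minute duration are illustrative assumptions, and only the $740,357 average comes from the study quoted above:

```python
# Illustrative outage-cost estimate. The $/minute figure and the duration are
# assumptions for the sketch; only the $740,357 average comes from the text above.
PONEMON_AVG_OUTAGE_COST = 740_357   # average cost per outage reported in the study

def estimated_outage_cost(duration_minutes: float, cost_per_minute: float) -> float:
    """Simple linear estimate: duration x per-minute cost of downtime."""
    return duration_minutes * cost_per_minute

# Example: a hypothetical 90-minute outage at an assumed $8,000 per minute.
cost = estimated_outage_cost(90, 8_000)
print(f"Estimated cost: ${cost:,.0f} "
      f"({cost / PONEMON_AVG_OUTAGE_COST:.0%} of the reported average outage cost)")
```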

More of the Data Center Knowledge post from Bill Kleyman