17
Feb 17

Washington Post – Weather Service suffered ‘catastrophic’ outage; website stopped sending forecasts, warnings

On a day when a blizzard was pasting Maine and Northern California faced a dire flooding threat, several of the National Weather Service’s primary systems for sending out alerts to the public failed for nearly three hours.

Between 1:08 p.m. and 3:44 p.m. Eastern time Monday, products from the Weather Service stopped disseminating over the Internet, including forecasts, warnings, radar and satellite imagery, and current conditions.

Updates to the Weather Service’s public-facing website, Weather.gov, ceased publishing.

In an email to staff on Tuesday, David Michaud, the director of the Weather Service’s Office of Central Processing, said a power failure had triggered the outage and characterized the impacts as “significant”. The cause of the outage was under review, a Weather Service spokesperson said.

“[I] want to ensure you that everyone involved is working hard to avoid these outages in the future and find ways to better communicate to employees across the agency in real time when outages occur,” Michaud’s email said.

More of the Washington Post article from Jason Samenow


13
Feb 17

TheWHIR – Why Does It Seem Like Airline Computers Are Crashing More?

Another week, another major airline is crippled by some kind of software glitch.

If you feel as if you’re hearing about these incidents more often, you are—but not necessarily because they’re happening more frequently.

Delta Air Lines Inc. suffered an IT outage that led to widespread delays and 280 flight cancellations on Jan. 29 and 30, a problem the carrier said was caused by an electrical malfunction. A week earlier, United Continental Holdings Inc. issued a 2 1/2-hour ground stop for all its domestic flights following troubles with a communication system pilots use to receive data.

These two shutdowns were the latest in what’s been a series of computer crack-ups over the past few years, including major system blackouts that hobbled Southwest Airlines Co. as well as Delta for several days last summer—affecting tens of thousands of passengers.

More of the WHIR post from Bloomberg


10
Feb 17

SearchCloudComputing – For enterprises, multicloud strategy remains a siloed approach

Although not mentioned in this article, enterprise cloud providers like Expedient are often key players in the multicloud mix. Enterprise clouds deliver VMware or Hyper-V environments that require little or no retraining for the infrastructure staff.

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but “multicloud” remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, have gained a foothold inside enterprises, too. As a result, companies must balance management of the disparate environments with questions of how deep to go on a single platform, all while connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).
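
What that partitioning looks like in practice: each provider is driven through its own tooling, credentials and data formats, and any cross-cloud view has to be stitched together by hand. A minimal sketch, assuming the AWS and Azure CLIs are installed and already authenticated (the example is illustrative, not drawn from the article):

```python
import json
import subprocess

def list_aws_instances():
    # AWS silo: its own CLI, credential chain and output shape.
    out = subprocess.run(
        ["aws", "ec2", "describe-instances", "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    reservations = json.loads(out.stdout).get("Reservations", [])
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

def list_azure_vms():
    # Azure silo: separate login, separate identity model, different JSON layout.
    out = subprocess.run(
        ["az", "vm", "list", "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    return [vm["name"] for vm in json.loads(out.stdout)]

if __name__ == "__main__":
    # The only "multicloud" part is this report; nothing below the glue is portable.
    print("AWS instances:", list_aws_instances())
    print("Azure VMs:", list_azure_vms())
```

The point is not the inventory itself but that the two silos share no abstraction: portability across clouds is something the enterprise has to build, which is why it remains more theoretical than practical.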

More of the SearchCloudComputing article


31
Jan 17

The Register – Suffered a breach? Expect to lose cash, opportunities, and customers – report

More than a third of organisations that experienced a breach last year reported substantial customer, opportunity and revenue loss.

The finding is one of the key takeaways from the latest edition of Cisco’s annual cybersecurity report, which also suggests that defenders are struggling to improve defences against a growing range of threats.

The vast majority (90 per cent) of breached organisations are improving threat defence technologies and processes following attacks by separating IT and security functions (38 per cent), increasing security awareness training for employees (38 per cent), and implementing risk mitigation techniques (37 per cent). The report surveyed nearly 3,000 chief security officers (CSOs) and security operations leaders from 13 countries. CSOs cite budget constraints, poor compatibility of systems, and a lack of trained talent as the biggest barriers to advancing their security policies.

More than half of organisations faced public scrutiny after a security breach. Operations and finance systems were the most affected, followed by brand reputation and customer retention. For organisations that experienced an attack, the effect can be substantial: 22 per cent of breached organisations lost customers and 29 per cent lost revenue, with 38 per cent of that group losing more than 20 per cent of revenue. A third (33 per cent) of breached organisations lost business opportunities.

More of The Register article from John Leyden


16
Jan 17

HBR – How to Prioritize Your Company’s Projects

From the article: “The problem I see more often is that leaders don’t make decisions at all. They don’t clearly signal their intent about what matters. In short, they don’t prioritize.” Is your IT staff clear on priorities?

Every organization needs what I call a “hierarchy of purpose.” Without one, it is almost impossible to prioritize effectively.

When I first joined BNP Paribas Fortis, for example, two younger and more dynamic banks had just overtaken us. Although we had been a market leader for many years, our new products had been launched several months later than the competition — in fact, our time to market had doubled over the previous three years. Behind that problem was a deeper one: We had more than 100 large projects (each worth over 500,000 euros) under way. No one had a clear view of the status of those investments, or even the anticipated benefits. The bank was using a project management tool, but the lack of discipline in keeping it up to date made it largely fruitless. Capacity, not strategy, was determining which projects launched and when. If people were available, the project was launched. If not, it stalled or was killed.

Prioritization at a strategic and operational level is often the difference between success and failure. But many organizations do it badly.

More of the Harvard Business Review article from Antonio Nieto-Rodriguez


12
Jan 17

ComputerWeekly – Disaster recovery testing: A vital part of the DR plan

IT has become critical to the operation of almost every company that offers goods and services to businesses and consumers.

We all depend on email to communicate, collaboration software (such as Microsoft Word and Excel) for our documents and data, plus a range of applications that manage internal operations and customer-facing platforms such as websites and mobile apps.

Disaster recovery – which describes the continuation of operations when a major IT problem hits – is a key business IT process that has to be implemented in every organisation.

First of all, let’s put in perspective the impact of not doing effective disaster recovery.

Estimates of the cost of application and IT outages vary widely, with some figures citing around $9,000 per minute.
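
To put that figure in perspective, here is a rough back-of-the-envelope calculation (a sketch only: the $9,000-per-minute rate is an industry estimate that varies widely by organisation), applied to an outage the length of the Weather Service incident described earlier in this digest:

```python
# Rough outage-cost estimate using the ~$9,000/minute figure cited above.
# The rate is an industry estimate, not a number specific to any organisation.
COST_PER_MINUTE = 9_000  # USD

def outage_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Estimated direct cost of an outage of the given length."""
    return minutes * cost_per_minute

# Example: the roughly 156-minute Weather Service outage
# (1:08 p.m. to 3:44 p.m. Eastern).
print(f"${outage_cost(156):,.0f}")  # -> $1,404,000
```

Even an outage measured in minutes rather than days lands in seven figures by this estimate, which is the case for testing disaster recovery before it is needed.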

More of the ComputerWeekly article from Chris Evans


11
Jan 17

IT Business Edge – Whistleblower Advises IT Pros on How to Handle Corporate Malfeasance

Michael G. Winston’s name will probably forever be linked to the Great Recession of the late 2000s, but in a good way: He’s the whistleblower who dared to take on the subprime mortgage lender Countrywide Financial Corp. So what better person to ask about blowing the whistle as an IT pro?

Now a leadership consultant and author of the book, “World-Class Performance,” Winston has become something of a folk hero in the recession’s aftermath, never shying away from speaking out on corporate malfeasance. In a recent interview, I presented a hypothetical scenario to him in which a newly hired network engineer learns that the IT organization is engaged in an effort, initiated by the CEO, to hack into the networks of the company’s competitors, and he’s expected to go along with it. What should the network engineer do?

More of the IT Business Edge article from Don Tennant


10
Jan 17

WSJ – How to calculate technical debt

To inform non-IT executives of the specific costs and risks posed by aging infrastructure and applications, CIOs are beginning to quantify their IT organizations’ technical debt.

As business leaders have grown more involved in IT investment decisions, many CIOs have found it increasingly difficult to obtain funding for infrastructure and application maintenance. Consequently, some CIOs are turning to the concept of technical debt to bolster their business cases for IT maintenance investments.

Technical debt refers to the accumulated costs and long-term consequences of using poor coding techniques, making quality/time trade-offs, delaying routine maintenance, and employing other suboptimal IT practices in the enterprise. Those kinds of quick fixes may lower costs in the short term or keep software development and implementation projects on schedule, but they can also cause serious problems down the road if left unaddressed. Specifically, they may lead to application outages, security vulnerabilities, and increased maintenance costs.

While the concept of technical debt was originally coined to address quick and dirty coding practices, it is now being applied to other IT disciplines including infrastructure, architecture, integration, and processes, according to Mike Habeck, a director with Deloitte Consulting LLP’s Technology Strategy & Architecture practice.
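
The article does not spell out a formula, but one common way to make technical debt concrete (the framing and figures below are illustrative assumptions, not Deloitte’s method) is to express it as a principal (the estimated cost to remediate known issues) plus interest (the extra maintenance cost incurred for each year the issues are left in place), and to report it as a ratio against the cost of the affected systems:

```python
# Hypothetical sketch of quantifying technical debt; the principal/interest framing
# and all figures are illustrative assumptions, not taken from the article.

def technical_debt(remediation_cost: float,
                   extra_annual_maintenance: float,
                   years_outstanding: float) -> float:
    """Principal (cost to fix now) plus accrued interest (extra cost of living with it)."""
    return remediation_cost + extra_annual_maintenance * years_outstanding

def debt_ratio(remediation_cost: float, replacement_cost: float) -> float:
    """Remediation cost relative to the cost of rebuilding the asset from scratch."""
    return remediation_cost / replacement_cost

# Example: $400k to remediate, $150k/year in extra maintenance carried for 3 years,
# measured against a $2M estimated rebuild cost.
print(technical_debt(400_000, 150_000, 3))        # 850000.0
print(f"{debt_ratio(400_000, 2_000_000):.0%}")    # 20%
```

Expressed this way, the debt becomes a single number non-IT executives can weigh against other investment decisions, which is the point of the exercise the article describes.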

More of the Wall Street Journal article from Deloitte


09
Jan 17

CIO.com – 5 lessons in reducing IT complexity

It’s an adage as old as time (or at least as old as the invention of the personal computer): Technology is destined to cycle constantly between complexity and simplicity.

Remember the hassle of attaching peripherals in the days before USB ports? Remember the anguish of developing applications for competing OS interfaces before HTML? We fixed those problems, and look at that, we’ve moved on to others.

“Complexity grows over time,” says Bryson Koehler, chief information and technology officer (CITO) of The Weather Company in Atlanta. “Systems are built to do one thing, and then they’re modified, morphed and bastardised to do things they were never meant to do.”

Complexity also occurs when technologies overlap one another – “when you add new stuff but keep the old instead of getting rid of it,” says Dee Burger, North America CEO of Capgemini Consulting.

More of the CIO.com post from Howard Baldwin


06
Jan 17

HBR – 4 Assumptions About Risk You Shouldn’t Be Making

Are you being honest with yourself and your company about risk? If doing nothing leads to decline, projects with marginal projections are actually better alternatives than inaction.

“Two roads diverged in a wood, and I—I took the one less traveled by, and that has made all the difference.” The line is instantly recognizable as the conclusion of “The Road Not Taken” by Robert Frost. And the misunderstood poem helps to highlight how innovation-seeking executives need to reframe the word risk.

Most readers assume Frost’s poem is hopeful, describing the value of the rugged individualism that has long served as an American hallmark. However, a measured reading shows a wistful tone that borders on regret (“I shall be telling this with a sigh”), with critics arguing that the poem’s key message is how we rationalize bad decisions after the fact.

Similarly, when the word risk comes out of an executive’s mouth, it’s usually accompanied by one of four mistakes:

More of the Harvard Business Review post from Scott Anthony