01 Apr 14

CCJ – Analyzing the Impact of Cloud Computing on the IT Department

Modern innovations in networking technology, virtualization, and Big Data processing, storage, and analysis, along with cloud computing, are reshaping the information technology landscape, both today and for the future.

Organizations have, for many years, effectively outsourced a number of non-core business functions to outside service providers.

Marketing, accounting, customer service, sales and administrative staffing are just a few examples of such outsourcing; today, organizations can also get computing services on demand in virtually any location, and tailor those services to their specific needs. Such “micro outsourcing” can apply to functions such as processing, storage, software, security, and support.

Organizations that opt to utilize cloud services may choose to go “all the way” on the cloud, or to source only some business functions, such as marketing and sales funnels, tech support, or customer relationship management, to name just a few, while keeping others in-house with on-premises solutions.

Given the cost-of-ownership implications of on-premises solutions for information technology and organizational processes, in conjunction with Big Data and its impact on everything from marketing to disaster response, it’s almost impossible for organizations to avoid sourcing at least some portion of business functionality to cloud-based solutions.

More of the Cloud Computing Journal article


31 Mar 14

CiteWorld – The battle for cloud infrastructure supremacy rages on

There is a two-pronged battle going on today for the soul of the cloud. It involves a struggle for cloud infrastructure supremacy and control of the network pipes, and it involves many of the biggest names in tech, from old (IBM, Microsoft) to new (Amazon, Google). As organizations move more data center infrastructure to the cloud, the results of these battles will have a profound impact on your business.

Like every other technology battle we have witnessed over the last 25 years — whether it was AOL, CompuServe, and Prodigy in the early ’90s; Netscape versus Internet Explorer in the browser wars in the early days of the Web; or the ongoing client computing platform battle among Apple, Microsoft, and Google — the story is the same: several dominant players, each trying to be the one company that comes out on top.

For now, Amazon maintains a sizable lead in the cloud infrastructure business. In fact, some have suggested that Rackspace, unable to compete, could be a takeover target soon.

But Amazon can’t rest easy. Last year IBM bought SoftLayer, an infrastructure-as-a-service (IaaS) provider that gives it serious chops, so much so that it actually landed in second place in revenue behind AWS, with 7% of the market, according to numbers from Synergy Research Group. Even if you lump IaaS and platform-as-a-service (PaaS) numbers together, Amazon appears to have a sizable lead, despite its lack of PaaS offerings.

More of the CiteWorld post


27 Mar 14

High Scalability – The “Four Hamiltons” Framework for Mitigating Faults in the Cloud: Avoid it, Mask it, Bound it, Fix it Fast

This is a guest post by Patrick Eaton, Software Engineer and Distributed Systems Architect at Stackdriver.

Stackdriver provides intelligent monitoring-as-a-service for cloud-hosted applications. Behind this easy-to-use service is a large distributed system for collecting and storing metrics and events, monitoring and alerting on them, analyzing them, and serving up all the results in a web UI. Because we ourselves run in the cloud (mostly on AWS), we spend a lot of time thinking about how to deal with faults in the cloud. We have developed a framework for thinking about fault mitigation for large, cloud-hosted systems. We endearingly call this framework the “Four Hamiltons” because it is inspired by an article from James Hamilton, the Vice President and Distinguished Engineer at Amazon Web Services.

The article that led to this framework is called “The Power Failure Seen Around the World”. Hamilton analyzes the causes of the power outage that affected Super Bowl XLVII in early 2013. In the article, Hamilton writes:

As when looking at any system faults, the tools we have to mitigate the impact are: 1) avoid the fault entirely, 2) protect against the fault with redundancy, 3) minimize the impact of the fault through small fault zones, and 4) minimize the impact through fast recovery.
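
To make the four tools concrete, here is a minimal Python sketch of a single read path that applies all four. The service names, endpoints, zone assignment, and the caller-supplied get() transport function are invented for illustration; this is not Stackdriver's actual design.

    import time

    REPLICAS_BY_ZONE = {
        # 3) Bound it: shard endpoints by fault zone so a zone-wide
        # failure affects only the customers pinned to that zone.
        "zone-a": ["http://metrics-a1.internal", "http://metrics-a2.internal"],
        "zone-b": ["http://metrics-b1.internal", "http://metrics-b2.internal"],
    }

    def fetch_metric(customer_id, metric, get):
        """Fetch one metric while applying all four mitigation tools.
        get(url, metric, timeout) is a caller-supplied transport stub."""
        # 1) Avoid it: reject malformed requests before they can
        # trigger downstream faults at all.
        if not metric or not str(customer_id).strip():
            raise ValueError("invalid request")
        # Toy zone assignment, purely for illustration.
        zone = "zone-a" if hash(customer_id) % 2 == 0 else "zone-b"
        last_err = None
        for endpoint in REPLICAS_BY_ZONE[zone]:
            try:
                # 4) Fix it fast: a short timeout means recovery
                # (trying the next replica) starts quickly.
                return get(endpoint, metric, timeout=0.5)
            except Exception as err:
                # 2) Mask it: redundancy hides a single replica's fault.
                last_err = err
                time.sleep(0.1)  # brief backoff before the next replica
        raise last_err

    # Usage with a stub transport:
    print(fetch_metric("cust-42", "cpu", get=lambda url, m, timeout: 0.97))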

More of the High Scalability post


26 Mar 14

ThoughtsOnCloud – Four steps to identify and relocate an application portfolio to the cloud

Large enterprises looking for ways to modernize and migrate a portfolio of business applications to the cloud will need to adopt a methodical approach. The approach should give enterprises ways to build a pipeline of applications suitable for infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or software-as-a-service (SaaS) types of cloud enablement.

The approach should also provide an ability to decide on an appropriate cloud deployment model: public, private, or hybrid. Of these, IaaS is the quickest route for enterprises looking at cloud enablement as a way to move toward an OPEX model and lower operating costs. The success of IaaS in public and private cloud deployments, and the associated cost savings, have already transformed the operations of many data centers.
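
As a rough illustration of what the triage step in such a pipeline might look like, here is a short Python sketch that maps portfolio attributes to a suggested enablement target. The attribute names and rules are hypothetical, not the article's actual framework.

    def suggest_cloud_model(app):
        """Suggest a cloud enablement target for one application."""
        if app.get("commodity_function"):   # e.g. email, CRM
            return "SaaS"                   # replace with an off-the-shelf service
        if app.get("custom_code") and app.get("standard_runtime"):
            return "PaaS"                   # re-platform onto a managed runtime
        if app.get("virtualizable"):
            return "IaaS"                   # lift-and-shift for quick OPEX gains
        return "retain on-premises"         # blockers such as legacy dependencies

    portfolio = [
        {"name": "mail", "commodity_function": True},
        {"name": "billing", "custom_code": True, "standard_runtime": True},
        {"name": "batch-etl", "virtualizable": True},
    ]
    pipeline = {app["name"]: suggest_cloud_model(app) for app in portfolio}
    print(pipeline)  # {'mail': 'SaaS', 'billing': 'PaaS', 'batch-etl': 'IaaS'}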

This chart provides the key elements of a framework that will help enterprises move toward relocating their applications to the cloud:

More of the ThoughtsOnCloud post


24 Mar 14

Computerworld – Amazon vs. Google vs. Windows Azure: Cloud computing speed showdown

The cloud computing sales pitch is seductive because the cloud offers many advantages. There are no utility bills to pay, no server room staff who want the night off, and no crazy tax issues for amortizing the cost of the machines over N years. You give them your credit card, and you get root on a machine, often within minutes.

To test out the options available to anyone looking for a server, I rented some machines on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure and took them out for a spin. The good news is that many of the promises have been fulfilled. If you click the right buttons and fill out the right Web forms, you can have root on a machine in a few minutes, sometimes even faster. All of them make it dead simple to get the basic goods: a Linux distro running what you need.

At first glance, the options seem close to identical. You can choose from many of the same distributions, and from a wide range of machine configuration options. But if you start poking around, you’ll find differences — including differences in performance and cost. The machines may seem like commodities, but they’re not. This became more and more evident once the machines started churning through my benchmarks.
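
The article does not reproduce its benchmark code, but the general shape of such a test is simple. Here is a hypothetical Python sketch of the kind of CPU micro-benchmark you might run on each rented instance; taking the best of several runs helps smooth out the run-to-run jitter that shared cloud hardware introduces.

    import time

    def cpu_benchmark(n=2_000_000):
        """A trivial CPU-bound task: sum of square roots."""
        total = 0.0
        for i in range(1, n):
            total += i ** 0.5
        return total

    def best_time(fn, repeats=5):
        """Report the best wall-clock time over several runs."""
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        return best

    print(f"best of 5 runs: {best_time(cpu_benchmark):.3f}s")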

More of the Computerworld article


18 Mar 14

Baseline – IT Leaders: Game-Changers for Governance, Security

A key frustration of CIOs and IT managers is the inability to articulate risk to the organization’s senior managers and corporate decision-makers who may not have the technical background to fully appreciate the scope and breadth of weaknesses in their own data environments. Often, company decision-makers rely on IT leaders to set budgets, recommend operational solutions and generally “keep the lights on” without fully understanding the complexities surrounding any given project.

When a project is critical to the business, however, these IT leaders face tremendous pressure to deliver results to management.

Even more challenging is when a crisis occurs, and questions surface about what happened and how it occurred. In these situations, IT leaders often find themselves explaining complex problems to an unsympathetic audience.

The ability to uncover and correct weaknesses in a data environment may be less about what resources are available to the IT department and more about the willingness of the business to truly embrace good data governance. The fact is, poor data governance is generally not the result of some single breakdown attributable only to the IT department. Rather, it is often a failure of the business to support specific risk-mitigation measures and initiatives—both inside and outside of IT—that create an environment in which positive data governance can flourish.

More of the Baseline article


17 Mar 14

CIOInsight – Inviting the App Store Into Your Enterprise

App stores are the mobile equivalent of desktop Web browsers in many ways. They are the predominant mechanism for delivering content and functionality in the mobile world. They provide much more control than traditional browsers, and the direct ability for developers and administrators to deploy, monitor, manage, and monetize apps and the content that passes through them. This, along with being end-user-facing, has made them the cornerstone of the outside-in view for users and developers. Within the enterprise, they will soon be the final manifestation of application management and device management capabilities for end users and IT administrators, and become as ubiquitous as the intranet.

Let’s take a look at the outside-in view and the key characteristics of app stores so we can evaluate them for the enterprise.

User and Device Identity. One of the most important characteristics of app stores that differentiate them from regular Web browsers is that app stores have identifiable users with strong identities. App stores play an integral role in the mobile ecosystem of devices, developers, publishers, and end users by identifying the users they provide services to, blocking usage when required, and ensuring security. In an enterprise, user identity elements span not just employees but also partners, vendors, and resellers, so having the right level of access and functionality for each stakeholder is critical to success.
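
As a sketch of how that per-stakeholder access might work, consider the following Python fragment. The roles, app names, and device-posture check are all invented for illustration; a real enterprise app store would tie this to directory services and mobile device management.

    ROLE_CATALOG = {
        # Hypothetical entitlements per stakeholder type.
        "employee": {"expenses", "directory", "sales-crm"},
        "partner":  {"sales-crm"},
        "vendor":   {"invoicing"},
    }

    def visible_apps(user):
        """Return the catalog a signed-in user may see; return nothing
        when usage must be blocked (e.g. an unmanaged device)."""
        if not user.get("device_managed"):
            return set()
        return ROLE_CATALOG.get(user.get("role"), set())

    print(visible_apps({"role": "partner", "device_managed": True}))
    # {'sales-crm'}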

More of the CIO Insight article


14 Mar 14

Arthur Cole – Is the Cloud Cheaper than IT? Not Always

The cloud is cheaper than standard IT infrastructure. This has been a given for so long that hardly anyone questions its veracity. And after all, who would dispute the cost advantages of leasing resources from an outside provider versus building and maintaining complex internal infrastructure?

Well, some very bright minds in the IT industry are starting to do just that.

For one, Rob Enderle, writing for CIO.com, notes that speed and flexibility, while important, do not necessarily translate into lower costs. The hard truth, of course, is that spending on IT is dropping while spending on cloud services is increasing, but this has more to do with timing and availability than with simple economics. Indeed, recent analyses show that once internal IT begins deploying cloud services of its own, it can meet enterprise needs for about $100 per user, while Amazon and other providers come in at around $200 per user when purchased individually or in small groups, as is the practice with many business units. And a proprietary platform like Oracle can run as much as $500 per user.
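
Taking the quoted per-user figures at face value (the excerpt does not say whether they are monthly or annual, so treat them as relative unit costs), the gap compounds quickly at enterprise scale, as this back-of-the-envelope Python sketch shows:

    PER_USER_COST = {
        "internal cloud services":            100,  # per user, per the article
        "Amazon et al. (small-group buys)":   200,
        "proprietary platform (e.g. Oracle)": 500,
    }

    users = 1_000
    for option, unit_cost in PER_USER_COST.items():
        print(f"{option}: ${unit_cost * users:,} for {users:,} users")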

More of the IT Business Edge article


13 Mar 14

Data Center Knowledge – State of the Data Center Puts IT in the Spotlight

One in every nine people on earth is an active Facebook user, and mankind created 1.9 trillion GB of data in 2013. The growth of social sites and the proliferation of information are two trends that Emerson Network Power captures in its “State of the Data Center 2013” infographic. These trends have a huge impact on the communications network, IT department and, most importantly, data centers.

In 2011, Emerson Network Power introduced our “State of the Data Center” infographic, a scan of major trends that affect data centers. We also researched the number of outages and the cost of downtime. This infographic provided a baseline for comparing future trends.

We recently completed “State of the Data Center 2013,” which we developed as an infographic that illustrates the facts of the year. To sum up the results in a few words, the global dependence on everything digital is pushing IT to the forefront of the organization. Data centers increasingly are relied upon in areas that were traditionally offline pursuits, and consumers have high expectations of speed and performance. I’ll share trends that support these findings, and I’ll also discuss a significant consequence of IT being in the spotlight.

More of the Data Center Knowledge article by Jack Pouchet


10 Mar 14

HBR – Strategy in a World of Constant Change

Am I the only person to be getting a bit weary of hearing it repeatedly asserted that we’re living in a world of constant, accelerating change? That competitive advantages are becoming ever more transient and that the secret to survival will be the ability to transform on a dime? Otherwise, what happened to TomTom will happen to you. Please!

Let me share a fun clip with you, sent to me the other day by my former colleague Jonathan Rotenberg, founder of the Boston Computer Society. It chronicles Steve Jobs’ first public introduction of the brand-new Macintosh, which happened in January 1984 at Jonathan’s Society in Boston. The whole event was a cool trip down memory lane.

The moment I loved most was during the Q&A when an older gentleman asked Jobs a challenging question about the mouse as user interface technology: did it really compare favorably to the traditional keystroke approach? It was fun to watch a younger, mellower Jobs give a patient, reassuring response and not insinuate that the questioner was a moron. Jobs turned out to be quite right in his answer, which was that once people gave the mouse a try, they would see that it was far superior to keystrokes.

More of the HBR post