Recent Trends in High Availability Show Soaring Cost of Downtime

The never-ending workday has become the culture of modern businesses. Technology veterans might remember that when a system went down 25 years ago, the general feeling among office workers was, “Great! We can hit the water cooler and chat.” But those days are gone.

One of the biggest high availability (HA) trends for 2016 is organizations’ low tolerance for downtime.

If a network goes down today, “People start moaning and groaning,” says Laura DiDio, principal, Information Technology Intelligence Corp (ITIC). “It takes about 30 seconds before complaints start flooding the IT department because we are totally reliant on all of our networks.”

“Competition is intense and cutthroat,” explains DiDio, who has conducted server reliability surveys for nine years. One mishap could damage your reputation, and people have long memories. If, for example, a group deciding between two business proposals likes both vendors and it looks like all services are equal, “it comes down to, ‘Well, there was that time their networks went down and it took a while to get back up; because of that, we missed our deadlines.’ That’s always a concern,” she says.

Constantly Connected

DiDio recalls how people used to cover their typewriter, go home and disconnect. But now, workers want continuous access to work. “The requirements for uptime and availability have been steadily increasing,” she says.

As a case in point, one-quarter of respondents in the latest ITIC Global Server Hardware and Server OS Reliability Survey of C-level executives and IT managers (bit.ly/2aFiNtV) report that they need 99.999 percent availability. In ITIC’s 2013 reliability survey, only 11 percent said their firms required five nines of reliability.

The latest survey, which includes over 600 organizations, also found that 72 percent of respondents consider 99.99 percent to be the minimum acceptable level of reliability for their main line-of-business servers. This trend among the corporations surveyed is illustrated in Figure 1. For perspective, in ITIC’s 2014 survey, 49 percent said their businesses required a minimum of 99.99 percent availability.

Here’s how the numbers break down in terms of downtime: five nines of reliability equates to 5.26 minutes of unplanned downtime per server, per annum. Four nines of uptime equates to 52.56 minutes of downtime per server, per annum (see Figure 2).
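The arithmetic behind those figures is straightforward: multiply the minutes in a year (525,600) by the fraction of time the server is unavailable. A minimal sketch (the function name is illustrative, not from the article):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(availability_pct: float) -> float:
    """Return the unplanned downtime (minutes per server, per year)
    implied by a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(annual_downtime_minutes(99.999), 2))  # five nines -> 5.26
print(round(annual_downtime_minutes(99.99), 2))   # four nines -> 52.56
```

Note that each added nine cuts the allowable downtime by a factor of ten, which is why the jump from four nines to five nines is so demanding on infrastructure.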

DiDio says this trend will continue. In an excerpt from the ITIC 2016 survey report, she wrote: “Corporations are increasingly deploying virtualization and cloud computing and migrating to an Internet of Things (IoT) environment that is built on interconnected devices embedded with sensors to deliver business intelligence and big data analytics. In a world where everything is interconnected, the old adage, ‘A chain is only as strong as its weakest link,’ has never rung truer. In virtualized, cloud and IoT environments, the potential for collateral damage has increased by orders of magnitudes.”

So a virtual server running multiple instances of a main line-of-business application has a greater impact on operations and employee productivity in the event of a service interruption than a server running only one application.
