SPONSORED ADVERTISING CONTENT

This content is made possible by our sponsor. It was not written by, and does not reflect the views of, MSP TechMedia or IBM Systems Magazine.

Rebecca Levesque

CEO

21st Century Software


Rebecca Levesque is CEO of 21st Century Software. She has 20-plus years’ experience working with clients on resiliency and recovery strategies.



Constant cost pressures and growing needs for agility and automation can inhibit our vision of the mainframe future and discourage innovative thinking. We all know that nearly every day a new threat emerges and another company's name grabs unwanted attention in the news. Instead of being a victim of changing business needs, it's time to be part of the conversation and be seen as an innovator.

Everyone knows mainframe hardware is resilient and reliable, and only a small percentage of outages are true disasters. But what about operational recovery? What about applications? Resiliency and recovery have made tremendous strides in the past 25 years, yet with some applications, especially batch jobs, practitioners still follow 25-year-old backup and recovery methods. Who still has the tribal knowledge to understand them now?
 
In our digital world, we expect everything to operate like a utility. If I flip the light switch, I expect the light to come on immediately. Yet business resumption and resiliency plans do not operate at that speed; recovery takes time that your business can't afford.

Why is any of this important? Time to data. Time is the most expensive, nonrenewable resource, and consumers in 2018 expect immediate service.

Here are three chinks in your resiliency armor:
  1. Remote recovery is not enough. Storage remote replication (e.g., Metro/Global Mirror, SRDF or HUR) provides only a single, point-in-time copy of your data for recovery. After investing in a remotely replicated environment for disaster recovery, management and application owners often assume they can recover from a corruption event that affects batch. Remote replication by itself does not provide that protection: a corruption in a batch file is replicated to the remote site within seconds, as the sketch after this list illustrates.
  2. Tribal knowledge without automation. Time to recovery depends on understanding intricate data interdependencies often known only to the original application owner. We must take advantage of automation tools that capture that knowledge, speeding and simplifying recovery of critical batch applications so that business service-level agreement expectations can be met.
  3. IT compliance and regulations. Understanding the regulations that apply to your company is paramount to avoiding financial penalties for noncompliance. I often see two common mistakes: sharing data between production and non-production environments, which puts you at risk of Payment Card Industry noncompliance, and lacking a complete inventory of backups, which puts you at risk of Gramm-Leach-Bliley Act noncompliance.
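
To see why replication alone cannot undo corruption, here is a minimal Python sketch of the idea. It models replication as a toy dictionary copy; the names and data are illustrative, and no real storage or replication API is involved.

# Toy model: synchronous remote replication mirrors every write,
# including a corrupted one, while a point-in-time copy taken
# before the corruption still holds good data.
# All names here are illustrative; no real replication API is used.

primary = {"batch.file": "good records"}
remote = dict(primary)       # remote replica, kept in sync with primary
pit_copy = dict(primary)     # point-in-time copy taken before the corruption

def replicated_write(dataset: str, data: str) -> None:
    """Write to the primary; synchronous replication copies it to the remote."""
    primary[dataset] = data
    remote[dataset] = data   # replication is faithful, good data or bad

# A batch job corrupts the file; replication dutifully propagates it.
replicated_write("batch.file", "corrupted records")

assert remote["batch.file"] == "corrupted records"   # the remote is no safer
assert pit_copy["batch.file"] == "good records"      # only this copy can recover

The point of the sketch is simple: replication preserves availability, not history. Only a point-in-time copy preserves the history you need for operational recovery from corruption.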
My goal is to raise awareness, to help manage ever-increasing storage, to close the recovery gap in batch processing and to make the mainframe practitioner's job easier.