Why ‘Continuous Restore’ is a Game-Changer in a Data-Driven World

Distributed environments consisting of core, edge and cloud computing resources are becoming the predominant architecture for enterprises today. As you go through the motions of your day-to-day life, you are most likely already engaging with countless distributed environments, such as the internet and email, mobile networks, multiplayer video games, online banking, e-commerce platforms, reservation systems and cryptocurrency platforms, just to name a few. In the future, distributed environments will impact our lives even more; thanks to distributed computing, we’re already seeing rapid advancements in areas such as autonomous driving, industrial automation, smart cities, weather forecasting, scientific computing, agriculture and health care.

The Future of Computing is Distributed

For each of these examples of distributed environments, data is the lifeblood that pulses throughout the system. Consider a simple example of an online banking transaction you may make on your mobile phone. Using your mobile banking app, you can check your account balance, make payments and move money from one account to another. The account balance information you see on your phone is provided by data stored in the cloud, and when you make a transaction, data is collected at the edge and communicated to and processed by one or more software applications. The new, updated information is immediately stored again and becomes instantly visible on your phone, no matter where you are. The near-instantaneous movement and reliable storage of data make this work, and most of us take it for granted because it has become the fabric of our world.

Dealing with Data in a Distributed World

Although the majority of the world’s population is oblivious to how data is managed behind the scenes, enterprises and their IT teams certainly are not. Collecting, storing, applying, moving, replicating and protecting data are now among the most critical responsibilities of all data-driven organizations. Companies frequently need to move stateful applications and their data volumes across heterogeneous environments to meet cost-efficiency, performance, security and disaster recovery imperatives. Unfortunately, this can be difficult, time-consuming and expensive.

In the past, an organization with a critical store of data in a VM-based, on-premises data center might choose to back up its data by replicating it to a second data center in a different location. The quick way to do this was to “mirror” the data center, i.e., create an exact replica of the original setup—same platform, same storage volumes. Although this approach provides the best possible recovery time objective (RTO) and recovery point objective (RPO) and works well for legacy applications, it demands expensive hardware contracts with vendors, dedicated dark fiber between the sites and dedicated personnel to maintain it. This synchronous replication is not only expensive but also places an inherent limit on the distance the data centers can span. Moreover, this approach cannot be extended to cloud-native, multi-cloud environments.

A more affordable alternative is asynchronous replication, which can span much longer distances than synchronous replication but results in lagging data: the remote site may be minutes to hours behind the production site. Asynchronous replication therefore sacrifices RPO while still delivering a strong RTO, and it shares the remaining drawbacks of synchronous replication-based solutions. At best, these solutions can work between two homogeneous sites, but extending them to multiple sites—much less multiple heterogeneous sites—is practically impossible.
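
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The intervals are illustrative assumptions of our own, not figures from any particular product:

```python
# Back-of-the-envelope comparison of the two replication modes.
# All numbers are illustrative assumptions, not measurements.

def worst_case_rpo(replication_interval_s: float, transfer_time_s: float) -> float:
    """Worst-case data-loss window for asynchronous replication: a failure
    just before a cycle completes loses everything written since the last
    completed transfer."""
    return replication_interval_s + transfer_time_s

# Synchronous replication acknowledges every write at both sites,
# so the remote copy never lags: RPO is effectively zero.
sync_rpo_s = 0.0

# Asynchronous replication ships changes in batches.
async_rpo_s = worst_case_rpo(replication_interval_s=15 * 60,  # batch every 15 min
                             transfer_time_s=2 * 60)          # ~2 min to apply

print(f"synchronous  RPO: {sync_rpo_s:.0f} s (at the cost of distance limits and dark fiber)")
print(f"asynchronous RPO: up to {async_rpo_s / 60:.0f} min of data behind production")
```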

Therein lies one of the biggest challenges in data storage and protection today: We have no easy-to-use, affordable solution for continuous data volume replication across heterogeneous infrastructure environments. Considering the diversity of environments in the distributed systems that are the future of computing, this is a serious problem.

Fortunately, that’s about to change. 

Introducing the Game-Changer: Continuous Restore

Although demand for disaster recovery (DR) solutions continues to rise, legacy technologies cannot keep up with the pace of change in the cloud. In contrast to legacy applications, cloud-native applications are dynamic and elastic and are managed by DevOps scripts. The new generation of disaster recovery solutions must therefore be compatible with the DevOps paradigm. A core tenet of the DevOps model is doing things repeatedly to ensure they work all the time: continuous integration, continuous deployment, continuous testing and so on. Disaster recovery should be no different. Yet to meet standard compliance requirements, companies only have to test their DR posture once a year, a cadence that is unacceptable in the DevOps world. DR should be tested as often as possible.
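
In practice, testing DR “as often as possible” could mean a scheduled drill that restores the latest backup into a scratch environment and verifies it on every run. The sketch below is a minimal illustration in Python; `drctl` is a hypothetical CLI standing in for whatever backup tooling a team actually uses:

```python
"""Sketch of a recurring DR drill: restore the latest backup into a
scratch environment, verify it and tear it down -- on every run rather
than once a year. `drctl` is a hypothetical CLI, not a real tool."""
import subprocess
import sys

def run(*args: str) -> None:
    """Run a command and fail loudly, so CI or alerting flags a broken drill."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

def dr_drill(backup_name: str, scratch_ns: str = "dr-drill") -> None:
    # 1. Restore the most recent backup into an isolated namespace.
    run("drctl", "restore", backup_name, "--namespace", scratch_ns)
    # 2. Probe the restored application the same way production is probed.
    run("drctl", "verify", "--namespace", scratch_ns)
    # 3. Tear the scratch environment down so the drill is cheap to repeat.
    run("drctl", "cleanup", "--namespace", scratch_ns)

if __name__ == "__main__":
    try:
        dr_drill(backup_name="nightly-latest")
    except subprocess.CalledProcessError as err:
        sys.exit(f"DR drill failed: {err}")
```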

Fortunately, in 2022, we will see the introduction of continuous restore data replication capabilities that are entirely storage-, cloud- and Kubernetes distribution-agnostic. By employing asynchronous replication principles, this capability will enable users in cloud-native environments to continuously stage data at multiple, heterogeneous sites. This means that applications—regardless of where they reside—will be able to tap into that data and be brought online instantaneously. The data will still lag (with RPO determined by the backup schedule at the remote site or sites), but the RTO will be exceptional.
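
In outline, the idea is a loop that ships every new snapshot to every remote site as it is taken, so the data is already staged wherever a restore might be requested. The Python sketch below is our own simplification to show the shape of the mechanism; the class and function names are illustrative stand-ins, not a real API:

```python
"""Conceptual sketch of continuous restore: every new snapshot is staged
at every remote site as it is taken, so any site can bring an application
online immediately. The snapshot interval bounds the RPO; because no bulk
copy happens at recovery time, the RTO is near zero. All names here are
illustrative stand-ins, not a real API."""
import time
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str                          # e.g., "on-prem", "aws", "gcp": heterogeneous targets
    staged: list = field(default_factory=list)

    def push(self, snapshot: str) -> None:
        self.staged.append(snapshot)   # data is local before any restore is requested

    def restore_latest(self) -> str:
        return self.staged[-1]         # nothing to copy at recovery time

def continuous_stage(sites, interval_s: int, cycles: int) -> None:
    for i in range(cycles):
        snapshot = f"snap-{i:04d}"     # stand-in for a real volume snapshot
        for site in sites:             # fan out to every site, whatever its platform
            site.push(snapshot)
        time.sleep(interval_s)         # this interval is what determines the RPO

sites = [Site("on-prem"), Site("aws"), Site("gcp")]
continuous_stage(sites, interval_s=1, cycles=3)
print("aws can restore:", sites[1].restore_latest())
```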

Consider the implications of this advancement. High-value use cases for the continuous restore capability include the following: 

  1. Disaster Recovery: Users will be able to recover from outages or failures in a matter of seconds or minutes rather than days or weeks. Using continuous restore, recovery time objectives will improve by over 80% versus traditional methods.
  2. Application Migrations: Continuous restore allows IT teams to optimize performance and achieve better total cost of ownership (TCO) by choosing the infrastructure best suited to their current needs. For example, an e-commerce application that normally runs on-premises can be moved to a public cloud prior to Black Friday to take advantage of cloud elasticity during peak demand. Rather than taking a week or more to coordinate this transition, the IT team can make the switch in seconds. Then, after Cyber Monday concludes, the application and its data can instantly be moved back to less-expensive on-premises storage.

Continuous restore capability will also enable organizations to unify their infrastructure, especially those that have grown quickly and adopted a variety of compute platforms and storage solutions to meet their unique needs. Continuous restore will make possible tremendously fast application mobility across their infrastructure silos, making them silos no more.

  3. Testing/Development: Developers can increase the velocity of CI/CD pipelines by staging data for multiple test/dev environments. These environments can be spun up in seconds with continuously replicated production data, enhancing test fidelity and accelerating the push of validated changes into production. In fact, DevOps teams can use this capability to test their restore protocols to ensure that restore will work when needed (a minimal sketch follows this list).
  4. Curation of Edge Data: As mentioned above, as distributed environments proliferate, massive amounts of data will be collected at the edge, and this information from diverse architectures will need to be rapidly replicated and moved throughout distributed systems so it can be assimilated and centrally analyzed by any number of different applications.
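
As a concrete illustration of the test/dev use case above, here is a minimal Python sketch of a CI step that clones the latest staged snapshot into a throwaway environment, runs the suite and tears it down. `restorectl` is a hypothetical CLI; the real command would come from whichever tool actually stages the data:

```python
"""Sketch of a CI step that clones the latest staged production snapshot
into a throwaway test environment. `restorectl` is a hypothetical CLI;
substitute whichever tool actually stages the data."""
import subprocess
import uuid

def spin_up_test_env(snapshot: str = "latest") -> str:
    ns = f"test-{uuid.uuid4().hex[:8]}"   # unique, disposable namespace per pipeline run
    # The snapshot is already staged locally, so this completes in seconds.
    subprocess.run(["restorectl", "clone", snapshot, "--into", ns], check=True)
    return ns

def tear_down(ns: str) -> None:
    subprocess.run(["restorectl", "delete-env", ns], check=True)

if __name__ == "__main__":
    env = spin_up_test_env()
    try:
        print(f"running the integration suite against {env} ...")  # team-specific runner goes here
    finally:
        tear_down(env)   # always reclaim the environment so drills stay cheap
```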

In short, this new capability will deliver cloud-native application portability and recoverability in seconds, helping to unify disparate information silos. Continuous restore is a game-changer that will make today’s modern businesses even more competitive and resilient, and it’s coming soon. To learn more, visit Trilio at Booth P11 during KubeCon + CloudNativeCon Europe 2022.


Join us for KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain (and virtual) from May 16-20—the first in-person European event in three years!

Murali Balcha

Murali Balcha is the Founder and Chief Technology Officer at Trilio, a leader in cloud-native data protection. Balcha is a seasoned technical leader who has designed several innovative technologies for the enterprise infrastructure market, including the TrilioVault cloud-native data protection platform. Balcha is responsible for Trilio's technology and product strategy, global engineering organization and patent portfolio.
