The Shift to Cloud-Native Isn’t a Legacy Loss

When an organization with on-premises operations opts to make the leap to cloud environments, one question inevitably arises: what will become of its investments in legacy data center infrastructure?

This concern is valid. Enterprises, especially in traditional industries, often have significant investments in data centers and server equipment. Beyond the equipment itself, organizations in markets like insurance and banking have often spent years maintaining existing monolithic applications written in legacy languages such as COBOL. Many organizations' entire business operations rely on millions of lines of code they must maintain even as they begin their digital transformation.

Furthermore, a profitable company might see little incentive to update or upgrade its legacy infrastructure because that infrastructure is mission-critical and works at least reasonably well. Why run the risk of rewriting the code?

In all of these cases, organizations fear losing those investments, which may add up to tens of millions of dollars over several years. However, organizations — even if they are profitable — cannot afford not to digitally transform.

Getting to ‘Yes’

Key technologies that DevOps teams can rely on to shift operations to the cloud while still integrating legacy infrastructure include in-memory computing, caching, cloud bursting and a modern operational data store (ODS) that future-proofs both on-premises and cloud infrastructures with change data capture (CDC) and real-time processing of data and analytics queries.

The idea is to do more than just avoid losing your investments in legacy infrastructure. Instead, the best course of action is to leverage existing on-premises servers and infrastructure in combination with cloud services for significant performance gains while lowering costs and, most importantly, improving customer experience.

Cloud Bursting

Cloud bursting allows an organization to seamlessly add server or memory capacity in a cloud environment when on-premises resources reach their limits. Capacity scales on demand, and cloud computing resources are accessed only as needed.

This capability, among other things, extends the capacity of your on-premises infrastructure to handle peak computing and data loads.

The underlying platform that facilitates cloud bursting should also rely on in-memory storage and processing capabilities that can maintain computing performance throughout the network, where needed, by boosting scaling speeds and maintaining low-latency data transfers.
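
To make the pattern concrete, here is a minimal sketch, not any vendor's actual API, of a dispatcher that prefers on-premises capacity and bursts work to a cloud pool only on overflow (the Pool and dispatch names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity: int      # maximum concurrent jobs this pool can run
    in_flight: int = 0

    def has_room(self) -> bool:
        return self.in_flight < self.capacity

def dispatch(job_id: str, on_prem: Pool, cloud: Pool) -> str:
    """Prefer on-premises capacity; burst to the cloud pool only on overflow."""
    target = on_prem if on_prem.has_room() else cloud
    target.in_flight += 1
    return f"{job_id} -> {target.name}"

# With two on-premises slots, the third job bursts to the cloud pool.
on_prem = Pool("on-prem", capacity=2)
cloud = Pool("cloud", capacity=100)
for i in range(3):
    print(dispatch(f"job-{i}", on_prem, cloud))
```

In practice, this decision is made by an orchestration layer rather than application code, and capacity must be released as jobs complete, but the routing logic is the essence of the technique.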

ODS’ Place

Like “digital transformation,” a “single pane of glass” is, arguably, in the buzzword category. However, both terms describe something necessary to successfully deploy resources in cloud environments. An operational data store can be considered a “single pane of glass” because it removes data silos by combining data from different sources into a single view. These sources might include legacy mainframe servers, cloud environments and, of course, microservices-connected data from highly distributed on-premises and multi-cloud environments.
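
As an illustrative sketch, with hypothetical source names and schemas, an ODS layer might merge a mainframe extract and a cloud data store into one customer view:

```python
# Hypothetical sketch: an ODS presents one merged view over several sources.
def load_mainframe_policies() -> dict:
    # Stand-in for an extract from a legacy mainframe system of record.
    return {"c-100": {"policy": "AUTO-7731", "annual_premium": 1200}}

def load_cloud_transactions() -> dict:
    # Stand-in for a cloud data store of recent online activity.
    return {"c-100": {"last_payment": "2023-04-02", "channel": "web"}}

def ods_view(customer_id: str) -> dict:
    """Combine per-source records into a single 'pane of glass' record."""
    record = {"customer_id": customer_id}
    for source in (load_mainframe_policies(), load_cloud_transactions()):
        record.update(source.get(customer_id, {}))
    return record

print(ods_view("c-100"))
```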

Another benefit of an ODS is added security. Access to a centralized system of record remains limited, unlike data pools where several users might have direct API access, a common gateway for security breaches.
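
A minimal sketch of that idea, with hypothetical service names, is a gatekeeper that allows only an approved set of services to query the centralized store:

```python
# Hypothetical sketch: consumers never query source systems directly;
# a small gatekeeper limits which services may read the centralized ODS.
ALLOWED_READERS = {"claims-service", "billing-service"}

def fetch_record(customer_id: str) -> dict:
    # Stand-in for the actual ODS lookup.
    return {"customer_id": customer_id, "policy": "AUTO-7731"}

def query_ods(caller: str, customer_id: str) -> dict:
    if caller not in ALLOWED_READERS:
        raise PermissionError(f"{caller} is not authorized to read the ODS")
    return fetch_record(customer_id)

print(query_ods("claims-service", "c-100"))  # allowed
# query_ods("ad-hoc-script", "c-100")        # raises PermissionError
```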

Real-Time Data Access

The ability to manage an organization's entire range of data is largely contingent on maintaining connections between all data sources, including on-premises legacy servers and multi-cloud environments, often linked to microservices through APIs. This connectivity should extend to real-time access to the data, regardless of its location.

Change data capture is a technology that provides high-speed data connectivity between legacy on-premises servers and cloud environments. It ensures any change made to a source database, regardless of its location in the network, is automatically propagated to a target data store.
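
In its simplest log-based form, shown here as a hedged sketch rather than any specific CDC product, a capture loop reads the source's change log and applies each event to the target store:

```python
# Minimal log-based CDC sketch: tail a source change log and apply each
# event to a target store. Real CDC tools also handle ordering, schema
# evolution, offsets and failure recovery.
source_log = [
    {"op": "insert", "key": "c-100", "value": {"premium": 1200}},
    {"op": "update", "key": "c-100", "value": {"premium": 1250}},
    {"op": "delete", "key": "c-101", "value": None},
]

target_store = {"c-101": {"premium": 900}}

def apply_change(event: dict, target: dict) -> None:
    if event["op"] == "delete":
        target.pop(event["key"], None)
    else:  # insert and update are both upserts on the target
        target[event["key"]] = event["value"]

for event in source_log:  # in production this is a continuous tail
    apply_change(event, target_store)

print(target_store)  # {'c-100': {'premium': 1250}}
```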

Data Jurisdictions

Every update to a mainframe database or a cloud data store, such as when data is input to a data center server or a customer completes an online transaction, should be replicated across all ODS implementations automatically. These data transfers should be encrypted and configured to meet local jurisdictional data mandates and compliance requirements.

However, in some cases, such as large global organizations that require multiple data routes in multi-cloud environments, data stores need to be delineated or tenanted according to geographical zones and jurisdiction. Global organizations, therefore, cannot rely on a single cloud vendor. Instead, to comply with localized data storage laws and mandates, their architectures must support multiple cloud environments.
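
As a sketch under assumed residency rules, with hypothetical region names and mappings, a replication router might tenant each change by jurisdiction so data lands only in compliant regions:

```python
# Hypothetical sketch: replicate each change only to the regional data
# stores permitted by the record's home jurisdiction.
RESIDENCY_RULES = {
    "EU": ["eu-west", "eu-central"],  # EU data stays on EU-resident copies
    "US": ["us-east", "us-west"],
}

def route_change(jurisdiction: str, event: dict) -> list:
    """Return the regional ODS copies allowed to receive this event."""
    regions = RESIDENCY_RULES.get(jurisdiction)
    if regions is None:
        raise ValueError(f"no residency rule defined for {jurisdiction}")
    return regions

# An EU customer's update replicates only to EU-resident ODS copies.
print(route_change("EU", {"key": "c-100", "value": {"premium": 1250}}))
```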

Meeting data access compliance requirements is another example of how technologies such as a smart ODS, CDC and intelligent data replication come into play. It is now possible, again, with the right systems in place, for an organization with on-premises legacy infrastructure and a rapidly growing presence across multi-cloud environments to meet both the performance and compliance demands of serving online customers across multiple geographic zones and jurisdictions.

Leverage Your Legacy

As described above, the decision to embrace cloud environments still lets organizations with traditional mainframes and other legacy servers take advantage of their investments. Through cloud bursting, modern ODS platforms and other technologies, enterprises in long-established sectors such as insurance can leverage both on-premises and cloud resources. With these resources, legacy players can offer end-user services that remain out of reach for even the most agile cloud-native startups. In other words, your legacy infrastructure does not have to be a liability; in fact, it should be considered an asset.

Tal Doron

Tal brings over a decade of technical experience in enterprise architecture, specializing in mission-critical applications with a focus on real-time analytics, distributed systems, identity management, fusion middleware and innovation. He bridges the gap between business and technology, architecting and strategizing digital transformation from idea to success with strong business impact, and manages pre-sales activities, engaging with decision-makers at all levels, from architects to C-level executives. Prior to joining GigaSpaces, Tal served in various positions at software technology companies including Dooblo, Enix, Experis BI and Oracle.