Optimizing the Relationship Between Apache Ignite and Kubernetes

Developers are increasingly adopting Kubernetes and the Apache Ignite in-memory computing platform to gain development agility and increase performance and scale for business applications. However, there are important considerations when using Ignite with Kubernetes to ensure companies obtain optimal results.

Apache Ignite enables businesses to create real-time business processes, enhance customer experiences and implement infrastructure innovations. For example, companies can use Ignite to build digital integration hubs (DIHs) for real-time data access across multiple data sources and applications, or to implement hybrid transactional/analytical processing (HTAP) platforms that run high-speed operational and analytical processing on the same in-memory dataset.

Ignite is a distributed in-memory computing solution that runs on a cluster of commodity servers, pooling the cluster's available CPUs and RAM and distributing data and compute across the individual nodes. It can run on-premises, in a public or private cloud, or in a hybrid environment. Deployed as an in-memory data grid between an existing application and its disk-based database, Ignite accelerates the application without major modifications to either the database or the application. Ignite supports ANSI-99 SQL and ACID transactions and can also be deployed as a standalone in-memory database.

The relationship between Apache Ignite and Kubernetes can be tricky because the way in which the two solutions work together—and the benefits to the business application they support—depend on whether the business application and Apache Ignite are both in Kubernetes or one of these components remains outside the orchestration platform. Let’s look at what happens in each situation.

Alternative 1: Business Application and the Ignite Cluster Inside Kubernetes

Companies that deploy their business application and the Apache Ignite cluster in the same namespace of the same Kubernetes environment gain the most benefit from both solutions. Since the application and the Ignite cluster are in one environment, configuring the relationship between the two is straightforward: there is no border between the two systems to introduce complexity or limit choices.

It is like the difference between traveling from state to state within the U.S. versus crossing the border from the U.S. into Canada or Mexico. An international border introduces complexity and limitations that must be addressed, so having both the business application and Ignite in Kubernetes is by far the simplest configuration.

This configuration also offers the flexibility to use either thin or thick clients. As discussed below, thick clients cannot be used when an application is inside Kubernetes while Ignite is outside. So having both the application and Ignite in Kubernetes offers more flexibility and can eliminate the need for additional application development.
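
To make the distinction concrete, here is a minimal Java sketch of a thick client, which is simply a full Ignite node started in client mode: it joins the cluster topology through the same discovery mechanism as the server nodes, which is why it needs direct, two-way connectivity with every server. The cache name and data used here are placeholders for illustration, and a real deployment would also configure discovery to match the environment.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ThickClientExample {
    public static void main(String[] args) {
        // A thick client is an Ignite node started in client mode: it participates in
        // the cluster topology via discovery but does not store primary or backup data.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // "myCache" is a placeholder cache name used only for this sketch.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "hello");
            System.out.println(cache.get(1));
        }
    }
}
```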

There are, however, a couple of potential downsides to this configuration that companies should consider. Moving Ignite into Kubernetes may require additional development resources and expertise to manage the new environment. Beyond that, those who work on the database and are familiar with Apache Ignite will need to understand the implications of moving Ignite into Kubernetes.

For example, Ignite uses a discovery protocol to identify cluster nodes, and discovery is typically configured by listing the socket addresses of the servers where nodes are expected to run. This does not work in Kubernetes, because pods can be rescheduled onto different servers and their IP addresses can change across restarts, so there is no static list of addresses to configure. To overcome this, developers need to use Ignite's Kubernetes IP Finder.
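
For illustration, a minimal Java sketch of a node configured with the Kubernetes IP Finder might look like the following. The namespace `ignite` and the Service name `ignite-service` are placeholder values that must match the Kubernetes Service created for the cluster, and the ignite-kubernetes module must be on the classpath; on recent Ignite versions these properties can also be supplied through a KubernetesConnectionConfiguration object instead of the setters shown here.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class KubernetesDiscoveryExample {
    public static void main(String[] args) {
        // The IP finder asks the Kubernetes API for the pod IPs behind a Service,
        // so discovery keeps working as pods are rescheduled and addresses change.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("ignite");           // placeholder namespace
        ipFinder.setServiceName("ignite-service"); // placeholder Service name

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi().setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration().setDiscoverySpi(discoverySpi);

        Ignite ignite = Ignition.start(cfg);
    }
}
```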

Developers will also need to use different Kubernetes controllers depending on whether Ignite is used as an in-memory data grid (a stateless cluster, since the cached data does not persist) or as an in-memory database that uses persistent storage and is therefore stateful. In practice this means a Deployment for the stateless case and a StatefulSet with persistent volumes for the stateful one. However, these are not difficult challenges for most developers to overcome.
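
On the Ignite side, the stateless/stateful distinction comes down to whether native persistence is enabled. The following is a minimal sketch of that switch; enabling persistence for the default data region is what gives each node data on disk and therefore a need for a stable identity and a persistent volume in Kubernetes.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceExample {
    public static void main(String[] args) {
        // Turning on native persistence for the default data region makes the
        // cluster stateful: data survives restarts and lives on each node's disk.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // With persistence enabled, the cluster starts inactive and must be activated
        // once the expected nodes have joined (older versions use cluster().active(true)).
        ignite.cluster().state(ClusterState.ACTIVE);
    }
}
```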

Alternative 2: Business Applications Inside Kubernetes, Ignite Outside

While keeping the business application and Apache Ignite in Kubernetes delivers the greatest benefit, some companies may prefer a step-by-step approach, initially transitioning only their business applications to containerized environments. This way, they can obtain the agility benefits of developing applications in containers while waiting to move their existing and stable Ignite environment. A company may also feel it lacks the expertise to manage the Ignite in-memory computing environment within Kubernetes.

While this approach can still result in many benefits, it comes with key limitations. First, the Kubernetes load balancer that sits in front of the Kubernetes cluster creates an “international border” between Kubernetes and everything outside of it, making configuration between the business application and Ignite more complex.

Also, as noted above, Ignite thick clients are currently not compatible with this approach because the Kubernetes load balancer typically prevents the application and the Ignite cluster from connecting to each other directly, so server nodes that try to open a connection to a specific thick client will fail. Developers may have to move to Ignite thin clients or the JDBC/ODBC drivers, which means additional development work and a potentially higher load on the server nodes. One bit of good news, however, is that the Apache Ignite community is working to allow the use of thick clients in this scenario, so companies that see this limitation as a showstopper should stay tuned.
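
For teams that do make the switch, a thin client connection is a relatively small change on the application side. The sketch below assumes, purely for illustration, that the cluster's client port (10800 by default) is reachable at a placeholder address such as ignite.example.com, for example through an external load balancer; the thin client opens a lightweight socket connection to that endpoint instead of joining the cluster topology, which is why it works across the Kubernetes border. The JDBC thin driver follows the same connection model.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientExample {
    public static void main(String[] args) {
        // A thin client connects to one or more listener endpoints over a plain socket;
        // it never needs the server nodes to connect back to it.
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("ignite.example.com:10800"); // placeholder external address

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // "myCache" is a placeholder cache name used only for this sketch.
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
            cache.put(1, "hello");
            System.out.println(cache.get(1));
        }
    }
}
```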

Another potential disadvantage to consider is a performance hit. As mentioned above, this could come from a thin client increasing the load on servers. Deploying the Kubernetes cluster and the Apache Ignite cluster in different locations could also introduce performance issues, with the Kubernetes load balancer becoming a bottleneck.

Alternative 3: Business Applications Outside Kubernetes, Ignite Inside

Some companies have attempted to keep business applications outside Kubernetes while deploying Ignite inside Kubernetes, perhaps because they are just starting with Apache Ignite and want to work toward a containerized environment but aren't yet ready to transition an application. This is not a typical deployment scenario and is generally not recommended. As a fully distributed system that is not redeployed frequently, Apache Ignite by itself gains little from containerization. Companies opting to take this route as an intermediate step should recognize that thick clients will not be an option and that configuring the relationship between the application and Apache Ignite is even more complex than in the prior scenarios.

Conclusion

Using Kubernetes and Apache Ignite to gain development agility and increase performance and scale for business applications has become a game-changer for many businesses. To gain the most benefit from this strategy, put the business application and Ignite in the same Kubernetes environment. If you must take a step-by-step approach, you can put your business application into Kubernetes while leaving Ignite out, but be sure to consider the limitations of this approach and have a plan in place for dealing with them.

If your company lacks the in-house resources to determine the best approach or to implement the approach you decide on, get expert advice from a trusted third party that has extensive experience working with both Apache Ignite and Kubernetes.

Valentin Kulichenko

Valentin Kulichenko is the Director of Product Management at GridGain Systems, a provider of enterprise-grade in-memory computing solutions based on Apache® Ignite®. A software engineer, solutions architect, and distributed systems enthusiast, he has been working with in-memory distributed systems since 2010.
