Migrating Existing Apps to Azure Kubernetes Service

AKS is a managed service that simplifies Kubernetes deployment and management of containerized applications in Azure. It eliminates much of the complexity and overhead associated with running your own Kubernetes clusters.

AKS employs various features to ease Kubernetes management, including automated upgrades, cluster scaling, and self-healing. Azure manages the Kubernetes control plane, while AKS customers manage the agent nodes in the cluster and pay only for the VMs those nodes run on. You can create and manage a cluster through the Azure CLI or the Azure portal.

Azure offers Azure Resource Manager (ARM) templates to help you automate cluster creation. You can specify which features to enable in your clusters, for example advanced networking, monitoring, and integration with Azure Active Directory (AD).
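As a rough sketch, an ARM template resource for an AKS cluster looks along these lines. The names, node count, VM size, and API version here are illustrative, not a production-ready template:

```json
{
  "type": "Microsoft.ContainerService/managedClusters",
  "apiVersion": "2023-08-01",
  "name": "myAKSCluster",
  "location": "[resourceGroup().location]",
  "identity": { "type": "SystemAssigned" },
  "properties": {
    "dnsPrefix": "myakscluster",
    "agentPoolProfiles": [
      {
        "name": "nodepool1",
        "count": 3,
        "vmSize": "Standard_DS2_v2",
        "mode": "System"
      }
    ],
    "networkProfile": { "networkPlugin": "azure" },
    "aadProfile": { "managed": true }
  }
}
```

The networkProfile and aadProfile sections correspond to the advanced networking and Azure AD integration features mentioned above; omit them if you don't need those capabilities.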

Note that a downside of AKS is that it does not allow you to easily migrate your workloads to other cloud platforms. If you need to extend the benefits of Kubernetes across multi-cloud or hybrid cloud environments, consider a cloud-agnostic Kubernetes platform.

Migrate to AKS With the App Containerization Tool 

The Azure Migrate App Containerization tool supports Azure migration by letting you containerize and migrate existing applications to AKS. The tool currently supports:

  • ASP.NET applications running on Windows Server, containerized as Windows containers
  • Java web applications running on Apache Tomcat, containerized as Linux containers
  • WebLogic, JBoss, and WebSphere applications, automatically migrated to run on WildFly in Linux containers on AKS

The tool provides a codeless way to package traditional applications and migrate them to Azure as container images. The application server hosting the application can be: 

  • A virtual machine (VM) or physical server running in an on-premises data center
  • A VM running in Azure
  • A VM running in another cloud

How the App Containerization Tool Works

The App Containerization tool uses the specified application server details to connect to the server over the network and discover the web applications running on it. It then inspects the discovered applications and their configuration, and you select which apps to containerize.

You parameterize application configuration settings, such as the database connection string. You can also specify additional content folders to include in the container image, as well as folders or directories that should be moved to a Kubernetes persistent volume. The persistent-volume option is useful for applications that store content files or application state on the local file system.
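Conceptually, the persistent-volume option produces a claim like the following, which the generated pod spec then mounts at the content path. The name, size, and storage class below are assumptions, not the tool's exact output:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-content-pvc
spec:
  # Azure Files supports shared read-write access across replicas
  accessModes: ["ReadWriteMany"]
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
```

The deployment then references the claim in its volumes section and mounts it at the directory your application uses for content or state.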


Next, the App Containerization tool uses your configuration parameters to generate a recommended Dockerfile, which you can use to build a container image for your application. Before building the image, you can inspect the Dockerfile and make any necessary changes.
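The generated Dockerfile depends on the application, but for a Tomcat-hosted Java app it would resemble the following sketch. The base image, archive name, and paths are assumptions for illustration, not the tool's actual output:

```dockerfile
# Illustrative sketch of a Dockerfile for a Tomcat-hosted Java web app
FROM tomcat:9.0

# Copy the application archive into Tomcat's webapps directory
COPY ./myapp.war /usr/local/tomcat/webapps/myapp.war

# Additional content folders selected during configuration
COPY ./config /usr/local/tomcat/conf/app

EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Reviewing the Dockerfile at this stage is the right place to adjust base image versions, add environment variables, or copy in extra dependencies before the image is built.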


Once you approve the Dockerfile, the tool builds the container images for your application and pushes them to an Azure Container Registry in your Azure subscription.

After the container image is built, you can deploy your containerized application to AKS. The tool creates a Kubernetes deployment specification and deploys the app to an AKS cluster. Before deploying, you can specify the values used to parameterize the application configuration, and view and customize the resulting YAML file.
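Conceptually, the generated specification combines a Deployment that references the image pushed to your registry with a Service that exposes it, along the lines of this sketch. The names, registry, ports, and secret are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/myapp:v1   # image pushed by the tool
        ports:
        - containerPort: 8080
        env:
        - name: DB_CONNECTION_STRING            # parameterized setting
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db-connection-string
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

Parameterized settings such as the database connection string are typically injected through environment variables backed by a Secret, as shown, rather than baked into the image.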


Azure Kubernetes Service Migration Best Practices 

Leverage High Availability to Maintain Business Continuity

You need to ensure high availability for applications that can't afford downtime during the migration. A complex application typically must be migrated gradually, step by step, rather than all at once. This approach, however, requires network communication between the old and new hosting environments. An application whose components currently communicate through ClusterIP services may need those services temporarily exposed as type LoadBalancer, with appropriate network restrictions in place.
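For example, a service exposed during migration can be restricted to the old environment's address range with loadBalancerSourceRanges. The service name and CIDR below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-bridge
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 443
    targetPort: 8443
  # Only allow traffic from the old hosting environment's address range
  loadBalancerSourceRanges:
  - 203.0.113.0/24
```

Once migration completes, the service can be switched back to ClusterIP (or removed) so the application is no longer reachable from outside the cluster.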

At the end of the migration, point clients to your new services running on AKS. One option is to update DNS to direct traffic to the load balancer in front of the AKS cluster.

Another option is Azure Traffic Manager, a DNS-based traffic load balancer that distributes network traffic across regions. Directing all application traffic to an AKS cluster through Traffic Manager provides higher redundancy and improved performance.

In a multi-cluster deployment, have clients connect to a Traffic Manager DNS name that points to the relevant services on each AKS cluster. Define the endpoints using the services' load balancer IPs. This configuration lets you direct network traffic from an endpoint in one region to an endpoint in another.

Alternatively, you could use Azure Front Door Service to route traffic for AKS clusters. This service lets you define, monitor, and manage your global web traffic routing and implement instant global failover to ensure high availability and optimize performance.

Use AKS With VM Scale Sets and Standard Load Balancer 

AKS is available in a specific set of Azure regions, so choose a supported region for your cluster. You might also need to adjust your existing applications so they stay healthy under the AKS-managed control plane when you transition them from your existing clusters.

Back your AKS clusters with the Azure Standard Load Balancer and virtual machine (VM) scale sets to gain access to Azure Availability Zones, multiple node pools, authorized IP ranges, the cluster autoscaler, and Azure Policy for AKS.

Check Your Quotas

Migration deploys additional VMs into your subscription, so verify that your quotas and limits can accommodate these resources. You might need to request increases to your vCPU and network quotas, and plan your address space carefully to avoid exhausting available IPs.

You can check your existing quotas in the Azure portal: select the subscription and click Usage + quotas.

Deploy Your AKS Cluster Configuration via a CI/CD Pipeline

You should deploy your configuration to AKS using your existing Continuous Integration and Continuous Delivery (CI/CD) pipeline. Azure Pipelines lets you build and deploy applications to an AKS cluster. Clone your existing deployment tasks and point their kubeconfig at the new cluster.

In some cases you can't reuse the pipeline or guarantee that the kubeconfig points to the desired cluster; in that case, export the resource definitions from the existing Kubernetes cluster and import them into AKS. You can export objects using kubectl, for example:

kubectl get deployment -o yaml > deployments.yaml

Check the output and remove server-generated runtime fields, such as status, metadata.uid, and metadata.resourceVersion, before applying the manifests to the new cluster.
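This cleanup can be scripted. The sketch below assumes each exported object has already been parsed into a dict (for example with PyYAML's safe_load); the list of fields to strip is illustrative, not exhaustive:

```python
# Sketch: strip server-populated fields from an exported Kubernetes object.
# The object is assumed to be a dict parsed from the exported YAML.
# The field list is illustrative, not exhaustive.

LIVE_METADATA_FIELDS = {
    "uid", "resourceVersion", "generation",
    "creationTimestamp", "managedFields", "selfLink",
}

def clean_manifest(obj):
    """Return a copy of the manifest without runtime ("live") fields."""
    # Drop the entire status block; it is populated by the cluster
    cleaned = {k: v for k, v in obj.items() if k != "status"}
    # Drop server-assigned metadata fields, keep name/labels/annotations
    metadata = cleaned.get("metadata", {})
    cleaned["metadata"] = {
        k: v for k, v in metadata.items() if k not in LIVE_METADATA_FIELDS
    }
    return cleaned

if __name__ == "__main__":
    exported = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "myapp", "uid": "abc-123", "resourceVersion": "42"},
        "spec": {"replicas": 2},
        "status": {"readyReplicas": 2},
    }
    print(clean_manifest(exported))
```

Running the cleaned manifests through kubectl apply against the AKS cluster then recreates the objects without conflicting server-assigned values.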

Conclusion

In this article, I explained the basics of AKS migration and presented several best practices that can help you migrate to Azure Kubernetes Service more effectively:

  • Leverage high availability—Use DNS updates or Azure Traffic Manager to avoid disrupting production services during your migration.
  • Use VM scale sets—This allows your workloads to gain access to Azure Availability Zones, multiple node pools, and more.
  • Deploy Your AKS Cluster Configuration via CI/CD—Use your existing CI/CD pipeline to deploy your configuration to AKS.


Gilad David Maayan

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.
