Microservices: The More the Merrier or Meshy-er?

Microservices are here, and their deployments are growing in size. As discussed in Part 1 of our series, “Containers and Functions: Leveraging Ephemeral Infrastructure Effectively,” successful adoption of a container architecture and subsequent deployment of microservices does not happen with the flip of a switch or the implementation of a new tool; it is nearly always an evolutionary journey. The evolution happens across all aspects of application development, architecture, packaging and infrastructure. Each aspect of how software is created and delivered has evolved significantly over the past several years, just as the definition of what it means to be cloud native has.

As teams navigate this evolving software lifecycle landscape, unique aspects of their organization—their individuals’ collective experience and their specific project needs—shape the evolutionary journey, which is to say that not every path to cloud native is the same. Applications and teams journeying toward cloud native will leverage one, a combination, or all of these approaches along the way.

Here, we’re going to focus on microservices—the promise of their benefits is broad. Not all applications or teams that attempt a journey toward microservices (whether from scratch or by breaking down a monolith) will succeed in realizing the benefits of a microservice-based deployment. Often, teams fall short given the unprecedented level of monitoring and management this application design requires. From establishing an environment and infrastructure that support distributed systems concerns, to organizing and training teams, fostering the culture and instituting operational practices, to infusing observability and infrastructure as code and incorporating modern DevOps monitoring tools, a team’s first experience with microservices can be cloudy in more ways than one. Once a steady cadence of continuous delivery is achieved, however, their benefits (e.g., speed of delivery) are unparalleled by enterprise-architected applications.

Microservices help teams deliver and iterate on services more quickly. They democratize language and technology choice across independent service teams—teams that create new features quickly as they iteratively and continuously deliver software (typically as a service). As a cloud-native approach to designing scalable, independently delivered services, microservices allow teams to focus narrowly on and prioritize their own service’s needs. This practice of delivering loosely coupled functionality drives agility and iterative delivery, and it forces a practice of making good on the “contractual obligation” inherent to the APIs each service exposes.
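
To make that API “contract” concrete, here is a minimal sketch of one independently deliverable service exposing a small, versioned endpoint. It assumes Flask, and the service, route and response fields are purely illustrative.

```python
# Hypothetical "inventory" microservice exposing a small, versioned API.
# The route and response shape form the contract other teams depend on,
# so changes are introduced under a new version (e.g., /v2/...) rather
# than broken in place.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/items/<item_id>", methods=["GET"])
def get_item(item_id):
    # A real service would query its own datastore; hard-coded here
    # to keep the sketch self-contained.
    return jsonify({"id": item_id, "in_stock": True, "api_version": "v1"})

if __name__ == "__main__":
    app.run(port=8080)
```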

Challenges from Every Angle

With the common design pattern of each microservice exposing an API, uniformity of behavior and consistency of versioning schemes are two examples of a collection of challenges latent in any deployment. Microservice deployments amplify these and other issues commonly seen with the introduction of any new technology or methodology. A multitude of microservices exacerbates not only the challenges of creating functionality in what are commonly eventually consistent environments and of infusing DevOps culture and practices, but also the challenge of ensuring interoperability across the multitude of new services. The more microservices deployed, the more aggravated these challenges become.

In a microservice deployment, more moving pieces and additional services increase monitoring difficulties. Consider this: You have one application composed of five services, with each service composed of about 10 containers. Your mental rendering of your application’s physical topology and its services’ logical interactions can quickly become amorphous given that its components literally move around. Unlike static, consistent VMs, which traditional monitoring tools support, microservices require monitoring tools with first-class consideration for ephemeral constructs and native service discovery integrations. If you’re running outmoded monitoring tooling, you’re flying blind.
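
To illustrate why native service discovery matters, the sketch below asks the orchestrator, assumed here to be Kubernetes via its official Python client, which pods currently back a service rather than watching a fixed list of hosts. The namespace and label are hypothetical.

```python
# Resolve the current, ephemeral set of pods behind a service by label,
# instead of monitoring a static list of VM addresses.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="shop", label_selector="app=checkout")
for pod in pods.items:
    print(pod.metadata.name, pod.status.pod_ip, pod.status.phase)
```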

Microservice deployment can also create organizational overhead and complexities. While focusing on one service rather than many enables continuous delivery and agility, it also allows developer and operations teams to work independently of one another. As such, there must be constant communication and collaboration between teams to ensure that rapid iteration on services is used for good, rather than negating the larger purpose of the DevOps culture.

With all of this in mind, how can service teams avoid this diffusion of responsibility and work to ensure successful delivery and operation across their microservices?

Mitigating Challenges to Reap Benefits

While certainly not the holy grail, service meshes strike at the heart of one well-known distributed systems challenge: networks are not homogeneous, reliable or unchanging. Aimed directly at this challenge, service meshes provide a new layer of cloud-native visibility, security, and control.

The service mesh landscape is a burgeoning area of tooling that isn’t limited to cloud-native applications; it provides much value to non-containerized, non-microservice workloads as well. This additional layer of tooling provides policy-based networking for microservices, describing the desired behavior of the network in the face of constantly changing conditions and network topology.

Ultimately, service meshes provide a services-first network: a network primarily concerned with relieving application developers of the need to build infrastructure concerns into their application code; a network that diffuses the responsibility of service management by providing an independently addressable layer of tooling for its management, layer 5, if you will. Service meshes create a network that empowers operators to define fine-grained traffic control, affecting application behavior without necessarily engaging developers. Through declarative policy, operators regain control over the volume of service requests flowing through their infrastructure.
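
Meshes express this intent as declarative configuration applied to their control plane; the exact schema varies by mesh, so the sketch below models the kind of policy an operator declares, a canary traffic split plus timeout and retry behavior, as plain Python data. The structure, names and values are illustrative, not any particular mesh’s API.

```python
# Illustrative model of operator-declared traffic policy for one service.
# A real mesh (Istio, Linkerd, Consul, etc.) expresses this as declarative
# config applied to the control plane -- no application code changes.
from dataclasses import dataclass

@dataclass
class TrafficPolicy:
    service: str
    canary_version: str
    canary_weight_percent: int   # share of requests routed to the canary
    request_timeout_ms: int      # fail fast instead of hanging on slow backends
    max_retries: int             # retry transient failures away from bad endpoints

checkout_policy = TrafficPolicy(
    service="checkout",          # hypothetical service name
    canary_version="v2",
    canary_weight_percent=5,     # send 5% of traffic to v2, 95% to the stable version
    request_timeout_ms=250,
    max_retries=2,
)
```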

Service meshes provide visibility, resiliency, and traffic and security control for distributed application services, and they provide immediate observability into the number of requests, distributed traces and latency (the amount of time it takes for services to respond). By deploying microservices on top of a service mesh, a DevOps team can get immediate metrics, logs and tracing without making application code changes. Service meshes also facilitate visibility into why apps are running slowly, one of the most bothersome issues service owners face, precisely because they often don’t know what’s causing the slowdown.
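
As a rough sketch of what “observability without code changes” can look like, the snippet below queries a Prometheus server for request latency recorded by the mesh’s proxies. The Prometheus address is hypothetical, and the metric name follows Istio’s conventions, which differ across meshes and versions.

```python
# Query the 95th-percentile request latency that the mesh's proxies already
# record per service -- the application itself was never instrumented.
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"  # hypothetical address
QUERY = (
    "histogram_quantile(0.95, "
    "sum(rate(istio_request_duration_milliseconds_bucket[5m])) "
    "by (le, destination_service))"
)

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=5)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"].get("destination_service"), series["value"][1], "ms")
```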

DevOps teams can also benefit from the enhanced security offered by most service meshes. A service mesh can reveal bad actors, flag and block a user sending an abnormally high volume of requests per second, and deny calls from clients that are authenticated but not authorized to call a service.
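
To illustrate the distinction between those checks, here is a self-contained sketch of rate-limiting, authentication and authorization decisions of the kind a mesh proxy enforces before a request reaches application code. The identities, policy table and limits are invented for illustration.

```python
# Toy illustration of checks a mesh proxy can apply before a request ever
# reaches application code: rate limiting, then authentication, then authorization.
import time
from collections import defaultdict

RATE_LIMIT_PER_SECOND = 100                      # illustrative threshold
ALLOWED_CALLERS = {"frontend": {"checkout"}}     # caller -> services it may invoke

request_counts = defaultdict(int)
window_start = time.time()

def admit(caller_identity, authenticated, target_service):
    global window_start
    # Reset the one-second rate-limit window when it expires.
    now = time.time()
    if now - window_start >= 1.0:
        request_counts.clear()
        window_start = now

    request_counts[caller_identity] += 1
    if request_counts[caller_identity] > RATE_LIMIT_PER_SECOND:
        return False, "rate limited: too many requests per second"
    if not authenticated:
        return False, "unauthenticated: identity could not be verified"
    if target_service not in ALLOWED_CALLERS.get(caller_identity, set()):
        return False, "unauthorized: authenticated but not allowed to call this service"
    return True, "admitted"

print(admit("frontend", authenticated=True, target_service="checkout"))   # admitted
print(admit("frontend", authenticated=True, target_service="billing"))    # unauthorized
```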

Putting It into Practice—What You Need to Know

There are a number of service mesh deployment models that may be employed on your journey to healthy and happy microservices. Following a few best practices can help along the way:

  • Gauge necessity: Reaching the point where a service mesh becomes necessary is evolutionary. The more dynamic your environment and the more complex your services, the more value a service mesh is likely to provide. Teams often start with a few containers on a single node, find the need for container orchestration when a couple of nodes are needed for scale and resiliency, then arrive at a service mesh deployment as they face more difficult challenges as their deployment grows.
  • Know your use case: Understanding the extent to which a service mesh is necessary, and what exactly it needs to do, is the first step in implementation. It’s important to start small and make sure the mesh delivers the value you want before diving in. The type of infrastructure and the types of services you’re running will help determine which service mesh to leverage.

For example, some meshes focus more on containerized environments, while others more readily account for services running directly on your VM or bare-metal operating system. Consider which service meshes you can deploy in your environment (some are less complex than others; some are easier to deploy than others) and what functionality you need the service mesh to have. The Layer5.io service mesh landscape provides helpful insight here.

  • Identify what you expect from your network vs. your application: What do you want out of the network that connects your microservices? You may want your network to be as intelligent and resilient as possible: to route traffic away from failures to increase the aggregate reliability of your cluster, and to avoid unwanted overhead like high-latency routes or servers with cold caches. You may want your network to ensure that the traffic flowing between services is secure against trivial attack. You may want your network to provide insight by highlighting unexpected dependencies and root causes of service communication failures. You may want a self-instrumented network that gathers and emits service-level metrics for all requests that transit your microservices. Whatever you do not expect from the network will end up in your application code, as the sketch below illustrates.
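
To ground that choice, the sketch below shows the kind of retry and timeout logic a team would otherwise hand-roll into every service client; with a mesh, this behavior can instead be delegated to the proxy layer through declarative policy. The endpoint and tuning values are illustrative.

```python
# Client-side resiliency that either lives in application code (as below)
# or is delegated to the mesh's proxies via declarative policy.
import time
import requests

def call_with_retries(url, attempts=3, timeout_seconds=0.25, backoff_seconds=0.1):
    """Retry transient failures with a short timeout and simple backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=timeout_seconds)
            if resp.status_code < 500:       # don't retry client errors
                return resp
            last_error = RuntimeError(f"server error: {resp.status_code}")
        except requests.RequestException as exc:
            last_error = exc
        time.sleep(backoff_seconds * (attempt + 1))
    raise last_error

# Hypothetical in-cluster service address:
# response = call_with_retries("http://checkout.shop.svc.cluster.local/v1/items/42")
```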

Conclusion

Successful deployment of microservices or adoption of a container architecture does not happen overnight; most teams are laden with responsibility for existing infrastructure and services. As such, you’re likely to take an evolutionary journey to becoming cloud native. Microservices bring significant benefits, along with some challenges that a service mesh may be the best tool to address. When implementing or considering a service mesh deployment, first gauge necessity. Once you’re sure a service mesh is the right fit, consider your use case before choosing a deployment model. When controlled and managed properly, more is merrier when it comes to microservice deployment.

Lee Calcote

Lee Calcote is the Head of Technology Strategy at SolarWinds, where he stewards strategy and innovation across the business. Previously, Calcote led software-defined data center engineering at Seagate, up-leveling the systems portfolio by delivering new predictive analytics, telemetric and modern management capabilities. Prior to Seagate, Calcote held various leadership positions at Cisco, where he created Cisco’s cloud management platforms and pioneered new, automated, remote management services.
