Ambassador Labs has added HTTP/3 support to both its Ambassador Edge Stack, a control plane for the Envoy proxy software deployed on Kubernetes clusters, and Emissary-ingress, the application programming interface (API) gateway it previously donated to the Cloud Native Computing Foundation (CNCF).
HTTP/3 has gained traction recently; it preserves the same request/response semantics as earlier versions of the protocol but runs over the QUIC transport on UDP rather than TCP, encoding and maintaining session state differently.
Daniel Bryant, head of developer relations at Ambassador Labs, says that capability results in faster load times. That is especially critical for latency-sensitive applications built using microservices that need to run at scale across a distributed computing environment.
In addition, the company announced a series of learning programs, including the new Kubernetes Learning Center (KLC) site, that it is providing to increase the skills of developers and platform engineering teams that build these applications on Kubernetes clusters.
At its core, Ambassador Edge Stack is used to publish, monitor and update services in a way that is simpler for developers to consume. It combines the functions of an API gateway, a Kubernetes ingress controller and a Layer 7 load balancer along with tools for managing traffic flows in a single platform.
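As an illustration of that developer-facing simplicity, routing in Emissary-ingress (and Edge Stack, which builds on it) is typically declared through a Mapping custom resource rather than by configuring Envoy directly. The sketch below is a minimal, hypothetical example; the hostname, prefix and service names are placeholders, not part of the announcement.

```yaml
# Illustrative Emissary-ingress Mapping: route requests whose path starts
# with /backend/ (on any hostname) to a Kubernetes Service named "quote".
# All names here are assumptions chosen for the example.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
```

Applying a manifest like this with kubectl is all a developer needs to do to publish a service; the gateway handles the underlying Envoy routing configuration.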
It’s clear that builders of modern applications today need access to tools and platforms that abstract away much of the underlying network complexity that tends to make it difficult to build and deploy distributed applications, says Bryant. That’s especially problematic when building applications that are deployed at the network edge, he adds.
As it becomes easier to take advantage of tools and platforms that provide that higher level of abstraction, it’s also becoming more common for network operations to be an extension of a DevOps workflow in Kubernetes environments. There will always be a natural separation of concerns among IT specialists, Bryant notes, but the ability to consume services by invoking APIs across a distributed network should get simpler.
In the meantime, the overall IT environment continues to grow in complexity as more microservices-based applications are deployed across fleets of Kubernetes clusters that need to be networked together. The challenge is that the inherent complexity of those IT environments often leads many organizations to keep relying on legacy monolithic architectures for building applications, even though those are less flexible and resilient than a microservices-based application would be. In effect, application modernization efforts are slowed by all the lower-level networking technologies that need to be mastered.
Of course, it’s not clear how network operations professionals view the rise of microservices-based applications. In theory, the network becomes the backbone for integrating a wide range of microservices that can be shared across multiple applications. In practice, many of those applications are so latency-sensitive that they stretch existing networks to their breaking point.
One way or the other, the divide that has historically existed between networking and application development is finally on the verge of being bridged. In fact, microservices-based applications may drive a wave of upgrades to the underlying network infrastructure upon which they depend.