F5 this week made generally available an integrated application networking platform that centralizes the management of load balancing, web and application servers, application programming interface (API) gateways and cybersecurity.
Shawn Wormke, vice president and general manager for NGINX at F5, said NGINX One makes it possible to manage both F5 NGINX and NGINX Open Source instances via a software-as-a-service (SaaS) platform accessed through a single console.
NGINX One streamlines the implementation of NGINX Plus, NGINX Open Source, NGINX Unit, NGINX Gateway Fabric and the company's Kubernetes Ingress Controller in a way that unifies application networking across both monolithic and microservices-based applications that have been deployed across a hybrid cloud computing environment, noted Wormke.
NGINX One also provides access to additional telemetry data and artificial intelligence (AI) capabilities that make it easier to deploy and manage applications at scale, he noted.
In general, application networking has become more complex with the rise of cloud-native applications based on microservices. Reducing the cognitive load required to network those applications makes it simpler to turn network and security operations into a natural extension of DevOps workflows, said Wormke.
For example, a platform engineering team might create a set of templates for integrating applications that individual application development teams can then customize to meet specific requirements, he added.
In other instances, a network operations team might implement those templates on behalf of a DevOps team, noted Wormke. Regardless of the approach, IT organizations need to be able to flexibly address multiple use cases, he added.
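As a rough illustration of that template-driven model, the sketch below shows a generic configuration template that a platform engineering team might publish and an application team might then fill in with its own values. The template text, variable names and values are illustrative assumptions, not anything prescribed by NGINX One.

```python
from string import Template

# Hypothetical reverse-proxy template a platform engineering team might publish.
# The variable names (app_name, upstream, listen_port) are illustrative only.
NGINX_SERVER_TEMPLATE = Template("""\
upstream ${app_name}_backend {
    server ${upstream};
}

server {
    listen ${listen_port};
    server_name ${app_name}.example.internal;

    location / {
        proxy_pass http://${app_name}_backend;
        proxy_set_header Host $$host;
        proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
    }
}
""")

def render_config(app_name: str, upstream: str, listen_port: int = 8080) -> str:
    """Fill in the per-application values an individual dev team would supply."""
    return NGINX_SERVER_TEMPLATE.substitute(
        app_name=app_name, upstream=upstream, listen_port=listen_port
    )

if __name__ == "__main__":
    # An application team customizes the shared template with its own values.
    print(render_config(app_name="orders", upstream="10.0.12.7:9000"))
```

The point is simply that the shared scaffolding encodes the guardrails, while the per-application inputs stay in the hands of the development team.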
Most organizations have some level of experience with application networking using proxy servers, but the need for gateways and ingress controllers to provide these capabilities at scale has become more apparent. As a result, more IT organizations are starting to revisit how they are structured.

There may always be a need for dedicated networking specialists to manage the physical network underlay, but as other networking services become more integrated with DevOps workflows, responsibility for some of those services is starting to shift further left toward DevOps teams that will either deploy these capabilities themselves or take advantage of a SaaS platform such as NGINX One. The overall goal is to be able to dynamically provision application networking services without having to wait for a network administrator to provision them.
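To make that shift-left idea concrete, the sketch below shows one way a DevOps team might provision application networking for itself, using the community Kubernetes Python client to create an Ingress resource that an NGINX ingress controller would then serve. The host name, service name, namespace and ingress class are placeholder assumptions, and this is not the NGINX One API.

```python
from kubernetes import client, config

def provision_ingress(namespace: str = "demo") -> None:
    """Create an Ingress so a DevOps team can expose a service without filing a
    manual networking ticket. Host, service and class names are placeholders."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(name="orders-ingress"),
        spec=client.V1IngressSpec(
            ingress_class_name="nginx",  # assumes an NGINX ingress controller is installed
            rules=[
                client.V1IngressRule(
                    host="orders.example.internal",
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/",
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="orders",
                                        port=client.V1ServiceBackendPort(number=80),
                                    )
                                ),
                            )
                        ]
                    ),
                )
            ],
        ),
    )

    # Submit the routing rule to the cluster; the ingress controller picks it up.
    client.NetworkingV1Api().create_namespaced_ingress(namespace=namespace, body=ingress)

if __name__ == "__main__":
    provision_ingress()
```

Whether that request is expressed through a client library, a manifest in a Git repository or a SaaS console such as NGINX One, the effect is the same: The networking change rides along with the application deployment instead of waiting in a separate ticket queue.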
Regardless of how application networking evolves, however, the rigidity that has characterized the delivery of network services for decades should finally start to fade away. Today, outside of a cloud computing environment, it’s still common for IT teams to provision virtual machines or Kubernetes pods in hours only to wait days or even weeks for network connectivity to be provisioned. As IT environments become more hybrid and more workloads are pushed to the edge, there is a clear need for a more agile approach.
Each organization, of course, will need to decide how best to approach application networking. However, as IT environments become more complex, it's increasingly apparent that legacy approaches to application networking are not going to be able to keep pace.