Overview of F5 solutions for the 5G ecosystem
In this blog, we will take an overview of F5 solutions for the 5G ecosystem.
A modern service provider building a 5G architecture is likely to start with a cloud-native deployment. Cloud-native techniques bring several advantages, including improved adaptability, more efficient use of resources, and greater agility in deploying and modifying network functions.
Red Hat OpenShift unlocks these advantages, giving the service provider the freedom to scale horizontally from core to edge, a range of existing tooling and design patterns, and uniform orchestration. Nevertheless, implementing 5G core CNFs on a unified cloud-native platform comes with challenges, such as:
- Network protocol management issues (NGAP/SCTP, HTTP/2, Diameter, GTP, SIP, lawful intercept, etc.)
- Insufficient integration of routing with service provider networks
- Restricted egress capabilities
- Strict security requirements and their integration into overall security policies
- Insufficient visibility and revenue management
Implementing a 5G core with F5 and OpenShift products together reduces:
- Operational complexity: simplifies IP address management
- Security complexity: avoids having to define additional, unique security rules for every new CNF
- Architectural complexity: hides internal complexity and faults from external networks
To accomplish this, F5 enhances OpenShift with two networking components: Carrier-Grade Aspen Mesh (CGAM) and Service Proxy for Kubernetes (SPK).
- Service Proxy for Kubernetes (SPK) is a cloud-native application traffic management solution designed for service providers' 5G networks. SPK integrates F5's containerized Traffic Management Microkernel (TMM), an Ingress Controller, and Custom Resource Definitions (CRDs) into the OpenShift container platform to proxy and load balance low-latency 5G traffic.
- Carrier-Grade Aspen Mesh (CGAM), based on the Cloud Native Computing Foundation's Istio project, was developed by F5 to help service providers transition from 4G virtualized network functions (VNFs) to 5G's service-based architecture (SBA), which relies on a microservice infrastructure.
When SPK is not present, each CNF needs a Multus underlay as an extra interface to communicate with services and nodes outside the cluster, as illustrated in the figure below. Operationally, supplying interfaces and IP addresses to every CNF replica this way is more difficult and error-prone. It is also a serious security concern that every CNF pod is exposed to outside networks.
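To make that per-pod overhead concrete, here is a minimal sketch, using the Kubernetes Python client, of attaching a Multus secondary interface to a single CNF pod when SPK is not used. The NetworkAttachmentDefinition name, namespace, image, and static IP are hypothetical; in practice, every replica would need this kind of per-pod wiring.

```python
# Minimal sketch: attaching a Multus secondary interface to one CNF pod.
# The network-attachment name "cnf-underlay", the namespace, and the static IP
# are hypothetical; each replica would need equivalent per-pod configuration.
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "cnf-replica-0",
        "annotations": {
            # Multus annotation requesting an extra interface with a fixed IP
            "k8s.v1.cni.cncf.io/networks": '[{"name": "cnf-underlay", "ips": ["192.0.2.10/24"]}]'
        },
    },
    "spec": {
        "containers": [
            {"name": "cnf", "image": "registry.example.com/cnf:latest"}
        ]
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="cnf-ns", body=pod)
```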
Because SPK focuses on North-South traffic and CGAM handles East-West traffic, the two operate independently of one another, addressing related but distinct challenges. The service provider can therefore choose the technology that best fits their requirements. The following graphic shows this traffic pattern and where both F5 solutions sit in an OpenShift cluster:
Architecture of SPK
Each SPK deployment serves one NF namespace. An SPK instance consists of two parts: a data plane pod that manages traffic (TMM) and a control plane pod that acts as an ingress controller (the F5 Controller).
Each TMM has two network interfaces: an internal interface that connects to the cluster through CNI, and an external interface that faces the outside network. SPK ships with several Custom Resource Definitions (CRDs), from which you can create Custom Resource (CR) objects as needed.
For example, if the CNF needs to expose its Kubernetes service (Service X) so that external clients can reach it, an F5SPKIngressTCP CR can be created. This establishes a Virtual Service (VS) address (the Service X VIP) in SPK that listens on the external network at a specified port. The CR maps to the CNF's Service X and its endpoints. When an external client wants to communicate with this CNF service, it sends traffic to the Service X VIP of the F5SPKIngressTCP CR in SPK. SPK then distributes the traffic across Service X's endpoints using the load balancing method also specified in that CR.
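As an illustration, the sketch below creates such a CR with the Kubernetes Python client. The API group, version, plural, and spec field names are assumptions based on the description above, not an authoritative SPK schema; consult the SPK CRD reference for the exact spec.

```python
# Sketch: exposing "service-x" through SPK by creating an F5SPKIngressTCP CR.
# Group/version/plural and the spec fields below are illustrative assumptions;
# check the SPK documentation for the exact CRD schema.
from kubernetes import client, config

config.load_kube_config()

ingress_tcp = {
    "apiVersion": "ingresstcp.k8s.f5net.com/v1",   # assumed group/version
    "kind": "F5SPKIngressTCP",
    "metadata": {"name": "service-x-vip", "namespace": "cnf-ns"},
    "spec": {
        "destinationAddress": "203.0.113.10",       # Service X VIP on the external network
        "destinationPort": 38412,                   # listening port (example value)
        "loadBalancingMethod": "ROUND_ROBIN",       # how traffic is spread across endpoints
        "service": {"name": "service-x", "port": 38412},  # the CNF's Kubernetes service
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="ingresstcp.k8s.f5net.com",
    version="v1",
    namespace="cnf-ns",
    plural="f5-spk-ingresstcps",                    # assumed plural name
    body=ingress_tcp,
)
```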
The F5 Controller continuously watches for changes within the namespace and notifies the TMM. Whenever a pod or endpoint is added or removed, the TMM updates its service pool accordingly, keeping it in sync with the Kubernetes service and endpoints described by the F5SPKIngressTCP CR.
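The following sketch illustrates the kind of namespace watch this involves, using the Kubernetes Python client to react to endpoint changes for a hypothetical `service-x`. It only mimics what the F5 Controller does internally; the real controller pushes the resulting pool updates to the TMM.

```python
# Sketch: watching endpoint changes for "service-x" in the CNF namespace,
# analogous to how the F5 Controller keeps the TMM's pool in sync.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

for event in watch.Watch().stream(v1.list_namespaced_endpoints, namespace="cnf-ns"):
    endpoints = event["object"]
    if endpoints.metadata.name != "service-x":
        continue
    addresses = [
        addr.ip
        for subset in (endpoints.subsets or [])
        for addr in (subset.addresses or [])
    ]
    # A real controller would now update the TMM's pool members;
    # here we just print the current set of endpoint IPs.
    print(f"{event['type']}: service-x endpoints -> {addresses}")
```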
Description of Carrier-Grade Aspen Mesh (CGAM)
CGAM helps service providers transition from 4G virtualized network functions (VNFs) to 5G's service-based architecture, which is built on a microservice infrastructure. The main services offered by CGAM include:
- Communication: mTLS for any node-to-node exchange, encrypting both North-South and East-West traffic (see the sketch after this list)
- Packet Inspector: per-subscriber and per-service traffic visibility, enabling traceability for billing, troubleshooting, and compliance tracking
- Control: Ingress/Egress gateways, dashboards, a DNS Controller, and other tools that let operators manage and monitor traffic
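Since CGAM is based on Istio, strict mTLS between workloads can be illustrated with a standard Istio PeerAuthentication policy applied to a namespace, as sketched below with the Kubernetes Python client. CGAM's own packaging and policy tooling may differ; this only shows the underlying mechanism, and the namespace name is hypothetical.

```python
# Sketch: enforcing strict mTLS for all workloads in a namespace using a
# standard Istio PeerAuthentication policy (CGAM is based on Istio, but its
# own tooling and policy packaging may differ).
from kubernetes import client, config

config.load_kube_config()

peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "cnf-ns"},
    "spec": {"mtls": {"mode": "STRICT"}},   # reject any non-mTLS traffic
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="cnf-ns",
    plural="peerauthentications",
    body=peer_auth,
)
```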
Partnership between Red Hat and F5
F5 and Red Hat are working together on a number of projects to provide joint services, solutions, and platform integrations that streamline and accelerate building, deploying, and securing enterprise applications. These initiatives include work with Red Hat Enterprise Linux, Red Hat OpenStack Platform, Red Hat OpenShift, and Red Hat Ansible Automation Platform.
The proposed solution is based on a customized, pre-packaged continuous integration process overseen by Red Hat. Pipelines chain components and configurations to launch Ansible Playbooks that automate the deployment process, covering integration and testing of F5 SPK and CGAM against various OpenShift versions. Red Hat Distributed Continuous Integration (DCI) is the solution that makes these CI workflows possible.
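To give a flavor of what such a pipeline stage automates, here is a simplified sketch that loops over OpenShift versions under test and launches an Ansible playbook for each. The playbook and inventory names are hypothetical placeholders, and a real DCI pipeline adds much more, such as job tracking, component selection, and result reporting.

```python
# Sketch: a simplified pipeline stage that runs an Ansible playbook for each
# OpenShift version under test. Playbook and inventory names are hypothetical;
# a real DCI pipeline also handles job tracking and result reporting.
import subprocess

OCP_VERSIONS = ["4.12", "4.14", "4.16"]   # example versions under test

for version in OCP_VERSIONS:
    subprocess.run(
        [
            "ansible-playbook",
            "-i", "inventories/lab.yml",   # hypothetical inventory
            "deploy-spk-cgam.yml",         # hypothetical playbook
            "-e", f"ocp_version={version}",
        ],
        check=True,
    )
```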
By joining a large community with over 10,000 CI jobs launched for various partner use cases, F5 can continually test, validate, and pre-certify its products on OpenShift, and Red Hat can act on the resulting feedback more quickly to fix issues.