An adoption strategy for Red Hat OpenStack Platform 17.1 on OpenShift
The introduction of Red Hat OpenStack Services on OpenShift marks a significant transformation in the structure and architecture of OpenStack services, changing how they are deployed and managed. The OpenStack control plane has transitioned from conventional containers on Red Hat Enterprise Linux to a pod-based architecture. This blog post explores the process of upgrading from Red Hat OpenStack Platform 17.1 to Red Hat OpenStack Services on OpenShift, as well as strategies for maximizing the benefits of this pivotal change.
The control plane runs on a Red Hat OpenShift cluster
In the revised architecture, the OpenShift cluster is built on Red Hat Enterprise Linux CoreOS (RHCOS). Red Hat OpenShift offers a robust platform for deploying containerized applications and microservices, and RHCOS, a streamlined Linux distribution tailored for container workloads, serves as the operating system of the OpenShift cluster.
Within this OpenShift cluster, the Red Hat OpenStack Services on OpenShift control plane runs as pods, taking advantage of the flexibility and scalability provided by Kubernetes orchestration. Each control plane service, including Nova, Neutron, Cinder, and Keystone, runs in its own pods, enabling efficient resource management and straightforward administration.
This pod-centric deployment approach presents numerous benefits compared to conventional containerized deployments. By utilizing Kubernetes features (including deployments, services, and pods), one achieves detailed control over the lifecycle of OpenStack services. Additionally, it offers inherent capabilities for scaling, rolling updates, and self-healing.
The data plane operates on Red Hat Enterprise Linux
While the control plane undergoes transformation within the OpenShift cluster, the data plane continues to utilize Red Hat Enterprise Linux (RHEL). The data plane includes the virtualized infrastructure responsible for the creation and management of Red Hat OpenStack Services instances within OpenShift.
The introduction of the adoption mechanism
Transitioning from Red Hat OpenStack Platform 17.1 to Red Hat OpenStack Services on OpenShift necessitates a meticulously planned strategy. Unlike standard in-place upgrades, this transition calls for a strategic adoption mechanism that recognizes the significant architectural changes within the control plane.
Control plane transition: Side-by-side deployment
The primary focus of this migration is the control plane, where the transition from operating as containers in RHEL to utilizing pods in OpenShift represents a significant departure from traditional upgrade practices. To navigate the complexities of this process, the adoption mechanism employs a side-by-side deployment strategy. This method facilitates the simultaneous operation of both the existing Red Hat OpenStack Platform 17.1 deployment and the newly established Red Hat OpenStack Services on the OpenShift environment.
A key benefit of maintaining both the source and target control planes is the option to revert to the Red Hat OpenStack Platform 17.1 environment should any unforeseen issues arise during the upgrade. This contingency plan provides reassurance to administrators and mitigates potential risks.
Data plane upgrade: In-place migration
In contrast to the control plane’s significant transformation through side-by-side deployment, the data plane upgrade adopts a more conventional in-place migration strategy. This differentiation is vital, as the data plane components are responsible for essential workloads and services that require minimal disruption during the upgrade.
The in-place migration of the data plane guarantees continuity and stability, enabling your organization to transition data and workloads to the new environment smoothly, without sacrificing performance or availability. By utilizing established best practices and automation tools, administrators can optimize the upgrade process, reduce downtime, and enhance operational efficiency.
Essential steps in the adoption process
The new control plane for Red Hat OpenStack Services on OpenShift is built on Red Hat OpenShift 4.16, with the operators required by Red Hat OpenStack Services on OpenShift pre-installed. This includes components such as NMState, MetalLB, cert-manager, and the Bare Metal Operator (Metal3).
Adoption preparation
During the phase of preparing for adoption, operators collect vital information from the existing Red Hat OpenStack Platform 17.1 environment:
- Configuration of Red Hat OpenStack Platform 17.1 services: This information is integrated into the control plane with each service adoption.
- Networking configuration: The addresses assigned in the overcloud and the os-net-config networking templates are incorporated into the data plane custom resources.
These configurations are integrated into the control plane at the time of adopting each service, as well as during the implementation of the data plane.
Finally, define a set of shell variables containing details about the original database, the compute service cell mappings, and the compute service hostnames. This information is used later for verification during the adoption process.
Migrating the databases to the control plane
As an initial configuration, the operator creates the OpenStackControlPlane custom resource (CR), which deploys the essential backend services while keeping all OpenStack services disabled. This forms the foundation of the new control plane.
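For illustration, a trimmed-down OpenStackControlPlane CR at this stage might look roughly like the sketch below, with backend services such as the Galera database and RabbitMQ enabled and the OpenStack services still switched off. The field names follow the core OpenStack operator API, but the exact structure depends on the operator versions installed, so treat this as a sketch rather than a copy-and-paste manifest.

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret           # admin passwords and service credentials
  storageClass: local-storage  # a storage class available in the OpenShift cluster
  galera:
    enabled: true              # backend database comes up first
    templates:
      openstack:
        replicas: 1
  rabbitmq:
    enabled: true              # messaging backend
    templates:
      rabbitmq:
        replicas: 1
  keystone:
    enabled: false             # OpenStack services stay disabled until adopted
  nova:
    enabled: false
  neutron:
    enabled: false
```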
It is essential to halt the Red Hat OpenStack Platform 17.1 services running on the controller nodes before initiating the database migration. This step prevents inconsistencies in the migrated data during the later data plane adoption, which could arise from resources being modified after the databases have been copied to the new deployment.
Some services are easy to stop because they only perform short, asynchronous operations; others are harder to stop gracefully because they run synchronous or long-lived operations that you may prefer to let finish rather than terminate abruptly.
The operator will then migrate the databases from the initial OpenStack deployment to the MariaDB instances located within the OpenShift cluster. The following steps should be undertaken:
- Deploy the adoption assistance pod, mariadb-copy-data (a sketch of such a pod follows this list).
- Create a dump of the original databases.
- Restore the databases from the .sql files into the control plane MariaDB.
- Finally, remove the mariadb-copy-data pod.
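The adoption assistance pod is essentially a short-lived MariaDB client container that can reach both the source database and the control plane network. The sketch below shows what such a pod could look like; the image, network annotation, and volume names are illustrative placeholders, not the exact manifest from the documented procedure.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-copy-data
  annotations:
    # Attach the pod to the network that can reach the source MariaDB,
    # for example the internal API network exposed through Multus
    # (the network name below is illustrative).
    k8s.v1.cni.cncf.io/networks: internalapi
spec:
  containers:
  - name: adoption-db-copy
    image: registry.example.com/mariadb-client:latest  # placeholder client image
    command: ["sleep", "infinity"]                      # keep the pod alive for oc exec / oc cp
    volumeMounts:
    - name: dump-data
      mountPath: /backup
  volumes:
  - name: dump-data
    emptyDir: {}                                        # scratch space for the .sql dumps
```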
The process of migrating the OVN database entails a similar procedure, comprising the following steps:
- Deploy the adoption assistance pod, ovn-copy-data.
- Create a backup of the OVN databases.
- Start the control plane OVN database services prior to the import, ensuring that ovn-northd and ovn-controller are not running.
- Upgrade the database schema for the backup files.
- Restore the database backup to the control plane OVN database servers.
- Finally, activate the ovn-northd service to synchronize the OVN northbound and southbound databases, and also enable the ovn-controller.
Adopting the Red Hat OpenStack Services on OpenShift control plane services
The subsequent phase involves the sequential deployment of Red Hat OpenStack Services on OpenShift services. In this instance, the procedure for adopting the Identity Service (Keystone) is as follows:
- Patch the OpenStackControlPlane CR to enable deployment of the Identity service (see the sketch after this list).
- As the Keystone deployment progresses, a keystone-db-sync pod is created. It detects that the database schema originates from the earlier Wallaby release and fast-forwards it through the Xena, Yoga, and Zed releases to Antelope.
- Once keystone-db-sync completes, the Keystone database uses the Antelope schema.
- The operator then uses the OpenStack client pod to run OpenStack commands that remove services and endpoints still referencing the previous control plane.
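As a rough illustration of the first step, enabling Keystone amounts to switching on its section in the OpenStackControlPlane CR. The fragment below is a sketch of such a change; verify the field names against the CRD installed in your cluster.

```yaml
# Sketch of the OpenStackControlPlane spec fragment that enables the
# Identity service; values are illustrative.
spec:
  keystone:
    enabled: true
    apiOverride:
      route: {}                     # expose the Keystone API through an OpenShift route
    template:
      databaseInstance: openstack   # the MariaDB instance that now holds the adopted data
      secret: osp-secret            # credentials carried over from the source cloud
```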
All of the other Red Hat OpenStack Services on OpenShift control plane services follow a similar adoption procedure.
If you encounter an issue while deploying the Red Hat OpenStack Services on OpenShift control plane services, you can roll back the control plane adoption. The rollback involves the following steps:
- Reestablish the functionality of the original control plane
- Eliminate the partially or fully deployed target control plane
Transitioning to the data plane
The subsequent phase of the adoption process involves the integration of the data plane. Initially, it is necessary to halt the remaining backend services within the control plane.
Before deploying the compute services, the operator must create an OpenStackDataPlaneDeployment resource. This deployment must include the pre-adoption-validation playbook, which is provided as an OpenStackDataPlaneService; a minimal sketch follows.
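Assuming a node set named openstack-edpm already describes the existing compute nodes, the validation-only deployment could look roughly like this (the resource kind and field names follow the data plane operator API, but the node set name and service list are illustrative):

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-pre-adoption
spec:
  nodeSets:
  - openstack-edpm              # existing node set covering the 17.1 compute nodes
  servicesOverride:
  - pre-adoption-validation     # run only the validation service for now
```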
Subsequent to the deployment, an Ansible Execution Environment job is initiated automatically. This job performs the following validation checks:
- Validation of the hostname
- Verification of kernel arguments
- Assessment of the tuned profile
The following steps are necessary:
- Configure the IP address management (IPAM) NetConfig resources using the network configuration gathered in the preparation step, along with any EDPM (External Data Plane Management) related custom resources that would apply in a greenfield deployment, such as secrets.
- Create a nova-compute-extra-config service with the parameter disable_compute_service_check_for_ffu set to true (see the ConfigMap sketch after this list). This configuration exists solely to support a fast-forward upgrade (FFU), during which the new control services start before the compute nodes have updated their service records. In an FFU scenario, the service records in the database may be more than one version out of date until the compute nodes are back up, so the control services must be running beforehand.
- Deploy the OpenStackDataPlaneNodeSet and OpenStackDataPlaneDeployment Custom Resources. Ansible Execution Environment (EE) jobs will be initiated to run playbooks for each service, including bootstrap, download-cache, configure-network, and others.
- The bootstrap service points the RHEL systems at the RHOSO 18 repositories and installs the necessary packages, including openstack-selinux.
- The data plane services replace the storage, compute, and networking containers from Red Hat OpenStack Platform 17.1 with their RHOSO 18 counterparts.
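As an illustration of the nova-compute-extra-config step above, the workaround is typically carried as a small nova.conf snippet in a ConfigMap consumed by the corresponding data plane service. The ConfigMap name and key below are illustrative; only the [workarounds] option itself comes from the Nova configuration reference.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-extra-config            # illustrative name, referenced by the
                                     # nova-compute-extra-config data plane service
data:
  99-nova-compute-ffu-workarounds.conf: |
    [workarounds]
    # Let control services start while compute service records in the
    # database are still more than one release old (FFU only).
    disable_compute_service_check_for_ffu = true
```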
Finally, eliminate the pre-fast-forward upgrade workarounds for the compute control plane services by setting the flag disable_compute_service_check_for_ffu to false, and execute the online migrations for the compute database to finalize the fast-forward upgrade.
At this point, all content related to the Red Hat OpenStack Platform within the cluster has been successfully upgraded to Red Hat OpenStack Services on OpenShift. You are now operating within a supported environment.
Conclusion
The transition from Red Hat OpenStack Platform 17.1 to Red Hat OpenStack Services on OpenShift represents a notable progression in the evolution of cloud infrastructure. By adopting a pod-based control plane within an OpenShift cluster on CoreOS, while ensuring a strong data plane on Red Hat Enterprise Linux, you can achieve remarkable levels of agility, scalability, and resilience. With Red Hat, you are poised to begin a journey toward a future where cloud infrastructure serves not merely as a platform, but as a driving force for innovation and transformation.