Red Hat’s MicroShift 4.16 provides new features for scalability
In this blog, we will look at the new scalability features that Red Hat’s MicroShift 4.16 provides.
MicroShift is a Kubernetes distribution tailored for deployment on small, resource-limited edge devices. It extends data center orchestration all the way to the far edge. A number of new quality-of-life features introduced in MicroShift 4.16 simplify the management of a large fleet of edge devices. These include the ability to attach pods to additional networks, streamlined direct updates between long-term support versions, and new application lifecycle management via GitOps. With consistent tooling and procedures across edge locations, MicroShift 4.16 helps teams keep scaling.
Let’s take a look at some of the most interesting features.
RHEL 9.4 and direct upgrades from one EUS version to another
First, let’s discuss the underlying operating system and the available upgrade paths. MicroShift 4.16, an extended update support (EUS) release, runs on Red Hat Enterprise Linux (RHEL) 9.4, which replaces the RHEL 9.2 and 9.3 versions supported in earlier releases. See the RHEL 9.4 release notes for the operating system changes.
Thanks to support for direct EUS-to-EUS upgrades, you can go straight from 4.14 to 4.16 with just one reboot. Skipping 4.15 reduces downtime for edge deployments, because only one restart is needed instead of two. Rollback is also fully supported on OSTree-based systems: if the Greenboot health check detects signs of an unhealthy system, it automatically reverts to the previous image (back at 4.14).
Even-numbered MicroShift releases are EUS releases, for which Red Hat backports urgent bug fixes and high-impact security updates. Check the product life cycle page for exact dates and details.
Attach a pod to multiple networks using Multus
MicroShift now supports attaching pods to multiple networks via the Multus plugin. If your networking needs are more complex, you can attach additional networks to pods. A typical use case is a pod that needs to connect to an operational network for industrial control systems or sensor networks.
You can install the optional Multus plugin on day one as part of a fresh installation, or on day two. Simply include the microshift-multus RPM package in your image build or deployment.
Once the MicroShift Multus RPM package is installed, you can add extra networks with the bridge, MACVLAN, or IPVLAN plugins using the NetworkAttachmentDefinition API.
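As an illustration, here is a minimal sketch of a NetworkAttachmentDefinition using the bridge plugin; the resource name, bridge name, and subnet are placeholders you would adapt to your environment:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf                # placeholder name, referenced by pods later
spec:
  config: |-
    {
      "cniVersion": "0.4.0",
      "type": "bridge",
      "bridge": "br-sensors",
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.10.0.0/24" }]]
      }
    }
```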
This also works well as an intermediate path to IPv6. MicroShift does not support IPv6 at this time, but full support is planned for the near future. Until then, you can use the bridge network plugin to connect a pod to a NIC with an IPv6 address.
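Pods opt in to such an additional network (whether IPv4 or IPv6) through the standard Multus annotation, referencing the NetworkAttachmentDefinition by name. In this sketch, the pod name and image are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sensor-gateway             # hypothetical workload name
  annotations:
    # Attach the secondary network in addition to the default cluster network
    k8s.v1.cni.cncf.io/networks: bridge-conf
spec:
  containers:
    - name: app
      image: registry.example.com/sensor-gateway:latest   # placeholder image
```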
MicroShift: enabling scalable GitOps (tech preview)
Working closely with the OpenShift GitOps team, MicroShift now offers optional installation of a small, lightweight GitOps agent. This makes it possible to manage application lifecycles consistently at scale across edge deployments.
Compared to traditional, centralized computing, managing a large fleet of edge devices can be difficult. You are likely to deal with distributed locations, connectivity problems, workforce availability, and architectural differences. With complex deployments of microservices-based systems, this becomes even more intricate. However difficult it may be, every deployment on every edge device must be on the same, consistent version.
In recent years, a GitOps-based strategy has emerged as the industry-standard way to address this problem: the target configuration is hosted in a Git repository, which is modified using standard Git workflows (pull requests are reviewed and approved, for example). In data center installations, one central controller typically reconciles the desired configuration in the Git repository with the clusters’ actual current configuration. Any discrepancy is either reported or corrected by synchronizing it to the cluster. Because the central GitOps controller pushes configuration updates to the managed clusters through their API endpoints, this strategy is known as push-based. Argo CD is a well-known open-source project that provides such a GitOps controller.
This approach has a few issues for edge deployments. The API endpoint of an edge device is frequently unreachable from the central system because it is shielded by a firewall. And since reconciliation only works while connectivity is available, manual local changes are not detected and corrected when the system is offline.
These problems can be addressed with a pull-based strategy, in which the endpoints themselves ask for pending updates. Every edge device runs a local GitOps controller that synchronizes with both a remote Git repository and the local cluster API. When connectivity is available, the desired configuration is retrieved from Git and cached locally, so reconciliation with the API server can continue even when connectivity is lost. This approach is also firewall-friendly, because the connection to the central Git repository is established from the edge device.
An important consideration with this strategy is the GitOps controller running on the edge device itself. It consumes additional resources, so a compact, lightweight controller is essential, and that is exactly what MicroShift now makes available. To deploy a small, lightweight Argo CD on MicroShift, add the microshift-gitops RPM package.
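As a sketch of what pull-based application management could look like once the package is installed, here is a hypothetical Argo CD Application that syncs a manifest directory from a Git repository to the local cluster. The repository URL, path, application name, and the assumption that Argo CD runs in the openshift-gitops namespace are all placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-app                     # hypothetical application name
  namespace: openshift-gitops        # assumes Argo CD runs in this namespace
spec:
  project: default
  source:
    repoURL: https://git.example.com/edge/manifests.git   # placeholder repository
    targetRevision: main
    path: apps/edge-app
  destination:
    server: https://kubernetes.default.svc   # the local MicroShift API server
    namespace: edge-app
  syncPolicy:
    automated:
      prune: true        # remove resources that were deleted from Git
      selfHeal: true     # revert manual changes made on the device
```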
Custom certificates for the API server
The default API server certificate is issued by an internal MicroShift cluster certificate authority (CA). Clients outside the cluster cannot easily verify it, which becomes a problem when exposing the API if security requirements prohibit self-signed certificates.
The API server certificate can be replaced with custom server certificates issued externally by a CA that clients trust. It is even possible to serve multiple certificates for different names. For details on how to configure this, see the documentation.
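Based on the MicroShift configuration file (/etc/microshift/config.yaml), a configuration sketch might look like the following; the certificate paths and hostname are placeholders, and the field names reflect our understanding of the apiServer.namedCertificates schema:

```yaml
apiServer:
  namedCertificates:
    - certPath: /etc/pki/api/server.crt   # placeholder path to the externally issued certificate
      keyPath: /etc/pki/api/server.key    # placeholder path to its private key
      names:
        - api.edge.example.com            # hostname(s) this certificate should serve
```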
Router controls for ingress
In earlier releases, the ingress router was always enabled, listening on ports 80 and 443 on every reachable IP address. This can be a concern on multi-homed edge devices, where ingress traffic may only be expected on a certain network, and it runs counter to the security best practice of minimizing attack surfaces.
As of MicroShift 4.16, administrators can now:
- Turn off the router. In certain usage scenarios, MicroShift is egress only. For example, industrial IoT solutions may not expose any inbound services at all if pods only connect to northbound cloud systems and southbound shop floor systems. With the router disabled, the router pod is not started, which limits the attack surface and reduces resource consumption.
- Choose the ports the router listens on.
- Configure the network interfaces and IP addresses the router listens on. In certain industrial use cases, the router should only be reachable from internal shop floor networks and not from public northbound networks (or vice versa, or both). A configuration sketch follows this list.
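Here is a hedged sketch of what these router controls could look like in /etc/microshift/config.yaml; the interface name is a placeholder, and the field names reflect our understanding of the ingress configuration schema:

```yaml
ingress:
  status: Managed          # set to Removed to disable the router entirely
  ports:
    http: 80               # host ports the router listens on
    https: 443
  listenAddress:           # restrict listening to specific NICs or IP addresses
    - eth1                 # placeholder: the shop floor network interface
```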
Configurable audit logging
MicroShift’s audit logging feature, which records all API requests, previously relied on hard-coded audit log policies and configuration. Starting with 4.16, more audit logging configuration options are available, which can help you comply with your organization’s audit logging policies.
By configuring parameters that control audit log file rotation and retention, far-edge devices can avoid consuming more storage than they can handle. On some devices, accumulating log data can constrain host system or cluster workloads and may even lead to device failure. Audit log policies help ensure that vital processing capacity remains available.
You can enforce limits on the size, number, and age of retained audit log files by setting audit log limit values. The fields are processed independently, without prioritization.
By combining certain parameters, you can specify a maximum storage limit for retained logs. As an illustration:
- To provide an upper limit for log storage, set both maxFileSize and maxFiles.
- Set maxFileAge to automatically delete files older than the timestamp in the file name, regardless of the maxFiles value.
You can also set an audit profile to control how much information is recorded in the audit log.
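Putting these options together, a hedged sketch of the audit log settings in /etc/microshift/config.yaml might look like this; the values are illustrative, and the field names reflect our understanding of the apiServer.auditLog schema:

```yaml
apiServer:
  auditLog:
    maxFileAge: 7          # delete rotated files older than 7 days
    maxFiles: 10           # keep at most 10 rotated log files
    maxFileSize: 200       # rotate a log file once it reaches 200 MB
    profile: Default       # profile controlling how much detail is recorded
```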