How we used Crossplane for the things we should not have
30. Sep 2025
At Swiss Cloud Native Day 2025 in Bern, our colleague Liene Luksika shared an honest and entertaining story about VSHN’s journey with Crossplane. What started as a simple use case evolved into a complex architecture, full of learnings, mishaps, and valuable lessons for anyone building managed services on Kubernetes.
From healthcare to cloud native
Liene comes from the healthcare sector, so when she joined the cloud native world at VSHN, she had to quickly get used to Kubernetes lingo – namespaces, instances, and of course, the obsession with laptop stickers. Luckily, VSHN has been around for more than 10 years, providing 24/7 managed services and building cloud native platforms for customers in Switzerland, Germany, and beyond.
Why Crossplane?
As customers increasingly asked VSHN to run their software as a service – databases, Nextcloud, and other critical apps – we needed a solid way to provision and manage infrastructure across private and public clouds. Crossplane seemed like the perfect fit:
It lets engineers define desired state vs. observed state
It automatically reconciles the two – like making coffee appear if that is your desired state
It provides flexible building blocks to expose clean APIs for managed services on Kubernetes
VSHN has used Crossplane in production since early 2021 (around v0.14) and runs the Crossplane Competence Center in Switzerland.
The evolution: from simple to complex
Our first use case was straightforward: a customer wanted two types of databases (Redis and MariaDB), T-shirt sized, no extras. Crossplane handled this beautifully.
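As a sketch, such a T-shirt-sized database claim exposed through Crossplane might look like the following. The API group, kind, and field names are illustrative placeholders, not VSHN's actual AppCat API:

```yaml
# Hypothetical claim a customer could create -- all names here are
# illustrative, not the real VSHN API.
apiVersion: example.org/v1alpha1
kind: RedisInstance
metadata:
  name: my-cache
  namespace: my-app
spec:
  parameters:
    size: M          # T-shirt sizing: S, M, or L
    version: "7"
  writeConnectionSecretToRef:
    name: my-cache-credentials   # where the app finds its credentials
```

The customer only declares the desired state; Crossplane compositions behind the API take care of provisioning the actual database.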
Then reality hit. Customers wanted backups and restores, logs and metrics, alerting, maintenance and upgrades, scaling and user management, special features like Collabora for Nextcloud, and the freedom to choose infrastructure. To serve this, we adopted a split architecture:
A control cluster for all Crossplane logic
Separate service clusters for customer workloads
This architecture runs in production today for customers such as the Gesundheitsamt Frankfurt (Frankfurt's public health office) in Germany and HIN in Switzerland, on providers such as Exoscale and Cloudscale, keeping data sovereign and operations reliable.
When things go wrong
Building complex platforms means learning in production:
Deletion protection surprise: a minor Crossplane change removed labels before deletion, wiping our safeguard. Backups saved the day
Race conditions: a split approach to connection details occasionally made apps unreachable until we cleaned up code
The big one: during a planned “no-downtime” maintenance on a fleet of 1’300+ databases, objects entered an invalid state and Kubernetes garbage collection deleted 230 database objects. Some backups were fresh, others older. We pulled in 20 people overnight, communicated openly, and recovered together with the customer
Key lessons: test at realistic scale and keep recent, tested backups. Also, practice the restore path, not just the backup.
Crossplane 2.0 – where next?
Crossplane 2.0 introduces major breaking changes. Staying put is not an option, but migrating means real effort, especially for our split control plane architecture. We are evaluating whether Crossplane 2.0 fits our needs or if alternatives are a better match. As always, we will document our decisions openly in VSHN’s Architecture Decision Records.
Final thoughts
Cloud native success is not just about tools. It is about learning fast, designing for failure, and communicating clearly with customers. Crossplane has enabled a lot of innovation for us, and it has also tested us. Whether we proceed with Crossplane 2.0 or chart a different course, we will keep building sovereign, reliable, open managed services for our customers.
👉 Watch the whole video on YouTube:
Markus Speth
Marketing, Communications, People
Now Available: DevOps in Switzerland Report 2025 🚀
25. Jun 2025
We’re absolutely thrilled to release the sixth edition of our “DevOps in Switzerland” report – and this time with a special focus on Platform Engineering and Artificial Intelligence (AI)! 🤖
From January to April 2025, we conducted a study with professionals from the Swiss tech community. The result: valuable insights into how DevOps teams in Switzerland work today – what tools they use, how their teams are structured, the challenges they face, and where AI is already being used in practice.
💡 Want a sneak peek?
💡 Swiss companies are no longer asking whether to adopt DevOps – they’re asking how to scale it.
📈 Platform Engineering and AI are reshaping how teams ship software faster, safer, and smarter.
💡 1 in 3 Swiss DevOps teams already use AI in production – for code reviews, CI/CD optimization, and architecture support. Another third are gearing up to follow.
💡 54% of Swiss companies now have dedicated Platform Engineering teams. Internal Developer Platforms (IDPs) are becoming the secret weapon for enabling autonomy and reducing complexity.
💡 Devs say yes to AI! 79% of Swiss developers are comfortable using AI in their workflows – but only 20% believe it’s fully ready. The report shows: AI is promising, but needs better measurement and trust to scale.
You’ll find all of this (and much more!) in our compact PDF report (available in English only). Just like last year, the report begins with an executive summary – perfect for those short on time.
Redis 8 Now Available in the VSHN Application Catalog – Open Source Is Back!
11. Jun 2025
We’re thrilled to announce that Redis 8 is now available through the VSHN Application Catalog – and this release is a special one: Redis is officially open source again!
But that’s not all: Redis is now also available on Servala – the open, cloud-native service hub operated by VSHN, connecting developers, software vendors, and cloud providers across multiple infrastructures.
Why This Is a Big Deal
For years, Redis has been one of the most popular in-memory databases for developers and DevOps teams alike. However, licensing changes in previous versions created friction for open ecosystems and cloud-native users. With version 8, that’s finally changing: Redis has returned to its open source roots, now licensed under the GNU AGPLv3.
“Redis 8 brings Redis back to its open source roots. All future development of Redis will happen under the AGPLv3 license.” – Redis team, official announcement
This means greater transparency, broader collaboration, and long-term sustainability for users who rely on Redis as a key part of their stack.
Redis 8 with VSHN and Servala: Fully Managed, Highly Available
With Redis 8 now available in both the VSHN Application Catalog and on Servala, you get more than just the latest open source release:
Production-grade deployments on Kubernetes and OpenShift
Guaranteed availability, monitoring, and automated failover
Lifecycle management, including upgrades and security patches
Cloud provider flexibility – deploy in your infrastructure or through partners
Self-service provisioning via Servala with built-in automation
Whether you’re running Redis as part of your internal platform, or offering it to teams and customers, we’ve got you covered.
Supported Versions
We continue to support the most widely used Redis versions, with Redis 8 now part of our officially maintained portfolio. Check out the complete list of supported versions on the VSHN Redis product page and the Servala Redis page.
Why Choose Redis 8 via VSHN or Servala?
✅ Fully open source and community-driven again
✅ Kubernetes-native, GitOps-ready deployments
✅ High availability, failover, and backup strategies included
✅ Integrated with your infrastructure, or offered as a managed service
✅ Supported by VSHN, the DevOps experts behind Servala
Redis 8 is a major milestone for the open source world – and we’re proud to bring it to your production environment through VSHN and Servala.
The Technical Challenges Behind Servala: Standardizing Application Delivery
14. Apr 2025
In this follow-up to our Servala introduction, we explore the technical challenges of bringing managed services to cloud providers everywhere. Discover how the repetitive and inconsistent nature of application packaging, deployment, and operations inspired our vision for standardization.
We explore the problems platform engineers face today, including inconsistent container behaviors, unpredictable Helm charts, and the chaos of day-2 operations across security, configuration, and dependencies.
Learn how Servala’s proposed open standards will transform the landscape for:
Software vendors – Accelerating time-to-market and expanding reach without operational overhead
Cloud providers – Enriching service catalogs with enterprise-grade managed services
End users – Enjoying self-service freedom with consistent, secure, and compliant applications
Join us on this journey to simplify application delivery and make managed services accessible to everyone.
In our introduction to Servala, we mentioned the technical challenges of enabling software vendors to onboard themselves onto our platform. As we continue building Servala in 2025, we’re tackling the most fundamental challenge: creating a standardized approach to application delivery. Let’s explore these challenges and our proposed solution in more detail.
The Repetitive Nature of Application Management
Over the past years at VSHN, we have taken care of numerous applications as part of our managed services offering that now forms the foundation of Servala. For every single application, we had to do the same tedious tasks:
Packaging: Prepare the application in a deployable format, typically by creating an OCI-compliant container image compatible with Docker, Podman, Kubernetes, and OpenShift. Automate the packaging process to trigger whenever a new version is available.
Deployment: Deploy the packaged application to the target system, typically through automated processes rather than manual steps. Most deployments span multiple environments, such as test, staging, pre-prod, and production, or support self-service provisioning for SaaS. This process often involves creating Helm Charts and setting up automation pipelines or APIs provided by tools like Kubernetes operators (e.g., Crossplane).
Day-2 Operations: After deployment, ongoing responsibilities include collecting metrics, setting up alerts, updating the application, scaling in response to performance issues, backing up and restoring data, analyzing logs, offering 24/7 support, and ensuring compliance with various standards, along with many other operational tasks.
The Current Challenge for Servala
Doing these same steps over and over is tedious, and solving the same problems for every new application adds little value. In practice, each of these tasks is done in a multitude of different ways, which puts a heavy burden on engineers. Often parts of the work already exist – container images, for example, are usually available – but every image behaves differently from the next, so we must figure out anew how to integrate it into the following step. The same applies to the many Helm Charts out there. Standardization will relieve us of this burden, making the process more efficient and less repetitive.
The core issue stems from the flexibility of the tools involved. Container images vary widely in how they’re built and behave, while Helm Charts accept parameters in inconsistent formats. For example, the container image reference might appear as img, image, or image-registry, depending on the chart author.
Security scanning and compliance reporting vary widely between applications. Some include Software Bills of Materials (SBOMs), while others require manual inventory. Configuration handling is equally inconsistent—some applications use environment variables, others expect config files in specific locations, and others require custom configuration APIs.
Day-2 operations vary significantly across applications. Some expose metrics in a Prometheus-compatible format, while others don’t. Identical metrics might use different names, and logging formats range from structured JSON to custom plain text. Dependency management is often neglected, with minimal information about required services or components. As a result, maintaining these applications turns into a tedious game of whack-a-mole.
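To make the inconsistency concrete, here is a hypothetical Python sketch of the glue code a platform ends up writing just to locate a chart's image reference. The key aliases mirror the naming variations mentioned above; the function itself is illustrative, not part of any VSHN tooling:

```python
# Hypothetical glue code: find the container image reference in a
# chart's values, whichever alias the chart author chose.
IMAGE_KEYS = ("image", "img", "image-registry")

def extract_image(values):
    """Return the image reference from chart values, or None."""
    for key in IMAGE_KEYS:
        ref = values.get(key)
        if isinstance(ref, dict):
            # Some charts nest it as {"repository": ..., "tag": ...}
            repo, tag = ref.get("repository"), ref.get("tag", "latest")
            if repo:
                return f"{repo}:{tag}"
        elif isinstance(ref, str):
            return ref
    return None

print(extract_image({"img": "redis:8.0"}))                              # redis:8.0
print(extract_image({"image": {"repository": "redis", "tag": "8.0"}}))  # redis:8.0
```

Multiply this by every chart convention, metrics format, and logging style, and the maintenance burden described above becomes clear.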
We must solve these fundamental inconsistencies so that Servala can scale and enable software vendors to onboard their applications easily.
Our Proposed Solution: Standardization
How could we solve these obstacles? We propose to define a set of documents that specify patterns for all the various parts needed to deliver applications through Servala. We could also call these documents specifications, golden paths, patterns, standards, conventions, or defaults. Ultimately, the goal is to document a commonly agreed-upon way to solve the mentioned tasks so that we don’t have to iterate over them repeatedly.
However, doing that just for us feels wrong. As a company, we embrace Open-Source and Open Standards to work together in a defined way. Therefore, we propose to form a group of people from various companies, document the patterns together, and agree on them.
The Vision: A Transformed Application Delivery Landscape
What will application delivery look like once the Servala specifications are widely adopted? The benefits will be transformative for all parties involved:
For Software Vendors:
Accelerated Time to Market: Instead of spending months building deployment, monitoring, and maintenance systems, vendors can focus on their core product and leverage Servala’s standardized delivery mechanisms to reach cloud providers globally.
Reduced Operational Overhead: By conforming to the Servala specification, vendors automatically inherit proven operational practices – monitoring, metrics, logs, backups, and more – without maintaining their own operations team.
Expanded Market Reach: The ability to deploy on any Servala-compatible cloud provider opens new markets without additional engineering effort.
Enhanced Security Posture: Standardized security scanning, compliance reporting, and configuration management significantly reduce risk, enabling vendors to confidently deploy their applications on Servala-compatible cloud providers, even without dedicated in-house security expertise.
For Cloud Providers:
Enriched Service Catalogs: Providers can instantly offer dozens of managed services that follow consistent operational patterns, dramatically increasing their value proposition.
Operational Consistency: All services follow the same patterns for monitoring, alerting, and maintenance, reducing the complexity of running multiple third-party applications.
Competitive Differentiation: Smaller cloud providers can now compete with hyperscalers by offering comparable catalogs of managed services.
For End Users:
Consistent Operations: With Servala’s standardized delivery mechanisms, end users can deploy complex managed services confidently, knowing that they follow consistent operational patterns.
Predictable Interfaces: The operational interfaces remain the same regardless of where an application is deployed, giving end users a predictable and secure experience.
Enterprise Readiness: All services automatically include security, backup/restore, monitoring, and other enterprise features without custom integration work.
Simplified Compliance: Standardized security scanning and compliance reporting make regulatory audits more straightforward and less resource-intensive.
Dependency Clarity: Clear visibility into service dependencies and compatibility requirements reduces deployment failures and configuration errors.
The Servala Specification Areas
We envision documenting patterns for:
Container image behavior, such as where to store data, how to expose ports, how the entry point behaves, and with which permissions the application runs.
Helm Chart “API”: How do the standard values behave? What does the configuration structure look like?
Unified Operational Framework:
Backup and Restore: Standardized interfaces for consistent application and data backup procedures with well-defined restoration paths and verification methods
Metrics: Well-defined endpoints to get application metrics for alerting, monitoring, and performance insights
Alerting and Monitoring: Common alert definitions, severity classifications, and response expectations across applications
Logging Standards: Uniform logging formats, retention policies, and search capabilities to simplify troubleshooting
SLA Definitions: Standardized metrics for measuring and reporting on availability, performance, and reliability
Maintenance Windows: Clear protocols for coordinating and communicating maintenance events with minimal disruption
Billing: A uniform way of billing service usage
Security Scanning and Compliance: Standardized approaches for vulnerability management, security policy enforcement, and compliance reporting across all applications
Configuration Management: Unified patterns for handling application configuration, secrets management, and runtime reconfiguration
Dependency Management: Clear declaration and handling of service dependencies, including versioning requirements and compatibility matrices
Self-Service API Architecture: Propose standardized structures for Kubernetes resources, creating predictable interfaces for application management across environments.
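As a sketch, a chart conforming to such a Helm Chart “API” might agree on a values layout like the following. All key names are hypothetical placeholders for whatever the working group eventually agrees on, not an adopted Servala specification:

```yaml
# Hypothetical standardized values.yaml layout -- key names are
# placeholders, not an adopted specification.
image:
  repository: ghcr.io/example/app   # always "repository" + "tag",
  tag: "1.2.3"                      # never "img" or "image-registry"
config:
  logLevel: info                    # application config in one agreed place
metrics:
  enabled: true
  port: 9090                        # scraped at /metrics by convention
backup:
  enabled: true
  schedule: "0 2 * * *"             # standard cron syntax
```

With one agreed layout, the same automation can deploy and operate every conforming chart without per-application glue code.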
Previous work we want to build on
There have been successful efforts to standardize that we want to build on:
Open Container Image (OCI) Image Format
After a decade of fragmentation in how containers were built and stored, the OCI initiative introduced a unified image format adopted by tools like Docker and Podman. It defined a predictable manifest and layer structure and enabled interoperability across registries such as Docker Hub, GitHub Container Registry (GHCR), and Quay.
Kubernetes as a container orchestrator
Kubernetes has established itself as the de facto standard for managing container fleets. It provides a unified API for managing compute, networking, and storage regardless of the infrastructure provider.
Kubernetes Pod and Container Lifecycle Conventions
The Kubernetes community has standardized application behavior during lifecycle events, such as startup, shutdown, and health checks, ensuring consistent health monitoring. Applications now respond predictably to restarts and draining, greatly easing the work of platform engineers. Implementing lifecycle hooks has become a de facto standard.
Prometheus Metrics Format
Many applications already expose metrics in the Prometheus text or OpenMetrics format: by convention, a “/metrics” endpoint serves a human-readable, machine-parsable output, and standard naming conventions (http_requests_total, etc.) exist. While it’s not perfect yet – some metric names still vary – this is one of the most widely adopted informal standards, supported by applications, exporters, sidecars, service monitors, and more.
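For illustration, the conventional “/metrics” output looks like this (the counter is one of the standard names; labels are examples):

```
# HELP http_requests_total Total number of HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3
```

Because the format is line-oriented and self-describing, any Prometheus-compatible scraper can consume it without application-specific integration work.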
Software Bill of Materials (SBOM) standards
With open SBOM standards now established and widely supported by vendors like GitHub, GitLab, and Docker, generating and consuming SBOMs has become a best practice. It’s so fundamental that the EU Cyber Resilience Act (CRA) now mandates SBOMs for all proprietary and open-source software.
12-Factor App
While some inconsistencies remain, the https://12factor.net/ manifesto deserves an honorable mention: it laid the foundation for cloud-native apps in 2011 and still influences architecture and platform design today. It addressed inconsistent application structure and runtime expectations with practices that are now widely adopted – configuration via environment variables, statelessness, logging to stdout, and more – often enforced indirectly by platforms.
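A minimal sketch of the “config via environment variables” practice – the variable names and defaults are illustrative:

```python
# 12-factor-style configuration: the app reads its settings from the
# environment, so the same image runs unchanged in every environment.
import os

def load_config():
    return {
        "redis_url": os.environ.get("REDIS_URL", "redis://localhost:6379"),
        "log_level": os.environ.get("LOG_LEVEL", "info"),
    }

# The platform (e.g. a Kubernetes Deployment) injects the values:
os.environ["LOG_LEVEL"] = "debug"
print(load_config()["log_level"])   # debug
```

Because configuration lives outside the artifact, a platform can promote the identical image from staging to production by changing only the injected environment.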
Helm Chart Best Practices / Guidelines
The Helm community recognizes the inconsistency in chart structures and naming and has responded with best practices, guidelines, and tools like helm lint and helm create. While adoption remains partial, projects like Bitnami, KubeApps, and Backstage increasingly rely on these conventions, laying a strong foundation for what Servala aims to standardize.
OpenAPI / Swagger
The OpenAPI Initiative has significantly impacted API standardization. It enables machine-readable API definitions, automatic generation of SDKs, tests, mocks, and human-friendly documentation. Widely adopted across platforms – from Kubernetes CRDs to GitHub APIs – OpenAPI has brought consistency and interoperability to API design and consumption.
Open Service Broker API
We’ve implemented the Open Service Broker API and use it actively with a few clients, but it lacks a declarative, cloud-native approach to service listing and provisioning.
Crossplane
We’re fans of Crossplane’s declarative approach to service definitions and instantiation. Crossplane’s Composite Resource Definitions (XRDs) generate opinionated Kubernetes Custom Resource Definitions (CRDs) that follow exactly the structure the engineer defined in the XRD. Servala does not replace Crossplane; it uses Crossplane under the hood.
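For illustration, a minimal XRD might look like this. The group, kinds, and schema are illustrative placeholders, not a real VSHN definition:

```yaml
# Hypothetical XRD: defines a composite resource plus a namespaced
# claim, with a schema the platform team controls.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xredisinstances.example.org
spec:
  group: example.org
  names:
    kind: XRedisInstance
    plural: xredisinstances
  claimNames:
    kind: RedisInstance
    plural: redisinstances
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    size:
                      type: string
                      enum: [S, M, L]   # T-shirt sizing
```

Crossplane turns this into a CRD, so every instance request is validated against the schema before any infrastructure is touched.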
Open Application Model (OAM)
The Open Application Model (OAM) supports Servala’s mission by offering a platform-agnostic, standardized way to define cloud-native applications. It cleanly separates core logic (components), operational features (traits), and dependencies (scopes), providing a consistent interface for developers and operators. With reusable metrics, backups, and autoscaling definitions, OAM helps eliminate the fragmentation found in Helm charts and container images, making it an interesting foundation for Servala’s specification and enabling consistent managed services across environments.
The Platform Specification
The Platform Spec initiative contributes to Servala’s vision by offering a standardized contract between developers and platform engineers, defining a standard interface for deploying and managing applications across any internal developer platform (IDP). It focuses on creating a consistent, YAML-based specification for app workloads, including container images, environment variables, secrets, service bindings, and deployment rules. It solves many issues Servala identifies with inconsistent Helm charts and runtime behaviors. By adopting or aligning with Platform Spec, Servala can simplify onboarding, reduce integration overhead, and ensure applications are deployed in a predictable, platform-agnostic way, regardless of the underlying orchestrator or infrastructure.
Cloud Native Application Bundle (CNAB)
The Cloud Native Application Bundle (CNAB) specification supports Servala’s goals by providing a portable, standardized way to package and distribute multi-component applications, including Kubernetes manifests or Helm charts and Terraform plans, scripts, and other deployment artifacts. CNAB defines a consistent format for bundling an application’s code, configuration, and lifecycle operations (install, upgrade, uninstall), making it ideal for complex managed services that span multiple tools or environments. By leveraging CNAB, Servala could offer a unified packaging format that encapsulates everything needed for reliable, repeatable deployment, helping reduce fragmentation, simplify onboarding, and enable consistent Day-2 operations across cloud providers.
The Path Forward
Servala aims to accelerate application onboarding by a factor of 10, reducing weeks of custom integration to just days or hours, while dramatically improving reliability through consistent, proven patterns for deployment and operations. We’ve already begun implementing these standards in our development roadmap, but broad industry collaboration is essential for success. We invite software vendors, cloud providers, and platform engineers to join us in shaping these standards openly and collaboratively. With a solid foundation, Servala can redefine how managed services are delivered, empowering cloud providers to expand their service catalogs and enabling vendors to become SaaS providers without heavy operational overhead.
What’s Next?
In 2025, we will focus on enabling software vendors to onboard themselves onto the Servala service delivery platform. Find more information about Why we launched Servala in our Servala launch announcement.
Contact us
Interested in learning more? Book a meeting or write us to explore how Servala can help you! Experience the future of cloud native services on servala.com. 🚀
Markus Speth
Marketing, Communications, People
Why VSHN Managed OpenShift Customers Are Safe from the Recent Ingress NGINX Vulnerability
26. Mar 2025
A recently disclosed set of vulnerabilities, known as IngressNightmare, has raised alarms for Kubernetes users relying on the Ingress NGINX Controller. These vulnerabilities (CVE-2025-24513, CVE-2025-24514, CVE-2025-1097, CVE-2025-1098, and CVE-2025-1974), with a critical CVSS score of 9.8, could allow attackers to gain unauthorized access to a Kubernetes cluster, potentially leading to remote code execution and full cluster compromise. However, OpenShift 4.x customers are not affected by this exploit, as OpenShift uses the OpenShift Ingress Operator, based on HAProxy, as the default ingress controller.
The vulnerabilities affect the Ingress NGINX Controller, which is responsible for managing external traffic and routing it to internal services in a Kubernetes cluster. Specifically, they target the admission controller, which, if exposed without authentication, allows attackers to inject malicious configurations, resulting in remote code execution. Since OpenShift 4.x uses the OpenShift Ingress Operator (based on HAProxy) as the ingress controller, customers are not exposed to these risks.
OpenShift 4.x further enhances security by restricting permissions and not permitting the default ingress controller to access sensitive data, such as secrets stored across Kubernetes namespaces. This design decision helps protect OpenShift customers from potential exploits by preventing unauthorized access to critical cluster resources.
As a result, VSHN Managed OpenShift users can be confident that their clusters remain secure without having to worry about this specific vulnerability.
Markus Speth
Marketing, Communications, People
Announcing Redis by VSHN – Enhance Your Containerized Workloads
6. Nov 2024
We are thrilled to announce the general availability of Redis by VSHN, now available in OpenShift through the VSHN Application Marketplace. This powerful, in-memory data structure store, known for its blazing-fast performance and versatility, is now optimized for containerized environments on OpenShift. Whether you’re building microservices, real-time analytics, or caching layers, Redis by VSHN offers the reliability and scalability your applications need.
Why Redis on OpenShift?
Redis has been a favorite among developers for its simplicity, performance, and robustness. By integrating Redis into OpenShift, we are enabling seamless deployment and management of Redis instances within your containerized infrastructure. This means you can now leverage Redis’s capabilities while enjoying the benefits of OpenShift’s orchestration and container management.
Key Features
Containerized Deployment: Effortlessly deploy and manage Redis instances in your OpenShift environment.
Scalability: Scale your Redis instances up or down based on your application needs.
High Availability: Ensure your data is always available with Redis’s built-in replication and persistence mechanisms.
Integrated Monitoring: Utilize OpenShift’s monitoring tools to keep an eye on your Redis performance and health.
Security: Benefit from OpenShift’s security features to protect your Redis instances and data.
Benefits for Your Containerized Applications
Performance: Redis’s in-memory data structure ensures lightning-fast read and write operations, ideal for real-time applications.
Flexibility: Support for a variety of data structures, including strings, hashes, lists, sets, and more.
Compatibility: Seamlessly integrate with your existing OpenShift applications and services.
Developer Productivity: Simplified deployment and management allow developers to focus on building features rather than infrastructure.
Getting Started
To get started with Redis by VSHN, visit our Redis product page on the OpenShift Application Marketplace. Our comprehensive documentation will guide you through the setup process, ensuring you can quickly and efficiently integrate Redis into your workflows.
Do you want to see our open documentation on how to create or use any of the services in the marketplace? It is all openly available in the VSHN AppCat User Documentation.
Be on the lookout for more services as we continue expanding our marketplace. Let’s keep those containers humming!
VSHN Managed OpenShift: What you need to know about OpenShift 4.16
16. Oct 2024
Upgrade to OpenShift version 4.16
As we start to prepare the upgrade to OpenShift v4.16 for all our customers’ clusters, it is a good opportunity to look at what’s new in the Red Hat OpenShift 4.16 release. The release is based on Kubernetes 1.29 and CRI-O 1.29 and brings a handful of exciting new features which will make VSHN Managed OpenShift even more robust. The new release also deprecates some legacy features which may require changes in your applications.
The Red Hat infographic highlights some of the key changes:
Red Hat OpenShift 4.16: What you need to know Infographic by Ju Lim
Changes which may require user action across all VSHN Managed OpenShift, including APPUiO
For VSHN Managed OpenShift, we’re highlighting the following changes which may require user action in our Release notes summary
Clusters which use OpenShift SDN as the network plugin can’t be upgraded to OpenShift 4.17+
This doesn’t affect most VSHN Managed OpenShift clusters, since we switched to Cilium as the default network (CNI) plugin a while ago, and most of our older managed clusters have been migrated from OpenShift SDN to Cilium over the last couple of months.
The proxy service for the cluster monitoring stack components is changed from OpenShift OAuth to kube-rbac-proxy
Users who use custom integrations with the monitoring stack (such as a Grafana instance which is connected to the OpenShift monitoring stack) may need to update the RBAC configuration for the integration. If necessary, we’ll reach out to individual VSHN Managed OpenShift customers once we know more.
The ingress controller HAProxy is updated to 2.8
HAProxy 2.8 provides multiple options to disallow insecure cryptography. OpenShift 4.16 enables the option which disallows SHA-1 certificates for the ingress controller HAProxy. If you’re using Let’s Encrypt certificates for your applications, no action is needed. If you’re using manually managed certificates for your Routes or Ingresses, you’ll need to ensure that you’re not using SHA-1 certificates.
Legacy service account API token secrets are no longer generated
In previous OpenShift releases, a legacy API token secret was created for each service account to enable access to the integrated OpenShift image registry. Starting with this release, these legacy API token secrets aren’t generated anymore. Instead, each service account’s image pull secret for the integrated image registry uses a bound service account token which is automatically refreshed before it expires.
If you’re using a service account token to access the OpenShift image registry from outside the cluster, you should create a long-lived token for the service account. See the Kubernetes documentation for details.
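Following the upstream Kubernetes mechanism, a long-lived token is requested by creating a Secret of type kubernetes.io/service-account-token annotated with the service account’s name; the control plane then populates it with a non-expiring token. A minimal sketch (the name, namespace, and service account are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-token            # placeholder name
  namespace: my-namespace              # placeholder namespace
  annotations:
    kubernetes.io/service-account.name: my-service-account
type: kubernetes.io/service-account-token
```

After applying this, the token can be read from the Secret’s data and used from outside the cluster.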
Linux control groups version 1 (cgroupv1) deprecated
The default cgroup version has been v2 for the last couple of OpenShift releases. Starting with OpenShift 4.16, cgroup v1 is deprecated and will be removed in a future release. The underlying reason for the pending removal is that Red Hat Enterprise Linux (RHEL) 10, and therefore also Red Hat CoreOS (RHCOS) 10, won’t support booting into cgroup v1 anymore.
If you’re running Java applications, we recommend that you make sure that you’re using a Java Runtime version which supports cgroup v2.
Warning for iptables usage
OpenShift 4.16 generates warning event messages for pods which use the legacy iptables kernel API, since that API will be removed in RHEL 10 and RHCOS 10.
If your software still uses iptables, please make sure to update it to use nftables or eBPF. If you’re seeing these events for third-party software that isn’t managed by VSHN, please check with your vendor to ensure an nftables or eBPF version will be available soon.
Other changes
Additionally, we’re highlighting the following changes:
RWOP with SELinux context mount is generally available
OpenShift 4.16 makes the ReadWriteOncePod access mode for PVs and PVCs generally available. In contrast to RWO where a PVC can be used by many pods on a single node, RWOP PVCs can only be used by a single pod on a single node. For CSI drivers which support RWOP, the SELinux context mount from the pod or container is used to mount the volume directly with the correct SELinux labels. This eliminates the need to recursively relabel the volume and can make pod startup significantly faster.
However, please note that VSHN Managed OpenShift doesn’t yet support the ReadWriteOncePod access mode on all supported infrastructure providers. Please reach out to us if you’re interested in this feature.
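For reference, requesting the new access mode is just a different value in the PVC spec. A sketch with placeholder names (remember that support depends on the CSI driver and, for now, on the infrastructure provider):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-data        # placeholder name
spec:
  accessModes:
    - ReadWriteOncePod         # instead of ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```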
Monitoring stack replaces prometheus-adapter with metrics-server
OpenShift 4.16 removes prometheus-adapter and introduces metrics-server to provide the metrics.k8s.io API. This should reduce load on the cluster monitoring Prometheus stack.
Exciting upcoming features
We’re also excited about multiple upcoming features which aren’t yet generally available in OpenShift 4.16:
Node disruption policies
We’re looking forward to the “Node disruption policy” feature which will allow us to deploy some node-level configuration changes without node reboots. This should reduce the need for scheduling node-level changes to be rolled out during maintenance, and will enable us to say confidently whether a node-level change requires a reboot or not.
Route with externally managed certificates
OpenShift 4.16 introduces support for routes with externally managed certificates as a tech preview feature. We’re planning to evaluate this feature and make it available in VSHN Managed OpenShift once it reaches general availability.
This feature will allow users to request certificates with cert-manager (for example from Let’s Encrypt) and reference the cert-manager managed secret which contains the certificate directly in the Route instead of having to create an Ingress resource (that’s then translated to an OpenShift Route) which references the cert-manager certificate.
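As a rough sketch of what this could look like (the feature is tech preview, so the exact field layout may still change, and all names here are placeholders), the Route references the cert-manager-managed Secret directly:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                 # placeholder name
spec:
  host: my-app.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge
    externalCertificate:
      name: my-app-cert        # Secret created and renewed by cert-manager
```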
Changes not relevant to VSHN customers
There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. In particular, OVNKubernetes gains support for AdminNetworkPolicy resources, which provide a mechanism to deploy cluster-wide network policies. Please note that similar results should be achievable with Cilium’s CiliumClusterWideNetworkPolicy resources, and Cilium is actively working on implementing support for AdminNetworkPolicy.
Summary
OpenShift 4.16 deprecates some features, which may require changes to your applications in order to make future upgrades as smooth as possible. Additionally, OpenShift 4.16 is the last release that supports OpenShift SDN as the network plugin, and it disables support for SHA-1 certificates in the ingress controller. For those interested in the nitty-gritty details of the OpenShift 4.16 release, we refer you to the detailed Red Hat release notes.
VSHN customers will be notified about the upgrades to their specific clusters in the near future.
Interested in VSHN Managed OpenShift?
Head over to our product page VSHN Managed OpenShift to learn more about how VSHN can help you operate your own OpenShift cluster including setup, 24/7 operation, monitoring, backup and maintenance. Hosted in a public cloud of your choice or on-premises in your own data center.
Simon Gerber
Simon Gerber is a DevOps engineer in VSHN.
Latest news
OpenShift
Press
Security Update: Red Hat Consulting GitLab Incident
Announcing General Availability of PostgreSQL by VSHN – On OpenShift
3. Oct 2024
We have some fantastic news – our PostgreSQL service is now generally available on OpenShift in our Application Catalog through the VSHN Application Marketplace. After seeing our container-based database solution work wonders for a few lucky customers, we’re excited to open it up for all to enjoy!
Why You’ll Love It
Always On: our high availability setup keeps your data accessible.
Safety First: top-notch security features to keep your data safe.
Grow As You Go: easily scale with your business needs.
Hands-Free Maintenance: automatic updates and backups? Yes, please!
Expert Help: our team is always here to support you.
Do you have an application running in containers – or moving to containers – that uses PostgreSQL? Would you rather avoid the complexity of running your database within a Kubernetes cluster yourself?
Then check out PostgreSQL by VSHN to dive into the details and get started today.
Do you want to see all our open documentation on how to create or use any of the services in the marketplace? You can find it all openly available in the VSHN AppCat User Documentation.
Be on the lookout for more services as we continue expanding our marketplace. Let’s keep those containers humming!
Announcing Keycloak by VSHN: Your Ultimate Open Source IAM Solution
4. Sep 2024
Hey there! We’re thrilled to introduce Keycloak by VSHN – your new go-to for robust, open-source identity and access management (IAM). Bringing together the expertise of VSHN and Inventage, designed to simplify authentication and boost security, our managed service is here to make your life easier and your apps more secure. Let’s dive into what makes Keycloak awesome, and why the VSHN managed version is even better!
What’s the Buzz About Keycloak?
Keycloak is an open-source powerhouse for managing identities and access. With it, you can integrate authentication across your services with little fuss. It’s packed with features like user federation, strong authentication, comprehensive user management, and finely tuned authorization controls. In short, it’s all about making secure access as straightforward as possible.
Why Should Your Enterprise Use Keycloak?
Here are just a few reasons:
Single Sign-On (SSO) Magic: Log in once and access all your apps without breaking a sweat. Plus, logging out from one logs you out from all – neat, right?
Integration Ease: Keycloak plays nicely with OpenID Connect, OAuth 2.0, and SAML 2.0, sliding seamlessly into your existing setup.
Empowering User Federation: Whether it’s LDAP or Active Directory, Keycloak connects smoothly, ensuring all your bases are covered.
Granular Control: With its intuitive admin console, managing complex authorization scenarios is a breeze.
Scalable Performance: From startups to large enterprises, Keycloak scales with your needs without skipping a beat.
Open Source vs. Proprietary: Why Go Open?
Choosing Keycloak means embracing benefits like:
Cost Efficiency: Forget about expensive proprietary license fees; open-source is wallet-friendly.
Total Transparency: Open-source means anyone can check the code, which helps in keeping things secure and up to snuff.
Community Driven: Benefit from the innovations and support of a global community.
Ultimate Flexibility: Adapt and extend Keycloak however you see fit. You’re in control!
What Makes Keycloak by VSHN Special?
All the Keycloak Goodies: Everything Keycloak offers, we provide, managed and tuned by experts.
Swiss-Based Hosting: Enjoy top-tier privacy, security, and adherence to Swiss regulations.
Expert Support: The Keycloak wizards from Inventage and the Kubernetes experts from VSHN are here to help you every step of the way. To learn more, visit the Keycloak Competence Center Switzerland, run by Inventage.
Transparent Pricing: What you see is what you get. No surprises here!
Solid SLAs and High Availability: We promise uptime and smooth operations, come rain or shine.
We Love Open Source!
With VSHN, you’re not just getting a service; you’re tapping into a philosophy. Backed by Red Hat and supported by Inventage, we bring you unparalleled expertise right from the heart of Switzerland.
Why Choose Keycloak?
It’s not just stable and feature-rich; it’s part of the open-source legacy of Red Hat, enhanced by VSHN’s partnership with Inventage – making us a powerhouse of knowledge and reliability in the container and cloud native IAM domain.
Ready to Jump In?
Dive into seamless identity management with Keycloak by VSHN. Curious for more? Visit our Keycloak product definition page (Keycloak by VSHN) for all the juicy details and kickstart your journey towards streamlined application security.
Ready to roll? Contact us now!
Markus Speth
Marketing, Communications, People
Exploring Namespace-as-a-Service: A Deep Dive into APPUiO’s Implementation
30. Aug 2024
In the rapidly evolving world of cloud computing and container orchestration, Namespace-as-a-Service (NSaaS) is becoming part of the wide array of different ways you can host your application. Offering NSaaS is possible for companies that have already built years of experience running Kubernetes and container-based platforms. Although many organizations worldwide are still early in their container, Kubernetes, and cloud journeys, those companies that have been doing it for a while are now able to take things to the next level. These experienced organizations can leverage their deep expertise to provide NSaaS with greater stability, maturity, and innovation. As a result, they can offer a robust and reliable cloud service that caters to diverse needs while driving significant advancements in the industry.
This blog post will explore what NSaaS means, delve into its pros and cons, and highlight how its stability and maturity are now enabling its widespread adoption. With the well-established ecosystem and landscape of Kubernetes, NSaaS provides a stable and secure environment for managing application workloads within namespaces efficiently. Could this concept offer cost savings in terms of resources and operational overhead for your organization? What types of organizations and workloads are best suited for a NSaaS platform? Join us as we dive into the details of NSaaS, uncovering why it might be the ideal solution for your cloud computing needs.
What is Namespace-as-a-Service?
Namespace-as-a-Service (NSaaS) is a cloud service model that allows users to create, manage, and utilize namespaces in a Kubernetes environment with ease. In Kubernetes, a namespace is a form of isolation within a cluster, providing a way to divide cluster resources between multiple users. This isolation can apply to applications, user access (developers), storage volumes, and network traffic. NSaaS abstracts the complexity of namespace management, offering a simplified and efficient way for users to leverage namespaces without needing in-depth Kubernetes knowledge. To the majority of users, this is no different from having a full Kubernetes cluster or a Heroku-style PaaS. The advantage is that it sits in between the two: you get the familiar Kubernetes API and environment definitions, but without the overhead and complexity of managing a Kubernetes cluster.
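To make the resource-division point concrete, here is a sketch of how a provider might carve out one tenant’s namespace together with a resource quota (all names and limits are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                 # one tenant's slice of the shared cluster
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"          # total CPU the tenant may request
    requests.memory: 8Gi       # total memory the tenant may request
    pods: "20"                 # cap on concurrent pods
```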
Pros of Namespace-as-a-Service
Lower operational overhead
NSaaS eliminates the need for users to manage the underlying infrastructure and complex Kubernetes configurations. This simplification allows developers to focus on application development rather than infrastructure management.
Scalability
With NSaaS, users can easily scale their applications. As namespaces are lightweight, creating and managing multiple namespaces is efficient and allows for better resource utilization.
Isolation and Security
Namespaces provide logical isolation within a Kubernetes cluster. NSaaS leverages this feature to ensure that applications running in different namespaces do not interfere with each other, enhancing security and stability. The platform provider running the Kubernetes clusters is responsible for workload isolation between the different namespaces, both from a security and a workload-optimization perspective.
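Network isolation between namespaces is typically enforced with policies along the lines of the following sketch, which allows ingress traffic only from pods in the same namespace (names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only   # placeholder name
  namespace: team-a                 # placeholder namespace
spec:
  podSelector: {}                   # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}           # only pods from this same namespace
```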
Cost Efficiency
By optimizing resource allocation and utilization, NSaaS can reduce costs. Users only pay for the resources they use, and the efficient management of these resources can lead to significant savings.
Simplified billing
NSaaS allows billing per namespace: usage is tracked separately for each namespace, making it clear which departments, application teams, or even customers consumed what, and providing the bill outline for each namespace.
Cons of Namespace-as-a-Service
Limited Customization (compared with running a separate Kubernetes cluster)
While NSaaS simplifies many aspects of namespace management, it may also limit customization options. Advanced users who require fine-tuned control over their Kubernetes environments might find NSaaS restrictive. Fine-tuned control might include deploying your own Kubernetes operators, or defining the maintenance window and Kubernetes version.
Dependency on Cluster Provider (compared with running a separate Kubernetes cluster)
Users are dependent on the cluster provider for managing not just the underlying infrastructure but also the wider cluster configuration and maintenance. Any issues or downtime at the cluster level managed by the provider can directly impact the user’s applications.
Potential Security Risks (compared with a “Heroku style” PaaS)
Although namespaces offer isolation and some customization options, improper configuration or vulnerabilities in the NSaaS implementation or the namespace configuration can lead to security risks. It is crucial to ensure both that the service provider follows security best practices and that the namespace configuration doesn’t open things up unnecessarily, for example in terms of network traffic.
Learning Curve (compared with a “Heroku style” PaaS)
For users unfamiliar with Kubernetes concepts, there might still be a learning curve associated with understanding namespaces and how to utilize NSaaS effectively. Although the Kubernetes CLI and concepts are more complex and require some extra learning from developers, Kubernetes is considered more of a standard today and therefore means less lock-in. It is also possible for a platform engineering team to implement fully automated CI/CD so that developers only need to know “git push”, similar to the “cf push” popularized by Cloud Foundry (which came out of the team that originally built Heroku). The kubectl CLI, although more complex than the simple CLIs of other providers, offers a standard interface and more power to customize the deployment of applications into the namespace.
Implementation of Namespace-as-a-Service on APPUiO
For APPUiO, we have embraced Namespace-as-a-Service to provide our users with a seamless and efficient cloud experience. Here’s how we’ve implemented it:
User-Friendly Interface
Our platform offers a user-friendly interface that abstracts the complexities of Kubernetes, allowing users to create and manage namespaces with just a few clicks.
Automated Provisioning
We have automated the provisioning of namespaces, ensuring that users can instantly create namespaces without any delays. This automation extends to resource allocation and configuration management.
Robust Security Measures
Security is a top priority at APPUiO. We have implemented stringent security measures to ensure isolation and protect user data. Our NSaaS implementation includes role-based access control (RBAC), network policies, and regular security audits. The tech stack behind APPUiO includes OpenShift, Isovalent Cilium Enterprise, Kyverno policies and our APPUiO agent that ensure security policies at all levels are adhered to.
Scalability and Flexibility
Our platform is designed to scale with our users’ needs. Whether you are a small startup or a large enterprise, APPUiO can handle your workloads efficiently. Users can easily scale their applications within their namespaces as their requirements grow.
Comprehensive Support
We provide comprehensive support to help users get the most out of our NSaaS offering. Our documentation, tutorials, and support team are always available to assist users in navigating any challenges they might encounter.
Conclusion
Namespace-as-a-Service represents a significant advancement in cloud computing, simplifying the management of Kubernetes environments and enhancing scalability, security, and cost efficiency. At APPUiO, we are proud to offer a robust NSaaS solution that empowers our users to focus on what they do best—developing great applications. By leveraging the power of namespaces, we provide a flexible, scalable, and secure cloud environment that meets the diverse needs of our users.
Whether you’re new to Kubernetes or an experienced user, APPUiO’s Namespace-as-a-Service can help you achieve your cloud goals with ease and efficiency.
Markus Speth
Marketing, Communications, People
VSHN Managed OpenShift: Upgrade to OpenShift version 4.15
17. Jul 2024
As we prepare to roll out upgrades to OpenShift v4.15 across all our customers’ clusters, it is a good opportunity to look again at what was in the Red Hat OpenShift 4.15 release. It brought Kubernetes 1.28 and CRI-O 1.28, and was largely focused on small improvements in the core platform and enhancements to how OpenShift runs on underlying infrastructure, including bare metal and public cloud providers.
The Red Hat infographic highlights some of the key changes:
What’s New in Red Hat OpenShift 4.15 Infographic by Sunil Malagi
For our VSHN Managed OpenShift and APPUiO customers, we want to highlight the key changes in the release that are relevant for them.
Across all VSHN Managed OpenShift clusters – including APPUiO
The highlights from our summary that apply are the following:
There are some node enhancements (such as faster builds for unprivileged pods, and compatibility of multiple image repository mirroring objects)
The release also brings updated versions for the monitoring stack (Alertmanager to 0.26.0, kube-state-metrics to 2.10.1, node-exporter to 1.7.0, Prometheus to 2.48.0, Prometheus Adapter to 0.11.2, Prometheus Operator to 0.70.0, Thanos Querier to 0.32.5)
It also includes some additional improvements and fixes to the monitoring stack
There are some changes to the Bare-Metal Operator so that it now automatically powers off any host that is removed from the cluster
There are some platform fixes including some security related ones like securing the cluster metrics port using TLS
OLM (Operator Lifecycle Manager) v1 is being introduced, which brings three new lifecycle classifications for cluster operators: Platform Aligned, for operators whose maintenance streams align with the OpenShift version; Platform Agnostic, for operators that make use of maintenance streams but don’t need to align with the OpenShift version; and Rolling Stream, for operators that use a single stream of rolling updates.
On VSHN Managed OpenShift clusters with optional features enabled
The changes that might relate to some VSHN Managed OpenShift customers who have optional features enabled would include:
OpenShift Service Mesh 2.5 based on Istio 1.18 and Kiali 1.73
Enhancements to RHOS Pipelines
Machine API – Defining a VMware vSphere failure domain for a control plane machine set (Technology Preview)
Updates to hosted control planes within OpenShift Container Platform (OCP)
Bare-Metal hardware provisioning fixes
Changes not relevant to VSHN customers
There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. It is also interesting to note the deprecation of the OpenShift SDN network plugin, which means no new clusters can leverage that setup. Additionally, there are new features related to specific cloud providers (like Oracle Cloud Infrastructure) or specific hardware stacks (like IBM Z or IBM Power).
The changes to handling storage and in particular storage appliances is also not relevant for VSHN customers as none of the storage features affect how we handle our storage on cloud providers or on-prem.
Features in OpenShift open to customer PoCs before we enable for all VSHN customers
We have an interesting customer PoC with Red Hat OpenShift Virtualization, a feature that continues to mature in OpenShift 4.15. We are excited to see the outcome of this PoC and to potentially make it available to all our customers looking to leverage VMs inside OpenShift. We know, due to the pricing changes from Broadcom, that this is an area many companies and organizations are looking at. Moving from OpenShift running on vSphere to running on bare metal with VMs inside OpenShift is an exciting transformation, and we hope to bring an update on this in an upcoming separate blog post.
Likewise, we are open to customers who would like to explore OpenShift Serverless (now based on Knative 1.11 in OpenShift 4.15), or the new OpenShift Distributed Tracing Platform, which is at version 3.2.1 in the OpenShift 4.15 release (this version includes both the new platform based on Tempo and the now-deprecated version based on Jaeger). This can also be used together with the Red Hat OpenTelemetry Collector in OpenShift 4.15. There are also new versions of OpenShift Developer Hub (based on Backstage), OpenShift Dev Spaces, and OpenShift Local. These are all interesting tools that are part of the Red Hat OpenShift Container Platform.
If any of the various platform features are interesting for any existing or new VSHN customers, we would encourage you to reach out so we can discuss potentially doing a PoC together.
Summary
Overall, OpenShift 4.15 brings lots of small improvements but no major groundbreaking features from the perspective of the clusters run by VSHN customers. For those interested in the nitty-gritty details of the OpenShift 4.15 release, we refer you to the detailed Red Hat release notes.
VSHN customers will soon be notified about the upgrades to their specific clusters.
Interested in VSHN Managed OpenShift?
Head over to our product page VSHN Managed OpenShift to learn more about how VSHN can help you operate your own OpenShift cluster including setup, 24/7 operation, monitoring, backup and maintenance. Hosted in a public cloud of your choice or on-premises in your own data center.
Markus Speth
Marketing, Communications, People
Earlier this month I presented at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.
We tackled this project to address the longer-than-15-minute CI pipeline runs we needed to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce the CI pipeline runs to under 5 minutes.
The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.
Watch the Recording of “How to Keep Container Operations Steady and Cost-Effective in 2024”
1. Feb 2024
The “How to Keep Container Operations Steady and Cost-Effective in 2024” event took place yesterday on LinkedIn Live; for those who couldn’t attend live, you can watch the recording here.
In a rapidly evolving tech landscape, staying ahead of the curve is crucial. This event equips you with the knowledge and tools needed to navigate container operations effectively while keeping costs in check.
In this session, we explore best practices, industry insights, and practical tips to ensure your containerized applications run smoothly without breaking the bank.
We will cover:
Current Trends: Discover the latest trends shaping container operations in 2024.
Operational Stability: Learn strategies to keep your containerized applications running seamlessly.
Cost-Effective Practices: Explore tips to optimize costs without compromising performance.
Industry Insights: Gain valuable insights from real-world experiences and success stories.
Schedule:
17:30 – 17:35 – Welcome and Opening Remarks
17:35 – 17:50 – Navigating the Container Landscape: 2024 Trends & Insights
17:50 – 17:55 – VSHN’s Impact: A Spotlight on Our Market Presence
17:55 – 18:10 – Guide to Ensuring Steady Operations in Containerized Environments
18:10 – 18:25 – Optimizing Costs without Compromising Performance: A Practical Guide
18:25 – 18:30 – Taking Action: Implementing Best Practices for Container Operations
18:30 – Q&A
Don’t miss out on this opportunity to set a solid foundation for your containerized applications in 2024.
Adrian Kosmaczewski
Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He is a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.
Crossplane has recently celebrated its fifth birthday, but at VSHN, we’ve been using it in production for almost three years now. In particular, it has become a crucial component of one of our most popular products. We’ve invested a lot of time and effort in Crossplane, to the point that we’ve developed (and open-sourced) our own custom modules for various technologies and cloud providers, such as Exoscale, cloudscale.ch, or MinIO.
In this blog post, we will provide an introduction to a relatively new feature of Crossplane called Composition Functions, and show how the VSHN team uses it in a very specific product: the VSHN Application Catalog, also known as VSHN AppCat.
Crossplane Compositions
To understand Composition Functions, we first need to understand what standard Crossplane Compositions are. Compositions, available in Crossplane since version 0.10.0, can be understood as templates that can be applied to Kubernetes clusters to modify their configuration. What sets them apart from other template technologies (such as Kustomize, OpenShift Template objects, or Helm charts) is their capacity to perform complex transformations and patch fields on Kubernetes manifests, following more advanced rules and with better reusability and maintainability. Crossplane Compositions are usually referred to as “Patch and Transform” compositions, or “PnT” for short.
As powerful as standard Crossplane Compositions are, they have some limitations, which can be summarized in a very geeky yet technically appropriate phrase: they are not “Turing-complete”.
Compositions don’t support conditions, meaning that the transformations they provide are applied on an “all or nothing” basis.
They also don’t support loops, which means that you cannot apply transformations iteratively.
Finally, advanced operations are not supported either, like checking for statuses in other systems, or performing dynamic data lookups at runtime.
To address these limitations, Crossplane 1.11 introduced a new Alpha feature called “Composition Functions”. Note that as of writing, Composition Functions are in Beta in 1.14.
Composition Functions
Composition functions complement and in some cases replace Crossplane “PnT” Compositions entirely. Most importantly, DevOps engineers can create Composition Functions using any programming language; this is because they run as standard OCI containers, following a specific set of interface requirements. The result of applying a Composition Function is a new composite resource applied to a Kubernetes cluster.
Let’s look at an elementary “Hello World” example of a Composition Function.
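In spirit, such a function does no more than the following sketch. Plain shell and a simplified state file stand in here for the real RunFunctionRequest/RunFunctionResponse machinery and SDK; the point is only the shape of the flow: receive the desired state, change one value, and hand the full state back.

```shell
# Schematic stand-in for a "Hello World" Composition Function: read the
# (simplified) desired state, modify one value, and emit the whole result.
# A real function exchanges RunFunctionRequest/RunFunctionResponse over gRPC.
cat > /tmp/state.yaml <<'EOF'
desired:
  resources:
    greeting-configmap:
      resource:
        data:
          greeting: hello
EOF

# "The function": rewrite the greeting and return the complete state.
sed 's/greeting: hello/greeting: hello-world/' /tmp/state.yaml
```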
The example does just one thing: it reads a request object, modifies a value, and returns it to the caller. Needless to say, it is for illustration purposes only – lacking error checking, logging, security, and more – and should not be used in production. Developers use the Crossplane CLI to create, test, build, and push functions.
Here are a few things to keep in mind when working with Composition Functions:
They run in order, as specified in the “pipeline” array of the Composition object, from top to bottom.
The output of the previous Composition Function is used as input for the following one.
They can be combined with standard “PnT” compositions by using the function-patch-and-transform function, allowing you to reuse your previous investment in standard Crossplane compositions.
In the Alpha release, if you combined “PnT” compositions with Composition Functions, the “PnT” compositions ran first, and the output of the last one was fed to the first function; since the latest release, this is no longer the case, and “PnT” compositions can run at any step of the pipeline.
Composition Functions must be called using RunFunctionRequest objects, and return RunFunctionResponse objects.
In the Alpha release, these two objects were represented by a now deprecated “FunctionIO” structure in YAML format.
RunFunctionRequest and RunFunctionResponse objects contain a full and coherent “desired state” for your resources. This means that if an object is not explicitly specified in a request payload, it will be deleted. Developers must pass the full desired state of their resources at every invocation.
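Wiring the points above together, a pipeline-mode Composition looks roughly like the following sketch (the composite type and the second function are placeholders; the input schema for function-patch-and-transform is abbreviated). It shows both the top-to-bottom ordering and how classic “PnT” logic becomes just another pipeline step:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-composition        # placeholder name
spec:
  compositeTypeRef:
    apiVersion: example.org/v1
    kind: XDatabase                # placeholder composite type
  mode: Pipeline
  pipeline:
    - step: patch-and-transform    # classic "PnT" logic, reused as a step
      functionRef:
        name: function-patch-and-transform
      input:
        apiVersion: pt.fn.crossplane.io/v1beta1
        kind: Resources
        resources: []              # your existing PnT resources go here
    - step: custom-logic           # runs next, sees the previous step's output
      functionRef:
        name: function-example     # placeholder function
```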
Practical Example: VSHN AppCat
Let’s look at a real-world use case for Crossplane Composition Functions: the VSHN Application Catalog, also known as AppCat. The AppCat is an application marketplace allowing DevOps engineers to self-provision different kinds of middleware products, such as databases, message queues, or object storage buckets, in various cloud providers. These products are managed by VSHN, which frees application developers from a non-negligible burden of maintenance and oversight.
Standard Crossplane “PnT” Compositions proved limited very early in the development of VSHN AppCat, so we started using Composition Functions as soon as they became available. They have allowed us to do the following:
Perform complex tasks, such as verifying current deployment values and making decisions before deploying services.
Drive the deployment of services based on Helm charts, modifying values on the fly as required by our customers, their selected cloud provider, and other parameters.
Script complex scenarios with conditionals, involving various environmental decisions, and reuse that knowledge.
Generalize many activities, such as backup handling and automated maintenance.
All things considered, it is difficult to overstate the many benefits that Composition Functions have brought to our workflow and to our VSHN AppCat product.
Learnings of the Alpha Version
We’ve learned a lot while experimenting with the Alpha version of Composition Functions, and we’ve documented our findings for everyone to learn from our mistakes.
Running Composition Functions in Red Hat OpenShift used to be impossible in Alpha because OpenShift uses crun, but this issue has now been solved in the Beta release.
In particular, we experienced slow execution speeds with crun when using the Alpha version of Composition Functions, but this is no longer the case.
We learned the hard way that resources missing from a function’s returned desired state were actually deleted!
Our experience with Composition Functions led us to build our own function runner, using another Crossplane capability that allows functions to specify their runner in the Composition definition.
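A minimal sketch of what this looked like in the Alpha-era syntax — the function name, image, and socket path below are illustrative, not our actual configuration:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-with-runner       # illustrative name
spec:
  compositeTypeRef:
    apiVersion: example.org/v1    # illustrative XRD
    kind: XExample
  functions:
  - name: my-function             # illustrative name
    type: Container
    container:
      image: registry.example.com/my-function:latest  # illustrative image
      runner:
        # Point Crossplane at an external gRPC runner instead of the
        # default OCI-based one; this is the Alpha-era field.
        endpoint: unix-abstract:crossplane/fn/default.sock
```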
Functions run directly on the gRPC server, which, for security reasons, must run as a sidecar to the Crossplane pod. Just like everything we do at VSHN, the Composition Function gRPC server runner (as well as its associated webhook and all of its code) is open source, and you can find it on our GitHub. As of the Composition Functions Beta, we replaced the custom gRPC logic with function-sdk-go. To improve the developer experience, we created a proxy and enabled the gRPC server to run locally: the proxy runs in kind and redirects requests to the local gRPC server, which lets us debug the code and test changes more efficiently.
Moving to Beta
We recently finished migrating our infrastructure to the most recent Beta version of Composition Functions, released in Crossplane 1.14, and we have been able to do that without incidents. This release included various bits and pieces such as Function Pipelines, an ad-hoc gRPC server to execute functions in memory, and a Function CRD to deploy them directly to clusters.
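With the Function CRD mentioned above, deploying a function to a cluster is a matter of applying an object like the following sketch — the package reference and tag are illustrative:

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: Function
metadata:
  name: function-patch-and-transform
spec:
  # Crossplane pulls the function package, runs it in the cluster,
  # and exposes it to Composition pipelines over gRPC.
  package: xpkg.upbound.io/crossplane-contrib/function-patch-and-transform:v0.2.1  # illustrative tag
```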
We are also migrating all of our standard “PnT” Crossplane Compositions to pure Composition Functions as we speak, thanks to the function-sdk-go project, which has proven very helpful, even if we are still missing typed objects. Managing the same objects with both “PnT” and Composition Functions increases complexity dramatically, as it can be difficult to determine where an actual change happens.
Conclusion
In this blog post, we have seen how Crossplane Composition Functions compare to standard “PnT” Crossplane compositions. We have provided a short example, highlighting their major characteristics and caveats, and we have outlined a real-world use case for them, specifically VSHN’s Application Catalog (or AppCat) product.
Crossplane Composition Functions provide an unprecedented level of flexibility and power to DevOps engineers. They enable the creation of complex transformations, with all the advantages of an Infrastructure as Code approach, and the flexibility of using the preferred programming language of each team.
“Composition Functions in Production” by Tobias Brunner at the Control Plane Day with Crossplane
17. Oct 2023
VSHN has been using Crossplane’s Composition Functions in production since their release. In this talk, Tobias Brunner, CTO of VSHN AG, explains what Composition Functions are and how they are used to power crucial parts of the VSHN Application Catalog, or AppCat. He also introduces VSHN’s custom open-source gRPC server, which powers the execution of Composition Functions. Learn how to leverage Composition Functions to spice up your Compositions!
Adrian Kosmaczewski
Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master’s in Information Technology from the University of Liverpool.
We have just upgraded our APPUiO Cloud clusters from version 4.11 to version 4.13 of Red Hat OpenShift, and there are some interesting new features for our APPUiO Cloud and APPUiO Managed users we would like to share with you.
Kubernetes Beta APIs Removal
OpenShift 4.12 and 4.13 updated their Kubernetes versions to 1.25 and 1.26 respectively, which remove a number of Beta APIs. If you are using objects with the CRDs below, please make sure to migrate your deployments accordingly.
As a reminder, the next minor revision of Red Hat OpenShift will update Kubernetes to version 1.27.
Web Console
APPUiO users will discover a neat new feature on the web console: resource quota alerts are now displayed on the Topology screen whenever a resource reaches its usage limits. The alert label link will take you directly to the corresponding ResourceQuotas list page.
Have any questions or comments about OpenShift 4.13? Contact us!
Aarno Aukia
Aarno is Co-Founder of VSHN AG and provides technical enthusiasm as a Service as CTO.
Yesterday evening, on Monday, July 24th, 2023, at around 21:15 CEST / 12:15 PDT, our security team received a notification about a critical security vulnerability called “Zenbleed” potentially affecting the cloud providers on which our customers’ systems run.
This blog post provides details about Zenbleed and the steps taken to mitigate its risks.
What is Zenbleed?
Zenbleed, also known as CVE-2023-20593, is a speculative execution bug discovered by Google, related to but somewhat different from side-channel bugs like Meltdown or Spectre. It is a vulnerability affecting AMD processors based on the Zen 2 microarchitecture, ranging from AMD’s EPYC datacenter processors to the Ryzen 3000 CPUs used in desktop and laptop computers. This flaw can be exploited to steal sensitive data stored in the CPU, including encryption keys and login credentials.
VSHN’s Response
VSHN immediately set up a task force to discuss the issue, bringing the team of one of our main cloud providers (cloudscale.ch) into a call to determine a course of action; the options contemplated included isolating VSHN customers on dedicated nodes and patching the affected systems directly.
At around 22:00 CEST, the cloud provider decided after a fruitful discussion with the task force that the best approach was to implement a microcode update. Since Zenbleed is caused by a bug in CPU hardware, the only possible direct fix (apart from the replacement of the CPU) consists of updating the CPU microcode. Such updates can be applied by updating the BIOS on affected systems, or applying an operating system kernel update, like the recently released new Linux kernel version that addresses this vulnerability.
Zenbleed isn’t limited to just one cloud provider, and may affect customers operating their own infrastructure as well. We acknowledged that addressing this vulnerability is primarily a responsibility of the cloud providers, as VSHN doesn’t own any infrastructure that could directly be affected.
The VSHN task force handed monitoring over to VSHN Canada, which tested the update as it rolled out to production systems and stayed in close contact to ensure there were no QoS degradations after the microcode update.
Aftermath
cloudscale.ch successfully finished its work at 01:34 CEST / 16:34 PDT. All VSHN systems running on that provider have been patched accordingly, and the tests carried out show that this specific vulnerability has been fixed as required. VSHN Canada confirmed that all systems were running without any problems.
We will continue to monitor this situation and to inform our customers accordingly. All impacted customers will be contacted by VSHN. Please do not hesitate to contact us for more information.
Adrian Kosmaczewski
We are thrilled to announce the fourth edition of our “DevOps in Switzerland” report!
From February to April 2023 we conducted a study to learn how Swiss companies implement and apply DevOps principles.
We compiled the results into a PDF file, and just like in the previous edition, we provided a short summary of our findings in the first pages.
You can get the report here. Enjoy reading and we look forward to your feedback!
Adrian Kosmaczewski
VSHN Canada Hackday: A Tale of Tech Triumphs and Tasty Treats
24. Mar 2023
The VSHN Canada Hackday turned into an epic two-day adventure, where excitement and productivity went hand in hand. Mätthu, Bigli, and Jay, our stellar team members, joined forces to level up VSHN as a company and expand their skill sets. In this blog post, we’re ecstatic to share the highlights and unforgettable moments from our very own Hackday event.
🏆 Notable Achievements
1️⃣ Revamping Backoffice Tools for VSHN Canada
Mätthu dove deep into several pressing matters, including:
Time tracking software that feels like a relic from the 2000s. With the Odoo 16 Project underway, we explored its impressive features and found a sleek solution for HR tasks like time and holiday tracking and expenses management. Now we just need to integrate it as a service for VSHN Canada.
Aligning the working environments of VSHN Switzerland and Canada. Although not identical, we documented the similarities and differences in our handbook to provide a seamless experience.
Tidying up our document storage in Vancouver. Previously scattered across Google Cloud and Nextcloud, a cleanup session finally brought order to the chaos.
Bigli and Jay teamed up to craft fully managed SKS GitLab runners using Project Syn, aiming to automate GitLab CI processes and eliminate manual installation and updates. This collaboration also served as an invaluable learning experience for Jay, who delved into Project Syn’s architecture and VSHN’s inner workings. Hackday milestones included:
Synthesizing the GitLab-runner cluster
Updating the cluster to the latest supported version
Scheduling cluster maintenance during maintenance windows
Developing a component for the GitLab-runner
Implementing proper monitoring, time permitting
📖 Documentation available on our wiki.
🍻 Festive Fun
On Hackday’s opening day, we treated ourselves to a team outing at “Batch,” our go-to local haunt nestled in Vancouver’s scenic harbor. Over unique beers and animated chatter, we toasted to our first-ever Canadian Hackday.
🎉 Wrapping Up
VSHN Canada’s Hackday was an exhilarating mix of productivity, learning, and amusement. Our team banded together to confront challenges, develop professionally, and forge lasting memories. We can hardly wait for future Hackday events and the continued growth of VSHN Canada and VSHN.
Jay Sim
Jay Sim is a DevOps engineer at VSHN Canada.