AppCat Tech

VSHN AppCat Update – Major PostgreSQL Upgrades and More Control over Costs

16. Apr 2026

Operating stateful services on Kubernetes has always been one of the more complex challenges – especially when it comes to databases like PostgreSQL.

With the latest VSHN AppCat release, we’re introducing a major milestone for our PostgreSQL offering based on CloudNativePG, along with improvements that give users more control over how their services are managed and operated.

This release focuses on three key areas: database lifecycle management, user control and cost optimization.

What is VSHN AppCat?

VSHN AppCat is the VSHN Application Catalog – a curated collection of production-ready applications that can be deployed and operated on Kubernetes with minimal effort.

Instead of manually installing and maintaining complex software stacks, AppCat allows teams to run services such as databases, identity providers or collaboration platforms as fully managed applications.

AppCat includes:

  • Automated operations and lifecycle management
  • Built-in monitoring, backup and maintenance
  • Consistent deployments across Kubernetes environments

This enables teams to focus on their applications, while VSHN handles the operational complexity behind the scenes.

👉 Learn more about VSHN AppCat

Major PostgreSQL Version Upgrades with CloudNativePG

This release introduces a significant new capability for PostgreSQL in AppCat: major version upgrades using CloudNativePG.

For the first time, users can upgrade the major version of their PostgreSQL instances within the AppCat environment.

Major version upgrades are one of the more sensitive operations in database lifecycle management. Making this available as part of a managed service is an important step in enabling long-term, sustainable operations of PostgreSQL workloads on Kubernetes.
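
For illustration, a major version upgrade is expected to boil down to raising the version on the instance definition and letting the platform orchestrate the rest. The resource below is a rough sketch only; the exact field names may differ from the AppCat API reference.

# Hypothetical sketch of a VSHNPostgreSQL instance being moved to a newer major version.
# Field names are illustrative assumptions; consult the AppCat reference for the exact schema.
apiVersion: vshn.appcat.vshn.io/v1
kind: VSHNPostgreSQL
metadata:
  name: my-database
spec:
  parameters:
    service:
      majorVersion: "17"   # raised from "16" to request the managed major upgrade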

👉 Learn more about PostgreSQL upgrades

PostgreSQL User Management for CloudNativePG

We’ve also introduced user management capabilities for PostgreSQL in our CloudNativePG-based offering.

While PostgreSQL itself has always supported user management, this functionality is now available and integrated within the AppCat-managed CloudNativePG setup.

This gives users more direct control over database access and roles within their managed PostgreSQL instances.

👉 Learn more about PostgreSQL user management

Bring Your Own Bucket for Backups

Another important addition in this release is the “Bring Your Own Bucket” feature.

Users can now specify their own unmanaged object storage bucket for backups instead of relying on automatically provisioned storage.

If a bucket is provided, AppCat will use the supplied credentials and skip internal bucket provisioning entirely.
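
As a rough illustration (the field names are assumptions rather than the verified AppCat schema), pointing an instance at your own bucket might look something like this:

# Hypothetical sketch of referencing an unmanaged bucket for backups.
# The referenced secret would hold the S3 endpoint, access key and secret key.
apiVersion: vshn.appcat.vshn.io/v1
kind: VSHNPostgreSQL
metadata:
  name: my-database
spec:
  parameters:
    backup:
      bucket:
        name: my-existing-backup-bucket         # assumption: bring-your-own bucket reference
        credentialsSecretRef: my-bucket-creds    # assumption: secret with the bucket credentials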

This can be particularly useful for organizations that:

  • Already operate their own storage infrastructure
  • Want to consolidate storage across services
  • Are looking for ways to optimize backup-related costs

👉 Learn more about backup configuration

Continuous Platform Evolution

Operating databases and stateful services on Kubernetes is continuously evolving – and this release represents an important step forward.

With major version upgrades, integrated user management, and more control over backup infrastructure, AppCat continues to expand the capabilities of managed services on Kubernetes.

As always, the goal remains the same: making complex operations simpler, safer and more predictable for teams running production workloads.

👉 Learn more about VSHN AppCat

👉 VSHN AppCat Changelog

Liene Luksika

Product Manager

General Tech

Espejote: A GitOps journey

14. Apr 2026

Espejote (big mirror in Spanish) manages arbitrary resources in a Kubernetes cluster. Built from the ground up to take advantage of Server-Side Apply and Jsonnet templating.

VSHN manages a large fleet of Kubernetes clusters for our customers, and we try to automate as much as possible to keep our operations efficient and sustainable. We use GitOps principles, but sometimes external state needs to be merged into the desired state defined in Git. This GitOps journey took us from Ansible playbooks directly applying YAML, to various operators, to bash “reconcilers”, and finally to Espejote, our shiny new GitOps operator.

Chapter 1: The Ansible era and first operator attempts

In the beginning, we used Ansible playbooks and custom roles to manage our OpenShift 3 Kubernetes clusters. We had a set of YAML files that defined the desired state of our clusters, and we would run Ansible playbooks to apply those YAML files to the clusters. This worked, but it was not very efficient. We had to run the playbooks manually, and if we forgot to run them, the clusters would drift from the desired state.

The collection of roles was nicknamed “mungg”, the Swiss German word for “marmot”. Nobody seems to know why, but it stuck.

We were just getting into writing operators and developed espejo to quickly sync resources between namespaces. It was the very early days of our operator journey.

Chapter 2: The sea of operators and tears

To solve the problem of manual intervention (and because we migrated to OpenShift 4, where the install procedure doesn’t use Ansible anymore), we started looking into Kubernetes operators.
It can’t be that hard to patch a Kubernetes manifest. Right? Wrong.
Some of the operators were buggy, some of them were not flexible enough, some of them loved to randomly go into reconcile loops, and most of them used too many resources. Some of them crashed our API servers. We started with resource-locker-operator, migrated to patch-operator, generated outages with Kyverno, and tested all other policy engines we could find. Kubewarden was the only one we really liked, but the cluster context API was not yet flexible enough for our use cases.

Espejo had been a good start, but we did not yet have the experience to build well-designed operators.
It showed. Every event triggered a full reconciliation of every resource, so syncing slowed down dramatically on larger clusters, and the tool lacked the flexibility we needed.

Chapter 3: Getting desperate for safe landings

We were fed up with the constant bugs and breaking changes in Kyverno, and patch-operator was barely maintained. Espejo was at its limits.

Desperate times called for desperate measures, so we started using an amalgamation of bash “reconcilers” – hacks with cron jobs, tiny custom controllers, and pre-processing resources in Project Syn.

We were using Jsonnet more and more. Project Syn components primarily use Jsonnet. We use Jsonnet for our cloudscale machine-api provider, for our SSO solution, and many other projects.

A growing issue was our heavily patched OpenShift alerting rules. We curate the upstream rules and only enable the ones we need, and some of them carry heavy patches. With every OpenShift release the upstream definitions move around and are sometimes only available embedded in Go code. We needed something that could patch rules already deployed in the cluster, as that was the only stable interface we had.

Chapter 4: Espejote, the shiny new GitOps operator

Bolstered by our growing operator experience and our love for Jsonnet, we decided to build our own operator to rule them all. We wanted something that was flexible, efficient, and easy to use. We wanted something that could handle all our use cases, from syncing resources between namespaces to patching OpenShift alerting rules.

Espejote is the result of that journey. It merges cluster state with GitOps principles, using Jsonnet to define the desired state of our clusters. It efficiently caches cluster state, and the reconcile trigger logic is explicitly defined. Sane controller-runtime rate limits apply. Jsonnet allows a huge amount of flexibility, and native server-side apply makes adding and removing keys a breeze. Every Espejote “resource manager” – the dynamic controller spawned for a config unit – uses its own ServiceAccount for least privilege.

Espejote is the operator we always wanted, and we are excited to share it with the world.

What is Espejote?

Espejote is a Kubernetes operator allowing you to manage arbitrary resources in a Kubernetes cluster.
It can mix GitOps principles with in-cluster state.

Why Espejote?

There are plenty of similar tools (and policy engines), but Espejote sets itself apart by focusing on three core pillars:

1. Powered by Jsonnet

Espejote uses Jsonnet as its templating engine. Unlike YAML combined with Go templates, Jsonnet treats the configuration as a data structure. It understands objects, arrays, and strings. It can’t accidentally generate broken YAML because Jsonnet ensures the internal data structure is valid before it ever exports the final file.

2. Native Server-Side Apply

Espejote is built from the ground up to leverage server-side apply (SSA). This means Espejote plays nicely with other controllers and operators. It can manage a single annotation or an entire resource; SSA ensures that the changes are merged without stomping on other tools.
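
As a small illustration, a server-side apply payload only needs to contain the fields its field manager wants to own. Applied with kubectl apply --server-side --field-manager=espejote-example (the manager name is a placeholder), Kubernetes merges it with whatever other managers already own on the same object:

# Minimal server-side apply payload: only the single annotation this manager claims.
# All other fields of the Namespace remain owned by their existing managers.
apiVersion: v1
kind: Namespace
metadata:
  name: production                        # existing object, not created by this apply
  annotations:
    example.com/managed-by: espejote      # the one field this manager takes ownership of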

3. Reliability

Reliability isn’t an afterthought. Espejote was born out of the frustration of watching operators enter infinite reconcile loops or crash clusters. It features:

  • Sane rate limiting and backoff strategies.
  • Every configuration unit or “resource manager” runs its own dynamically spawned controller, so a misbehaving unit won’t affect others.
  • Least privilege: Every resource manager runs with its own ServiceAccount.
  • Explicit control: There are no implicit watches or “magic” triggers. You have complete control over what gets reconciled and when.

Real-World Use Cases

What can you actually do with Espejote? Here are a few ways VSHN is using it in production:

  • Secret Syncing: Automatically replicate specific secrets (like image pull secrets or certificates) across multiple namespaces (see the sketch after this list).
  • Autoscaler Patching: Patching the OpenShift Cluster Autoscaler using Admission Webhooks.
  • Alerting Rule Management: Curate and patch OpenShift alerting rules across different cluster versions.
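
To give a feel for what such a configuration looks like, here is a simplified, illustrative sketch for the secret-syncing case. It mirrors the structure of the full example further down; the context block and the espejote.libsonnet helper are assumptions about the template API, so consult the Espejote documentation for the exact names, and a real setup also needs the referenced ServiceAccount with matching RBAC.

# Illustrative sketch only: replicate a pull secret into a fixed list of namespaces.
apiVersion: espejote.io/v1alpha1
kind: ManagedResource
metadata:
  name: sync-pull-secret
  namespace: syn-espejote              # placeholder namespace
spec:
  serviceAccountRef:
    name: sync-pull-secret
  triggers:
    - name: source-secret
      watchResource:
        apiVersion: v1
        kind: Secret
        name: pull-secret
        namespace: image-pull          # placeholder source namespace
  context:
    - name: source                     # assumption: exposes the watched secret to the template
      resource:
        apiVersion: v1
        kind: Secret
        name: pull-secret
        namespace: image-pull
  template: |
    // Jsonnet: copy the source secret into each target namespace.
    // esp.context() is an assumption about the helper library; see the Espejote docs.
    local esp = import 'espejote.libsonnet';
    local src = esp.context().source;
    [
      {
        apiVersion: 'v1',
        kind: 'Secret',
        metadata: { name: src.metadata.name, namespace: ns },
        type: src.type,
        data: src.data,
      }
      for ns in ['team-a', 'team-b']   // placeholder target namespaces
    ]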

The Future: WASM and Beyond

The roadmap includes a kro-like API builder for easy custom resource creation and support for WebAssembly plugins, which will allow developers to write custom logic in almost any language and run it safely within the Espejote controller.

Getting Started

Example

This example ManagedResource patches the Red Hat OperatorHub config singleton to disable all default sources. It shows the simplest use case: unconditionally patching a static manifest.
More complex use cases can be found in the Getting Started section above.

apiVersion: espejote.io/v1alpha1
kind: ManagedResource
metadata:
  name: disable-default-sources
  namespace: openshift-marketplace
spec:
  serviceAccountRef:
    name: disable-default-sources
  triggers:
    - name: operatorhub
      watchResource:
        apiVersion: config.openshift.io/v1
        kind: OperatorHub
        name: cluster
  template: |-
    {
        "apiVersion": "config.openshift.io/v1",
        "kind": "OperatorHub",
        "metadata": {
            "name": "cluster"
        },
        "spec": {
            "disableAllDefaultSources": true
        }
    }

Sebastian Widmer

AppCat Tech

VSHN AppCat Update – Enhancing PostgreSQL Flexibility and Platform Safety

13. Apr 2026

Running applications on Kubernetes is not just about deploying them – it’s about operating them reliably over time. That includes managing databases, handling authentication, and ensuring that services evolve safely without breaking changes.

With the latest VSHN AppCat release, we’re introducing improvements that focus on database flexibility, better integration capabilities and safer operations across the platform.

As always, many of these enhancements happen behind the scenes – but they directly improve how teams work with and operate their services.

What is VSHN AppCat?

VSHN AppCat is the VSHN Application Catalog – a curated collection of production-ready applications that can be deployed and operated on Kubernetes with minimal effort.

Instead of manually installing and maintaining complex software stacks, AppCat enables teams to run services like databases, identity providers or collaboration platforms as fully managed applications.

This includes:

  • Automated operations and lifecycle management
  • Built-in monitoring, backup and maintenance
  • Consistent deployments across Kubernetes environments

This allows teams to focus on their applications, while VSHN takes care of operating the underlying services.

👉 Learn more about VSHN AppCat:
Application Catalog – VSHN AG

More Flexibility with PostgreSQL Extensions

One of the key improvements in this release focuses on PostgreSQL powered by CloudNativePG.

We’ve introduced additional configuration options for PostgreSQL extensions, allowing users to enable and configure extension-specific libraries where required. This provides greater flexibility for teams that rely on advanced PostgreSQL features.
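
As a rough sketch (the field names below are assumptions, not the verified AppCat schema), enabling an extension together with the shared library it requires might look something like this:

# Hypothetical sketch: enabling an extension and its preload library on a managed instance.
apiVersion: vshn.appcat.vshn.io/v1
kind: VSHNPostgreSQL
metadata:
  name: my-database
spec:
  parameters:
    service:
      extensions:
        - name: pg_stat_statements                       # assumption: extension list field
      pgSettings:
        shared_preload_libraries: pg_stat_statements     # assumption: passthrough PostgreSQL setting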

In addition, support for VACUUM operations has been improved and is now more clearly integrated into scheduled maintenance processes. Regular VACUUM operations are essential for maintaining database performance and storage efficiency over time.

Together, these improvements give users more control over how their PostgreSQL instances behave – without sacrificing the benefits of a managed service.

Improved Integration with Forgejo

This release also enhances integration capabilities for developer platforms.

We’ve added support for configuring OAuth2 clients in Forgejo, making it easier to integrate Forgejo-based services with external identity providers and authentication systems.
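
Purely as an illustration of the idea (none of the field names below are the verified AppCat schema), registering an external identity provider as an OAuth2/OIDC client on a managed Forgejo instance could conceptually look like this:

# Hypothetical sketch only; consult the AppCat reference for the actual Forgejo OAuth2 fields.
apiVersion: vshn.appcat.vshn.io/v1
kind: VSHNForgejo
metadata:
  name: my-forgejo
spec:
  parameters:
    service:
      oauth2Clients:
        - name: corporate-sso                  # assumption: display name of the login option
          provider: openidConnect              # assumption: generic OIDC provider type
          autoDiscoveryUrl: https://id.example.com/.well-known/openid-configuration
          clientSecretRef: forgejo-sso-client  # assumption: secret holding client ID and secret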

This is particularly useful for teams that want to connect their development workflows with centralized identity management solutions.

Safer Operations for Nextcloud

Operating applications reliably also means preventing unsafe configurations.

With this release, we’ve introduced a safeguard for Nextcloud: major version downgrades are now explicitly blocked.

Since Nextcloud itself does not support downgrades between major versions, allowing them at the platform level could lead to broken instances or data inconsistencies. By enforcing this constraint, AppCat helps ensure that upgrades remain safe and predictable.

For users, the guidance is simple: stay up to date and follow the supported upgrade paths when moving between major versions.

👉 Learn more about Nextcloud:
Nextcloud by VSHN – VSHN AG

Continuous Platform Improvements

Operating applications on Kubernetes is an ongoing process of refinement.

This AppCat release focuses on increasing flexibility where it matters – and enforcing safety where it’s critical. From more configurable PostgreSQL extensions to safer upgrade paths and improved integrations, these changes help teams operate their services with greater confidence.

As always, VSHN AppCat continues to evolve to make running production workloads on Kubernetes simpler, more reliable and more consistent.

👉 Learn more about VSHN AppCat:
Application Catalog – VSHN AG

👉 VSHN AppCat Changelog:
Changelog 2026-03-24

Liene Luksika

Product Manager

AppCat Tech

VSHN AppCat Update – Improving Reliability, Operations and Developer Platforms

16. Mar 2026

Operating modern applications on Kubernetes is powerful – but it also comes with complexity. Databases, identity services, collaboration platforms and developer tools all require careful lifecycle management, upgrades, monitoring and security configuration.

That’s exactly where VSHN AppCat comes in.

With the latest release of AppCat, we’re introducing several improvements that further strengthen the reliability and operational consistency of the applications running on the platform.

While many of the changes happen behind the scenes, they directly improve the day-to-day experience of teams running production workloads on Kubernetes.

What is VSHN AppCat?

VSHN AppCat is the VSHN Application Catalog – a curated collection of production-ready applications that can be deployed and operated on Kubernetes with minimal effort.

Instead of installing and maintaining complex software stacks manually, AppCat allows teams to deploy widely used services such as databases, identity providers, collaboration platforms or developer tools as fully managed applications.

AppCat provides:

  • Production-ready Kubernetes deployments
  • Automated operations and lifecycle management
  • Built-in monitoring, backup and maintenance processes
  • Consistent deployments across Kubernetes environments

This means platform teams and developers can focus on building and running their applications while VSHN takes care of the operational complexity behind the scenes.

👉 Learn more about VSHN Application Catalog

Smarter Alerting for End Users

Reliable operations require good alerting – but alerts should be actionable, not overwhelming.

In this release we improved end-user alerting behaviour, reducing alert noise while keeping important signals visible. For example, alerts for storage capacity issues previously triggered every minute. While technically correct, this behaviour often resulted in unnecessary alert noise.

The new configuration adjusts the interval to 15 minutes, reducing alert flapping while still giving users enough time to react to potential issues.
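
Conceptually, expressed as a generic Prometheus-style rule rather than the exact AppCat configuration, the change corresponds to requiring the condition to hold for a longer window before the alert fires:

# Illustrative rule: the alert only fires once the condition has been true for 15 minutes,
# instead of re-firing every minute. Names and the expression are placeholders.
groups:
  - name: appcat-storage
    rules:
      - alert: InstanceStorageAlmostFull
        expr: disk_used_percent > 90
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Persistent volume is almost full"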

If you haven’t configured alerting for your AppCat instances yet, we strongly recommend enabling it.

Improving PostgreSQL Reliability

Several improvements in this release focus on making PostgreSQL operations more robust.

One issue could occur when multiple databases were created simultaneously. In rare situations this could lead to race conditions during user management. By adjusting the database bootstrap process and using a more suitable template database, this issue has now been resolved.

These kinds of improvements might be invisible to users – but they are essential for making platform operations reliable at scale.

CloudNativePG for Keycloak

We also improved the PostgreSQL backend used by the Keycloak identity service in AppCat.

New Keycloak instances will now use CloudNativePG, a Kubernetes-native PostgreSQL operator designed specifically for cloud-native environments.

Existing Keycloak installations remain unchanged and continue to operate normally. The new backend will automatically be used when new Keycloak instances are created.

👉 Learn more about Keycloak

Updates to Developer Collaboration Platforms

This release also includes updates to Forgejo and Codey, ensuring that these services run on supported upstream versions.

What is Codey?

Codey is VSHN’s European code collaboration platform, built on the open-source Forgejo project.

Instead of operating their own Git platform, teams can run fully managed Forgejo instances operated by VSHN, including monitoring, backups and lifecycle management.

Codey provides features such as:

  • Git repository hosting
  • Pull requests and code reviews
  • CI/CD compatible with GitHub Actions workflows
  • Package and container registries
  • Integrated project management tools

Codey runs on European cloud infrastructure, including Switzerland, helping organizations maintain control over their development data while benefiting from a fully managed service.

👉 Learn more about Codey

Improved Security for Nextcloud

The release also includes improvements to the security configuration of the Nextcloud cronjob.

Previously the job could run with incorrect user permissions in some Kubernetes environments. The new configuration ensures the job runs with the correct security context, improving compatibility across Kubernetes platforms such as vanilla Kubernetes and OpenShift.

👉 Learn more about Nextcloud

Consistent Maintenance Scheduling

For services that include PostgreSQL dependencies – such as Keycloak and Nextcloud – we fixed edge cases where maintenance windows could slightly deviate from the configured schedule.

Maintenance operations are now handled more consistently across dependent services.

Continuous Platform Improvements

Operating applications reliably on Kubernetes requires constant refinement of automation, observability and operational processes.

This AppCat release focuses on exactly that: improving reliability, consistency and operational experience across the platform.

While many improvements happen behind the scenes, they ultimately help teams run critical services more safely and with less operational overhead.

If you’re looking for a simpler way to run production-ready services on Kubernetes, VSHN AppCat provides a proven foundation.

👉 VSHN AppCat Changelog

Liene Luksika

Product Manager

Events General Tech

Cloud Native Computing Switzerland Meetup – March 2026 Recap

10. Mar 2026

On March 10, the Cloud Native Computing Switzerland Meetup Community gathered again at the VSHN Tower in Zürich for an afternoon of technical talks, discussions, and community exchange.

With more than 3,000 members in the meetup group, the CNC Switzerland community continues to bring together platform engineers, DevOps practitioners, architects, and open-source enthusiasts from across the Swiss cloud-native ecosystem.

The March edition featured four talks covering topics from Kubernetes security and networking to platform engineering and MLOps.

Opening and Community Updates

Aarno Aukia and Patrick Mathers – VSHN

The meetup kicked off with a short welcome and community update by the organizers. As always, the CNC Switzerland meetup follows a few important principles:

  • All talks are technical and open-source focused
  • No product or sales pitches
  • Talks are held in English
  • Speakers from diverse backgrounds are strongly encouraged

These principles help keep the meetup a true technical community event rather than a marketing stage.

TLS Hot Reload in Kubernetes

Janne Kataja – SIX

Janne Kataja from SIX explained how applications can implement hot reloading of TLS certificates, allowing certificates stored in Kubernetes Secrets to be updated without restarting pods.

Instead of forcing service restarts during certificate renewals – which can introduce downtime and operational risk – hot reload mechanisms detect changes in mounted secret volumes and reload certificates dynamically.

This approach enables:

  • seamless certificate rotation
  • higher availability
  • the use of shorter-lived certificates for improved security
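
On the Kubernetes side, the main prerequisite is that the certificate is consumed from a Secret mounted as a volume (not via environment variables or subPath mounts), because the kubelet propagates renewed Secret data into the running pod, where the application can pick it up. A minimal sketch:

# TLS material mounted from a Secret so that renewals reach the pod without a restart.
# The application then watches /etc/tls for changes and reloads its listener.
apiVersion: v1
kind: Pod
metadata:
  name: tls-demo                      # placeholder name
spec:
  containers:
    - name: app
      image: example.org/my-app:1.0   # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /etc/tls
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-app-tls        # e.g. kept up to date by cert-manager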

The talk demonstrated how relatively small architectural decisions can significantly improve reliability and operational resilience.

Application-Centric Platforms with OAM and KubeVela

Raffael Klingler – AXA Schweiz

The second session explored a topic that is gaining traction across many organizations: platform engineering and internal developer platforms.

Raffael Klingler from AXA introduced the Open Application Model (OAM) and how it shifts the focus from Kubernetes infrastructure toward application-centric definitions.

Instead of writing complex Kubernetes manifests, developers define applications using modular building blocks. These definitions are then rendered into deployable infrastructure resources using KubeVela.
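
As a small taste of what that looks like in practice, a KubeVela Application describes components and traits instead of raw Deployments and Services (all names and the image below are placeholders):

# Application-centric definition: KubeVela renders this into the underlying
# Kubernetes resources (Deployment, Service, scaling configuration, ...).
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app
spec:
  components:
    - name: web
      type: webservice                  # built-in component type
      properties:
        image: example.org/web:1.2.3    # placeholder image
        ports:
          - port: 8080
            expose: true
      traits:
        - type: scaler                  # operational concern attached as a trait
          properties:
            replicas: 3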

The talk showed how this approach allows organizations to:

  • standardize application deployment patterns
  • reduce Kubernetes complexity for developers
  • integrate cloud services and GitOps workflows

As more companies build internal developer platforms, models like OAM illustrate how Kubernetes can become more accessible and developer-friendly.

DevOps for AI: Running ML in Production with Kubeflow

Fabrizio Lazzaretti (Wavestone) & Marco Crisafulli (enki)

AI is everywhere right now, but turning machine learning experiments into reliable production systems remains difficult.

Fabrizio Lazzaretti and Marco Crisafulli explored how MLOps practices and Kubeflow help bridge the gap between data science experimentation and production-grade systems.

The session demonstrated how Kubeflow enables:

  • reproducible ML pipelines
  • collaboration between teams
  • automated training workflows
  • integration with the broader CNCF ecosystem

Using a real end-to-end example, the speakers showed how organizations can move from ad-hoc AI experiments to repeatable, scalable ML platforms running on Kubernetes.

The talk highlighted a key insight: AI systems still need strong DevOps foundations.

Bye-bye Ingress-NGINX – Hello Gateway API

Urs Zurbuchen – Airlock

The final talk addressed a major architectural shift happening in the Kubernetes networking ecosystem.

Urs Zurbuchen from Airlock explained why the traditional Ingress model – often powered by the NGINX Ingress Controller – is reaching its limits.

Many Kubernetes users have experienced challenges such as:

  • configuration complexity
  • heavy reliance on annotations
  • security issues in older controller implementations

The emerging Gateway API aims to address these limitations with a more structured and extensible networking model.

The talk walked through:

  • the architectural improvements of Gateway API
  • why it is becoming the future standard
  • migration considerations for existing Kubernetes clusters
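
To make the contrast with annotation-driven Ingress concrete, routing in Gateway API is expressed through typed resources such as HTTPRoute attached to a Gateway (all names below are placeholders):

# A typed route instead of controller-specific Ingress annotations.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
    - name: public-gateway        # the Gateway this route attaches to
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service       # Service receiving the traffic
          port: 8080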

For many attendees, this session provided a helpful overview of where Kubernetes networking is heading next.

Networking and Apéro

After the talks, participants stayed for networking and the traditional Swiss meetup apéro, continuing discussions about Kubernetes, platform engineering, and the rapidly evolving cloud-native ecosystem.

Meetups like these highlight the strength of the Swiss cloud-native community: engineers from different companies sharing real-world experiences, lessons learned, and open-source solutions.

Watch the Talks

The sessions from this meetup will be published on the VSHN TV YouTube channel.

Subscribe to stay notified when the recordings become available.

Join the Community

The Cloud Native Computing Switzerland Meetup welcomes engineers, architects, and developers interested in cloud-native technologies and open source.

If you would like to present a talk or share your project, submit your proposal here.

We look forward to seeing you at the next meetup!

Markus Speth

Marketing, Communications, People

General Tech

DevOps for AI: Running LLMs in Production with Kubernetes and Kubeflow

9. Mar 2026

Large Language Models (LLMs) are rapidly becoming part of modern software systems. From chatbots and copilots to retrieval systems and AI agents, organizations are integrating generative AI into real production environments. But while building AI prototypes has become easier than ever, operating LLMs reliably in production remains a serious challenge.

At Kubernetes Community Days New York, Aarno Aukia shared practical insights into what it takes to run LLMs using proven DevOps practices. His talk highlighted an important reality: AI systems still need strong DevOps foundations – perhaps even more than traditional software systems.

Aarno Aukia’s talk at KCD New York

DevOps Meets AI

DevOps has always been about bridging the gap between development and operations. Developers focus on building application logic and managing data, while operations teams ensure that software runs reliably in production. Over the past decade, DevOps practices have matured around automation, observability, and continuous delivery.

In many organizations today, software follows a well-established pipeline: developers commit code to Git, automated CI/CD pipelines build and package the application, and Kubernetes deploys and runs it in production. Monitoring and logging systems then provide visibility into how the application behaves, allowing developers to continuously improve it.

This feedback loop has become the backbone of modern cloud-native development.

When AI enters the picture, however, the model changes in several important ways.

AI Systems Behave Differently

One of the biggest differences between traditional applications and AI-driven systems is determinism. Traditional software behaves predictably: given the same input, it produces the same output every time. LLMs behave very differently.

Large language models are probabilistic systems. They generate responses by predicting the next token based on context, effectively making statistical decisions about what comes next. This means that even small changes in prompts or user input can produce very different results.

A seemingly harmless modification to a system prompt can completely change the behavior of a model. In one example, simply adding a seasonal theme to a chatbot prompt caused the model to fail at answering basic questions.

For operations teams, this creates a new category of complexity. Instead of debugging deterministic systems, they now have to manage systems whose outputs can change subtly depending on context.

Testing therefore becomes significantly more complicated.

The Challenge of Testing AI

Traditional software testing is relatively straightforward. A test provides an input and verifies that the output exactly matches an expected value.

AI systems do not fit into this model. When an LLM generates an answer, the output might be correct even if the exact wording differs from what was expected. At the same time, the response could contain subtle factual errors or hallucinations.

Determining whether an answer is acceptable often requires semantic evaluation rather than strict comparisons. In some cases, organizations even use another LLM to evaluate the output of the first one. This introduces an entirely new testing paradigm that many teams are still learning how to manage.

More Artifacts to Manage

AI systems also introduce additional artifacts that must be tracked and versioned.

In traditional DevOps pipelines, the primary artifacts are source code and container images. With AI workloads, teams must also manage datasets, training artifacts, prompts, and model files. These models are often very large, sometimes tens of gigabytes in size, and must be stored and versioned carefully.

Without proper versioning, it becomes extremely difficult to debug issues or reproduce results later. If a model behaves unexpectedly, teams need to know exactly which model version, dataset, and configuration were used during deployment.

This dramatically increases the operational complexity of AI systems.

Observability Becomes Critical

Because LLMs are non-deterministic, observability becomes even more important than in traditional systems.

Logging must capture far more context than before. Instead of logging only application events, teams may need to record the full prompt, the model response, the model version, and the surrounding configuration. This allows operators to reconstruct what happened when something goes wrong.

Without detailed observability, debugging AI systems can quickly become impossible.

Open Models vs Hosted APIs

Another important operational consideration is the choice between closed and open models.

Hosted AI APIs offer convenience and powerful capabilities, but they also come with trade-offs. In many cases, organizations cannot control exactly when model updates happen or which minor version is running at a given time. This can make debugging and reproducibility difficult.

Open-weight and open-source models provide more operational control. They can be downloaded, versioned, tested locally, and deployed on internal infrastructure. This allows organizations to decide when and how updates are rolled out.

For many regulated industries such as finance, healthcare, or government, this level of control is essential.

Kubernetes as the Foundation

This is where Kubernetes becomes an important part of the AI infrastructure stack.

Kubernetes already solves many of the operational challenges associated with running distributed systems. It provides mechanisms for container orchestration, resource management, autoscaling, and fault tolerance. Importantly for AI workloads, it can also manage GPU resources.

However, operating Kubernetes itself is not trivial. As discussed in our article “Best Kubernetes Distributions in 2026 – And Why You Might Not Want to Run Them Yourself”, running clusters in production requires significant operational expertise.

Kubeflow and the Machine Learning Lifecycle

Kubeflow extends Kubernetes with specialized components for machine learning workflows. It helps manage the entire lifecycle of AI models, from training to production inference.

Kubeflow Pipelines allow teams to automate workflows for model development and training. These pipelines can orchestrate complex processes such as data preprocessing, training runs, evaluation steps, and packaging models for deployment.

For many organizations using LLMs, however, the main focus is not training models but serving them reliably in production.

This is where KServe plays a key role.

Serving LLMs with KServe

KServe is a Kubernetes-native model serving framework that simplifies deploying and operating AI models. It allows teams to run inference services on top of Kubernetes using standard APIs.

A typical deployment consists of a container running a model server, often based on runtimes such as vLLM. The container loads the model, uses GPU resources for inference, and exposes an API endpoint for applications.
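
Expressed as a Kubernetes resource, this looks roughly like the following; the exact fields depend on the KServe version and on which serving runtime (for example a vLLM-based one) is installed, so treat it as a sketch:

# Sketch of a KServe InferenceService; model format, model URI and GPU count are placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-demo
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface                          # assumption: format handled by the installed runtime
      storageUri: hf://example-org/example-model   # placeholder model reference
      resources:
        limits:
          nvidia.com/gpu: "1"                      # GPU used for inference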

KServe integrates with Kubernetes autoscaling mechanisms and observability tools, making it possible to scale AI workloads dynamically and monitor their behavior in production.

Because everything runs as Kubernetes resources, teams can apply the same DevOps practices they already use for other applications.

A Rapidly Evolving Ecosystem

The ecosystem around AI infrastructure is evolving extremely quickly. New projects are emerging to address the unique challenges of running LLMs at scale.

One example is LLMD, a Kubernetes operator specifically designed for LLM inference. It builds on existing technologies such as vLLM but adds specialized capabilities like request routing, model selection, caching, and intelligent scaling.

These kinds of tools illustrate how the cloud-native ecosystem is adapting to the operational needs of AI workloads.

AI Still Needs DevOps

Despite the hype surrounding generative AI, one lesson is clear: AI systems still require strong operational foundations.

Running LLMs in production involves far more than simply calling an API. It requires careful management of models, infrastructure, observability, and deployment processes.

Kubernetes and Kubeflow provide a powerful platform for addressing these challenges. By applying proven DevOps principles to AI systems, organizations can build platforms that are not only intelligent but also reliable and scalable.

As AI becomes a standard component of modern applications, the ability to operate these systems effectively will become just as important as the models themselves.

This is also where platform approaches come into play. Instead of every team building and operating complex stacks themselves, platforms can provide ready-to-use services on top of Kubernetes. One example is Servala – Sovereign App Store, a Kubernetes-native marketplace that connects organizations with a catalog of managed cloud-native services such as databases, storage, developer tools, and AI-ready infrastructure components.

Markus Speth

Marketing, Communications, People

General Tech

How we used Crossplane for the things we should not have

30. Sep 2025

At Swiss Cloud Native Day 2025 in Bern, our colleague Liene Luksika shared an honest and entertaining story about VSHN’s journey with Crossplane. What started as a simple use case evolved into a complex architecture, full of learnings, mishaps, and valuable lessons for anyone building managed services on Kubernetes.

From healthcare to cloud native

Liene comes from the healthcare sector, so when she joined the cloud native world at VSHN, she had to quickly get used to Kubernetes lingo – namespaces, instances, and of course, the obsession with laptop stickers. Luckily, VSHN has been around for more than 10 years, providing 24/7 managed services and building cloud native platforms for customers in Switzerland, Germany, and beyond.

Why Crossplane?

As customers increasingly asked VSHN to run their software as a service – databases, Nextcloud, and other critical apps – we needed a solid way to provision and manage infrastructure across private and public clouds. Crossplane seemed like the perfect fit:

  • It lets engineers define desired state vs. observed state
  • It automatically reconciles the two – like making coffee appear if that is your desired state
  • It provides flexible building blocks to expose clean APIs for managed services on Kubernetes

VSHN has used Crossplane in production since early 2021 (around v0.14) and runs the Crossplane Competence Center in Switzerland.

The evolution: from simple to complex

Our first use case was straightforward: a customer wanted two types of databases (Redis and MariaDB), T-shirt sized, no extras. Crossplane handled this beautifully.

Then reality hit. Customers wanted backups and restores, logs and metrics, alerting, maintenance and upgrades, scaling and user management, special features like Collabora for Nextcloud, and the freedom to choose infrastructure. To serve this, we adopted a split architecture:

  • A control cluster for all Crossplane logic
  • Separate service clusters for customer workloads

This runs today for health-sector customers such as the Gesundheitsamt Frankfurt and HIN in Switzerland, on providers such as Exoscale and cloudscale, keeping data sovereign and operations reliable.

When things go wrong

Building complex platforms means learning in production:

  • Deletion protection surprise: a minor Crossplane change removed labels before deletion, wiping our safeguard. Backups saved the day
  • Race conditions: a split approach to connection details occasionally made apps unreachable until we cleaned up code
  • The big one: during a planned “no-downtime” maintenance for a fleet with 1’300+ databases, objects hit an invalid state and Kubernetes garbage collection deleted 230 database objects. Some restores were fresh, some older. We pulled in 20 people overnight, communicated openly, and recovered together with the customer

Key lessons: test at realistic scale and keep recent, tested backups. Also, practice the restore path, not just the backup.

Crossplane 2.0 – where next?

Crossplane 2.0 introduces major breaking changes. Staying put is not an option, but migrating means real effort, especially for our split control plane architecture. We are evaluating whether Crossplane 2.0 fits our needs or if alternatives are a better match. As always, we will document our decisions openly in VSHN’s Architecture Decision Records.

Final thoughts

Cloud native success is not just about tools. It is about learning fast, designing for failure, and communicating clearly with customers. Crossplane has enabled a lot of innovation for us, and it has also tested us. Whether we proceed with Crossplane 2.0 or chart a different course, we will keep building sovereign, reliable, open managed services for our customers.

👉 Watch the whole video on YouTube:

Markus Speth

Marketing, Communications, People

General Tech

Now Available: DevOps in Switzerland Report 2025 🚀

25. Jun 2025

We’re absolutely thrilled to release the sixth edition of our “DevOps in Switzerland” report – and this time with a special focus on Platform Engineering and Artificial Intelligence (AI)! 🤖

From January to April 2025, we conducted a study with professionals from the Swiss tech community. The result: valuable insights into how DevOps teams in Switzerland work today – what tools they use, how their teams are structured, the challenges they face, and where AI is already being used in practice.

💡 Want a sneak peek?

  • 💡 Swiss companies are no longer asking whether to adopt DevOps – they’re asking how to scale it.
  • 📈 Platform Engineering and AI are reshaping how teams ship software faster, safer, and smarter.
  • 💡 1 in 3 Swiss DevOps teams already use AI in production – for code reviews, CI/CD optimization, and architecture support. Another third are gearing up to follow.
  • 💡 54% of Swiss companies now have dedicated Platform Engineering teams.
  • Internal Developer Platforms (IDPs) are becoming the secret weapon for enabling autonomy and reducing complexity.
  • 💡 Devs say yes to AI! 79% of Swiss developers are comfortable using AI in their workflows – but only 20% believe it’s fully ready.
  • The report shows: AI is promising, but needs better measurement and trust to scale.

You’ll find all of this (and much more!) in our compact PDF report (available in English only). Just like last year, the report begins with an executive summary – perfect for those short on time.

📥 Download now

You can download the DevOps Report 2025 here. Enjoy reading – we’re excited to hear your feedback! 🙌

Markus Speth

Marketing, Communications, People

General Servala Tech

Redis 8 Now Available in the VSHN Application Catalog – Open Source Is Back!

11. Jun 2025

We’re thrilled to announce that Redis 8 is now available through the VSHN Application Catalog – and this release is a special one: Redis is officially open source again!

But that’s not all: Redis is now also available on Servala – the open, cloud-native service hub operated by VSHN, connecting developers, software vendors, and cloud providers across multiple infrastructures.

Why This Is a Big Deal

For years, Redis has been one of the most popular in-memory databases for developers and DevOps teams alike. However, licensing changes in previous versions created friction for open ecosystems and cloud-native users. With version 8, that’s finally changing: Redis has returned to its open source roots, now licensed under the GNU AGPLv3.

“Redis 8 brings Redis back to its open source roots. All future development of Redis will happen under the AGPLv3 license.”
– Redis team, official announcement

This means greater transparency, broader collaboration, and long-term sustainability for users who rely on Redis as a key part of their stack.

Redis 8 with VSHN and Servala: Fully Managed, Highly Available

With Redis 8 now available in both the VSHN Application Catalog and on Servala, you get more than just the latest open source release:

  • Production-grade deployments on Kubernetes and OpenShift
  • Guaranteed availability, monitoring, and automated failover
  • Lifecycle management, including upgrades and security patches
  • Cloud provider flexibility – deploy in your infrastructure or through partners
  • Self-service provisioning via Servala with built-in automation

Whether you’re running Redis as part of your internal platform, or offering it to teams and customers, we’ve got you covered.

Supported Versions

We continue to support the most widely used Redis versions, with Redis 8 now part of our officially maintained portfolio.
Check out the complete list of supported versions on the VSHN Redis product page and the Servala Redis page.

Why Choose Redis 8 via VSHN or Servala?

  • Fully open source and community-driven again
  • Kubernetes-native, GitOps-ready deployments
  • High availability, failover, and backup strategies included
  • Integrated with your infrastructure, or offered as a managed service
  • Supported by VSHN, the DevOps experts behind Servala

Redis 8 is a major milestone for the open source world – and we’re proud to bring it to your production environment through VSHN and Servala.

🔧 Need help integrating Redis 8 into your stack?
📬 Contact us for a free consultation!
📩 Or subscribe to our newsletters to stay up to date with more open source news and service updates.

Markus Speth

Marketing, Communications, People

Servala Tech

The Technical Challenges Behind Servala: Standardizing Application Delivery

14. Apr 2025

In this follow-up to our Servala introduction, we explore the technical challenges of bringing managed services to cloud providers everywhere. Discover how the repetitive and inconsistent nature of application packaging, deployment, and operations inspired our vision for standardization.

We explore the problems platform engineers face today, including inconsistent container behaviors, unpredictable Helm charts, and the chaos of day-2 operations across security, configuration, and dependencies.

Learn how Servala’s proposed open standards will transform the landscape for:

  • Software vendors – Accelerating time-to-market and expanding reach without operational overhead
  • Cloud providers – Enriching service catalogs with enterprise-grade managed services
  • End users – Enjoying self-service freedom with consistent, secure, and compliant applications

Join us on this journey to simplify application delivery and make managed services accessible to everyone.

In our introduction to Servala, we mentioned the technical challenges of enabling software vendors to onboard themselves onto our platform. As we continue building Servala in 2025, we’re tackling the most fundamental challenge: creating a standardized approach to application delivery. Let’s explore these challenges and our proposed solution in more detail.

The Repetitive Nature of Application Management

Over the past years at VSHN, we have taken care of numerous applications as part of our managed services offering that now forms the foundation of Servala. For every single application, we had to do the same tedious tasks:

Packaging: Prepare the application in a deployable format, typically by creating an OCI-compliant container image compatible with Docker, Podman, Kubernetes, and OpenShift. Automate the packaging process to trigger whenever a new version is available.

Deployment: Deploy the packaged application to the target system, typically through automated processes rather than manual steps. Most deployments span multiple environments, such as test, staging, pre-prod, and production, or support self-service provisioning for SaaS. This process often involves creating Helm Charts and setting up automation pipelines or APIs provided by tools like Kubernetes operators (e.g., Crossplane).

Day-2 Operations: After deployment, ongoing responsibilities include collecting metrics, setting up alerts, updating the application, scaling in response to performance issues, backing up and restoring data, analyzing logs, offering 24/7 support, and ensuring compliance with various standards, along with many other operational tasks.

The Current Challenge for Servala

Doing these same steps over and over again becomes tedious, and solving the same problems for every new application doesn’t feel valuable. In reality, we have to deal with a multitude of different ways in which these things are done, which puts a heavy burden on engineers. Parts of the work are usually already done – container images are available, for example – but every image behaves differently from the next, so we always have to figure out how to integrate it into the following step. The same applies to the various Helm Charts out there. Standardization will relieve us of this burden, making the process more efficient and less repetitive.

The core issue stems from the flexibility of the tools involved. Container images vary widely in how they’re built and behave, while Helm Charts accept parameters in inconsistent formats. For example, the container image reference might appear as img, image, or image-registry, depending on the chart author.
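
Two made-up values.yaml fragments illustrate the problem; the same concept has to be wired up differently for every chart:

# Chart A (illustrative)
image:
  repository: example.org/app
  tag: "1.2.3"

# Chart B (illustrative): the same concept, entirely different keys
img: example.org/app:1.2.3
image-registry: example.org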

Security scanning and compliance reporting vary widely between applications. Some include Software Bills of Materials (SBOMs), while others require manual inventory. Configuration handling is equally inconsistent—some applications use environment variables, others expect config files in specific locations, and others require custom configuration APIs.

Day-2 operations vary significantly across applications. Some expose metrics in a Prometheus-compatible format, while others don’t. Identical metrics might use different names, and logging formats range from structured JSON to custom plain text. Dependency management is often neglected, with minimal information about required services or components. As a result, maintaining these applications turns into a tedious game of whack-a-mole.

We must solve these fundamental inconsistencies so that Servala can scale and enable software vendors to onboard their applications easily.

Our Proposed Solution: Standardization

How could we solve these obstacles? We propose to define a set of documents that specify patterns for all the various parts needed to deliver applications through Servala. We could also call these documents specifications, golden paths, patterns, standards, conventions, or defaults. Ultimately, the goal is to document a commonly agreed-upon way to solve the mentioned tasks so that we don’t have to iterate over them repeatedly.

However, doing that just for us feels wrong. As a company, we embrace Open-Source and Open Standards to work together in a defined way. Therefore, we propose to form a group of people from various companies, document the patterns together, and agree on them.

The Vision: A Transformed Application Delivery Landscape

What will application delivery look like once the Servala specifications are widely adopted? The benefits will be transformative for all parties involved:

For Software Vendors:

  • Accelerated Time to Market: Instead of spending months building deployment, monitoring, and maintenance systems, vendors can focus on their core product and leverage Servala’s standardized delivery mechanisms to reach cloud providers globally.
  • Reduced Operational Overhead: By conforming to the Servala specification, vendors automatically inherit proven operational practices like monitoring, metrics, logs, backups, etc., without maintaining their own operations team.
  • Expanded Market Reach: The ability to deploy on any Servala-compatible cloud provider opens new markets without additional engineering effort.
  • Enhanced Security Posture: Standardized security scanning, compliance reporting, and configuration management significantly reduce risk, enabling vendors to confidently deploy their applications on Servala-compatible cloud providers, even without dedicated in-house security expertise.

For Cloud Providers:

  • Enriched Service Catalogs: Providers can instantly offer dozens of managed services that follow consistent operational patterns, dramatically increasing their value proposition.
  • Operational Consistency: All services follow the same patterns for monitoring, alerting, and maintenance, reducing the complexity of running multiple third-party applications.
  • Competitive Differentiation: Smaller cloud providers can now compete with hyperscalers by offering comparable catalogs of managed services.

For End Users:

  • Self-Service Confidence: With Servala’s standardized delivery mechanisms, end users can deploy complex managed services on their own, knowing they follow consistent operational patterns.
  • Predictable Operations: The operational interfaces remain consistent regardless of where an application is deployed, giving end users a predictable and secure experience.
  • Enterprise Readiness: All services automatically include security, backup/restore, monitoring, and other enterprise features without custom integration work.
  • Simplified Compliance: Standardized security scanning and compliance reporting make regulatory audits more straightforward and less resource-intensive.
  • Dependency Clarity: Clear visibility into service dependencies and compatibility requirements reduces deployment failures and configuration errors.

The Servala Specification Areas

We envision documenting patterns for:

Container image behavior, such as where to store data, how to expose ports, how the entry point behaves, and with which permissions the application runs.

Helm Chart “API”: How do the standard values behave? What does the configuration structure look like?

Unified Operational Framework:

  • Backup and Restore: Standardized interfaces for consistent application and data backup procedures with well-defined restoration paths and verification methods
  • Metrics: Well-defined endpoints to get application metrics for alerting, monitoring, and performance insights
  • Alerting and Monitoring: Common alert definitions, severity classifications, and response expectations across applications
  • Logging Standards: Uniform logging formats, retention policies, and search capabilities to simplify troubleshooting
  • SLA Definitions: Standardized metrics for measuring and reporting on availability, performance, and reliability
  • Maintenance Windows: Clear protocols for coordinating and communicating maintenance events with minimal disruption
  • Billing: A uniform way of billing service usage
  • Security Scanning and Compliance: Standardized approaches for vulnerability management, security policy enforcement, and compliance reporting across all applications
  • Configuration Management: Unified patterns for handling application configuration, secrets management, and runtime reconfiguration
  • Dependency Management: Clear declaration and handling of service dependencies, including versioning requirements and compatibility matrices

Self-Service API Architecture: Propose standardized structures for Kubernetes resources, creating predictable interfaces for application management across environments.

Previous work we want to build on

There have been successful efforts to standardize that we want to build on:

Open Container Image (OCI) Image Format

After a decade of fragmentation in how containers were built and stored, the OCI initiative introduced a unified image format adopted by tools like Docker and Podman. It defined content-addressable, predictably layered images and manifests, and enabled interoperability across registries such as Docker Hub, GitHub Container Registry (GHCR), and Quay.

Kubernetes as a container orchestrator

Kubernetes has established itself as the de facto standard for managing container fleets. It provides a unified API for managing compute, networking, and storage regardless of the infrastructure provider.

Kubernetes Pod and Container Lifecycle Conventions

The Kubernetes community has standardized application behavior during lifecycle events, such as startup, shutdown, and health checks, ensuring consistent health monitoring. Applications now respond predictably to restarts and draining, greatly easing the work of platform engineers. Implementing lifecycle hooks has become a de facto standard.
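
As a reminder of what those conventions look like in practice, a typical container declares its health endpoints and a graceful-shutdown hook (paths, ports and the image are placeholders):

# Conventional lifecycle wiring: health checks plus a preStop hook for graceful draining.
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: app
      image: example.org/my-app:1.0
      readinessProbe:
        httpGet:
          path: /healthz/ready
          port: 8080
      livenessProbe:
        httpGet:
          path: /healthz/live
          port: 8080
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # give load balancers time to drain connections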

Prometheus Metrics Format

Many applications already expose metrics in the Prometheus/OpenMetrics format: by convention, the “/metrics” endpoint serves a human-readable, OpenMetrics-compliant exposition, and there are common naming conventions (http_requests_total, etc.). While it’s not perfect yet, as some metric names still vary, this is one of the most widely accepted informal standards, adopted by applications, exporters, sidecars, service monitors, and more.

Software Bill of Materials (SBOM) standards

With open SBOM standards now established and widely supported by vendors like GitHub, GitLab, and Docker, generating and consuming SBOMs has become a best practice. It’s so fundamental that the EU Cyber Resilience Act (CRA) now mandates SBOMs for all proprietary and open-source software.

12-Factor App

While some inconsistencies remain, we want to give an honorable mention to the https://12factor.net/ manifesto, which laid the foundation for cloud-native apps in 2011 and still influences architecture and platform design today. It addressed inconsistent application structure and runtime expectations, and its recommendations – config via env vars, statelessness, logging to stdout, etc. – are now widely adopted best practices, often enforced indirectly by platforms.


Helm Chart Best Practices / Guidelines

The Helm community recognizes the inconsistency in chart structures and naming and has responded with best practices, guidelines, and tools like helm lint and helm create. While adoption remains partial, projects like Bitnami, KubeApps, and Backstage increasingly rely on these conventions, laying a strong foundation for what Servala aims to standardize.

OpenAPI / Swagger

The OpenAPI Initiative has significantly impacted API standardization. It enables machine-readable API definitions, automatic generation of SDKs, tests, mocks, and human-friendly documentation. Widely adopted across platforms – from Kubernetes CRDs to GitHub APIs – OpenAPI has brought consistency and interoperability to API design and consumption.
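
As an example of how frameworks make this nearly free, the following sketch uses FastAPI (our choice for the example, not something the standard prescribes) to derive a machine-readable OpenAPI definition directly from typed endpoints.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Example Service API", version="1.0.0")

class Instance(BaseModel):
    name: str
    plan: str = "standard"

@app.post("/instances", response_model=Instance)
def create_instance(instance: Instance) -> Instance:
    # A real service would provision something here; this example just echoes the request.
    return instance

# Running the app (e.g. `uvicorn example:app`) serves the generated OpenAPI
# document at /openapi.json and interactive documentation at /docs.
```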

OpenServiceBroker API

We’ve implemented the OpenServiceBroker API and are using it actively with a few clients, but it lacks a declarative, cloud-native approach to service listing and provisioning.

Crossplane

We’re fans of Crossplane’s declarative approach to service definitions and instantiations. Crossplane’s Composite Resource Definitions (XRDs) generate opinionated Kubernetes Custom Resource Definitions (CRDs) that all follow the structure the engineer defined in the XRD. Servala is not replacing Crossplane but uses Crossplane under the hood.

Open Application Model (OAM)

The Open Application Model (OAM) supports Servala’s mission by offering a platform-agnostic, standardized way to define cloud-native applications. It cleanly separates core logic (components), operational features (traits), and dependencies (scopes), providing a consistent interface for developers and operators. With reusable metrics, backups, and autoscaling definitions, OAM helps eliminate the fragmentation found in Helm charts and container images, making it an interesting foundation for Servala’s specification and enabling consistent managed services across environments.

The Platform Specification

The Platform Spec initiative contributes to Servala’s vision by offering a standardized contract between developers and platform engineers, defining a standard interface for deploying and managing applications across any internal developer platform (IDP). It focuses on creating a consistent, YAML-based specification for app workloads, including container images, environment variables, secrets, service bindings, and deployment rules. It solves many issues Servala identifies with inconsistent Helm charts and runtime behaviors. By adopting or aligning with Platform Spec, Servala can simplify onboarding, reduce integration overhead, and ensure applications are deployed in a predictable, platform-agnostic way, regardless of the underlying orchestrator or infrastructure.

Cloud Native Application Bundle (CNAB)

The Cloud Native Application Bundle (CNAB) specification supports Servala’s goals by providing a portable, standardized way to package and distribute multi-component applications, including Kubernetes manifests or Helm charts and Terraform plans, scripts, and other deployment artifacts. CNAB defines a consistent format for bundling an application’s code, configuration, and lifecycle operations (install, upgrade, uninstall), making it ideal for complex managed services that span multiple tools or environments. By leveraging CNAB, Servala could offer a unified packaging format that encapsulates everything needed for reliable, repeatable deployment, helping reduce fragmentation, simplify onboarding, and enable consistent Day-2 operations across cloud providers.

The Path Forward

Servala aims to accelerate application onboarding by a factor of 10, reducing weeks of custom integration to just days or hours, while dramatically improving reliability through consistent, proven patterns for deployment and operations. We’ve already begun implementing these standards in our development roadmap, but broad industry collaboration is essential for success. We invite software vendors, cloud providers, and platform engineers to join us in shaping these standards openly and collaboratively. With a solid foundation, Servala can redefine how managed services are delivered, empowering cloud providers to expand their service catalogs and enabling vendors to become SaaS providers without heavy operational overhead.

What’s Next?

In 2025, we will focus on enabling software vendors to onboard themselves onto the Servala service delivery platform. Find more information about Why we launched Servala in our Servala launch announcement.

Contact us

Interested in learning more? Book a meeting or write us to explore how Servala can help you! Experience the future of cloud native services on servala.com. 🚀

Markus Speth

Marketing, Communications, People

Tech

Why VSHN Managed OpenShift Customers Are Safe from the Recent Ingress NGINX Vulnerability

26. Mar 2025

A recently disclosed set of vulnerabilities, known as IngressNightmare, has raised alarms for Kubernetes users relying on the Ingress NGINX Controller. These vulnerabilities (CVE-2025-24513, CVE-2025-24514, CVE-2025-1097, CVE-2025-1098, and CVE-2025-1974), the most severe of which carries a critical CVSS score of 9.8, could allow attackers to gain unauthorized access to a Kubernetes cluster, potentially leading to remote code execution and full cluster compromise. However, OpenShift 4.x customers are not affected by this exploit, as OpenShift uses the OpenShift Ingress Operator, based on HAProxy, as the default ingress controller.

The vulnerabilities affect the Ingress NGINX Controller, which is responsible for managing external traffic and routing it to internal services in a Kubernetes cluster. Specifically, they target the admission controller, which, if exposed without authentication, allows attackers to inject malicious configurations, resulting in remote code execution. Since OpenShift 4.x uses the OpenShift Ingress Operator (based on HAProxy) as the ingress controller, customers are not exposed to these risks.

OpenShift 4.x further enhances security by restricting permissions and not permitting the default ingress controller to access sensitive data, such as secrets stored across Kubernetes namespaces. This design decision helps protect OpenShift customers from potential exploits by preventing unauthorized access to critical cluster resources.

As a result, VSHN Managed OpenShift users can be confident that their clusters remain secure without having to worry about this specific vulnerability.

Markus Speth

Marketing, Communications, People

Tech

Announcing Redis by VSHN – Enhance Your Containerized Workloads

6. Nov 2024

We are thrilled to announce the general availability of Redis by VSHN, now available in OpenShift through the VSHN Application Marketplace. This powerful, in-memory data structure store, known for its blazing-fast performance and versatility, is now optimized for containerized environments on OpenShift. Whether you’re building microservices, real-time analytics, or caching layers, Redis by VSHN offers the reliability and scalability your applications need.

Why Redis on OpenShift?

Redis has been a favorite among developers for its simplicity, performance, and robustness. By integrating Redis into OpenShift, we are enabling seamless deployment and management of Redis instances within your containerized infrastructure. This means you can now leverage Redis’s capabilities while enjoying the benefits of OpenShift’s orchestration and container management.
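
To give a taste of the caching-layer use case mentioned above, here is a minimal cache-aside sketch using the redis-py client. Host, key names, TTL, and the database lookup are placeholders; a managed instance would supply its own connection details and credentials.

```python
import json

import redis

# Placeholder connection details; replace with the credentials of your Redis instance.
r = redis.Redis(host="redis.example.svc.cluster.local", port=6379, decode_responses=True)

def load_profile_from_database(user_id: str) -> dict:
    # Stand-in for the real (slower) primary data store query.
    return {"id": user_id, "name": "Example User"}

def get_profile(user_id: str) -> dict:
    """Cache-aside: try Redis first, fall back to the database and cache the result."""
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_profile_from_database(user_id)
    r.setex(key, 300, json.dumps(profile))  # keep the cached entry for 5 minutes
    return profile
```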

Key Features

  1. Containerized Deployment: Effortlessly deploy and manage Redis instances in your OpenShift environment.
  2. Scalability: Scale your Redis instances up or down based on your application needs.
  3. High Availability: Ensure your data is always available with Redis’s built-in replication and persistence mechanisms.
  4. Integrated Monitoring: Utilize OpenShift’s monitoring tools to keep an eye on your Redis performance and health.
  5. Security: Benefit from OpenShift’s security features to protect your Redis instances and data.

Benefits for Your Containerized Applications

  1. Performance: Redis’s in-memory data structure ensures lightning-fast read and write operations, ideal for real-time applications.
  2. Flexibility: Support for a variety of data structures, including strings, hashes, lists, sets, and more.
  3. Compatibility: Seamlessly integrate with your existing OpenShift applications and services.
  4. Developer Productivity: Simplified deployment and management allow developers to focus on building features rather than infrastructure.

Getting Started

To get started with Redis by VSHN, visit our Redis product page on the OpenShift Application Marketplace. Our comprehensive documentation will guide you through the setup process, ensuring you can quickly and efficiently integrate Redis into your workflows.

Do you want to see all our open documentation on how to create or use any of the services in the marketplace? You can find it all openly available in the VSHN AppCat User Documentation.

Be on the lookout for more services as we continue expanding our marketplace. Let’s keep those containers humming!

Ready to roll?

Contact us now!

Markus Speth

Marketing, Communications, People

OpenShift Tech

VSHN Managed OpenShift: What you need to know about OpenShift 4.16

16. Oct 2024

Upgrade to OpenShift version 4.16

As we start to prepare the upgrade to OpenShift v4.16 for all our customers’ clusters, it is a good opportunity to look again at what’s new in the Red Hat OpenShift 4.16 release. The release is based on Kubernetes 1.29 and CRI-O 1.29 and brings a handful of exciting new features which will make VSHN Managed OpenShift even more robust. Additionally, the new release also deprecates some legacy features which may require changes in your applications.

The Red Hat infographic highlights some of the key changes:

Red Hat OpenShift 4.16: What you need to know Infographic by Ju Lim

Changes which may require user action across all VSHN Managed OpenShift, including APPUiO

For VSHN Managed OpenShift, we’re highlighting the following changes from our Release notes summary which may require user action.

Clusters which use OpenShift SDN as the network plugin can’t be upgraded to OpenShift 4.17+

This doesn’t affect most VSHN Managed OpenShift clusters, since we switched to Cilium as the default network (CNI) plugin a while ago, and most of our older managed clusters have been migrated from OpenShift SDN to Cilium over the last couple of months.

The proxy service for the cluster monitoring stack components is changed from OpenShift OAuth to kube-rbac-proxy

Users who use custom integrations with the monitoring stack (such as a Grafana instance which is connected to the OpenShift monitoring stack) may need to update the RBAC configuration for the integration. If necessary, we’ll reach out to individual VSHN Managed OpenShift customers once we know more.

The ingress controller HAProxy is updated to 2.8

HAProxy 2.8 provides multiple options to disallow insecure cryptography. OpenShift 4.16 enables the option which disallows SHA-1 certificates for the ingress controller HAProxy. If you’re using Let’s Encrypt certificates for your applications, no action is needed. If you’re using manually managed certificates for your Routes or Ingresses, you’ll need to ensure that you’re not using SHA-1 certificates.
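
If you’re unsure which signature algorithm a manually managed certificate uses, one quick way to check is with the Python cryptography package; this is only a sketch, and the file name is a placeholder for the certificate referenced by your Route or Ingress.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load the PEM-encoded certificate you reference in your Route or Ingress.
with open("tls.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

algorithm = cert.signature_hash_algorithm
if isinstance(algorithm, hashes.SHA1):
    print("WARNING: certificate is signed with SHA-1 – replace it before upgrading to 4.16")
else:
    print(f"OK: certificate is signed using {algorithm.name}")
```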

Legacy service account API token secrets are no longer generated

In previous OpenShift releases, a legacy API token secret was created for each service account to enable access to the integrated OpenShift image registry. Starting with this release, these legacy API token secrets aren’t generated anymore. Instead, each service account’s image pull secret for the integrated image registry uses a bound service account token which is automatically refreshed before it expires.

If you’re using a service account token to access the OpenShift image registry from outside the cluster, you should create a long-lived token for the service account. See the Kubernetes documentation for details.
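
For reference, a long-lived token is requested by creating a Secret of type kubernetes.io/service-account-token annotated with the service account’s name. The sketch below (all names are placeholders) simply renders such a manifest; Kubernetes then populates the token for the referenced service account.

```python
import yaml

# Placeholder names; adjust secret name, namespace, and service account to your setup.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "name": "registry-puller-token",
        "namespace": "my-namespace",
        "annotations": {"kubernetes.io/service-account.name": "registry-puller"},
    },
    # Kubernetes fills in the long-lived token for the annotated service account.
    "type": "kubernetes.io/service-account-token",
}
print(yaml.safe_dump(secret, sort_keys=False))
```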

Linux control groups version 1 (cgroupv1) deprecated

The default cgroup version has been v2 for the last couple of OpenShift releases. Starting from OpenShift 4.16, cgroup v1 is deprecated and will be removed in a future release. The underlying reason for the pending removal is that Red Hat Enterprise Linux (RHEL) 10, and therefore also Red Hat CoreOS (RHCOS) 10, won’t support booting into cgroup v1 anymore.

If you’re running Java applications, we recommend that you make sure that you’re using a Java Runtime version which supports cgroup v2.

Warning for iptables usage

OpenShift 4.16 will generate warning event messages for pods which use the legacy iptables kernel API, since the iptables API will be removed in RHEL 10 and RHCOS 10.

If your software still uses iptables, please make sure to update it to use nftables or eBPF. If you are seeing these events for third-party software that isn’t managed by VSHN, please check with your vendor to ensure they will have an nftables or eBPF version available soon.

Other changes

Additionally, we’re highlighting the following changes:

RWOP with SELinux context mount is generally available

OpenShift 4.16 makes the ReadWriteOncePod access mode for PVs and PVCs generally available. In contrast to RWO where a PVC can be used by many pods on a single node, RWOP PVCs can only be used by a single pod on a single node. For CSI drivers which support RWOP, the SELinux context mount from the pod or container is used to mount the volume directly with the correct SELinux labels. This eliminates the need to recursively relabel the volume and can make pod startup significantly faster.
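
For illustration, requesting the new access mode changes only one field in the claim. The sketch below renders such a PVC; the claim name, storage class, and size are placeholders.

```python
import yaml

# Placeholder claim: only the accessModes entry differs from a regular RWO claim.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "example-data"},
    "spec": {
        "accessModes": ["ReadWriteOncePod"],  # instead of "ReadWriteOnce"
        "storageClassName": "example-csi",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
print(yaml.safe_dump(pvc, sort_keys=False))
```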

However, please note that VSHN Managed OpenShift doesn’t yet support the ReadWriteOncePod access mode on all supported infrastructure providers. Please reach out to us if you’re interested in this feature.

Monitoring stack replaces prometheus-adapter with metrics-server

OpenShift 4.16 removes prometheus-adapter and introduces metrics-server to provide the metrics.k8s.io API. This should reduce load on the cluster monitoring Prometheus stack.

Exciting upcoming features

We’re also excited about multiple upcoming features which aren’t yet generally available in OpenShift 4.16:

Node disruption policies

We’re looking forward to the “Node disruption policy” feature which will allow us to deploy some node-level configuration changes without node reboots. This should reduce the need for scheduling node-level changes to be rolled out during maintenance, and will enable us to say confidently whether a node-level change requires a reboot or not.

Route with externally managed certificates

OpenShift 4.16 introduces support for routes with externally managed certificates as a tech preview feature. We’re planning to evaluate this feature and make it available in VSHN Managed OpenShift once it reaches general availability.

This feature will allow users to request certificates with cert-manager (for example from Let’s Encrypt) and reference the cert-manager-managed secret containing the certificate directly in the Route, instead of having to create an Ingress resource (which is then translated to an OpenShift Route) that references the cert-manager certificate.

Changes not relevant to VSHN customers

There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. In particular, OVNKubernetes gains support for AdminNetworkPolicy resources, which provide a mechanism to deploy cluster-wide network policies. Please note that similar results should be achievable with Cilium’s CiliumClusterWideNetworkPolicy resources, and Cilium is actively working on implementing support for AdminNetworkPolicy.

Summary

OpenShift 4.16 deprecates some features, which may require changes to your applications in order to make future upgrades as smooth as possible. Additionally, OpenShift 4.16 is the last release that supports OpenShift SDN as the network plugin, and it disables support for SHA-1 certificates in the ingress controller. For those interested in the nitty-gritty details of the OpenShift 4.16 release, we refer you to the detailed Red Hat release notes, which go through everything in detail.

VSHN customers will be notified about the upgrades to their specific clusters in the near future.

Interested in VSHN Managed OpenShift?

Head over to our product page VSHN Managed OpenShift to learn more about how VSHN can help you operate your own OpenShift cluster including setup, 24/7 operation, monitoring, backup and maintenance. Hosted in a public cloud of your choice or on-premises in your own data center. 

Simon Gerber

Simon Gerber is a DevOps engineer in VSHN.

Tech

Announcing General Availability of PostgreSQL by VSHN – On OpenShift

3. Oct 2024

We have some fantastic news – our PostgreSQL service is now generally available on OpenShift in our Application Catalog through the VSHN Application Marketplace. After seeing our container-based database solution work wonders for a few lucky customers, we’re excited to open it up for all to enjoy!

Why You’ll Love It

  • Always On: our high availability setup keeps your data accessible.
  • Safety First: top-notch security features to keep your data safe.
  • Grow As You Go: easily scale with your business needs.
  • Hands-Free Maintenance: automatic updates and backups? Yes, please!
  • Expert Help: our team is always here to support you.

Do you have an application that you run in containers or are moving to containers that uses PostgreSQL? Do you not want the complexity of running your database within a Kubernetes cluster?

Then check out PostgreSQL by VSHN to dive into the details and get started today.

Do you want to see all our open documentation on how to create or use any of the services in the marketplace? You can find it all openly available in the VSHN AppCat User Documentation.

Be on the lookout for more services as we continue expanding our marketplace. Let’s keep those containers humming!

Ready to roll?

Contact us now!

Markus Speth

Marketing, Communications, People

General Tech

Now Available: DevOps in Switzerland Report 2024

12. Sep 2024

We are thrilled to announce the fifth edition of our “DevOps in Switzerland” report!

From January to April 2024 we conducted a study to learn how Swiss companies implement and apply DevOps principles.

We compiled the results into a PDF file, and just like in the previous edition, we provided a short summary of our findings in the first pages.

You can get the report here. Enjoy reading and we look forward to your feedback!

Markus Speth

Marketing, Communications, People

Tech

Announcing Keycloak by VSHN: Your Ultimate Open Source IAM Solution

4. Sep 2024

Hey there! We’re thrilled to introduce Keycloak by VSHN – your new go-to for robust, open-source identity and access management (IAM). Bringing together the expertise of VSHN and Inventage, and designed to simplify authentication and boost security, our managed service is here to make your life easier and your apps more secure. Let’s dive into what makes Keycloak awesome, and why the VSHN managed version is even better!

What’s the Buzz About Keycloak?

Keycloak is an open-source powerhouse for managing identities and access. With it, you can integrate authentication across your services with little fuss. It’s packed with features like user federation, strong authentication, comprehensive user management, and finely tuned authorization controls. In short, it’s all about making secure access as straightforward as possible.
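
As a tiny illustration of that standards-based integration, any OpenID Connect client can discover a realm’s endpoints from its well-known metadata URL; the hostname and realm name below are placeholders for your own instance.

```python
import requests

# Placeholder realm URL; a Keycloak realm publishes its OpenID Connect metadata here.
DISCOVERY_URL = "https://keycloak.example.com/realms/my-realm/.well-known/openid-configuration"

config = requests.get(DISCOVERY_URL, timeout=10).json()
print("Authorization endpoint:", config["authorization_endpoint"])
print("Token endpoint:", config["token_endpoint"])
```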

Why Should Your Enterprise Use Keycloak?

Here are just a few reasons:

  • Single Sign-On (SSO) Magic: Log in once and access all your apps without breaking a sweat. Plus, logging out from one logs you out from all – neat, right?
  • Integration Ease: Keycloak plays nicely with OpenID Connect, OAuth 2.0, and SAML 2.0, sliding seamlessly into your existing setup.
  • Empowering User Federation: Whether it’s LDAP or Active Directory, Keycloak connects smoothly, ensuring all your bases are covered.
  • Granular Control: With its intuitive admin console, managing complex authorization scenarios is a breeze.
  • Scalable Performance: From startups to large enterprises, Keycloak scales with your needs without skipping a beat.

Open Source vs. Proprietary: Why Go Open?

Choosing Keycloak means embracing benefits like:

  • Cost Efficiency: Forget about expensive proprietary license fees; open-source is wallet-friendly.
  • Total Transparency: Open-source means anyone can check the code, which helps in keeping things secure and up to snuff.
  • Community Driven: Benefit from the innovations and support of a global community.
  • Ultimate Flexibility: Adapt and extend Keycloak however you see fit. You’re in control!

What Makes Keycloak by VSHN Special?

  • All the Keycloak Goodies: Everything Keycloak offers, we provide, managed and tuned by experts.
  • Swiss-Based Hosting: Enjoy top-tier privacy, security, and adherence to Swiss regulations.
  • Expert Support: The Keycloak wizards from Inventage and the Kubernetes experts from VSHN are here to help you every step of the way. To learn more, visit the Keycloak Competence Center Switzerland run by Inventage.
  • Transparent Pricing: What you see is what you get. No surprises here!
  • Solid SLAs and High Availability: We promise uptime and smooth operations, come rain or shine.

We Love Open Source!

With VSHN, you’re not just getting a service; you’re tapping into a philosophy. Backed by Red Hat and supported by Inventage, we bring you unparalleled expertise right from the heart of Switzerland.

Why Choose Keycloak?

It’s not just stable and feature-rich; it’s a part of the open-source legacy of Red Hat, enhanced by VSHN’s partnership with Inventage – making us a powerhouse of knowledge and reliability in the container and cloud-native IAM domain.

Ready to Jump In?

Dive into seamless identity management with Keycloak by VSHN. Curious for more? Visit our Keycloak product definition page (Keycloak by VSHN) for all the juicy details and kickstart your journey towards streamlined application security.

Ready to roll? Contact us now!

Markus Speth

Marketing, Communications, People

APPUiO Tech

Exploring Namespace-as-a-Service: A Deep Dive into APPUiO’s Implementation

30. Aug 2024

In the rapidly evolving world of cloud computing and container orchestration, Namespace-as-a-Service (NSaaS) is becoming part of the wide array of different ways you can host your application. Offering NSaaS is possible for companies that have already built years of experience running Kubernetes and container-based platforms. Although many organizations worldwide are still early in their container, Kubernetes, and cloud journeys, those companies that have been doing it for a while are now able to take things to the next level. These experienced organizations can leverage their deep expertise to provide NSaaS with greater stability, maturity, and innovation. As a result, they can offer a robust and reliable cloud service that caters to diverse needs while driving significant advancements in the industry.

This blog post will explore what NSaaS means, delve into its pros and cons, and highlight how its stability and maturity are now enabling its widespread adoption. With the well-established ecosystem and landscape of Kubernetes, NSaaS provides a stable and secure environment for managing application workloads within namespaces efficiently. Could this concept offer cost savings in terms of resources and operational overhead for your organization? What types of organizations and workloads are best suited for a NSaaS platform? Join us as we dive into the details of NSaaS, uncovering why it might be the ideal solution for your cloud computing needs.

What is Namespace-as-a-Service?

Namespace-as-a-Service (NSaaS) is a cloud service model that allows users to create, manage, and utilize namespaces in a Kubernetes environment with ease. In Kubernetes, a namespace is a form of isolation within a cluster, providing a way to divide cluster resources between multiple users. This isolation can apply to applications, user access (developers), storage volumes, and network traffic. NSaaS abstracts the complexity of namespace management, offering a simplified and efficient way for users to leverage namespaces without needing in-depth Kubernetes knowledge. To the majority of users, this is no different from having a full Kubernetes cluster or a Heroku-style PaaS. The advantage is that it sits between the two: you get the familiar Kubernetes API and environment definitions, but without the overhead and complexity of managing a Kubernetes cluster.
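
To make the isolation concrete, a tenant namespace is typically paired with a ResourceQuota that caps what it may consume. The sketch below (names and limits are placeholders, and PyYAML is only used for printing) renders both objects.

```python
import yaml

# Placeholder tenant namespace plus a quota limiting its resource consumption.
namespace = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "team-a"},
}
quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-a-quota", "namespace": "team-a"},
    "spec": {"hard": {"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}},
}
print(yaml.safe_dump_all([namespace, quota], sort_keys=False))
```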

Pros of Namespace-as-a-Service

  1. Lower operational overhead
    • NSaaS eliminates the need for users to manage the underlying infrastructure and complex Kubernetes configurations. This simplification allows developers to focus on application development rather than infrastructure management.
  2. Scalability
    • With NSaaS, users can easily scale their applications. As namespaces are lightweight, creating and managing multiple namespaces is efficient and allows for better resource utilization.
  3. Isolation and Security
    • Namespaces provide logical isolation within a Kubernetes cluster. NSaaS leverages this feature to ensure that applications running in different namespaces do not interfere with each other, enhancing security and stability. The platform provider running the Kubernetes clusters is responsible for workload isolation between the different namespaces, from both a security and a workload optimization perspective.
  4. Cost Efficiency
    • By optimizing resource allocation and utilization, NSaaS can reduce costs. Users only pay for the resources they use, and the efficient management of these resources can lead to significant savings.
  5. Simplified billing
    • NSaaS allows billing per namespace, keeping each namespace’s usage separate so that it is clear which departments, application teams, or even customers consume what, which in turn provides the bill outline for the namespace.

Cons of Namespace-as-a-Service

  1. Limited Customization (compared with running separate Kubernetes cluster)
    • While NSaaS simplifies many aspects of namespace management, it may also limit customization options. Advanced users who require fine-tuned control over their Kubernetes environments might find NSaaS restrictive. Fine-tuned control might include deploying your own Kubernetes operators, or defining the maintenance window and Kubernetes version.
  2. Dependency on Cluster Provider (compared with running separate Kubernetes cluster)
    • Users are dependent on the cluster provider for managing not just the underlying infrastructure but also the wider cluster configuration and maintenance. Any issues or downtime at the cluster level managed by the provider can directly impact the user’s applications.
  3. Potential Security Risks (compared with a “Heroku style” PaaS)
    • Although namespaces offer both isolation and some customization options, improper configuration or vulnerabilities in the NSaaS implementation or the namespace configuration can lead to security risks. It is crucial to ensure both that the service provider follows best practices for security and that the namespace configuration doesn’t open things up unnecessarily, for example in terms of network traffic.
  4. Learning Curve (compared with a “Heroku style” PaaS)
    • For users unfamiliar with Kubernetes concepts, there might still be a learning curve associated with understanding namespaces and how to utilize NSaaS effectively. Although the Kubernetes CLI and concepts are more complex and require a little extra learning from developers, Kubernetes is considered more of a standard today and therefore means less lock-in. It is also possible for a platform engineering team to implement fully automated CI/CD so that developers only need to know “git push”, similar to the “cf push” popularised by Cloud Foundry (which also came out of the team that originally built Heroku). The kubectl CLI, although more complex than the simpler CLIs of other providers, offers a standard interface and more power to customise how applications are deployed into the namespace.

Implementation of Namespace-as-a-Service on APPUiO

For APPUiO, we have embraced Namespace-as-a-Service to provide our users with a seamless and efficient cloud experience. Here’s how we’ve implemented it:

  1. User-Friendly Interface
    • Our platform offers a user-friendly interface that abstracts the complexities of Kubernetes, allowing users to create and manage namespaces with just a few clicks.
  2. Automated Provisioning
    • We have automated the provisioning of namespaces, ensuring that users can instantly create namespaces without any delays. This automation extends to resource allocation and configuration management.
  3. Robust Security Measures
    • Security is a top priority at APPUiO. We have implemented stringent security measures to ensure isolation and protect user data. Our NSaaS implementation includes role-based access control (RBAC), network policies, and regular security audits. The tech stack behind APPUiO includes OpenShift, Isovalent Cilium Enterprise, Kyverno policies, and our APPUiO agent, which together ensure that security policies at all levels are adhered to.
  4. Scalability and Flexibility
    • Our platform is designed to scale with our users’ needs. Whether you are a small startup or a large enterprise, APPUiO can handle your workloads efficiently. Users can easily scale their applications within their namespaces as their requirements grow.
  5. Comprehensive Support
    • We provide comprehensive support to help users get the most out of our NSaaS offering. Our documentation, tutorials, and support team are always available to assist users in navigating any challenges they might encounter.

Conclusion

Namespace-as-a-Service represents a significant advancement in cloud computing, simplifying the management of Kubernetes environments and enhancing scalability, security, and cost efficiency. At APPUiO, we are proud to offer a robust NSaaS solution that empowers our users to focus on what they do best—developing great applications. By leveraging the power of namespaces, we provide a flexible, scalable, and secure cloud environment that meets the diverse needs of our users.

Whether you’re new to Kubernetes or an experienced user, APPUiO’s Namespace-as-a-Service can help you achieve your cloud goals with ease and efficiency.

Markus Speth

Marketing, Communications, People

OpenShift Tech

VSHN Managed OpenShift: Upgrade to OpenShift version 4.15

17. Jul 2024

As we start to prepare the rollout of upgrades to OpenShift v4.15 across all our customers’ clusters, it is a good opportunity to look again at what was in the Red Hat OpenShift 4.15 release. It brought Kubernetes 1.28 and CRI-O 1.28 and was largely focused on small improvements in the core platform and enhancements to how OpenShift runs on underlying infrastructure, including bare metal and public cloud providers.

The Red Hat infographic highlights some of the key changes:

What’s New in Red Hat OpenShift 4.15 Infographic by Sunil Malagi

For our VSHN Managed OpenShift and APPUiO customers, we want to highlight the key changes in the release that are relevant for them.

Across all VSHN Managed OpenShift clusters – including APPUiO

The highlights from our summary that apply are the following:

  • OpenShift 4.15 is based on Kubernetes 1.28 and CRI-O 1.28
  • Update to CoreDNS 1.11.1
  • There are some node enhancements (such as faster builds for unprivileged pods, and compatibility of multiple image repository mirroring objects)
  • The release also brings updated versions for the monitoring stack (Alertmanager to 0.26.0, kube-state-metrics to 2.10.1, node-exporter to 1.7.0, Prometheus to 2.48.0, Prometheus Adapter to 0.11.2, Prometheus Operator to 0.70.0, Thanos Querier to 0.32.5)
  • It also includes some additional improvements and fixes to the monitoring stack
  • There are some changes to the Bare-Metal Operator so that it now automatically powers off any host that is removed from the cluster
  • There are some platform fixes including some security related ones like securing the cluster metrics port using TLS
  • OLM (Operator Lifecycle Manager) v1 is being introduced, which brings three new life cycle classifications for cluster operators: Platform Aligned, for operators whose maintenance streams align with the OpenShift version; Platform Agnostic, for operators which make use of maintenance streams but don’t need to align with the OpenShift version; and Rolling Stream, for operators which use a single stream of rolling updates.

On VSHN Managed OpenShift clusters with optional features enabled

The changes that might relate to some VSHN Managed OpenShift customers who have optional features enabled would include:

  • OpenShift Service Mesh 2.5 based on Istio 1.18 and Kiali 1.73
  • Enhancements to RHOS Pipelines
  • Machine API – Defining a VMware vSphere failure domain for a control plane machine set (Technology Preview)
  • Updates to hosted control planes within OCP (OpenShift Container Platform)
  • Bare-Metal hardware provisioning fixes

Changes not relevant to VSHN customers

There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. It is also interesting to note the deprecation of the OpenShift SDN network plugin, which means no new clusters can leverage that setup. Additionally, there are new features related to specific cloud providers (like Oracle Cloud Infrastructure) or specific hardware stacks (like IBM Z or IBM Power).

The changes to handling storage, and in particular storage appliances, are also not relevant for VSHN customers, as none of the storage features affect how we handle our storage on cloud providers or on-prem.

Features in OpenShift open to customer PoCs before we enable for all VSHN customers

We do have an interesting customer PoC with Red Hat OpenShift Virtualization, a feature that continues to mature in OpenShift 4.15. We are excited to see the outcome of this PoC and to potentially make it available to all our customers looking to leverage VMs inside OpenShift. We know, due to the pricing changes from Broadcom, that this is an area many companies and organizations are looking at. Moving from OpenShift running on vSphere to running on bare metal and having VMs inside OpenShift is an exciting transformation, and we hope to bring an update on this in an upcoming separate blog post.

Likewise, we are open to customers who would like to explore leveraging OpenShift Serverless (now based on Knative 1.11 in OpenShift 4.15) or perhaps the new OpenShift Distributed Tracing Platform that is now at version 3.2.1 in the OpenShift 4.15 release (this version includes both the new platform based on Tempo and the now deprecated version based on Jaeger). This can also be used together with the Red Hat OpenTelemetry Collector in OpenShift 4.15. There are also new versions of OpenShift Developer Hub (based on Backstage), OpenShift Dev Spaces, and OpenShift Local. These are all interesting tools, part of the Red Hat OpenShift Container Platform.

If any of the various platform features are interesting for any existing or new VSHN customers, we would encourage you to reach out so we can discuss potentially doing a PoC together.

Summary

Overall, OpenShift 4.15 brings lots of small improvements but no major groundbreaking features from the perspective of the clusters run by VSHN customers. For those interested in the nitty gritty details of the OpenShift 4.15 release, we refer you to the detailed Red Hat release notes, which go through everything in detail.

VSHN customers will soon be notified about the upgrades to their specific clusters.

Interested in VSHN Managed OpenShift?

Head over to our product page VSHN Managed OpenShift to learn more about how VSHN can help you operate your own OpenShift cluster including setup, 24/7 operation, monitoring, backup and maintenance. Hosted in a public cloud of your choice or on-premises in your own data center. 

Markus Speth

Marketing, Communications, People

Project Syn Tech

Rewriting a Python Library in Rust

20. Mar 2024

Earlier this month I presented at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.

We tackled this project to address the longer-than-15-minute CI pipeline runs we needed to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce the CI pipeline runs to under 5 minutes.

The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.

I’m also happy to report that this talk was picked up by the popular newsletter “This Week in Rust” for its 538th edition! You can find the recording of the talk, courtesy of the Rust Zürich meetup group organizers, on YouTube.

Simon Gerber

Simon Gerber is a DevOps engineer in VSHN.

Events Tech

Watch the Recording of “How to Keep Container Operations Steady and Cost-Effective in 2024”

1. Feb 2024

The “How to Keep Container Operations Steady and Cost-Effective in 2024” event took place yesterday on LinkedIn Live, and for those who couldn’t attend live, you can watch the recording here.

In a rapidly evolving tech landscape, staying ahead of the curve is crucial. This event equips you with the knowledge and tools needed to navigate container operations effectively while keeping costs in check.

In this session, we explore best practices, industry insights, and practical tips to ensure your containerized applications run smoothly without breaking the bank.

We will cover:

  • Current Trends: Discover the latest trends shaping container operations in 2024.
  • Operational Stability: Learn strategies to keep your containerized applications running seamlessly.
  • Cost-Effective Practices: Explore tips to optimize costs without compromising performance.
  • Industry Insights: Gain valuable insights from real-world experiences and success stories.

Schedule:

17:30 – 17:35 – Welcome and Opening Remarks
17:35 – 17:50 – Navigating the Container Landscape: 2024 Trends & Insights
17:50 – 17:55 – VSHN’s Impact: A Spotlight on Our Market Presence
17:55 – 18:10 – Guide to Ensuring Steady Operations in Containerized Environments
18:10 – 18:25 – Optimizing Costs without Compromising Performance: A Practical Guide
18:25 – 18:30 – Taking Action: Implementing Best Practices for Container Operations
18:30 – Q&A

Don’t miss out on this opportunity to set a solid foundation for your containerized applications in 2024.

Aarno Aukia

Aarno is Co-Founder of VSHN AG and provides technical enthusiasm as a Service as CTO.
