OpenShift Tech

VSHN Managed OpenShift: Upgrade to OpenShift version 4.15

17. Jul 2024

As we prepare to roll out upgrades to OpenShift 4.15 across all our customers’ clusters, it is a good opportunity to look again at what was in the Red Hat OpenShift 4.15 release. It brought Kubernetes 1.28 and CRI-O 1.28, and was largely focused on small improvements to the core platform and on enhancements to how OpenShift runs on the underlying infrastructure, including bare metal and public cloud providers.

The Red Hat infographic highlights some of the key changes:

What’s New in Red Hat OpenShift 4.15 Infographic by Sunil Malagi

For our VSHN Managed OpenShift and APPUiO customers, we want to highlight the key changes in the release that are relevant for them.

Across all VSHN managed OpenShift clusters – including APPUiO

The highlights that apply across all of them are the following:

  • OpenShift 4.15 is based on Kubernetes 1.28 and CRI-O 1.28
  • Update to CoreDNS 1.11.1
  • There are some node enhancements (such as faster builds for unprivileged pods, and compatibility of multiple image repository mirroring objects)
  • The release also brings updated versions for the monitoring stack (Alertmanager to 0.26.0, kube-state-metrics to 2.10.1, node-exporter to 1.7.0, Prometheus to 2.48.0, Prometheus Adapter to 0.11.2, Prometheus Operator to 0.70.0, Thanos Querier to 0.32.5)
  • It also includes some additional improvements and fixes to the monitoring stack
  • There are some changes to the Bare-Metal Operator so that it now automatically powers off any host that is removed from the cluster
  • There are some platform fixes including some security related ones like securing the cluster metrics port using TLS
  • OLM (Operator Lifecycle Manager) is being introduced as v1, which brings three new life cycle classifications for cluster operators: Platform Aligned, for operators whose maintenance streams align with the OpenShift version; Platform Agnostic, for operators that use maintenance streams but don’t need to align with the OpenShift version; and Rolling Stream, for operators that use a single stream of rolling updates.
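One of those node enhancements, compatibility of multiple image repository mirroring objects, is configured through resources such as ImageDigestMirrorSet. A minimal sketch is shown below; the registry names are placeholders, not taken from the release notes:

```yaml
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: example-mirror
spec:
  imageDigestMirrors:
  - source: registry.example.com/team/app
    mirrors:
    - mirror.internal.example.net/team/app
```

Pulls by digest for images under the source repository are then redirected to the listed mirror first.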

On VSHN Managed OpenShift clusters with optional features enabled

The changes that might relate to some VSHN Managed OpenShift customers who have optional features enabled would include:

  • OpenShift Service Mesh 2.5 based on Istio 1.18 and Kiali 1.73
  • Enhancements to Red Hat OpenShift Pipelines
  • Machine API – Defining a VMware vSphere failure domain for a control plane machine set (Technology Preview)
  • Updates to hosted control planes within OpenShift Container Platform
  • Bare-Metal hardware provisioning fixes

Changes not relevant to VSHN customers

There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. It is also interesting to note the deprecation of the OpenShift SDN network plugin, which means no new clusters can leverage that setup. Additionally, there are new features related to specific cloud providers (like Oracle Cloud Infrastructure) or specific hardware stacks (like IBM Z or IBM Power).

The changes to handling storage and in particular storage appliances is also not relevant for VSHN customers as none of the storage features affect how we handle our storage on cloud providers or on-prem.

Features in OpenShift open to customer PoCs before we enable for all VSHN customers

We have a customer PoC underway with Red Hat OpenShift Virtualization, a feature that continues to mature in OpenShift 4.15. We are excited to see the outcome of this PoC and to potentially make this available to all our customers looking to leverage VMs inside OpenShift. We know, due to the pricing changes from Broadcom, that this is an area many companies and organizations are looking at. Moving from OpenShift running on vSphere to running on bare metal and having VMs inside OpenShift is an exciting transformation, and we hope to bring an update on this in an upcoming separate blog post.

Likewise, we are open to customers who would like to explore OpenShift Serverless (now based on Knative 1.11 in OpenShift 4.15) or the new OpenShift Distributed Tracing Platform, which is at version 3.2.1 in the OpenShift 4.15 release (this version includes both the new platform based on Tempo and the now deprecated version based on Jaeger). It can also be used together with the Red Hat OpenTelemetry Collector in OpenShift 4.15. There are also new versions of OpenShift Developer Hub (based on Backstage), OpenShift Dev Spaces, and OpenShift Local. These are all interesting tools that are part of the Red Hat OpenShift Container Platform.

If any of the various platform features are interesting for any existing or new VSHN customers, we would encourage you to reach out so we can discuss potentially doing a PoC together.


Overall, OpenShift 4.15 brings lots of small improvements but no major groundbreaking features from the perspective of the clusters run by VSHN customers. For those interested in the nitty gritty details of the OpenShift 4.15 release, we refer you to the detailed Red Hat release notes, which go through everything in detail.

VSHN customers will soon be notified about the upgrades to their specific clusters.

Markus Speth

Marketing, People, Strategy

Contact us

Our team of experts is here for you. In emergencies, also 24/7.

Project Syn Tech

Rewriting a Python Library in Rust

20. Mar 2024

Earlier this month I presented at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.

We tackled this project to address the longer-than-15-minute CI pipeline runs we needed to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce the CI pipeline runs to under 5 minutes.

The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.

I’m also happy to report that this talk was picked up by the popular newsletter „This Week in Rust“ for its 538th edition! You can find the recording of the talk, courtesy of the Rust Zürich meetup group organizers, on YouTube.

Simon Gerber

Simon Gerber is a DevOps engineer at VSHN.


Event Tech

Watch the Recording of „How to Keep Container Operations Steady and Cost-Effective in 2024“

1. Feb 2024

The „How to Keep Container Operations Steady and Cost-Effective in 2024“ event took place yesterday on LinkedIn Live, and for those who couldn’t attend live, you can watch the recording here.

In a rapidly evolving tech landscape, staying ahead of the curve is crucial. This event equips you with the knowledge and tools needed to navigate container operations effectively while keeping costs in check.

In this session, we’ll explore best practices, industry insights, and practical tips to ensure your containerized applications run smoothly without breaking the bank.

We will cover:

  • Current Trends: Discover the latest trends shaping container operations in 2024.
  • Operational Stability: Learn strategies to keep your containerized applications running seamlessly.
  • Cost-Effective Practices: Explore tips to optimize costs without compromising performance.
  • Industry Insights: Gain valuable insights from real-world experiences and success stories.


17:30 – 17:35 – Welcome and Opening Remarks
17:35 – 17:50 – Navigating the Container Landscape: 2024 Trends & Insights
17:50 – 17:55 – VSHN’s Impact: A Spotlight on Our Market Presence
17:55 – 18:10 – Guide to Ensuring Steady Operations in Containerized Environments
18:10 – 18:25 – Optimizing Costs without Compromising Performance: A Practical Guide
18:25 – 18:30 – Taking Action: Implementing Best Practices for Container Operations
18:30 – Q&A

Don’t miss out on this opportunity to set a solid foundation for your containerized applications in 2024.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.



Composition Functions in Production

9. Jan 2024

(This post was originally published on the Crossplane blog.)

Crossplane has recently celebrated its fifth birthday, but at VSHN, we’ve been using it in production for almost three years now. In particular, it has become a crucial component of one of our most popular products. We’ve invested a lot of time and effort in Crossplane, to the point that we’ve developed (and open-sourced) our own custom modules for various technologies and cloud providers, such as Exoscale or MinIO.

In this blog post, we will provide an introduction to a relatively new feature of Crossplane called Composition Functions, and show how the VSHN team uses it in a very specific product: the VSHN Application Catalog, also known as VSHN AppCat.

Crossplane Compositions

To understand Composition Functions, we need to understand what standard Crossplane Compositions are in the first place. Compositions, available in Crossplane since version 0.10.0, can be understood as templates that can be applied to Kubernetes clusters to modify their configuration. What sets them apart from other template technologies (such as Kustomize, OpenShift Template objects, or Helm charts) is their capacity to perform complex transformations and to patch fields on Kubernetes manifests, following more advanced rules and with better reusability and maintainability. Crossplane Compositions are usually referred to as „Patch and Transform“ compositions, or „PnT“ for short.

As powerful as standard Crossplane Compositions are, they have some limitations, which can be summarized in a very geeky yet technically appropriate phrase: they are not „Turing-complete“.

  • Compositions don’t support conditions, meaning that the transformations they provide are applied on an „all or nothing“ basis.
  • They also don’t support loops, which means that you cannot apply transformations iteratively.
  • Finally, advanced operations are not supported either, like checking for statuses in other systems, or performing dynamic data lookups at runtime.

To address these limitations, Crossplane 1.11 introduced a new Alpha feature called „Composition Functions“. Note that as of writing, Composition Functions are in Beta in 1.14.

Composition Functions

Composition functions complement and in some cases replace Crossplane „PnT“ Compositions entirely. Most importantly, DevOps engineers can create Composition Functions using any programming language; this is because they run as standard OCI containers, following a specific set of interface requirements. The result of applying a Composition Function is a new composite resource applied to a Kubernetes cluster.

Let’s look at an elementary „Hello World“ example of a Composition Function.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-function
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket

The example above shows a standard Crossplane Composition with a new field, „pipeline“, which specifies an array of functions referenced by name.

As stated previously, the function itself can be written in any programming language, like Go.

func (f *Function) RunFunction(_ context.Context, req *fnv1beta1.RunFunctionRequest) (*fnv1beta1.RunFunctionResponse, error) {
    rsp := response.To(req, response.DefaultTTL)
    response.Normal(rsp, "Hello world!")
    return rsp, nil
}
The example above, borrowed from the official documentation, does just one thing: it reads a request object, modifies a value, and returns it to the caller. Needless to say, this example is for illustration purposes only, lacking error checking, logging, security, and more, and should not be used in production. Developers use the Crossplane CLI to create, test, build, and push functions.

Here are a few things to keep in mind when working with Composition Functions:

  • They run in order, as specified in the „pipeline“ array of the Composition object, from top to bottom.
  • The output of the previous Composition Function is used as input for the following one.
  • They can be combined with standard „PnT“ compositions by using the function-patch-and-transform function, allowing you to reuse your previous investment in standard Crossplane compositions.
  • In the Alpha release, if you combined „PnT“ compositions with Composition Functions, „PnT“ compositions ran first, and the output of the last one is fed to the first function; since the latest release, this is no longer the case, and „PnT“ compositions can now run at any step of the pipeline.
  • Composition Functions must be called using RunFunctionRequest objects, and return RunFunctionResponse objects.
  • In the Alpha release, these two objects were represented by a now deprecated „FunctionIO“ structure in YAML format.
  • RunFunctionRequest and RunFunctionResponse objects contain a full and coherent „desired state“ for your resources. This means that if an object is not explicitly specified in a request payload, it will be deleted. Developers must pass the full desired state of their resources at every invocation.
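To make the last point concrete, here is a toy model in plain Go (not the actual Crossplane API; all names are invented for illustration) of the „full desired state“ semantics: any resource absent from the returned desired state is treated as deleted.

```go
package main

import "fmt"

// reconcile mimics how Crossplane interprets a function's response:
// observed resources that are missing from the desired state get pruned.
func reconcile(observed, desired map[string]string) []string {
	var pruned []string
	for name := range observed {
		if _, ok := desired[name]; !ok {
			pruned = append(pruned, name)
		}
	}
	return pruned
}

func main() {
	observed := map[string]string{"bucket": "v1", "policy": "v1"}
	// The function forgot to copy "policy" into its response...
	desired := map[string]string{"bucket": "v1"}
	fmt.Println(reconcile(observed, desired)) // prints [policy]
}
```

This is why a function must pass along the complete desired state at every invocation, never just a delta.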

Practical Example: VSHN AppCat

Let’s look at a real-world use case for Crossplane Composition Functions: the VSHN Application Catalog, also known as AppCat. The AppCat is an application marketplace allowing DevOps engineers to self-provision different kinds of middleware products, such as databases, message queues, or object storage buckets, in various cloud providers. These products are managed by VSHN, which frees application developers from a non-negligible burden of maintenance and oversight.

Standard Crossplane „PnT“ Compositions proved limited very early in the development of VSHN AppCat, so we started using Composition Functions as soon as they became available. They have allowed us to do the following:

  • Composition Functions enable complex tasks, involving the verification of current deployment values and making decisions before deploying services.
  • They can drive the deployment of services involving Helm charts, modifying values on-the-go as required by our customers, their selected cloud provider, and other parameters.
  • Conditionals allow us to script complex scenarios, involving various environmental decisions, and to reuse that knowledge.
  • Thanks to Composition Functions, the VSHN team can generalize many activities, like backup handling, automated maintenance, etc.

All things considered, it is difficult to overstate the many benefits that Composition Functions have brought to our workflow and to our VSHN AppCat product.

Learnings of the Alpha Version

We’ve learned a lot while experimenting with the Alpha version of Composition Functions, and we’ve documented our findings for everyone to learn from our mistakes.

  • Running Composition Functions in Red Hat OpenShift used to be impossible in Alpha because OpenShift uses crun, but this issue has now been solved in the Beta release.
  • In particular, when using the Alpha version of Composition Functions, we experienced slow execution speeds with crun but this is no longer the case.
  • We learned the hard way that missing resources on function requests were actually deleted!

Our experience with Composition Functions led us to build our own function runner. This feature uses another capability of Crossplane, which allows functions to specify their runner in the Composition definition:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
spec:
  functions:
  - name: my-function
    type: Container
    container:
      image: my-function
      runner:
        endpoint: grpc-server:9547

Functions run directly on the gRPC server, which, for security reasons, must run as a sidecar to the Crossplane pod. Just like everything we do at VSHN, the Composition Function gRPC Server runner (as well as its associated webhook and all of its code) is open-source, and you can find it on our GitHub. As of the Composition Function Beta, we replaced the custom gRPC logic with the go-sdk. To improve the developer experience, we have created a proxy and enabled the gRPC server to run locally. The proxy runs in kind and redirects to the local gRPC server. This enables us to debug the code and test changes more efficiently.

Moving to Beta

We recently finished migrating our infrastructure to the most recent Beta version of Composition Functions, released in Crossplane 1.14, and we have been able to do that without incidents. This release included various bits and pieces such as Function Pipelines, an ad-hoc gRPC server to execute functions in memory, and a Function CRD to deploy them directly to clusters.

We are also migrating all of our standard „PnT“ Crossplane Compositions to pure Composition Functions as we speak, thanks to the functions-go-sdk project, which has proven very helpful, even if we are missing typed objects. Managing the same objects with both „PnT“ and Composition Functions increases complexity dramatically, as it can be difficult to determine where an actual change happens.


In this blog post, we have seen how Crossplane Composition Functions compare to standard „PnT“ Crossplane compositions. We have provided a short example, highlighting their major characteristics and caveats, and we have outlined a real-world use case for them, specifically VSHN’s Application Catalog (or AppCat) product.

Crossplane Composition Functions provide an unprecedented level of flexibility and power to DevOps engineers. They enable the creation of complex transformations, with all the advantages of an Infrastructure as Code approach, and the flexibility of using the preferred programming language of each team.

Check out my talk at Control Plane Day with Crossplane, where I walk you through Composition Functions in Production in 15 minutes.

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, almost 15 of them in the Internet business. New technologies are there to be tried out and written about.


APPUiO Cloud Tech

„Composition Functions in Production“ by Tobias Brunner at the Control Plane Day with Crossplane

17. Oct 2023

VSHN has been using Crossplane’s Composition Functions in production since their release. In this talk, Tobias Brunner, CTO of VSHN AG, explains what Composition Functions are and how they are used to power crucial parts of the VSHN Application Catalog, or AppCat. He also introduces VSHN’s custom open-source gRPC server which powers the execution of Composition Functions. Learn how to leverage Composition Functions to spice up your Compositions!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.


APPUiO Cloud Tech

New OpenShift 4.13 Features for APPUiO Users

5. Sep 2023

We have just upgraded our APPUiO Cloud clusters from version 4.11 to version 4.13 of Red Hat OpenShift, and there are some interesting new features for our APPUiO Cloud and APPUiO Managed users we would like to share with you.

Kubernetes Beta APIs Removal

OpenShift 4.12 and 4.13 updated their Kubernetes versions to 1.25 and 1.26 respectively, providing cumulative updates to various Beta APIs. If you are using objects with the API versions below, please make sure to migrate your deployments accordingly.

  • HorizontalPodAutoscaler: autoscaling/v2beta1 and autoscaling/v2beta2 → autoscaling/v2
  • PodSecurityPolicy: policy/v1beta1 → Pod Security Admission
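For example, a HorizontalPodAutoscaler migrated to the autoscaling/v2 API looks roughly like this (the resource names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```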

As a reminder, the next minor revision of Red Hat OpenShift will update Kubernetes to version 1.27.

Web Console

APPUiO users will discover a neat new feature on the web console: resource quota alerts are displayed now on the Topology screen whenever any resource reaches its usage limits. The alert label link will take you directly to the corresponding ResourceQuotas list page.

Have any questions or comments about OpenShift 4.13? Contact us!

Christian Häusler

Christian Häusler is a Product Owner at VSHN.



VSHN’s Response to Zenbleed CVE-2023-20593

25. Jul 2023

Yesterday evening, on Monday, July 24th, 2023, at around 21:15 CEST / 12:15 PDT, our security team received a notification about a critical security vulnerability called „Zenbleed“ potentially affecting the cloud providers where VSHN’s customers systems run on.

This blog post provides details about Zenbleed and the steps taken to mitigate its risks.

What is Zenbleed?

Zenbleed, also known as CVE-2023-20593, is a speculative execution bug discovered by Google, related to but somewhat different from side channel bugs like Meltdown or Spectre. It is a vulnerability affecting AMD processors based on the Zen2 microarchitecture, ranging from AMD’s EPYC datacenter processors to the Ryzen 3000 CPUs used in desktop & laptop computers. This flaw can be exploited to steal sensitive data stored in the CPU, including encryption keys and login credentials.

VSHN’s Response

VSHN immediately set up a task force to discuss this issue, bringing the team of one of our main cloud providers into a call to determine possible courses of action; among the options contemplated were isolating VSHN customers on dedicated nodes, or patching the affected systems directly.

At around 22:00 CEST, the cloud provider decided after a fruitful discussion with the task force that the best approach was to implement a microcode update. Since Zenbleed is caused by a bug in CPU hardware, the only possible direct fix (apart from the replacement of the CPU) consists of updating the CPU microcode. Such updates can be applied by updating the BIOS on affected systems, or applying an operating system kernel update, like the recently released new Linux kernel version that addresses this vulnerability.

Zenbleed isn’t limited to just one cloud provider, and may affect customers operating their own infrastructure as well. We acknowledged that addressing this vulnerability is primarily a responsibility of the cloud providers, as VSHN doesn’t own any infrastructure that could directly be affected.

The VSHN task force handed monitoring over to VSHN Canada, who tested the update as it rolled out to production systems and stayed in close contact to ensure there were no QoS degradations after the microcode update.

Aftermath

The task force successfully finished its work at 01:34 CEST / 16:34 PDT. All VSHN systems running on that provider have been patched accordingly, and the tests carried out show that this specific vulnerability has been fixed as required. VSHN Canada confirmed that all systems were running without any problems.

We will continue to monitor this situation and to inform our customers accordingly. All impacted customers will be contacted by VSHN. Please do not hesitate to contact us for more information.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.


General Tech

Get the „DevOps in Switzerland 2023“ Report

12. May 2023

We are delighted to present the fourth edition of our „DevOps in Switzerland“ report!

From February to April 2022, we conducted a survey to find out how Swiss companies implement and apply DevOps principles.

We have summarized the results in a PDF file (available in English only), and as in the previous edition, the first few pages provide a short summary of our findings.

You can download the report here. Enjoy the read, and we look forward to your feedback!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.


Canada Tech

VSHN Canada Hackday: A Tale of Tech Triumphs and Tasty Treats

24. Mar 2023

The VSHN Canada Hackday turned into an epic two-day adventure, where excitement and productivity went hand in hand. Mätthu, Bigli, and Jay, our stellar team members, joined forces to level up VSHN as a company and expand their skill sets. In this blog post, we’re ecstatic to share the highlights and unforgettable moments from our very own Hackday event.

🏆 Notable Achievements

1️⃣ Revamping Backoffice Tools for VSHN Canada

Mätthu dove deep into several pressing matters, including:

  • Time tracking software that feels like a relic from the 2000s. With the Odoo 16 Project underway, we explored its impressive features and found a sleek solution for HR tasks like time and holiday tracking and expenses management. Now we just need to integrate it as a service for VSHN Canada.
  • Aligning the working environments of VSHN Switzerland and Canada. Although not identical, we documented the similarities and differences in our handbook to provide a seamless experience.
  • Tidying up our document storage in Vancouver. Previously scattered across Google Cloud and Nextcloud, a cleanup session finally brought order to the chaos.

📖 Documentation available in the Handbook.

2️⃣ Mastering SKS GitLab Runners

Bigli and Jay teamed up to craft fully managed SKS GitLab runners using Project Syn, aiming to automate GitLab CI processes and eliminate manual installation and updates. This collaboration also served as an invaluable learning experience for Jay, who delved into Project Syn’s architecture and VSHN’s inner workings. Hackday milestones included:

  • Synthesizing the GitLab-runner cluster
  • Updating the cluster to the latest supported version
  • Scheduling cluster maintenance during maintenance windows
  • Developing a component for the GitLab-runner
  • Implementing proper monitoring, time permitting

📖 Documentation available on our wiki.

🍻 Festive Fun

On Hackday’s opening day, we treated ourselves to a team outing at „Batch,“ our go-to local haunt nestled in Vancouver’s scenic harbor. Over unique beers and animated chatter, we toasted to our first-ever Canadian Hackday.

🎉 Wrapping Up

VSHN Canada’s Hackday was an exhilarating mix of productivity, learning, and amusement. Our team banded together to confront challenges, develop professionally, and forge lasting memories. We can hardly wait for future Hackday events and the continued growth of VSHN Canada and VSHN.

Jay Sim

Jay Sim is a DevOps engineer at VSHN Canada.



Stay Ahead of the Game with Kubernetes

13. Jan 2023

Kubernetes is a powerful platform for deploying and managing containerized applications at scale, and it has become increasingly popular in Switzerland in recent years.

One way to approach it is outsourcing. This can be a strategic and cost-effective option for organizations that do not have the in-house DevOps expertise, know-how, or resources to manage their infrastructure and application operations efficiently.

Not every tech company is in the business of building platforms and operating Kubernetes clusters. Thus, by partnering with an experienced provider, companies can tap into a wealth of knowledge and expertise to help them run their applications.

Some companies adopt Kubernetes and look to leverage its capabilities themselves. It’s essential to consider time, effort, and possible implications while utilizing the latest developments and continually adding value to the core business.

In all cases, it will be helpful to align with fundamentals. For this reason, I have compiled a quick guide to Kubernetes in 2023 and best practices in Switzerland.

  1. Understand the basics: Before diving into Kubernetes, have a solid understanding of the reasoning and concepts. This could include cloud infrastructure, networking, containers, how they liaise with each other, and how they can be managed with Kubernetes.
  2. Plan your deployment carefully: When deploying applications with Kubernetes, you must plan thoroughly and consider your workloads‘ specific needs and requirements. This includes but is not limited to resource requirements, network connectivity, scalability, latency, and security considerations.
  3. Use appropriate resource limits: One of the critical benefits of Kubernetes is its ability to manage resources dynamically based on the needs of your applications. To take advantage of this, set appropriate resource limits for your applications. This will help ensure that your applications have the resources they need to run effectively while preventing them from consuming too many resources and impacting other applications.
  4. Monitor your application: It’s essential to monitor your applications and the underlying Kubernetes cluster to ensure they are running smoothly and to identify any issues that may arise. You want to analyze the alerts and react accordingly. You can use several tools and practices to monitor your applications, including log analysis, monitoring with tools like Prometheus and Grafana, and alerting systems.
  5. Use appropriate networking configurations: Networking is critical to any Kubernetes deployment, and choosing the proper network configuration is essential. Consider load balancing, service discovery, and network segmentation.
  6. Secure your application: Security is a top concern for many companies and organizations in Switzerland. You cannot proceed without ensuring that your Kubernetes deployment is secure. At this stage, your team is implementing network segmentation, using secure container runtime environments, and implementing advanced authentication and authorization systems.
  7. Consider using a managed Kubernetes service: For companies without the resources or needing DevOps expertise to manage their clusters, managed Kubernetes services can be a business-saving solution. With managed services, you can get a production-ready cluster, i.e., a fully-managed Kubernetes environment, allowing teams and software engineers to focus on developing new features and deploying their applications rather than managing the underlying infrastructure.
  8. Stay up-to-date with the latest developments: The Kubernetes ecosystem is constantly evolving, and it’s better to stay up-to-date with the latest developments and best practices. This may involve subscribing to newsletters like VSHN, VSHN.timer, or Digests, attending conferences and CNCF meetups, and following key players in the Kubernetes community.
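As a minimal sketch of point 3 (all names and values here are illustrative, to be tuned for your workload), resource requests and limits are set per container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    resources:
      requests:
        cpu: 100m       # guaranteed share the scheduler reserves
        memory: 128Mi
      limits:
        cpu: 500m       # hard ceiling before CPU throttling
        memory: 256Mi   # hard ceiling before the OOM killer steps in
```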

By following best practices, IT leaders, stakeholders, and decision-makers can ensure that they use Kubernetes constructively and get the most out of the platform technology.

Oksana Horobinska

Oksana is a Business Development Specialist at VSHN.


APPUiO Cloud Tech

VSHN HackDay – Tailscale on APPUiO Cloud

21. Oct 2022

As part of the fourth VSHN HackDay taking place yesterday and today (October 20th and 21st, 2022), Simon Gerber and I (Tobias Brunner) worked on the idea to get Tailscale VPN running on APPUiO Cloud.

tailscale logo

Tailscale is a VPN service that makes the devices and applications you own accessible anywhere in the world, securely and effortlessly. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other.

The use cases we wanted to make possible are:

  • Access Kubernetes services easily from your laptop without the hassle of „[kubectl|oc] port-forward“. Engineers in charge of development or debugging need to securely access services running on APPUiO Cloud but not exposed to the Internet. That’s the job of a VPN, and Tailscale makes this scenario very easy.
  • Connect pods running on APPUiO Cloud to services that are not directly accessible, for example, behind a firewall or a NAT. Routing outbound connections from a Pod through a VPN on APPUiO Cloud is more complex because of the restricted multi-tenant environment.

We took the challenge and found a solution for both use cases. The result is an OpenShift template on APPUiO Cloud that deploys a pre-configured Tailscale pod and all needed settings into your namespace. You only need a Tailscale account and a Tailscale authorization key. Check the APPUiO Cloud documentation to know how to use this feature.

We developed two new utilities to make it easier to work with Tailscale on APPUiO Cloud (and on any other Kubernetes cluster):

  • tailscale-service-observer: A tool that lists Kubernetes services and posts updates to the Tailscale client HTTP API to expose Kubernetes services as routes in the VPN dynamically.
  • TCP over SOCKS5: A middleman to transport TCP packets over the Tailscale SOCKS5 proxy.

Let us know your use cases for Tailscale on APPUiO Cloud via our product board! Are you already a Tailscale user? Do you want to see deeper integration into APPUiO Cloud?

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, and in the Internet business for almost 15. New technologies are there to be tried out and written about.


APPUiO Cloud Tech

Most Interesting New Features of OpenShift 4.11

13 Oct 2022

Red Hat OpenShift 4.11 brings a substantial amount of new features. We’ve teased a few of them in our latest VSHN.timer, but in this article, we’re going to dive deeper into those that have the highest impact on our work and on our customers.

Support for CSI generic ephemeral volumes

Container Storage Interface (CSI) generic ephemeral volumes are a cool new feature. We foresee two important use cases for them:

  • When users need an ephemeral volume that exceeds what the node’s file system provides;
  • When users need an ephemeral volume with prepopulated data: this could be done, for example, by creating the volume from a snapshot.
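As a sketch of what this looks like in practice, here is a Pod requesting a generic ephemeral volume from a CSI driver; the `example-csi` StorageClass name and the image are placeholders, and the PVC is created together with the pod and deleted with it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch-volume
          mountPath: /scratch
  volumes:
    - name: scratch-volume
      ephemeral:
        # The claim template is instantiated per pod, like an inline PVC
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: example-csi
            resources:
              requests:
                storage: 10Gi
```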

Route Subdomains

The Route API now provides subdomain support, something that was not possible before 4.11:

You can now specify the spec.subdomain field and omit the spec.host field of a route. The router deployment that exposes the route will use the spec.subdomain value to determine the host name.
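In practice, this can look as follows; the route and service names here are hypothetical, and the router appends its own domain to the subdomain value:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  subdomain: my-app   # host name becomes my-app.<domain of the exposing router>
  to:
    kind: Service
    name: my-app
```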

Pod Security Admissions

Pod security admission now runs globally with restricted audit logging and API warnings. This means that, while everything should still run as it did before, you will most likely encounter warnings like these if you relied on security contexts being set by OpenShift’s Security Context Constraints:

Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false…

To solve this issue, users now need to explicitly set security contexts in manifests to avoid these warnings.
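A minimal sketch of a Pod that satisfies the restricted profile and silences these warnings; the image reference is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault      # set explicitly instead of relying on SCC defaults
  containers:
    - name: app
      image: example.com/my-app:latest
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```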

Developer Features

The Developer Perspective provides improved integration with GitHub Actions, allowing developers to trigger pipelines and run tasks following Git events such as pushes or tags. And not only that, but the OpenShift console now has a dark mode, too.

CLI Features

The following OpenShift CLI (oc) commands and flags for requesting tokens are deprecated:

  • oc serviceaccounts create-kubeconfig
  • oc serviceaccounts get-token
  • oc serviceaccounts new-token
  • The --service-account/-z flag for the oc registry login command

Moreover, the oc create token command generates tokens with a limited lifetime, which can be controlled with the --duration command line argument. The API server can return a token with a lifetime that is shorter or longer than the requested lifetime; by default, tokens have a lifetime of one hour. If users need a token that doesn’t expire (for example, for a CI/CD pipeline), they should create a ServiceAccount API token secret instead.
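Such a non-expiring token can be requested with a Secret of type kubernetes.io/service-account-token; the ci-pipeline service account name below is just an example, and the control plane populates the Secret’s token field once it is created:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ci-pipeline-token
  annotations:
    # must reference an existing ServiceAccount in the same namespace
    kubernetes.io/service-account.name: ci-pipeline
type: kubernetes.io/service-account-token
```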

OpenShift 4.11 on APPUiO Cloud

At the time of this writing, Red Hat has not yet decided to promote 4.11 as an upgrade target, and for that reason, we have not upgraded APPUiO Cloud clusters yet. As soon as Red Hat enables this, we will update our APPUiO Cloud zones accordingly.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.


Project Syn Tech

Keeping Things Up to Date with Renovate

28. Jun 2022

Our customers trust us with their most precious resource: their information systems. Our job is to keep the underlying systems running, updated, and most importantly, secure.

Project Syn with its Commodore Components is one of the primary weapons in our arsenal to configure and thus protect those systems. Thanks to its GitOps approach, we can ensure that all Kubernetes clusters are always running the latest and (hopefully) most secure version possible.

But just like any other software package, Project Syn brings its own complexity: we must keep it safe and sound, which means watching over its container images, its Helm charts, and all of the Commodore Components we use every day.

As you can imagine, juggling so many different software packages is a considerable task; now, think about all of their upstream dependencies (most of them container images and Helm charts, but Go and Python packages are part of the mix, too). The complexity of the task increases exponentially.

How do we cope with this? Well, as usual, by standing on the shoulders of giants. In this case, Renovate.

Renovate has been created to manage this complexity, whether container images, Helm charts, or upstream dependencies. But understandably enough, Renovate per se does not know anything about Commodore Components (at least not yet!), and in particular, it does not know about the Project Syn configuration hierarchy and how to find dependencies within that hierarchy.

So, what’s an Open Source developer to do? We forked Renovate, of course, and adapted it to our needs. How?

  1. We added the Project Syn configuration hierarchy as a new Manager.
  2. We reused the existing datasource to detect new versions of our Commodore Components.

Then we configured our own Renovate fork on all the repositories holding our source code and started getting notified via pull requests whenever there was a new dependency version. Voilà!
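For repositories without Commodore-specific needs, upstream Renovate is configured through a renovate.json file in the repository root; a minimal example using standard upstream options (the Commodore manager of our fork adds its own options on top of this, which are not shown here):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "labels": ["dependencies"]
}
```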

With this approach, we have been able to automate much work and avoid using outdated software by automatically being notified of new versions. No more forgotten updates!

We also decided to use „golden files“ to test our Commodore Components; this, in turn, meant that we could not merge PRs created by Renovate in case of failure. For those cases, we also taught Renovate how to update those golden files if needed.

The pull request „Update dependency to v0.8.1 – autoclosed #29“ is a live example of this mechanism in action, and you’re most welcome to check it out.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.


General Tech

Get the “DevOps in Switzerland 2022” Report

25 May 2022

We are pleased to present the third edition of our report “DevOps in Switzerland”!

From January to March 2022, we conducted a survey to learn how Swiss companies implement and apply DevOps principles.

We have summarized the results in a PDF file (available in English only), and as in the previous edition, the first pages provide a short summary of our findings.

You can download the report from our website. Enjoy reading it, and we look forward to your feedback!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.



How to Restrict Container Registries per Namespace with Kyverno

24 May 2022

We recently received a request from a customer asking us to restrict the container registries from which images could be deployed in their OpenShift 4 cluster.

We could have added such a configuration directly at the node level, as explained in Red Hat’s documentation; it is indeed possible to whitelist registries at the repository and tag level, but that would have forced us to keep those whitelists in sync with the registries our Project Syn components regularly use.

We have instead chosen to use Kyverno for this task: it allows us to enforce the limitations on a per-namespace level, with much more flexibility and maintainability.

This is a ClusterPolicy object for Kyverno, adapted from the solution we provided to our customer, showing how to scope the restriction to specific namespaces so that container images can be pulled only from whitelisted registries.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-registries
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Restrict image pulling only to whitelisted registries
spec:
  validationFailureAction: enforce
  background: true
  rules:
    - name: validate-registries
      match:
        resources:
          kinds:
            - Pod
          namespaces:
            - "namespace-wildcard-*"
      validate:
        message: "Image registry not whitelisted"
        pattern:
          spec:
            containers:
              - image: "* |*"

Andreas Tellenbach

Andreas Tellenbach is a DevOps Engineer at VSHN.



Agent-less GitLab integration with OpenShift

20. Apr 2022

As you know (and if you didn’t, now you do) GitLab has deprecated the certificate-based integration with Kubernetes in version 14.5, and it is expected that version 15 will disable it completely.

The official replacement to the (now legacy) certificate-based integration mechanism is the GitLab Agent, to be installed in your Kubernetes cluster, and providing a tighter integration between our two beloved platforms.

Well, hold on; things aren’t that easy. For example in our case, if we wanted to offer the GitLab Agent to our APPUiO Cloud users, we would run into two issues:

  • On one side, installing the GitLab Agent is more complicated and expensive, because it would run as another pod in the same namespace, consuming resources.
  • And on the other, we cannot install it cluster-wide, because of the admin permissions we have configured in this multi-tenant shared platform. Users would have to manage each and every GitLab Agent on their own, possibly having multiple agents deployed in several namespaces and GitLab repositories.

So, what’s a DevOps engineer to do? Well, here’s a simple, very simple solution; so simple that you might have never thought of it before: create your own KUBECONFIG variable in GitLab with a dedicated service account!

Here’s how it works using the OpenShift oc tool in your own system:

Create a service account in your OpenShift project:

oc create serviceaccount gitlab-ci

Add an elevated role to this service account so it can manage your deployments:

oc policy add-role-to-user admin -z gitlab-ci --rolebinding-name gitlab-ci

Create a local KUBECONFIG variable and login to your OpenShift cluster using the gitlab-ci service account:

TOKEN=$(oc sa get-token gitlab-ci)
export KUBECONFIG=gitlab-ci.kubeconfig
oc login --server=$OPENSHIFT_API_URL --token=$TOKEN

You should now have a file named gitlab-ci.kubeconfig in your current working directory; copy its contents and create a variable named KUBECONFIG in the GitLab settings with the value of the file (that’s under “Settings” > “CI/CD” > “Variables” > “Expand” > “Add variable”). Remember to set the “environment” scope for the variable and to disable the old Kubernetes integration, as the old integration’s KUBECONFIG variable might collide with this new one.

Tada! There are a few advantages to this approach:

  • It is certainly faster and simpler to set up for simple push-based deployments than using the GitLab Agent.
  • It is also easier to get rid of deprecated features without having to change the pipeline or migrate to the GitLab Agent.

But there are a few drawbacks as well:

  • You don’t get all bells and whistles. It is reasonable to think that at some point the GitLab Agent will offer advanced features, such as access to pod logs from GitLab, monitoring alerts directly from the GitLab user interface, and many other things the old Kubernetes certificate-based integration could do. This approach does not provide anything like this.
  • The cluster’s API endpoint has to be publicly accessible, not behind any firewall or VPN; conveniently enough, the GitLab Agent solves exactly this problem.

If you are interested in GitLab and Kubernetes, join us next week in our GitLab Switzerland Meetup for more information about the GitLab Agent for Kubernetes, and afterwards for a nice apéro with like-minded GitLab enthusiasts!

Oh, and a final tip; if you find yourself having to log in to OpenShift integrated registry but you don’t have yq installed in the job’s image, you can use sed to extract the token from the $KUBECONFIG file:

sed -n 's/^\s*token:\s*\(.*\)/\1/ p' "${KUBECONFIG}" | docker login -u gitlab-ci --password-stdin "${OPENSHIFT_REGISTRY}"
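To illustrate what that sed invocation extracts, here is a self-contained run against a mock kubeconfig; the token value is made up, and only the sed expression is taken from the pipeline above:

```shell
# Create a mock kubeconfig with the same layout oc login writes
cat > sample.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
users:
- name: gitlab-ci
  user:
    token: sha256~faketoken123
EOF

# The sed expression prints only the value after "token:"
TOKEN=$(sed -n 's/^\s*token:\s*\(.*\)/\1/ p' sample.kubeconfig)
echo "$TOKEN"   # prints: sha256~faketoken123
```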

Christian Cremer

Christian Cremer is a DevOps Engineer at VSHN.



Serverless on Kubernetes: Knative

9 Mar 2022

Back in 2019 we published a review of the most relevant serverless frameworks available for Kubernetes. That article became one of the most visited in our blog in the past two years, so we decided to return to this subject and provide an update for our readers.

TL;DR: Serverless in Kubernetes in 2022 means, to a large extent, Knative.

What’s in a Word

The „Serverless“ word is polarizing.

Robert Scoble, one of the first tech influencers, uttered it for the first time fourteen years ago, as James Potter reported recently. In those times, „Serverless“ just meant „not having servers and being an AWS EC2 customer“. Because, yes, companies used to have their own physical servers back then. Amazing, isn’t it?

Fast forward to 2022, and the CNCF Serverless Landscape has grown to such an extent that it can be very hard to figure out what „serverless“ truly means.

Even though for some it just represents the eternal return of 1970s style batch computing, in the past five years the „Serverless“ word has taken a different and very specific meaning.

The winning definition caters for the complete abstraction of the infrastructure required to run individual pieces of software, at scale, on a cloud infrastructure provider.

Or, in less buzzword-y terms, just upload your code, and let the platform figure out how to run it for you: the famous „FaaS“, also known as Function as a Service.

The IaaS Front

The three major Infrastructure as a Service providers in the world offer their own, more-or-less-proprietary version of the Serverless paradigm: AWS Lambda, Azure Functions, and Google Cloud Run (which complemented the previous Google Cloud Functions service). These are three different approaches to the subject of FaaS, each with its advantages and caveats.

Some companies, like A Cloud Guru, have successfully embraced the Serverless architecture from the very start (in this case, based on AWS Lambda), creating cost-effective solutions with incredible scalability.

But one of the aforementioned caveats, and not a small one for that matter, is platform lock-in. Portability has always been a major concern for enterprise computing: if building apps on AWS Lambda is an interesting prospect, could we move them to a different provider later on?

Well, we now have an answer to that question, thanks to our good old friend: the container.

The Debate Is Over

Almost exactly three years ago, Anthony Skipper wrote:

We will say it again… packaging code into containers should be considered a FaaS anti-pattern!

Containers or not? This was still a big debate at the time of our original article in 2019.

Some frameworks like Fission and IaaS services such as AWS Lambda and Google Cloud Functions did not require developers to package their apps as containers; just upload your code and watch it run. On the other hand, OpenFaaS and Knative-based offerings did require containers. Who would win this fight?

The world of Serverless solutions in 2022 has decided that wrapping functions in containers is the way to go. Even AWS Lambda started offering this option in December 2020. This is a huge move, allowing enterprises to run their code in whichever infrastructure they would like to.

In retrospect, the market has chosen wisely. Containers are now a common standard, allowing the same code to run unchanged, from a Raspberry Pi to your laptop to an IBM Mainframe. It is a natural choice, and it turned out that it was a matter of time before this happened.

Even better, with increased industry experience, container images got smaller and smaller, thanks to Alpine-based, scratch-based, and distroless-based images. Being lightweight allows containers to start and stop almost instantly, and makes scaling applications faster and easier than ever.

And this choice turned out to benefit one specific framework among all: Knative.

The Age of Knative

In the Kubernetes galaxy, Knative has slowly but steadily established itself as the core infrastructure for Kubernetes serverless workloads.

In 2019, our article compared six different mechanisms to run serverless payloads on Kubernetes:

  1. OpenFaaS
  2. Fn Project
  3. Fission
  4. OpenWhisk
  5. Kubeless
  6. TriggerMesh

Of those, Fn Project and Kubeless have been simply abandoned. Other frameworks suffered the same fate: Riff has disappeared, just like Gestalt, including its parent company Galactic Fog. IronFunctions moved away from Kubernetes into its own PaaS product. Funktion has been sandboxed and abandoned; Leveros is abandoned too; BlueNimble does not show much activity.

On the other hand, new players have appeared in the serverless market: Rainbond, for example; or Nuclio, targeting the scientific computation market.

But many new contenders are based on Knative: apart from TriggerMesh, which we mentioned in 2019 already, we have now Kyma, Knix, and Red Hat’s OpenShift 4 serverless, all powered by Knative.

The interest in Knative is steadily growing these days: CERN uses it. IBM is talking about it. The Serverless Framework has a provider for it. Even Google Cloud Run is based on it! Which shouldn’t be surprising, knowing that Knative was originally created by Google, just like Kubernetes.

And now Knative has just been accepted as a CNCF incubating project!

Even though Knative is not exactly a FaaS per se, it deserves the top spot in our review of 2022 FaaS-on-K8s technologies, being the platform upon which other serverless services are built, receiving huge support from the major names of the cloud-native industry.

Getting Started with Knative

Want to see Knative in action? Getting started with Knative on your laptop now is easier than ever.

  1. Install kind.
  2. Run the following command on your terminal:

$ curl -sL | bash

To work with Knative objects on your cluster, install the kn command-line tool. Once you have launched your new Knative-powered Kind cluster, create a file called knative-service.yaml with the contents below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"

And then just apply it: kubectl apply -f knative-service.yaml.

The kn service list command should now display your hello service, which should become available after a few seconds. Once it’s ready, you can call it with curl, using the URL shown in that listing:

$ curl <SERVICE_URL>

(If you prefer to use Minikube, you can follow this tutorial instead.)

Thanks to Knative, developers can roll out new versions of their services (called „revisions“ in Knative terminology) while the old ones are still running, and even distribute traffic among them. This can be very useful in A/B testing sessions, for example. Knative services can be triggered by a large array of events, with great flexibility.
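A sketch of such a traffic split, reusing the hello service from above and assuming the standard Knative sample image: after deploying a second revision, the spec.traffic list distributes requests between the two revisions.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world-v2      # creates a new revision alongside hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World v2"
  traffic:
    - revisionName: hello-world     # the old revision keeps most of the traffic
      percent: 80
    - revisionName: hello-world-v2  # the new revision gets a small share for A/B testing
      percent: 20
```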

A full introduction to Knative is outside of the scope of this review, so here are some resources we recommend to learn everything about Knative serving and eventing:

  • The excellent and funny “Knative in Action” (2021) book by Jacques Chester, available for free courtesy of VMware.
  • A free, full introduction to Knative (July 2021) by Sebastian Goasguen, the founder of TriggerMesh; a video and its related source code are provided as well.
  • And to top it off, the “Knative Cookbook” (April 2020) by Burr Sutter and Kamesh Sampath, also available for free, courtesy of Red Hat.

Interested in Knative and Red Hat OpenShift Serverless? Get in touch with us and let us help you in your FaaS journey!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.
