Project Syn Tech

Rewriting a Python Library in Rust

20. Mar 2024

Earlier this month I presented at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.

We tackled this project to address CI pipeline runs of more than 15 minutes, which we needed in order to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce CI pipeline runs to under 5 minutes.

The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.

I’m also happy to report that this talk was picked up by the popular newsletter “This Week in Rust” for its 538th edition! You can find the recording of the talk, courtesy of the Rust Zürich meetup group organizers, on YouTube.

Simon Gerber

Simon Gerber is a DevOps engineer at VSHN.

Events Tech

Watch the Recording of “How to Keep Container Operations Steady and Cost-Effective in 2024”

1. Feb 2024

The “How to Keep Container Operations Steady and Cost-Effective in 2024” event took place yesterday on LinkedIn Live; for those who couldn’t attend live, you can watch the recording here.

In a rapidly evolving tech landscape, staying ahead of the curve is crucial. This event equips you with the knowledge and tools needed to navigate container operations effectively while keeping costs in check.

In this session, we’ll explore best practices, industry insights, and practical tips to ensure your containerized applications run smoothly without breaking the bank.

We will cover:

  • Current Trends: Discover the latest trends shaping container operations in 2024.
  • Operational Stability: Learn strategies to keep your containerized applications running seamlessly.
  • Cost-Effective Practices: Explore tips to optimize costs without compromising performance.
  • Industry Insights: Gain valuable insights from real-world experiences and success stories.

Schedule:

17:30 – 17:35 – Welcome and Opening Remarks
17:35 – 17:50 – Navigating the Container Landscape: 2024 Trends & Insights
17:50 – 17:55 – VSHN’s Impact: A Spotlight on Our Market Presence
17:55 – 18:10 – Guide to Ensuring Steady Operations in Containerized Environments
18:10 – 18:25 – Optimizing Costs without Compromising Performance: A Practical Guide
18:25 – 18:30 – Taking Action: Implementing Best Practices for Container Operations
18:30 – Q&A

Don’t miss out on this opportunity to set a solid foundation for your containerized applications in 2024.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Tech

Composition Functions in Production

9. Jan 2024

(This post was originally published on the Crossplane blog.)

Crossplane has recently celebrated its fifth birthday, but at VSHN, we’ve been using it in production for almost three years now. In particular, it has become a crucial component of one of our most popular products. We’ve invested a lot of time and effort in Crossplane, to the point that we’ve developed (and open-sourced) our own custom modules for various technologies and cloud providers, such as Exoscale, cloudscale.ch, or MinIO.

In this blog post, we will provide an introduction to a relatively new feature of Crossplane called Composition Functions, and show how the VSHN team uses it in a very specific product: the VSHN Application Catalog, also known as VSHN AppCat.

Crossplane Compositions

To understand Composition Functions, we need to understand what standard Crossplane Compositions are in the first place. Compositions, available in Crossplane since version 0.10.0, can be understood as templates that can be applied to Kubernetes clusters to modify their configuration. What sets them apart from other template technologies (such as Kustomize, OpenShift Template objects, or Helm charts) is their capacity to perform complex transformations and patch fields on Kubernetes manifests, following more advanced rules and offering better reusability and maintainability. Crossplane Compositions are usually referred to as “Patch and Transform” compositions, or “PnT” for short.
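
As an illustration, here is a minimal sketch of what a “PnT” Composition looks like; the XBucket composite and the managed resource kind below are hypothetical placeholders, not objects from a real provider:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-pnt
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  resources:
  - name: bucket
    base:
      apiVersion: bucket.example.org/v1   # hypothetical managed resource
      kind: Bucket
      spec:
        forProvider:
          zone: ch-gva-2
    patches:
    # copy a value from the composite resource into the managed resource
    - type: FromCompositeFieldPath
      fromFieldPath: spec.parameters.zone
      toFieldPath: spec.forProvider.zone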

As powerful as standard Crossplane Compositions are, they have some limitations, which can be summarized in a very geeky yet technically appropriate phrase: they are not “Turing-complete”.

  • Compositions don’t support conditions, meaning that the transformations they provide are applied on an “all or nothing” basis.
  • They also don’t support loops, which means that you cannot apply transformations iteratively.
  • Finally, advanced operations are not supported either, like checking for statuses in other systems, or performing dynamic data lookups at runtime.

To address these limitations, Crossplane 1.11 introduced a new Alpha feature called “Composition Functions”. Note that, as of this writing, Composition Functions are in Beta in Crossplane 1.14.

Composition Functions

Composition functions complement and in some cases replace Crossplane “PnT” Compositions entirely. Most importantly, DevOps engineers can create Composition Functions using any programming language; this is because they run as standard OCI containers, following a specific set of interface requirements. The result of applying a Composition Function is a new composite resource applied to a Kubernetes cluster.

Let’s look at an elementary “Hello World” example of a Composition Function.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-function
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket

The example above shows a standard Crossplane Composition with a new field, “pipeline”, which specifies an array of functions referenced by name.

As stated previously, the function itself can be written in any programming language, like Go.

func (f *Function) RunFunction(_ context.Context, req *fnv1beta1.RunFunctionRequest) (*fnv1beta1.RunFunctionResponse, error) {
    rsp := response.To(req, response.DefaultTTL)
    response.Normal(rsp, "Hello world!")
    return rsp, nil
}

The example above, borrowed from the official documentation, does just one thing: it reads a request object, attaches a “Hello world!” message to the response, and returns it to the caller. Needless to say, this example is for illustration purposes only, lacking error checking, logging, security, and more, and should not be used in production. Developers use the Crossplane CLI to create, test, build, and push functions.

Here are a few things to keep in mind when working with Composition Functions:

  • They run in order, as specified in the “pipeline” array of the Composition object, from top to bottom.
  • The output of the previous Composition Function is used as input for the following one.
  • They can be combined with standard “PnT” compositions by using the function-patch-and-transform function, allowing you to reuse your previous investment in standard Crossplane compositions.
  • In the Alpha release, if you combined “PnT” compositions with Composition Functions, “PnT” compositions ran first, and the output of the last one was fed to the first function; since the latest release, this is no longer the case, and “PnT” compositions can now run at any step of the pipeline.
  • Composition Functions must be called using RunFunctionRequest objects, and return RunFunctionResponse objects.
  • In the Alpha release, these two objects were represented by a now deprecated “FunctionIO” structure in YAML format.
  • RunFunctionRequest and RunFunctionResponse objects contain a full and coherent “desired state” for your resources. This means that if an object is not explicitly specified in a request payload, it will be deleted. Developers must pass the full desired state of their resources at every invocation.

Practical Example: VSHN AppCat

Let’s look at a real-world use case for Crossplane Composition Functions: the VSHN Application Catalog, also known as AppCat. The AppCat is an application marketplace allowing DevOps engineers to self-provision different kinds of middleware products, such as databases, message queues, or object storage buckets, in various cloud providers. These products are managed by VSHN, which frees application developers from a non-negligible burden of maintenance and oversight.

Standard Crossplane “PnT” Compositions proved limited very early in the development of VSHN AppCat, so we started using Composition Functions as soon as they became available. They have allowed us to do the following:

  • Composition Functions enable complex tasks, involving the verification of current deployment values and making decisions before deploying services.
  • They can drive the deployment of services involving Helm charts, modifying values on-the-go as required by our customers, their selected cloud provider, and other parameters.
  • Conditionals allow us to script complex scenarios, involving various environmental decisions, and to reuse that knowledge.
  • Thanks to Composition Functions, the VSHN team can generalize many activities, like backup handling, automated maintenance, etc.

All things considered, it is difficult to overstate the many benefits that Composition Functions have brought to our workflow and to our VSHN AppCat product.

Learnings of the Alpha Version

We’ve learned a lot while experimenting with the Alpha version of Composition Functions, and we’ve documented our findings for everyone to learn from our mistakes.

  • Running Composition Functions in Red Hat OpenShift used to be impossible in Alpha because OpenShift uses crun, but this issue has now been solved in the Beta release.
  • In particular, when using the Alpha version of Composition Functions, we experienced slow execution speeds with crun, but this is no longer the case.
  • We learned the hard way that missing resources on function requests were actually deleted!

Our experience with Composition Functions led us to build our own function runner. This feature uses another capability of Crossplane, which allows functions to specify their runner in the Composition definition:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
[...]
  functions:
  - name: my-function
    type: Container
    container:
      image: my-function
      runner:
        endpoint: grpc-server:9547

Functions run directly on the gRPC server, which, for security reasons, must run as a sidecar to the Crossplane pod. Just like everything we do at VSHN, the Composition Function gRPC Server runner (as well as its associated webhook and all of its code) is open-source, and you can find it on our GitHub. As of the Composition Functions Beta, we replaced the custom gRPC logic with the function-sdk-go. To improve the developer experience, we have created a proxy and enabled the gRPC server to run locally. The proxy runs in kind and redirects to the local gRPC server. This enables us to debug the code and to test changes more efficiently.

Moving to Beta

We recently finished migrating our infrastructure to the most recent Beta version of Composition Functions, released in Crossplane 1.14, and we have been able to do that without incidents. This release included various bits and pieces such as Function Pipelines, an ad-hoc gRPC server to execute functions in memory, and a Function CRD to deploy them directly to clusters.

We are also migrating all of our standard “PnT” Crossplane Compositions to pure Composition Functions as we speak, thanks to the function-sdk-go project, which has proven very helpful, even if we are still missing typed objects. Managing the same objects with both “PnT” and Composition Functions increases complexity dramatically, as it can be difficult to determine where an actual change happens.

Conclusion

In this blog post, we have seen how Crossplane Composition Functions compare to standard “PnT” Crossplane compositions. We have provided a short example, highlighting their major characteristics and caveats, and we have outlined a real-world use case for them, specifically VSHN’s Application Catalog (or AppCat) product.

Crossplane Composition Functions provide an unprecedented level of flexibility and power to DevOps engineers. They enable the creation of complex transformations, with all the advantages of an Infrastructure as Code approach, and the flexibility of using the preferred programming language of each team.

Check out my talk at Control Plane Day with Crossplane, where I walk you through Composition Functions in Production in 15 minutes.

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, more than 15 of them with Internet technology. New technology has to be tried and written about.

APPUiO Cloud Tech

“Composition Functions in Production” by Tobias Brunner at the Control Plane Day with Crossplane

17. Oct 2023

VSHN has been using Crossplane’s Composition Functions in production since their release. In this talk, Tobias Brunner, CTO of VSHN AG, explains what Composition Functions are and how they are used to power crucial parts of the VSHN Application Catalog or AppCat. He also introduces VSHN’s custom open-source gRPC server which powers the execution of Composition Functions. Learn how to leverage Composition Functions to spice up your Compositions!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

APPUiO Cloud Tech

New OpenShift 4.13 Features for APPUiO Users

5. Sep 2023

We have just upgraded our APPUiO Cloud clusters from version 4.11 to version 4.13 of Red Hat OpenShift, and there are some interesting new features for our APPUiO Cloud and APPUiO Managed users we would like to share with you.

Kubernetes Beta APIs Removal

OpenShift 4.12 and 4.13 updated their Kubernetes versions to 1.25 and 1.26 respectively, which removed several long-deprecated Beta APIs. If you are using objects with the APIs below, please make sure to migrate your deployments accordingly; each entry shows the resource, the removed API version, and its replacement.

  • CronJob: batch/v1beta1 → batch/v1
  • EndpointSlice: discovery.k8s.io/v1beta1 → discovery.k8s.io/v1
  • Event: events.k8s.io/v1beta1 → events.k8s.io/v1
  • FlowSchema: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • HorizontalPodAutoscaler: autoscaling/v2beta1 and autoscaling/v2beta2 → autoscaling/v2
  • PodDisruptionBudget: policy/v1beta1 → policy/v1
  • PodSecurityPolicy: policy/v1beta1 → Pod Security Admission
  • PriorityLevelConfiguration: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • RuntimeClass: node.k8s.io/v1beta1 → node.k8s.io/v1
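
In most cases, migrating is simply a matter of bumping the apiVersion in your manifests and adjusting any renamed fields. A minimal sketch for a CronJob, with hypothetical names:

apiVersion: batch/v1   # was: batch/v1beta1, removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.example.com/backup:latest   # hypothetical image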

As a reminder, the next minor revision of Red Hat OpenShift will update Kubernetes to version 1.27.

Web Console

APPUiO users will discover a neat new feature on the web console: resource quota alerts are now displayed on the Topology screen whenever a resource reaches its usage limits. The alert label link takes you directly to the corresponding ResourceQuotas list page.

Have any questions or comments about OpenShift 4.13? Contact us!

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Tech

VSHN’s Response to Zenbleed CVE-2023-20593

25. Jul 2023

Yesterday evening, on Monday, July 24th, 2023, at around 21:15 CEST / 12:15 PDT, our security team received a notification about a critical security vulnerability called “Zenbleed”, potentially affecting the cloud providers on which our customers’ systems run.

This blog post provides details about Zenbleed and the steps taken to mitigate its risks.

What is Zenbleed?

Zenbleed, also known as CVE-2023-20593, is a speculative execution bug discovered by Google, related to but somewhat different from side channel bugs like Meltdown or Spectre. It is a vulnerability affecting AMD processors based on the Zen2 microarchitecture, ranging from AMD’s EPYC datacenter processors to the Ryzen 3000 CPUs used in desktop & laptop computers. This flaw can be exploited to steal sensitive data stored in the CPU, including encryption keys and login credentials.

VSHN’s Response

VSHN immediately set up a task force to discuss this issue, bringing the team of one of our main cloud providers (cloudscale.ch) into a call to determine courses of action; among the options contemplated were isolating VSHN customers on dedicated nodes and patching the affected systems directly.

At around 22:00 CEST, the cloud provider decided after a fruitful discussion with the task force that the best approach was to implement a microcode update. Since Zenbleed is caused by a bug in CPU hardware, the only possible direct fix (apart from the replacement of the CPU) consists of updating the CPU microcode. Such updates can be applied by updating the BIOS on affected systems, or applying an operating system kernel update, like the recently released new Linux kernel version that addresses this vulnerability.

Zenbleed isn’t limited to just one cloud provider, and may affect customers operating their own infrastructure as well. We acknowledged that addressing this vulnerability is primarily a responsibility of the cloud providers, as VSHN doesn’t own any infrastructure that could directly be affected.

The VSHN task force then handed monitoring over to VSHN Canada, who tested the update as it rolled out to production systems and stayed in close contact to ensure there were no QoS degradations after the microcode update.

Aftermath

cloudscale.ch successfully finished its work at 01:34 CEST / 16:34 PDT. All VSHN systems running on that provider have been patched accordingly, and the tests carried out show that this specific vulnerability has been fixed as required. VSHN Canada confirmed that all systems were running without any problems.

We will continue to monitor this situation and to inform our customers accordingly. All impacted customers will be contacted by VSHN. Please do not hesitate to contact us for more information.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

General Tech

Get the “DevOps in Switzerland 2023” Report

12. May 2023

We are thrilled to announce the fourth edition of our “DevOps in Switzerland” report!

From February to April 2023 we conducted a study to learn how Swiss companies implement and apply DevOps principles.

We compiled the results into a PDF file, and just like in the previous edition, we provided a short summary of our findings in the first pages.

You can get the report here. Enjoy reading and we look forward to your feedback!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Canada Tech

VSHN Canada Hackday: A Tale of Tech Triumphs and Tasty Treats

24. Mar 2023

The VSHN Canada Hackday turned into an epic two-day adventure, where excitement and productivity went hand in hand. Mätthu, Bigli, and Jay, our stellar team members, joined forces to level up VSHN as a company and expand their skill sets. In this blog post, we’re ecstatic to share the highlights and unforgettable moments from our very own Hackday event.

🏆 Notable Achievements

1️⃣ Revamping Backoffice Tools for VSHN Canada

Mätthu dove deep into several pressing matters, including:

  • Time tracking software that feels like a relic from the 2000s. With the Odoo 16 Project underway, we explored its impressive features and found a sleek solution for HR tasks like time and holiday tracking and expense management. Now we just need to integrate it as a service for VSHN Canada.
  • Aligning the working environments of VSHN Switzerland and Canada. Although not identical, we documented the similarities and differences in our handbook to provide a seamless experience.
  • Tidying up our document storage in Vancouver. Previously scattered across Google Cloud and Nextcloud, a cleanup session finally brought order to the chaos.

📖 Documentation available in the Handbook.

2️⃣ Mastering SKS GitLab Runners

Bigli and Jay teamed up to craft fully managed SKS GitLab runners using Project Syn, aiming to automate GitLab CI processes and eliminate manual installation and updates. This collaboration also served as an invaluable learning experience for Jay, who delved into Project Syn’s architecture and VSHN’s inner workings. Hackday milestones included:

  • Synthesizing the GitLab-runner cluster
  • Updating the cluster to the latest supported version
  • Scheduling cluster maintenance during maintenance windows
  • Developing a component for the GitLab-runner
  • Implementing proper monitoring, time permitting

📖 Documentation available on our wiki.

🍻 Festive Fun

On Hackday’s opening day, we treated ourselves to a team outing at “Batch,” our go-to local haunt nestled in Vancouver’s scenic harbor. Over unique beers and animated chatter, we toasted to our first-ever Canadian Hackday.

🎉 Wrapping Up

VSHN Canada’s Hackday was an exhilarating mix of productivity, learning, and amusement. Our team banded together to confront challenges, develop professionally, and forge lasting memories. We can hardly wait for future Hackday events and the continued growth of VSHN Canada and VSHN.

Jay Sim

Jay Sim is a DevOps engineer at VSHN Canada.

Tech

Stay Ahead of the Game with Kubernetes

13. Jan 2023

Kubernetes is a powerful platform for deploying and managing containerized applications at scale, and it has become increasingly popular in Switzerland in recent years.

One way to approach it is outsourcing. This can be a strategic and cost-effective option for organizations that do not have the in-house DevOps expertise, know-how, or resources to manage their infrastructure and application operations efficiently.

Not every tech company is in the business of building platforms and operating Kubernetes clusters. Thus, by teaming up with an experienced partner, companies can tap into a wealth of knowledge and expertise to help them run their applications.

Some companies adopt Kubernetes and look to leverage its capabilities themselves. It’s essential to consider time, effort, and possible implications while utilizing the latest developments and continually adding value to the core business.

In all cases, it will be helpful to align with fundamentals. For this reason, I have compiled a quick guide to Kubernetes in 2023 and best practices in Switzerland.

  1. Understand the basics: Before diving into Kubernetes, have a solid understanding of the underlying reasoning and concepts. This could include cloud infrastructure, networking, containers, how they interact with each other, and how they can be managed with Kubernetes.
  2. Plan your deployment carefully: When deploying applications with Kubernetes, you must plan thoroughly and consider your workloads’ specific needs and requirements. This includes but is not limited to resource requirements, network connectivity, scalability, latency, and security considerations.
  3. Use appropriate resource limits: One of the critical benefits of Kubernetes is its ability to manage resources dynamically based on the needs of your applications. To take advantage of this, set appropriate resource requests and limits for your applications (see the sketch after this list). This will help ensure that your applications have the resources they need to run effectively, while preventing them from consuming too many resources and impacting other applications.
  4. Monitor your application: It’s essential to monitor your applications and the underlying Kubernetes cluster to ensure they are running smoothly and to identify any issues that may arise. You want to analyze the alerts and react accordingly. You can use several tools and practices to monitor your applications, including log analysis, monitoring with tools like Prometheus and Grafana, and alerting systems.
  5. Use appropriate networking configurations: Networking is critical to any Kubernetes deployment, and choosing the proper network configuration is essential: think of load balancing, service discovery, and network segmentation.
  6. Secure your application: Security is a top concern for many companies and organizations in Switzerland, and you cannot proceed without ensuring that your Kubernetes deployment is secure. At this stage, your team should be implementing network segmentation, using secure container runtime environments, and rolling out advanced authentication and authorization systems.
  7. Consider using a managed Kubernetes service: For companies without the resources or DevOps expertise to manage their own clusters, managed Kubernetes services can be a business-saving solution. With managed services, you get a production-ready cluster, i.e., a fully-managed Kubernetes environment, allowing teams and software engineers to focus on developing new features and deploying their applications rather than managing the underlying infrastructure.
  8. Stay up-to-date with the latest developments: The Kubernetes ecosystem is constantly evolving, and it’s better to stay up-to-date with the latest developments and best practices. This may involve subscribing to newsletters like VSHN, VSHN.timer, or Digests, attending conferences and CNCF meetups, and following key players in the Kubernetes community.
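
To illustrate point 3 above, here is a minimal sketch of resource requests and limits on a container; the names and values are illustrative placeholders, not recommendations:

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    resources:
      requests:          # guaranteed baseline, used by the scheduler
        cpu: 100m
        memory: 128Mi
      limits:            # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi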

By following best practices, IT leaders, stakeholders, and decision-makers can ensure that they use Kubernetes constructively and get the most out of the platform technology.

Oksana Horobinska

Oksana is a Business Development Specialist at VSHN.

APPUiO Cloud Tech

VSHN HackDay – Tailscale on APPUiO Cloud

21. Oct 2022

As part of the fourth VSHN HackDay taking place yesterday and today (October 20th and 21st, 2022), Simon Gerber and I (Tobias Brunner) worked on the idea to get Tailscale VPN running on APPUiO Cloud.

tailscale logo

Tailscale is a VPN service that makes the devices and applications you own accessible anywhere in the world, securely and effortlessly. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other.

The use cases we wanted to make possible are:

  • Access Kubernetes services easily from your laptop without the hassle of “[kubectl|oc] port-forward”. Engineers in charge of development or debugging need to securely access services running on APPUiO Cloud but not exposed to the Internet. That’s the job of a VPN, and Tailscale makes this scenario very easy.
  • Connect pods running on APPUiO Cloud to services that are not directly accessible, for example, behind a firewall or a NAT. Routing outbound connections from a Pod through a VPN on APPUiO Cloud is more complex because of the restricted multi-tenant environment.

We took the challenge and found a solution for both use cases. The result is an OpenShift template on APPUiO Cloud that deploys a pre-configured Tailscale pod and all needed settings into your namespace. You only need a Tailscale account and a Tailscale authorization key. Check the APPUiO Cloud documentation to learn how to use this feature.

We developed two new utilities to make it easier to work with Tailscale on APPUiO Cloud (and on any other Kubernetes cluster):

  • tailscale-service-observer: A tool that lists Kubernetes services and posts updates to the Tailscale client HTTP API to expose Kubernetes services as routes in the VPN dynamically.
  • TCP over SOCKS5: A middleman to transport TCP packets over the Tailscale SOCKS5 proxy.

Let us know your use cases for Tailscale on APPUiO Cloud via our product board! Are you already a Tailscale user? Do you want to see deeper integration into APPUiO Cloud?

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, more than 15 of them with Internet technology. New technology has to be tried and written about.

APPUiO Cloud Tech

Most Interesting New Features of OpenShift 4.11

13. Oct 2022

Red Hat OpenShift 4.11 brings a substantial amount of new features. We’ve teased a few of them in our latest VSHN.timer, but in this article, we’re going to dive deeper into those that have the highest impact on our work and on our customers.

Support for CSI generic ephemeral volumes

Container Storage Interface (CSI) generic ephemeral volumes are a cool new feature. We foresee two important use cases for them (a minimal manifest sketch follows the list):

  • When users need an ephemeral volume that exceeds what the node’s file system provides;
  • When users need an ephemeral volume with prepopulated data: this could be done, for example, by creating the volume from a snapshot.
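
The sketch below shows a Pod requesting a CSI generic ephemeral volume; the image and storage class names are hypothetical placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: scratch-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      # the PVC is created with the Pod and deleted together with it
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: my-csi-storageclass   # hypothetical CSI class
          resources:
            requests:
              storage: 10Gi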

Route Subdomains

The Route API now provides subdomain support, something that was not possible before 4.11:

You can now specify the spec.subdomain field and omit the spec.host field of a route. The router deployment that exposes the route will use the spec.subdomain value to determine the host name.
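
A minimal sketch of a Route using the new field, with hypothetical names:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  subdomain: my-app   # spec.host is omitted; the router derives the host name
  to:
    kind: Service
    name: my-app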

Pod Security Admissions

Pod security admission now runs globally with restricted audit logging and API warnings. This means that, while everything should still run as it did before, you will most likely encounter warnings like these if you relied on security contexts being set by OpenShift’s Security Context Constraints:

Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false…

To avoid these warnings, users now need to set security contexts explicitly in their manifests.
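
For example, a Pod that satisfies the “restricted” profile would set something like the following; the image is a hypothetical placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-example
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]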

Developer Features

The Developer Perspective provides improved integration with GitHub Actions, allowing developers to trigger pipelines and run tasks following Git events such as pushes or tags. And not only that, but the OpenShift console now has a dark mode, too.

CLI Features

The following OpenShift CLI (oc) commands and flags for requesting tokens are deprecated:

  • oc serviceaccounts create-kubeconfig
  • oc serviceaccounts get-token
  • oc serviceaccounts new-token
  • The --service-account/-z flag for the oc registry login command

Moreover, the oc create token command generates tokens with a limited lifetime, which can be controlled with the --duration command line argument. The API server can return a token with a lifetime that is shorter or longer than the requested lifetime. The command apparently generates tokens with a lifetime of one hour by default. If users need a token that doesn’t expire (for example, for a CI/CD pipeline), they should create a ServiceAccount API token secret instead.
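
Such a long-lived token can be obtained by creating a Secret bound to the service account; a minimal sketch, with hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: ci-pipeline-token
  annotations:
    kubernetes.io/service-account.name: ci-pipeline   # the target ServiceAccount
type: kubernetes.io/service-account-token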

OpenShift 4.11 on APPUiO Cloud

At the time of this writing, Red Hat has not yet decided to promote 4.11 as an upgrade target, and for that reason, we have not upgraded APPUiO Cloud clusters yet. As soon as Red Hat enables this, we will update our APPUiO Cloud zones accordingly.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Project Syn Tech

Keeping Things Up to Date with Renovate

28. Jun 2022

Our customers trust us with their most precious resource: their information systems. Our job is to keep the underlying systems running, updated, and most importantly, secure.

Project Syn with its Commodore Components is one of the primary weapons in our arsenal to configure and thus protect those systems. Thanks to its GitOps approach, we can ensure that all Kubernetes clusters are always running the latest and (hopefully) most secure version possible.

But just like any other software package, Project Syn brings its complexity: we must keep it safe and sound, which means watching over its container images, its Helm charts, and all of the Commodore Components we use every day.

As you can imagine, juggling so many different software packages is a considerable task; now, think about all of their upstream dependencies (most of them are container images and Helm charts, but Go and Python are also part of the mix). The complexity of the task increases exponentially.

How do we cope with this? Well, as usual, by standing on the shoulders of giants. In this case, Renovate.

Renovate was created to manage exactly this complexity, whether for container images, Helm charts, or upstream dependencies. But understandably enough, Renovate per se does not know anything about Commodore Components (at least not yet!), and in particular, it does not know about the Project Syn configuration hierarchy and how to find dependencies within that hierarchy.

So, what’s an Open Source developer to do? We forked Renovate, of course, and adapted it to our needs. How?

  1. We added the Project Syn configuration hierarchy as a new Manager.
  2. We reused the existing datasource to detect new versions of our Commodore Components.

Then we configured our own Renovate fork on all the repositories holding our source code and started getting notified via pull requests whenever there was a new dependency version. Voilà!

With this approach, we have been able to automate much work and avoid using outdated software by automatically being notified of new versions. No more forgotten updates!

We also decided to use “golden files” to test our Commodore Components; this, in turn, meant that we could not merge PRs created by Renovate in case of failure. For those cases, we also taught Renovate how to update those golden files if needed.

The pull request “Update dependency ghcr.io/appuio/control-api to v0.8.1 – autoclosed #29” is a live example of this mechanism in action, and you’re most welcome to check it out.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

General Tech

Get the “DevOps in Switzerland 2022” Report

25. May 2022

We are thrilled to announce the third edition of our “DevOps in Switzerland” report!

From January to March 2022 we conducted a study to learn how Swiss companies implement and apply DevOps principles.

We compiled the results into a PDF file (only available in English), and just like in the previous edition, we provided a short summary of our findings in the first pages.

You can get the report on our website. Enjoy reading and we look forward to your feedback!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Tech

How to Restrict Container Registries per Namespace with Kyverno

24. May 2022

We have recently received a request from a customer, asking us to restrict the container registries that could be used to deploy images from in their OpenShift 4 cluster.

We could have added such a configuration directly at the node level, as explained in Red Hat’s documentation; it’s indeed possible to whitelist registries at the repository and tag level, but that would have forced us to keep those whitelists in sync with the registries our Project Syn components regularly use.

We have instead chosen to use Kyverno for this task: it allows us to enforce the limitations on a per-namespace level, with much more flexibility and maintainability.

This is a ClusterPolicy object for Kyverno, adapted from the solution we provided to our customer, showing how we can scope the restriction to specific namespaces, so that container images can be pulled only from whitelisted registries.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-registries
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Restrict image pulling only to whitelisted registries
spec:
  validationFailureAction: enforce
  background: true
  rules:
  - name: validate-registries
    match:
      all:
      - resources:
          kinds:
          - Pod
          namespaces:
          - "namespace-wildcard-*"
    validate:
      message: "Image registry not whitelisted"
      pattern:
        spec:
          containers:
          - image: "registry.example.com/* | hub.docker.com/some-username/*"

Andreas Tellenbach

Andreas Tellenbach is a DevOps engineer at VSHN.

Tech

Agent-less GitLab integration with OpenShift

20. Apr 2022

As you know (and if you didn’t, now you do), GitLab deprecated the certificate-based integration with Kubernetes in version 14.5, and it is expected that version 15 will disable it completely.

The official replacement to the (now legacy) certificate-based integration mechanism is the GitLab Agent, to be installed in your Kubernetes cluster, and providing a tighter integration between our two beloved platforms.

Well, hold on; things aren’t that easy. For example in our case, if we wanted to offer the GitLab Agent to our APPUiO Cloud users, we would run into two issues:

  • On one side, installing the GitLab Agent is more complicated and expensive, because it would run as another pod in the same namespace, consuming resources.
  • And on the other, we cannot install it cluster-wide, because of the admin permissions we have configured in this multi-tenant shared platform. Users would have to manage each and every GitLab Agent on their own, possibly having multiple agents deployed in several namespaces and GitLab repositories.

So, what’s a DevOps engineer to do? Well, here’s a simple, very simple solution; so simple that you might have never thought of it before: create your own KUBECONFIG variable in GitLab with a dedicated service account!

Here’s how it works using the OpenShift oc tool in your own system:

Create a service account in your OpenShift project:

oc create serviceaccount gitlab-ci

Add an elevated role to this service account so it can manage your deployments:

oc policy add-role-to-user admin -z gitlab-ci --rolebinding-name gitlab-ci

Create a local KUBECONFIG variable and login to your OpenShift cluster using the gitlab-ci service account:

TOKEN=$(oc sa get-token gitlab-ci)
export KUBECONFIG=gitlab-ci.kubeconfig
oc login --server=$OPENSHIFT_API_URL --token=$TOKEN
unset KUBECONFIG

You should now have a file named gitlab-ci.kubeconfig in your current working directory. Copy its contents and create a variable named KUBECONFIG in the GitLab settings with the value of the file (under “Settings” > “CI/CD” > “Variables” > “Expand” > “Add variable”). Remember to set the “environment” scope for the variable, and to disable the old Kubernetes integration, as its KUBECONFIG variable might collide with this new one.
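
A deployment job using this variable could then look roughly like the sketch below. The image and paths are assumptions, and we assume KUBECONFIG was created as a “File” type variable, so that the environment variable holds a path that oc picks up automatically:

deploy:
  stage: deploy
  image: quay.io/openshift/origin-cli:latest   # assumption: any image shipping oc works
  environment: production
  script:
    # oc reads the cluster credentials from the KUBECONFIG file variable
    - oc apply -f deploy/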

Tada! There are a few advantages to this approach:

  • It is certainly faster and simpler to set up for simple push-based deployments than using the GitLab Agent.
  • It is also easier to get rid of deprecated features without having to change the pipeline or migrate to the GitLab Agent.

But there are a few drawbacks as well:

  • You don’t get all the bells and whistles. It is reasonable to think that at some point the GitLab Agent will offer advanced features, such as access to pod logs from GitLab, monitoring alerts directly from the GitLab user interface, and many other things the old Kubernetes certificate-based integration could do. This approach does not provide anything like this.
  • The cluster’s API endpoint has to be publicly accessible, not behind any firewall or VPN; conveniently enough, the GitLab Agent solves exactly this problem.

If you are interested in GitLab and Kubernetes, join us next week in our GitLab Switzerland Meetup for more information about the GitLab Agent for Kubernetes, and afterwards for a nice apéro with like-minded GitLab enthusiasts!

Oh, and a final tip: if you find yourself having to log in to the OpenShift integrated registry but you don’t have yq installed in the job’s image, you can use sed to extract the token from the $KUBECONFIG file:

sed -n 's/^\s*token:\s*\(.*\)/\1/ p' "${KUBECONFIG}" | docker login -u gitlab-ci --password-stdin "${OPENSHIFT_REGISTRY}"

Christian Cremer

Christian Cremer is a DevOps engineer at VSHN.

Tech

Serverless on Kubernetes: Knative

9. Mar 2022

Back in 2019 we published a review of the most relevant serverless frameworks available for Kubernetes. That article became one of the most visited in our blog in the past two years, so we decided to return to this subject and provide an update for our readers.

TL;DR: Serverless in Kubernetes in 2022 means, to a large extent, Knative.

What’s in a Word

The “Serverless” word is polarizing.

Robert Scoble, one of the first tech influencers, uttered it for the first time fourteen years ago, as James Potter reported recently. In those times, “Serverless” just meant “not having servers and being an AWS EC2 customer”. Because, yes, companies used to have their own physical servers back then. Amazing, isn’t it?

Fast forward to 2022, and the CNCF Serverless Landscape has grown to such an extent that it can be very hard to figure out what “serverless” truly means.

Even though for some it just represents the eternal return of 1970s style batch computing, in the past five years the “Serverless” word has taken a different and very specific meaning.

The winning definition caters for the complete abstraction of the infrastructure required to run individual pieces of software, at scale, on a cloud infrastructure provider.

Or, in less buzzword-y terms, just upload your code, and let the platform figure out how to run it for you: the famous “FaaS”, also known as Function as a Service.

The IaaS Front

The three major Infrastructure as a Service providers in the world offer their own, more-or-less-proprietary version of the Serverless paradigm: AWS Lambda, Azure Functions, and Google Cloud Run (which complemented the previous Google Cloud Functions service). These are three different approaches to the subject of FaaS, each with its advantages and caveats.

Some companies, like A Cloud Guru, have successfully embraced the Serverless architecture from the very start (in this case, based on AWS Lambda), creating cost-effective solutions with incredible scalability.

But one of the aforementioned caveats, and not a small one for that matter, is platform lock-in. Portability has always been a major concern for enterprise computing: if building apps on AWS Lambda is an interesting prospect, could we move them to a different provider later on?

Well, we now have an answer to that question, thanks to our good old friend: the container.

The Debate Is Over

Almost exactly three years ago, Anthony Skipper wrote:

We will say it again… packaging code into containers should be considered a FaaS anti-pattern!

Containers or not? This was still a big debate at the time of our original article in 2019.

Some frameworks like Fission and IaaS services such as AWS Lambda and Google Cloud Functions did not require developers to package their apps as containers; just upload your code and watch it run. On the other hand, OpenFaaS and Knative-based offerings did require containers. Who would win this fight?

The world of Serverless solutions in 2022 has decided that wrapping functions in containers is the way to go. Even AWS Lambda started offering this option in December 2020. This is a huge move, allowing enterprises to run their code in whichever infrastructure they would like to.

In retrospect, the market has chosen wisely. Containers are now a common standard, allowing the same code to run unchanged, from a Raspberry Pi to your laptop to an IBM mainframe. It is a natural choice, and it was only a matter of time before this happened.

Even better, with increased industry experience, container images got smaller and smaller, thanks to Alpine-based, scratch-based, and distroless-based images. Being lightweight allows containers to start and stop almost instantly, and makes scaling applications faster and easier than ever.

And this choice turned out to benefit one specific framework among all: Knative.

The Age of Knative

In the Kubernetes galaxy, Knative has slowly but steadily made its mark as the core infrastructure for Kubernetes serverless workloads.

In 2019, our article compared six different mechanisms to run serverless payloads on Kubernetes:

  1. OpenFaaS
  2. Fn Project
  3. Fission
  4. OpenWhisk
  5. Kubeless
  6. TriggerMesh

Of those, Fn Project and Kubeless have been simply abandoned. Other frameworks suffered the same fate: Riff has disappeared, just like Gestalt, including its parent company Galactic Fog. IronFunctions moved away from Kubernetes into its own PaaS product. Funktion has been sandboxed and abandoned; Leveros is abandoned too; BlueNimble does not show much activity.

On the other hand, new players have appeared in the serverless market: Rainbond, for example; or Nuclio, targeting the scientific computation market.

But many new contenders are based on Knative: apart from TriggerMesh, which we mentioned in 2019 already, we have now Kyma, Knix, and Red Hat’s OpenShift 4 serverless, all powered by Knative.

The interest in Knative is steadily growing these days: CERN uses it. IBM is talking about it. The Serverless Framework has a provider for it. Even Google Cloud Run is based on it! Which shouldn’t be surprising, knowing that Knative was originally created by Google, just like Kubernetes.

And now Knative has just been accepted as a CNCF incubating project!

Even though Knative is not exactly a FaaS per se, it deserves the top spot in our review of 2022 FaaS-on-K8s technologies, being the platform upon which other serverless services are built, receiving huge support from the major names of the cloud-native industry.

Getting Started with Knative

Want to see Knative in action? Getting started with Knative on your laptop now is easier than ever.

  1. Install kind.
  2. Run the following command on your terminal:

$ curl -sL install.konk.dev | bash

To work with Knative objects on your cluster, install the kn command-line tool. Once you have launched your new Knative-powered Kind cluster, create a file called knative-service.yaml with the contents below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"

And then just apply it: kubectl apply -f knative-service.yaml.

The kn service list command should now display your “hello” service, which should become available after a few seconds. Once it’s ready, you can invoke it simply with curl:

$ curl http://hello.default.127.0.0.1.sslip.io

(If you prefer to use Minikube, you can follow this tutorial instead.)

Thanks to Knative, developers can roll out new versions of their services (called “revisions” in Knative terminology) while the old ones are still running, and even distribute traffic among them. This can be very useful in A/B testing sessions, for example. Knative services can be triggered by a large array of events, with great flexibility.
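
For instance, a Knative Service manifest can split traffic between a pinned older revision and a new one; here is a hedged sketch building on the “hello” service above:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world-v2        # the new revision
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
  traffic:
    - revisionName: hello-world   # the revision created earlier
      percent: 80
    - revisionName: hello-world-v2
      percent: 20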

A full introduction to Knative is outside of the scope of this review, so here are some resources we recommend to learn everything about Knative serving and eventing:

  • The excellent and funny “Knative in Action” (2021) book by Jacques Chester, available for free courtesy of VMware.
  • A free, full introduction to Knative (July 2021) by Sebastien Goasguen, the founder of TriggerMesh; a video and its related source code are provided as well.
  • And to top it off, the “Knative Cookbook” (April 2020) by Burr Sutter and Kamesh Sampath, also available for free, courtesy of Red Hat.

Interested in Knative and Red Hat OpenShift Serverless? Get in touch with us and let us help you in your FaaS journey!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Tech

Changes in OpenShift 4.9

23. Nov 2021

As part of our ongoing improvement process, we evaluate the requirements and new features of each version of Red Hat OpenShift, not only for our customers, but also for our internal use. Version 4.9 of Red Hat’s flagship container platform was announced on October 18th, 2021, and it included some very important changes, some of which are potentially breaking ones.

In this article, targeted towards DevOps engineers and maintenance crews, I will provide a short overview of the most important points to take care of before upgrading to this new version.

Kubernetes 1.22

The most important change in OpenShift 4.9 is the update to Kubernetes 1.22. This release of Kubernetes has completely removed APIs marked as v1beta1. The complete list is available on the Red Hat documentation website, but suffice it to say that common objects such as Role or RoleBinding (rbac.authorization.k8s.io/v1beta1) and even Ingress (networking.k8s.io/v1beta1) are no more.

This is a major change: it needs to be communicated to all users of your clusters before an upgrade takes place, and all manifest files using those resources should be updated accordingly. Red Hat’s Knowledge Base includes a special article explaining all the steps required for upgrading to OpenShift 4.9.

And of course, if you need any help regarding the upgrade process towards OpenShift 4.9, just contact us, we will be glad to help you and your teams.

Mutual TLS Authentication

Mutual TLS is a strong way to secure an application running in the cloud, allowing a server application to authenticate any client connecting to it. It comes with some complexity to set up and manage, as it requires a certificate authority. But thanks to its inclusion as a feature in OpenShift 4.9, its usage is much simpler now.
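
Configuration happens on the IngressController; here is a minimal sketch, assuming a ConfigMap with your CA bundle already exists in the openshift-config namespace (the ConfigMap name is hypothetical):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  clientTLS:
    # reject clients that cannot present a certificate signed by the CA below
    clientCertificatePolicy: Required
    clientCA:
      name: router-ca-certs   # hypothetical ConfigMap with the CA bundle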

Please check the release notes section related to Mutual TLS for more information.

Registry Multiple Logins

In previous versions of OpenShift, you could only list one repository from a given registry per project. OpenShift 4.9 supports multiple logins to the same registry, which allows pods to pull images from specific repositories in the same registry, each with different credentials. You can even define a registry with a specific namespace. The documentation contains examples of manifests using this feature.

Changes to lastTriggeredImageID Field Update

And finally, here’s a change that can cause unforeseen headaches for your teams: OpenShift 4.9 no longer updates the buildConfig.spec.triggers[].imageChange.lastTriggeredImageID field when the ImageStreamTag changes and references a new image. This subtle change in behavior is easy to overlook; if your team depends on this feature, watch out for trouble.

Learn More about OpenShift 4.9

If you’re interested in knowing what else changed in OpenShift 4.9, here are some selected resources published by Red Hat to help you:

Christian Häusler

Christian Häusler is a Product Owner at VSHN.
