APPUiO Cloud Tech

New OpenShift 4.13 Features for APPUiO Users

5. Sep 2023

We have just upgraded our APPUiO Cloud clusters from version 4.11 to version 4.13 of Red Hat OpenShift, and there are some interesting new features for our APPUiO Cloud and APPUiO Managed users we would like to share with you.

Kubernetes Beta APIs Removal

OpenShift 4.12 and 4.13 updated their Kubernetes versions to 1.25 and 1.26 respectively, which together removed a number of beta APIs. If you are using objects with the API versions listed below, please make sure to migrate your deployments accordingly.

  • CronJob: batch/v1beta1 → batch/v1
  • EndpointSlice: discovery.k8s.io/v1beta1 → discovery.k8s.io/v1
  • Event: events.k8s.io/v1beta1 → events.k8s.io/v1
  • FlowSchema: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • HorizontalPodAutoscaler: autoscaling/v2beta1 and autoscaling/v2beta2 → autoscaling/v2
  • PodDisruptionBudget: policy/v1beta1 → policy/v1
  • PodSecurityPolicy: policy/v1beta1 → Pod Security Admission
  • PriorityLevelConfiguration: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • RuntimeClass: node.k8s.io/v1beta1 → node.k8s.io/v1
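
In most cases, migrating is a matter of bumping the apiVersion field and renaming any fields that changed between the beta and stable schemas. As a minimal sketch, here is a HorizontalPodAutoscaler already on autoscaling/v2; the Deployment name my-app is a placeholder:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  metrics:
  # autoscaling/v2 uses per-metric target blocks instead of the v2beta1 targetAverageUtilization field
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80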

As a reminder, the next minor revision of Red Hat OpenShift will update Kubernetes to version 1.27.

Web Console

APPUiO users will discover a neat new feature in the web console: resource quota alerts are now displayed on the Topology screen whenever a resource reaches its usage limits. The alert label link takes you directly to the corresponding ResourceQuotas list page.

Have any questions or comments about OpenShift 4.13? Contact us!

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

APPUiO Cloud

OpenShift 4.11 and User Workload Monitoring

20. Dec 2022

APPUiO Cloud users are now able to monitor their applications on their own!

APPUiO Cloud has been upgraded to OpenShift 4.11 and thanks to User Workload Monitoring, users are now able to monitor their applications on their own and get notified when something needs attention.

APPUiO Cloud now runs on OpenShift version 4.11. Both zones – cloudscale.ch LPG 2 and Exoscale CH-GVA-2 0 – have been updated in the past two weeks.

Users might notice some warnings when deploying things to APPUiO Cloud or when updating existing workloads. These warnings come from Pod Security Admission. The Kubernetes community is changing how pod security is handled, and Pod Security Admission was introduced with Kubernetes 1.24, which is part of OpenShift 4.11. You will find more on the subject at our Pod Security Admissions documentation page.

As far as we understand, future versions of Kubernetes will tighten this further, and those warnings will likely become errors. We therefore suggest looking into them sooner rather than later.

…and now that the administrative details have been taken care of, let us continue with the exciting stuff:

Thanks to functionality introduced in OpenShift 4.11, we are now able to offer you User Workload Monitoring. This feature allows APPUiO Cloud users to monitor their applications on their own.

Users can now collect metrics of their applications and write alerts based on those metrics. A how-to can be found at Monitor Application With Prometheus. Additionally, users can now access application-relevant metrics collected by the cluster monitoring, which so far were not accessible to end users. Monitor PVC Disk Usage explains how this can be used to monitor the disk usage of persistent volumes; metrics on memory, CPU usage, and much more are available as well.
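
To give you an idea of what this looks like in practice, here is a minimal sketch; the label names, port name, and job name are placeholders, and the Monitor Application With Prometheus how-to has the authoritative version. A ServiceMonitor scrapes your application, and a PrometheusRule defines an alert on the collected metrics:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app      # must match the labels on your Service
  endpoints:
  - port: metrics      # name of the Service port exposing /metrics
    interval: 30s
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-alerts
spec:
  groups:
  - name: my-app
    rules:
    - alert: MyAppDown
      expr: up{job="my-app"} == 0   # fires when no scrape target is up
      for: 5m
      labels:
        severity: warning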

Users who are interested in nice graphs can install their own Grafana instance and build informative, nice-looking dashboards depicting the metrics of their applications. See the Custom Grafana documentation for details on how to achieve this.

Others do not want to constantly look at dashboards to see whether something is broken, choosing instead to get notified when that is the case. This is also possible: you can not only write your own alerts, but also define individual alert routes and receive email or chat messages when something needs attention. Configure Alert Receivers teaches how to do this.
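
A sketch of such an alert route, assuming an email receiver; the names and address are placeholders, and Configure Alert Receivers documents the exact fields:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: my-app-routing
spec:
  route:
    receiver: team-email       # send alerts from this namespace to the email receiver
  receivers:
  - name: team-email
    emailConfigs:
    - to: team@example.com     # placeholder address
      sendResolved: true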

User Workload Monitoring is a long-awaited feature, and we hope that it will make an impact for many APPUiO Cloud users.

What’s next?

We are currently working towards user self-registration for APPUiO Cloud. This will allow new users to register for APPUiO Cloud all by themselves, without having to talk to one of our sales representatives. A milestone within that initiative will be user invitations.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

APPUiO Cloud

Choose Between Performance or Lower Cost with Node Classes on APPUiO Cloud

1. Dec 2022

We built APPUiO Cloud to support performance-sensitive production workloads. But that performance also came with a price tag, one that users had to pay even for non-production workloads (like your development environments) or non-time-critical workloads (like your background cron jobs).

We have heard this concern from our customers, and we are pleased to introduce today the concept of Node Classes. They allow you to schedule workloads on nodes with different specifications and costs. See the APPUiO Cloud documentation for more details about node classes.

On the cloudscale.ch – LPG 2 zone, you can now choose between “Flex” and “Plus” classes. “Plus” nodes have the same performance-optimized characteristics that all nodes on that cluster have had so far. “Flex” nodes are a less capable but more affordable option. A more detailed explanation of these two classes is available on the documentation website, where you can also find the price table.

Workloads in our existing namespaces will continue to run on the more performant “Plus” node class. The default for all new namespaces will be “Flex.” Check the documentation to learn how to schedule your workloads on the desired node class.
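
As a rough sketch, scheduling onto a node class boils down to a nodeSelector in your pod spec. The label key appuio.io/node-class is our assumption here; the documentation has the authoritative key and values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: background-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: background-worker
  template:
    metadata:
      labels:
        app: background-worker
    spec:
      nodeSelector:
        appuio.io/node-class: flex   # assumed label key and value; check the docs
      containers:
      - name: worker
        image: registry.example.com/worker:latest   # placeholder image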

Would your workload profit from nodes optimized for it? Please let us know through our roadmap. Some nodes with GPU support, maybe?

What’s next?

Soon we will release User Workload Monitoring on APPUiO Cloud. This allows users to monitor their applications, write alerts, and receive notifications when those alerts fire. For instance, when your persistent volume runs out of space.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

APPUiO Cloud

Vertical Pod Autoscaler on APPUiO Cloud

27. Oct 2022

Setting appropriate limit and request values in your deployments is not an easy task. To help our customers, we have enabled the use of the VerticalPodAutoscaler object on all APPUiO Cloud zones. This blog post will show you how to use it.

To use the VerticalPodAutoscaler object, you need an APPUiO Cloud project with some workload deployed and running. The Vertical Pod Autoscaler project on GitHub contains some sample YAML files that deploy a very simple demo application to your project.

Here is the YAML required to create a VerticalPodAutoscaler that will analyze the resource consumption of the vpa-example deployment:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-recommender
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       Deployment
    name:       vpa-example
  updatePolicy:
    updateMode: "Off"  # recommendation-only mode: the VPA will not evict or resize pods

The VPA needs some time to gather data before it can provide recommendations. Once your deployment has been running long enough to produce meaningful data, run the following command, and you should see output similar to this in your terminal:

$ oc get vpa vpa-recommender --output yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  annotations: …
# …
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vpa-example
  updatePolicy:
    updateMode: "Off"
status:
  conditions:
  - status: "True"
    type: RecommendationProvided
  recommendation:
    containerRecommendations:
    - containerName: fortune-container
      lowerBound:
        cpu: 25m
        memory: 262144k
      target:
        cpu: 203m
        memory: 262144k
      uncappedTarget:
        cpu: 203m
        memory: 262144k
      upperBound:
        cpu: 71383m
        memory: "6813174422"

Analyze the values provided by the autoscaler for your deployment with care. Don’t blindly apply its recommendations; let your application run for a while and study the numbers closely.

Some tips for your analysis, all referring to the status.recommendation section of the response:

  • The .containerRecommendations[*].target value can be considered indicative for request values.
  • The .containerRecommendations[*].upperBound value can serve as an indication for limit values; see the sketch after this list.
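
Putting this together, here is a sketch of how the recommendation above could translate into resource settings for the demo deployment; the numbers are illustrative and deliberately rounded:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vpa-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vpa-example
  template:
    metadata:
      labels:
        app: vpa-example
    spec:
      containers:
      - name: fortune-container
        image: registry.example.com/fortune:latest   # placeholder image
        resources:
          requests:
            cpu: 200m        # from .target (203m), rounded
            memory: 256Mi    # from .target (262144k)
          limits:
            cpu: "1"         # the raw upperBound (71383m) is an outlier; cap it at a sensible value
            memory: 512Mi    # well below the raw upperBound, which is equally inflated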

For more hints, the OpenShift dashboard shown on this page of the APPUiO Cloud documentation displays utilization numbers for both CPU and memory limits. Those values are a useful supplementary source of information and worth taking into account.

The APPUiO Cloud documentation has more information about the VerticalPodAutoscaler object.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

APPUiO Cloud Tech

Most Interesting New Features of OpenShift 4.11

13. Oct 2022

Red Hat OpenShift 4.11 brings a substantial amount of new features. We’ve teased a few of them in our latest VSHN.timer, but in this article, we’re going to dive deeper into those that have the highest impact on our work and on our customers.

Support for CSI generic ephemeral volumes

Container Storage Interface (CSI) generic ephemeral volumes are a cool new feature. We foresee two important use cases for them:

  • When users need an ephemeral volume that exceeds what the node’s file system provides;
  • When users need an ephemeral volume with prepopulated data: this could be done, for example, by creating the volume from a snapshot (see the sketch after this list).
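
A minimal sketch of the first use case; the image, size, and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: scratch-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    # a PVC is created from this template when the pod starts, and deleted together with the pod
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi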

Route Subdomains

The Route API now provides subdomain support, something that was not possible before 4.11:

You can now specify the spec.subdomain field and omit the spec.host field of a route. The router deployment that exposes the route will use the spec.subdomain value to determine the host name.
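
For instance, here is a sketch of a Route using spec.subdomain; the service name is a placeholder, and the router fills in the rest of the host name from its ingress domain:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello
spec:
  subdomain: hello    # host becomes hello.<ingress domain of the router>
  to:
    kind: Service
    name: hello-service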

Pod Security Admissions

Pod security admission now runs globally with restricted audit logging and API warnings. This means that, while everything should still run as it did before, you will most likely encounter warnings like the following if you relied on OpenShift’s Security Context Constraints to set security contexts for you:

Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false…

To get rid of these warnings, explicitly set the security contexts in your manifests, as in the sketch below.
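
As a sketch, here is a minimal security context that satisfies the restricted profile; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: psa-example
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]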

Developer Features

The Developer Perspective provides improved integration with GitHub Actions, allowing developers to trigger pipelines and run tasks following Git events such as pushes or tags. And not only that, but the OpenShift console now has a dark mode, too.

CLI Features

The following OpenShift CLI (oc) commands and flags for requesting tokens are deprecated:

  • oc serviceaccounts create-kubeconfig
  • oc serviceaccounts get-token
  • oc serviceaccounts new-token
  • The --service-account/-z flag for the oc registry login command

Moreover, the oc create token command generates tokens with a limited lifetime, which can be controlled with the --duration command line argument; by default, the generated tokens are valid for one hour, although the API server can return a token with a shorter or longer lifetime than requested. If users need a token that doesn’t expire (for example, for a CI/CD pipeline), they should create a ServiceAccount API token secret instead, as in the sketch below.
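
As a sketch, with a placeholder service account name, requesting a short-lived token and creating a long-lived token secret look like this:

$ oc create token my-serviceaccount --duration 30m

# Long-lived alternative: a ServiceAccount API token secret
apiVersion: v1
kind: Secret
metadata:
  name: my-serviceaccount-token
  annotations:
    kubernetes.io/service-account.name: my-serviceaccount   # the ServiceAccount to bind the token to
type: kubernetes.io/service-account-token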

OpenShift 4.11 on APPUiO Cloud

At the time of this writing, Red Hat has not yet decided to promote 4.11 as an upgrade target, and for that reason, we have not upgraded APPUiO Cloud clusters yet. As soon as Red Hat enables this, we will update our APPUiO Cloud zones accordingly.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Project Syn Tech

Keeping Things Up to Date with Renovate

28. Jun 2022

Our customers trust us with their most precious resource: their information systems. Our job is to keep the underlying systems running, updated, and most importantly, secure.

Project Syn with its Commodore Components is one of the primary weapons in our arsenal to configure and thus protect those systems. Thanks to its GitOps approach, we can ensure that all Kubernetes clusters are always running the latest and (hopefully) most secure version possible.

But just like any other software package, Project Syn brings its complexity: we must keep it safe and sound, which means watching over its container images, its Helm charts, and all of the Commodore Components we use every day.

As you can imagine, juggling so many different software packages is a considerable task; now, think about all of their upstream dependencies (most of them container images and Helm charts, but Go and Python are also part of the mix), and the complexity of the task increases exponentially.

How do we cope with this? Well, as usual, by standing on the shoulders of giants. In this case, Renovate.

Renovate has been created to manage this complexity, whether container images, Helm charts, or upstream dependencies. But understandably enough, Renovate per se does not know anything about Commodore Components (at least not yet!), and in particular, it does not know about the Project Syn configuration hierarchy and how to find dependencies within that hierarchy.

So, what’s an Open Source developer to do? We forked Renovate, of course, and adapted it to our needs. How?

  1. We added the Project Syn configuration hierarchy as a new Manager.
  2. We reused the existing datasource to detect new versions of our Commodore Components.

Then we configured our own Renovate fork on all the repositories holding our source code and started getting notified via pull requests whenever there was a new dependency version. Voilà!

With this approach, we have been able to automate much work and avoid using outdated software by automatically being notified of new versions. No more forgotten updates!

We also decided to use “golden files” to test our Commodore Components; this, in turn, meant that we could not merge PRs created by Renovate whenever those tests failed. For those cases, we also taught Renovate how to update the golden files when needed.

The pull request “Update dependency ghcr.io/appuio/control-api to v0.8.1 – autoclosed #29” is a live example of this mechanism in action, and you’re most welcome to check it out.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

APPUiO Cloud

About the APPUiO Cloud API

28. Apr 2022

In a previous blog post we talked about how we handle billing for APPUiO Cloud. This time we’re going to talk about how we have extended the Kubernetes API to provide a custom API for our product.

Just like in the aforementioned blog post, the links in this text point to pages in the APPUiO Cloud for System Engineers documentation, with more technical details.

As a reminder, APPUiO Cloud consists of several independent OpenShift 4 clusters called zones. They can be individually operated upon through the Kubernetes API and Console.

Being a shared platform, a precondition for using APPUiO Cloud is to belong to an organization, that is, the entity we can send invoices to. This organization is the same across all zones, which means that users get a single invoice, no matter which zone their applications are running in.

Managing organizations individually on each zone would be tricky and would quickly lead to issues of bi-directional synchronization, a notoriously hard problem in IT that we would rather avoid. And this problem would only get worse every time we open a new APPUiO Cloud zone; so, thanks but no thanks.

We needed a tool to manage organizations, independent from (yet intimately related to) each APPUiO Cloud zone. And since we are strong believers in both “API First” and “Kubernetes First”, we decided to extend Kubernetes’ own API to do this.

We created Custom Resource Definitions (CRDs) defining organizations and teams, placing them at the core of our APPUiO Cloud API. This gave us access control features for free, and as a huge bonus, we got a free client to talk to said API: kubectl itself (as well as its OpenShift counterpart, oc!)

This single design decision made the APPUiO Cloud API compatible with the larger Kubernetes ecosystem. We used this API to build a whole web application on top of it, and this turned out to be quite easy, largely thanks to said ecosystem.

For example, based on Kubernetes’ own objects, rules, and groups, we can make the user interface of our web application reactive to the permissions of the current user; this is entirely handled by the API thanks to RBAC. Which means, no need to touch the UI code when permissions change!

The important thing to keep in mind is that the CRDs above are only used to represent data stored in a separate system; we developed adapters (in the shape of native Kubernetes controllers and operators) in charge of synchronizing organization and team data between the API and the systems where those objects are actually stored.

Specifically, organizations and teams are groups and subgroups stored in a Keycloak instance; but this could be replaced by any other identity provider (IdP) in the future. This means that the identity data model of APPUiO Cloud is completely independent of the underlying store used at any time.

As is generally known from Kubernetes, the spec fields in those custom resources represent the desired state to be synced to the connected system, while the status fields represent the actual state of the connected system.
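
To make this concrete, here is a purely illustrative sketch of what such a custom resource could look like; the group, version, and field names are assumptions for illustration, not the actual APPUiO Cloud API schema:

apiVersion: organization.appuio.io/v1   # assumed group/version, for illustration only
kind: Organization
metadata:
  name: acme
spec:
  displayName: ACME Corporation   # desired state, synced to the IdP (currently Keycloak)
status:
  conditions:
  - type: Ready                   # actual state, reported back by the adapter
    status: "True"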

Because the CRDs existed first, the UI and the adapters could be developed independently from one another, with the CRDs acting as a de facto contract between the different parts of APPUiO Cloud. Even better, during development the behavior of the adapters could be stubbed by directly manipulating the custom resources.

And to thank you for reading until the end, here’s a bonus detail: we’re using vcluster in our quality management process. What is vcluster? It’s a way to create virtual Kubernetes clusters inside another cluster. It allows us to run multiple instances of the API on one Kubernetes or OpenShift cluster without risking CRD version conflicts or other issues. We use this approach to quickly spin up integration and testing environments through our test automation (driven by GitHub Actions), including feature branch testing. This single tool has boosted our productivity tremendously.
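
For the curious, spinning up a virtual cluster inside a namespace and pointing your kubeconfig at it takes two commands; the names here are placeholders:

$ vcluster create api-test --namespace api-test
$ vcluster connect api-test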

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

APPUiO Cloud

APPUiO Cloud Billing

12. Apr 2022

APPUiO Cloud is the simplest, most secure, and fastest way for DevOps teams to deploy apps on an OpenShift cluster. Developers all over Europe are enjoying the benefits of quick access to a secure platform for their applications. APPUiO Cloud provides almost instant access to clusters in various zones, with a convenient pay-per-use model which makes it particularly interesting for startups, education, and many other kinds of organizations.

Speaking about APPUiO Cloud’s pay-per-use model, we would like to explain in this blog post how our customers are billed for their usage of APPUiO Cloud resources. In a nutshell, APPUiO Cloud is supported in the background by a series of mechanisms that transform usage data metrics into actual invoices.

This process can be summarized in three steps: data collection, processing, and invoice generation.

By the way, following our commitment to openness and transparency, the links in the text point to various pages in our APPUiO Cloud for System Engineers documentation, geared for a technical audience. Also don’t hesitate to ask questions in our Community Discussions or the APPUiO Community Chat; we will be delighted to tell you more.

Data Collection

The first step of the invoicing process is based on continuous monitoring of APPUiO Cloud using Prometheus. The metrics of interest, which form the basis of all invoicing calculations, are:

  • The amount of effective and requested memory used;
  • The amount of allocated persistent storage;
  • And the amount of allocated services of type LoadBalancer.

These metrics are sampled regularly from all APPUiO Cloud zones.
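
To give a flavor of the kind of queries involved, here are illustrative sketches using standard kubelet and kube-state-metrics series; they are not the exact queries from our pipeline:

# Effective memory usage per namespace
sum by (namespace) (container_memory_working_set_bytes{container!=""})

# Requested memory per namespace
sum by (namespace) (kube_pod_container_resource_requests{resource="memory"})

# Allocated persistent storage per namespace
sum by (namespace) (kube_persistentvolumeclaim_resource_requests_storage_bytes)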

Our cluster monitoring systems have a limited retention time, ranging from days to weeks. This means that we need long-term storage in case we have to recalculate invoices for a whole fiscal year, and we use Thanos for that.

Processing

Before we transfer this information into our billing system, a secondary process matches and augments the metrics with various metadata. First, we must map the collected metrics to our customers in the billing system. We also need to correlate that data with our products, including their prices and any applicable discounts or promotions.

This secondary process consists of an ETL mechanism extracting metrics through the Thanos API, transforming them as required, and storing the results in an intermediate data warehouse based on a standard star schema. This ETL process is complemented by an enrichment step, which fills in missing data collected from other sources as required.

Invoice Generation

The generation of invoices is handled by yet another automated mechanism. This process, running at the end of each billing cycle (in our case, a month), uses the aforementioned data warehouse as an API and data source, and generates the final invoices in our ERP billing system.

This loosely coupled architecture gives us the flexibility to replace the invoice system (our ERP) with minimal effort. It also allows us to pull in data from any sources we could think of in the future.

Following our love for DevOps, all of the processes described above are fully automated, tested, and running in a fault-tolerant way.

But there’s more: the ERP adapter for APPUiO Cloud and the Reporting for APPUiO Cloud projects in GitHub are open source! Check them out and see how this all fits together nicely.

Are you interested in APPUiO Cloud? Get your account and start deploying applications on an OpenShift cluster today.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Tech

Changes in OpenShift 4.9

23. Nov 2021

As part of our ongoing improvement process, we evaluate the requirements and new features of each version of Red Hat OpenShift, not only for our customers, but also for our internal use. Version 4.9 of Red Hat’s flagship container platform was announced on October 18th, 2021, and it includes some very important changes, some of them potentially breaking.

In this article, targeted at DevOps engineers and maintenance crews, I will give a short overview of the most important points to take care of before upgrading to this new version.

Kubernetes 1.22

The most important change in OpenShift 4.9 is the update to Kubernetes 1.22. This release of Kubernetes has completely removed the APIs marked as v1beta1. The complete list is available on the Red Hat documentation website, but suffice it to say that common objects such as Role or RoleBinding (rbac.authorization.k8s.io/v1beta1) and even Ingress (networking.k8s.io/v1beta1) are no more.

This is a major change, and it needs to be taken care of by all users of your clusters before an upgrade takes place: all manifest files using those resources should be updated accordingly. Red Hat’s Knowledge Base includes a dedicated article explaining all the steps required for upgrading to OpenShift 4.9.
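
One practical tip: OpenShift keeps track of requests to deprecated APIs, so you can check whether anything on a cluster still uses the removed endpoints before upgrading. A sketch; the grep pattern is just an example:

$ oc get apirequestcounts | grep v1beta1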

And of course, if you need any help regarding the upgrade process towards OpenShift 4.9, just contact us; we will be glad to help you and your teams.

Mutual TLS Authentication

Mutual TLS is a strong way to secure an application running in the cloud, allowing a server application to authenticate any client connecting to it. It comes with some setup and management complexity, as it requires a certificate authority; but thanks to its inclusion as a feature in OpenShift 4.9, its usage is now much simpler.

Please check the release notes section related to Mutual TLS for more information.
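
As a sketch of what enabling the feature looks like, assuming a config map with the CA bundle already exists in the openshift-config namespace (its name here is a placeholder; the release notes have the authoritative fields):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  clientTLS:
    clientCertificatePolicy: Required   # reject clients that do not present a valid certificate
    clientCA:
      name: client-ca-certs             # config map containing the CA bundle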

Registry Multiple Logins

In previous versions of OpenShift, a project’s pull secret could only hold one set of credentials per registry. OpenShift 4.9 supports multiple logins to the same registry, which allows pods to pull images from specific repositories in the same registry, each with different credentials. You can even define credentials for a specific namespace within a registry. The documentation contains example manifests for this feature.
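
A sketch of what the .dockerconfigjson content of such a pull secret could look like; registry, repository paths, and credentials are placeholders:

{
  "auths": {
    "registry.example.com/team-a": { "auth": "<base64 user:password>" },
    "registry.example.com/team-b": { "auth": "<base64 user:password>" }
  }
}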

Changes to the lastTriggeredImageID Field

And finally, here’s a change that can cause unforeseen headaches for your teams: OpenShift 4.9 no longer updates the buildConfig.spec.triggers[].imageChange.lastTriggeredImageID field when the ImageStreamTag changes and references a new image. This subtle change in behavior is easy to overlook; if your team depends on this field, beware of trouble.

Learn More about OpenShift 4.9

If you’re interested in knowing what else changed in OpenShift 4.9, Red Hat has published a number of resources to help you, starting with the official release notes.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Internal

Welcome to the Team, Christian H.!

19. Sep 2018

Deep down, I am a developer. I took my first steps in programming back in my school days. After completing an apprenticeship as an automation technician and a lot of self-study, I started working as a web developer while studying business informatics.
In the course of that, out of necessity and out of interest, I began to occupy myself with system administration, always with a strong focus on automation. As a developer, however, I was always frustrated by how directly the operation of an application depends on the underlying infrastructure. Availability outages were frequently caused by the infrastructure rather than by the application itself.
Today, with Docker/Kubernetes/OpenShift, AWS, OpenStack, and all the other cloud technologies, we have the opportunity to break up this dependency, so that applications can, in theory, run without interruption.
However, the road from theory to practice is usually a long one, and there are many reasons why a developer or development team cannot or will not tap the potential of cloud technologies. The main reasons, I believe, are the complexity and/or the price tag that come with them. So I am looking for ways to simplify the use of these technologies. In my opinion, the question of whether an application can or should be operated in a highly available manner should not even arise; the methods and tools required for it should be generally accessible and trivial to use.
This search is what brought me to VSHN AG. Here I can be part of a team that works on enabling development teams to have exactly what I missed as a developer myself.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.
