
How we used Crossplane for the things we should not have

30. Sep. 2025

At Swiss Cloud Native Day 2025 in Bern, our colleague Liene Luksika shared an honest and entertaining story about VSHN’s journey with Crossplane. What started as a simple use case evolved into a complex architecture, full of learnings, mishaps, and valuable lessons for anyone building managed services on Kubernetes.

From healthcare to cloud native

Liene comes from the healthcare sector, so when she joined the cloud native world at VSHN, she had to quickly get used to Kubernetes lingo – namespaces, instances, and of course, the obsession with laptop stickers. Luckily, VSHN has been around for more than 10 years, providing 24/7 managed services and building cloud native platforms for customers in Switzerland, Germany, and beyond.

Why Crossplane?

As customers increasingly asked VSHN to run their software as a service – databases, Nextcloud, and other critical apps – we needed a solid way to provision and manage infrastructure across private and public clouds. Crossplane seemed like the perfect fit:

  • It lets engineers define desired state vs. observed state
  • It automatically reconciles the two – like making coffee appear if that is your desired state
  • It provides flexible building blocks to expose clean APIs for managed services on Kubernetes
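
To make the desired-vs-observed idea concrete, here is a minimal, purely illustrative reconcile loop in Python. This is not Crossplane's actual controller code, just the pattern it implements: compare the two states and derive create, update, and delete actions.

```python
# Sketch of the reconcile pattern Crossplane-style controllers run.
# The resources and fields here are illustrative, not Crossplane's API.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Compare desired vs. observed state and return the actions needed."""
    actions = []
    # Create anything that is desired but not yet observed
    for name in desired.keys() - observed.keys():
        actions.append(f"create {name}")
    # Delete anything observed that is no longer desired
    for name in observed.keys() - desired.keys():
        actions.append(f"delete {name}")
    # Update anything whose observed spec drifted from the desired spec
    for name in desired.keys() & observed.keys():
        if desired[name] != observed[name]:
            actions.append(f"update {name}")
    return actions

desired = {"coffee": {"size": "large"}, "db": {"version": "10.6"}}
observed = {"db": {"version": "10.5"}}
print(sorted(reconcile(desired, observed)))  # ['create coffee', 'update db']
```

A real controller runs this loop continuously, so the cluster converges on the desired state even after outside changes.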

VSHN has used Crossplane in production since early 2021 (around v0.14) and runs the Crossplane Competence Center in Switzerland.

The evolution: from simple to complex

Our first use case was straightforward: a customer wanted two types of databases (Redis and MariaDB), T-shirt sized, no extras. Crossplane handled this beautifully.

Then reality hit. Customers wanted backups and restores, logs and metrics, alerting, maintenance and upgrades, scaling and user management, special features like Collabora for Nextcloud, and the freedom to choose infrastructure. To serve this, we adopted a split architecture:

  • A control cluster for all Crossplane logic
  • Separate service clusters for customer workloads

This architecture runs in production today for customers such as the Gesundheitsamt Frankfurt and HIN in Switzerland, on providers such as Exoscale and Cloudscale, keeping data sovereign and operations reliable.

When things go wrong

Building complex platforms means learning in production:

  • Deletion protection surprise: a minor Crossplane change removed labels before deletion, wiping our safeguard. Backups saved the day
  • Race conditions: a split approach to connection details occasionally made apps unreachable until we cleaned up code
  • The big one: during a planned "no-downtime" maintenance for a fleet with 1,300+ databases, objects hit an invalid state and Kubernetes garbage collection deleted 230 database objects. Some restores were fresh, some older. We pulled in 20 people overnight, communicated openly, and recovered together with the customer

Key lessons: test at realistic scale and keep recent, tested backups. Also, practice the restore path, not just the backup.

Crossplane 2.0 – where next?

Crossplane 2.0 introduces major breaking changes. Staying put is not an option, but migrating means real effort, especially for our split control plane architecture. We are evaluating whether Crossplane 2.0 fits our needs or if alternatives are a better match. As always, we will document our decisions openly in VSHN’s Architecture Decision Records.

Final thoughts

Cloud native success is not just about tools. It is about learning fast, designing for failure, and communicating clearly with customers. Crossplane has enabled a lot of innovation for us, and it has also tested us. Whether we proceed with Crossplane 2.0 or chart a different course, we will keep building sovereign, reliable, open managed services for our customers.

👉 Watch the full talk on YouTube.

Markus Speth

Marketing, Communications, People

Contact us

Our team of experts is ready for you – in an emergency, even 24/7.

Contact

Now available: the "DevOps in der Schweiz" Report 2025 🚀

25. June 2025

We are thrilled to publish the sixth edition of our "DevOps in der Schweiz" report – this time with a very special focus: platform engineering and artificial intelligence (AI)! 🤖

From January to April 2025, we surveyed professionals from the Swiss tech community. The result: a revealing look at how DevOps teams in Switzerland work today – which tools they use, how their teams are structured, which challenges occupy them, and where AI is already in concrete use.

💡 Want a sneak peek?

  • 💡 Swiss companies no longer ask whether to adopt DevOps – but how to scale it.
  • 📈 Platform engineering and AI are changing how teams deliver software faster, more securely, and more intelligently.
  • 💡 One in three Swiss DevOps organizations already uses AI in production – for example for code reviews, CI/CD optimization, and architecture support.
  • Another third is about to follow.
  • 💡 54% of Swiss companies now have dedicated platform engineering teams.
  • Internal Developer Platforms (IDPs) are becoming the secret weapon for enabling autonomy and reducing complexity.
  • 💡 Devs say yes to AI!
  • 79% of Swiss developers feel comfortable using AI in their workflows – but only 20% consider it fully ready.
  • The report shows: AI is promising, but needs better measurability and more trust in order to scale.

All of this (and much more!) is compactly summarized in the PDF report (English only). As last year, the report opens with an executive summary – perfect for anyone short on time.

📥 Download now

Enjoy reading – we look forward to your feedback! 🙌

You can download the DevOps Report 2025 here.


Redis 8 now available in the VSHN Application Catalog – and Redis is open source again!

11. June 2025

Redis 8 is now available in the VSHN Application Catalog – and this release is something special: Redis is officially open source again!

But that's not all: Redis is now also available on Servala – VSHN's open, cloud native service hub that connects developers, software vendors, and cloud providers across different infrastructures.

Why this matters

Redis has long been one of the most popular in-memory databases among developers and DevOps teams. But licensing changes in recent versions caused problems for many cloud native users and open ecosystems. Version 8 finally changes that: Redis returns to its open source roots and is now available under the GNU AGPLv3 license.

"Redis 8 brings Redis back to its open source roots. Future development of Redis will happen under the AGPLv3 license."
– Redis team, official announcement

That means more transparency, better collaboration, and long-term sustainability for everyone who relies on Redis as a core part of their tech stack.

Redis 8 with VSHN and Servala: fully managed & highly available

With Redis 8 in the VSHN Application Catalog and on Servala, you get more than just the latest open source version:

  • Production-ready deployments on Kubernetes and OpenShift
  • Guaranteed availability, monitoring, and automatic failover
  • Lifecycle management including updates and security patches
  • Cloud provider flexibility – in your own infrastructure or via our partners
  • Self-service provisioning via Servala with built-in automation

Whether you use Redis internally or offer it as a service to teams and customers – we take care of it.

Supported versions

We continue to support the most widely used Redis versions – and Redis 8 now officially joins our maintained portfolio.
👉 See the full list on the VSHN Redis product page or the Servala Redis page.

Why run Redis 8 via VSHN or Servala?

✅ Fully open source and community-driven again
✅ Kubernetes-native, GitOps-ready deployments
✅ Highly available with failover and backup strategies
✅ Integrated into your infrastructure or offered as a managed service
✅ Backed by VSHN – the DevOps experts behind Servala

Redis 8 is a milestone for the open source world – and we bring it into your production environment!

🔧 Need help integrating Redis 8?
📬 Contact us for a free consultation!
📩 Subscribe to our newsletter to stay up to date on open source news and new services.


The technical challenges behind Servala: standardizing application delivery

14. Apr. 2025

In this follow-up to our introduction to Servala, we look at the technical challenges of making managed services available to cloud providers everywhere. Learn how the repetitive and inconsistent nature of application packaging, deployment, and operations inspired our vision for standardization.

We dig into the problems platform engineers face today – including inconsistent container behavior, unpredictable Helm charts, and the chaos of day-2 operations around security, configuration, and dependencies.

Learn how Servala's proposed open standards can change the landscape for the following groups:

  • Software vendors – faster time to market and greater reach without operational overhead
  • Cloud providers – expanded service catalogs with enterprise-ready managed services
  • End users – self-service with consistent, secure, and compliant applications

Join us on this journey to simplify application delivery and make managed services accessible to everyone.

In our introduction to Servala, we mentioned the technical challenges of enabling software vendors to onboard themselves onto our platform. As we continue building Servala in 2025, we are tackling the most fundamental challenge: creating a standardized approach to application delivery. Let's look at these challenges and our proposed solution in detail.

The repetitive nature of application management

Over the past years at VSHN, we have operated countless applications as part of our managed services offering – the foundation of Servala. For every single application, we had to perform the same tedious tasks again and again:

Packaging: Getting the application into a deployable format, typically by building an OCI-compliant container image compatible with Docker, Podman, Kubernetes, and OpenShift. The packaging process is automated so that it is triggered for every new release.

Deployment: Rolling out the packaged application to the target system automatically – not manually. Most deployments span multiple environments such as test, staging, pre-prod, and production, or enable self-service provisioning for SaaS. This often involves writing Helm charts and setting up automation pipelines or APIs, e.g. via Kubernetes operators (such as Crossplane).

Day-2 operations: After deployment come tasks like collecting metrics, setting up alerts, updating the application, scaling when performance degrades, backing up and restoring data, analyzing logs, providing 24/7 support, and meeting compliance requirements – along with many other operational duties.

The current challenge for Servala

Performing these steps over and over becomes tiring. Solving the same problems each time we take on a new application does not feel like it adds real value. In practice, we have to deal with a multitude of different approaches, and supporting all these variants is a heavy burden on engineers. Parts of the functionality above usually already exist – there are already container images, for example – but each one behaves differently, so every time we have to figure out how to integrate it into the next step. The same goes for the many Helm charts out there. Standardization would lift this burden and make the process more efficient and less repetitive.

The core problem lies in the flexibility of the tools involved. Container images differ massively in how they are built and how they behave. Helm charts accept parameters in inconsistent formats: depending on who wrote it, a chart may refer to a container image as img, image, or image-registry.

Security scans and compliance reports differ from application to application. Some ship with SBOMs (Software Bill of Materials); for others, they have to be created manually. Configuration handling is just as inconsistent: some applications rely on environment variables, others expect configuration files in specific locations, and yet others require their own configuration APIs.

Day-2 operations vary just as widely. Some applications export metrics in Prometheus format, others not at all. Identical metrics carry different names, and logging formats range from structured JSON to plain text. Dependency management is often neglected – there is hardly any information about required services or components. Operating these applications becomes a nerve-racking game of whack-a-mole.

We need to resolve these fundamental inconsistencies so that Servala can scale and software vendors can onboard their applications more easily.

Our proposed solution: standardization

How can these obstacles be overcome? Our proposal: define a collection of documents describing patterns for all the necessary parts of application delivery via Servala. You can think of these documents as specifications, golden paths, patterns, standards, conventions, or defaults. The goal is to document a mutually agreed way to solve the tasks above – so that we do not have to rethink them every time.

But doing this only for ourselves does not feel right. At VSHN we live open source and open standards, and we want to find a common path together with others. We therefore propose bringing together a group of people from different companies to document and agree on these patterns jointly.

Our vision: a transformed application delivery landscape

What will application delivery look like once the Servala specifications are widely adopted? The benefits would be transformative for everyone involved:

For software vendors:

  • Faster time to market: Instead of investing months in building deployment, monitoring, and maintenance systems, vendors can focus on their core product and rely on Servala's standardized delivery mechanisms to reach cloud providers globally.
  • Less operational overhead: Anyone who follows the Servala specification automatically adopts proven practices such as monitoring, metrics, logs, backups, and more – without needing their own operations team.
  • Greater market reach: Being able to deploy on any Servala-compatible cloud provider opens up new markets without additional engineering effort.
  • Better security posture: Standardized security scans, compliance reports, and configuration management drastically reduce risk – even without dedicated in-house security expertise.

For cloud providers:

  • Expanded service catalogs: Providers can immediately offer dozens of managed services with uniform operational standards – enormous added value.
  • Operational consistency: All services follow the same patterns for monitoring, alerting, and maintenance, reducing the complexity of operating a wide range of third-party applications.
  • Competitive differentiation: Smaller cloud providers can keep up with hyperscalers by offering comparable catalogs of managed services.

For end users:

  • Thanks to Servala's standardized delivery mechanisms, you can deploy complex managed services with confidence, knowing they follow consistent operational patterns. This gives you control and operational certainty.
  • Operational interfaces stay consistent regardless of the application, providing a predictable and secure user experience. This predictability builds trust in the system's stability and reliability.
  • Enterprise readiness: Every service automatically includes security features, backup/restore, monitoring, and other enterprise capabilities – without individual integration effort.
  • Simplified compliance: Standardized security scans and compliance reports make regulatory audits easier and less resource-intensive.
  • Dependency clarity: Transparent declaration of service dependencies and compatibility requirements reduces deployment errors and configuration problems.

Servala's specification areas

We plan to document patterns for the following areas:

Container image behavior: Where is data stored? How are ports exposed? How does the entry point behave? With which permissions does the application run?

Helm chart "API": How do default values behave? What does the configuration structure look like?

Unified operations framework:

  • Backup and restore: Standardized interfaces for consistent backups of applications and data, with clearly defined restore paths and verification methods
  • Metrics: Well-defined endpoints for application metrics for alerting, monitoring, and performance insights
  • Alerting and monitoring: Shared alert definitions, severity levels, and response expectations across all applications
  • Logging standards: Uniform log formats, retention policies, and search capabilities to simplify troubleshooting
  • SLA definitions: Standardized metrics for measuring and reporting availability, performance, and reliability
  • Maintenance windows: Clear protocols for coordinating and communicating maintenance with minimal disruption
  • Billing: A uniform way of charging for service usage
  • Security scanning and compliance: Standardized approaches to vulnerability management, security policies, and compliance reporting across all applications
  • Configuration management: Uniform patterns for configuration handling, secret management, and runtime reconfiguration
  • Dependency management: Clear declaration and handling of service dependencies, including versioning requirements and compatibility matrices
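
The arithmetic behind standardized SLA reporting is simple; here is a sketch, assuming availability is reported as a percentage of a fixed reporting window (the function and its inputs are illustrative, not a Servala specification).

```python
# Back-of-envelope availability math behind SLA definitions.
# Assumed inputs: a reporting window in days, accumulated downtime in minutes.

def availability(window_days: int, downtime_minutes: float) -> float:
    """Availability as a percentage over the reporting window."""
    total_minutes = window_days * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# A 30-day month has 43,200 minutes; "three nines" leaves ~43 minutes of downtime.
print(round(availability(30, 43.2), 3))  # 99.9
```

Agreeing on such formulas (window length, what counts as downtime) is exactly what a shared SLA definition would pin down.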

Self-service API architecture: Proposing standardized structures for Kubernetes resources to create predictable interfaces for application management across different environments

Prior work we want to build on

There are already successful standardization initiatives we want to build on:

OCI Image Format

After years of fragmentation in the container space, the Open Container Initiative (OCI) created a unified image standard used by tools such as Docker and Podman. The standard defines filesystem paths (e.g. /var/lib/docker/), image layering, and interoperability with registries such as Docker Hub, GHCR, and Quay.

Kubernetes as container orchestrator

Kubernetes has established itself as the de facto standard for managing container fleets. It provides a uniform API for controlling compute, networking, and storage – independent of the infrastructure provider.

Pod and container lifecycle conventions in Kubernetes

The community has standardized how applications behave during lifecycle events such as startup, shutdown, and health checks. Applications now respond predictably to restarts and node drains, which greatly simplifies the daily work of platform engineers. Lifecycle hooks have become the norm.

Prometheus metrics

Many applications export metrics in the Prometheus/OpenMetrics format via a /metrics endpoint. Established naming conventions exist (e.g. http_requests_total). While not everything is uniform, this is one of the most widely accepted unofficial standards – used by applications, exporters, sidecars, and service monitors.
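
A rough idea of what checking such conventions could look like, as a sketch: the rules below (lowercase snake_case names, counters ending in _total) are a simplification of Prometheus' actual naming guidelines, and the checker itself is invented for illustration.

```python
import re

# Simplified check against Prometheus naming conventions:
# lowercase snake_case base name, counters ending in _total.
METRIC_NAME = re.compile(r"^[a-z_:][a-z0-9_:]*$")

def check_metric(name: str, metric_type: str) -> list[str]:
    """Return a list of convention violations (empty list means OK)."""
    problems = []
    if not METRIC_NAME.match(name):
        problems.append("name is not lowercase snake_case")
    if metric_type == "counter" and not name.endswith("_total"):
        problems.append("counter should end in _total")
    return problems

print(check_metric("http_requests_total", "counter"))  # []
print(check_metric("HttpRequests", "counter"))
```

A shared specification would let such checks run automatically during onboarding instead of being rediscovered per application.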

SBOM (Software Bill of Materials)

Thanks to open SBOM standards and support from tools such as GitHub, GitLab, and Docker, creating and consuming SBOMs is best practice today. The EU Cyber Resilience Act (CRA) now mandates SBOMs – for proprietary as well as open source software.

12-Factor App

Even though differences remain, the 12factor.net manifesto deserves an honorable mention. It laid the groundwork for cloud native applications in 2011 and influences architecture and platform design to this day: configuration via environment variables, statelessness, logging to stdout – much of this is best practice today and is often enforced indirectly by platforms.

Helm chart best practices

The Helm community has responded to the structural inconsistency of charts with best practices, guidelines, and tools such as helm lint and helm create. While these are not applied everywhere, projects like Bitnami, KubeApps, and Backstage increasingly follow these conventions – a solid foundation for Servala's standardization efforts.

OpenAPI / Swagger

The OpenAPI initiative has greatly advanced API standardization. It enables machine-readable API definitions, automated SDK generation, testing, mocking, and readable documentation. It is broadly adopted – from Kubernetes CRDs to the GitHub APIs – and brings consistency and interoperability to API design and consumption.

Open Service Broker API

We have already implemented this API and use it with some customers. However, it does not offer a declarative, cloud native approach to service listing and provisioning.

Crossplane

We are fans of Crossplane's declarative approach to service definition and instantiation. Its Composite Resource Definitions (XRDs) allow building custom CRDs with a consistent structure. Servala does not replace Crossplane – we use it under the hood.

Open Application Model (OAM)

OAM supports Servala's mission by providing a platform-agnostic, standardized way to define cloud native applications. It cleanly separates core logic (components), operational features (traits), and dependencies (scopes). With reusable definitions for metrics, backups, and autoscaling, OAM helps overcome the fragmentation of Helm charts and container images – an interesting foundation for Servala's specification.

Platform Specification

The Platform Spec initiative fits Servala's vision perfectly: it defines a standardized YAML-based interface between developers and platform engineers for deployment and management – including container images, environment variables, secrets, service bindings, and rollout rules. This addresses many of the inconsistencies in Helm charts and runtime behavior. By adopting or aligning with Platform Spec, Servala can simplify onboarding, reduce integration effort, and promote platform independence.

Cloud Native Application Bundle (CNAB)

The CNAB specification supports Servala's goals with a portable, standardized format for packaging and distributing complex applications – including Kubernetes manifests, Helm charts, Terraform plans, and scripts. CNAB defines a consistent format for code, configuration, and lifecycle operations (install, upgrade, uninstall). This enables uniform, repeatable deployments – ideal for complex managed services across different environments.

The road ahead

Servala aims to speed up application onboarding by a factor of 10 – from weeks to hours or days – while dramatically improving reliability through proven, consistent patterns for deployment and operations. We have started to integrate these standards into our development roadmap, but broad industry collaboration is essential for success.

We invite software vendors, cloud providers, and platform engineers to shape these standards with us in an open and collaborative way. With a solid foundation, Servala can redefine how managed services are delivered: cloud providers can expand their catalogs, and software vendors can become SaaS providers – without major operational overhead.

Stay up to date

Be the first to hear the latest news about Servala. Just enter your email address below:

What's next?

In 2025 we will focus on enabling software vendors to onboard themselves onto the Servala service platform. You can find more on why we started Servala in our Servala launch announcement.

Contact us

Interested in learning more? Book a meeting or write to us and find out how Servala can help you. Experience the future of cloud native services at servala.com. 🚀


Why VSHN Managed OpenShift Customers Are Safe from the Recent Ingress NGINX Vulnerability

26. March 2025

A recently disclosed set of vulnerabilities, known as IngressNightmare, has raised alarms for Kubernetes users relying on the Ingress NGINX Controller. These vulnerabilities (CVE-2025-24513, CVE-2025-24514, CVE-2025-1097, CVE-2025-1098, and CVE-2025-1974), the most severe of which carries a critical CVSS score of 9.8, could allow attackers to gain unauthorized access to a Kubernetes cluster, potentially leading to remote code execution and full cluster compromise. However, OpenShift 4.x customers are not affected by this exploit, as OpenShift uses the OpenShift Ingress Operator, based on HAProxy, as the default ingress controller.

The vulnerabilities affect the Ingress NGINX Controller, which is responsible for managing external traffic and routing it to internal services in a Kubernetes cluster. Specifically, they target the admission controller, which, if exposed without authentication, allows attackers to inject malicious configurations, resulting in remote code execution. Since OpenShift 4.x uses the OpenShift Ingress Operator (based on HAProxy) as the ingress controller, customers are not exposed to these risks.

OpenShift 4.x further enhances security by restricting permissions and not permitting the default ingress controller to access sensitive data, such as secrets stored across Kubernetes namespaces. This design decision helps protect OpenShift customers from potential exploits by preventing unauthorized access to critical cluster resources.

As a result, VSHN Managed OpenShift users can be confident that their clusters remain secure without having to worry about this specific vulnerability.


Announcing Redis by VSHN – Enhance Your Containerized Workloads

6. Nov. 2024

We are thrilled to announce the general availability of Redis by VSHN, now available in OpenShift through the VSHN Application Marketplace. This powerful, in-memory data structure store, known for its blazing-fast performance and versatility, is now optimized for containerized environments on OpenShift. Whether you’re building microservices, real-time analytics, or caching layers, Redis by VSHN offers the reliability and scalability your applications need.

Why Redis on OpenShift?

Redis has been a favorite among developers for its simplicity, performance, and robustness. By integrating Redis into OpenShift, we are enabling seamless deployment and management of Redis instances within your containerized infrastructure. This means you can now leverage Redis’s capabilities while enjoying the benefits of OpenShift’s orchestration and container management.

Key Features

  1. Containerized Deployment: Effortlessly deploy and manage Redis instances in your OpenShift environment.
  2. Scalability: Scale your Redis instances up or down based on your application needs.
  3. High Availability: Ensure your data is always available with Redis’s built-in replication and persistence mechanisms.
  4. Integrated Monitoring: Utilize OpenShift’s monitoring tools to keep an eye on your Redis performance and health.
  5. Security: Benefit from OpenShift’s security features to protect your Redis instances and data.

Benefits for Your Containerized Applications

  1. Performance: Redis’s in-memory data structure ensures lightning-fast read and write operations, ideal for real-time applications.
  2. Flexibility: Support for a variety of data structures, including strings, hashes, lists, sets, and more.
  3. Compatibility: Seamlessly integrate with your existing OpenShift applications and services.
  4. Developer Productivity: Simplified deployment and management allow developers to focus on building features rather than infrastructure.

Getting Started

To get started with Redis by VSHN, visit our Redis product page on the OpenShift Application Marketplace. Our comprehensive documentation will guide you through the setup process, ensuring you can quickly and efficiently integrate Redis into your workflows.

Want to see our open documentation on how to create or use any of the services in the marketplace? It is all openly available in the VSHN AppCat User Documentation.

Be on the lookout for more services as we continue expanding our marketplace. Let’s keep those containers humming!

Ready to roll?

Contact us now!


VSHN Managed OpenShift: What you need to know about OpenShift 4.16

16. Oct. 2024

Upgrade to OpenShift version 4.16

As we start to prepare the upgrade to OpenShift v4.16 for all our customers' clusters, it is a good opportunity to look again at what's new in the Red Hat OpenShift 4.16 release. The release is based on Kubernetes 1.29 and CRI-O 1.29 and brings a handful of exciting new features which will make VSHN Managed OpenShift even more robust. Additionally, the new release also deprecates some legacy features, which may require changes in your applications.

The Red Hat infographic highlights some of the key changes:

Red Hat OpenShift 4.16: What you need to know Infographic by Ju Lim

Changes which may require user action across all VSHN Managed OpenShift, including APPUiO

For VSHN Managed OpenShift, we're highlighting the following changes which may require user action in our release notes summary:

Clusters which use OpenShift SDN as the network plugin can’t be upgraded to OpenShift 4.17+

This doesn't affect most VSHN Managed OpenShift clusters: we switched to Cilium as the default network (CNI) plugin a while ago, and most of our older managed clusters have been migrated from OpenShift SDN to Cilium over the last couple of months.

The proxy service for the cluster monitoring stack components is changed from OpenShift OAuth to kube-rbac-proxy

Users who use custom integrations with the monitoring stack (such as a Grafana instance which is connected to the OpenShift monitoring stack) may need to update the RBAC configuration for the integration. If necessary, we’ll reach out to individual VSHN Managed OpenShift customers once we know more.

The ingress controller HAProxy is updated to 2.8

HAProxy 2.8 provides multiple options to disallow insecure cryptography. OpenShift 4.16 enables the option which disallows SHA-1 certificates for the ingress controller HAProxy. If you're using Let's Encrypt certificates for your applications, no action is needed. If you're using manually managed certificates for your Routes or Ingresses, you'll need to ensure that you're not using SHA-1 certificates.

Legacy service account API token secrets are no longer generated

In previous OpenShift releases, a legacy API token secret was created for each service account to enable access to the integrated OpenShift image registry. Starting with this release, these legacy API token secrets aren’t generated anymore. Instead, each service account’s image pull secret for the integrated image registry uses a bound service account token which is automatically refreshed before it expires.

If you’re using a service account token to access the OpenShift image registry from outside the cluster, you should create a long-lived token for the service account. See the Kubernetes documentation for details.
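As a sketch of the approach described in the Kubernetes documentation (the service account and namespace names below are placeholders), a long-lived token can be requested by creating a Secret of type `kubernetes.io/service-account-token` that references the service account:

```yaml
# Hypothetical example: request a long-lived API token for the
# service account "registry-puller" in namespace "my-app".
# The token controller populates the Secret's data with a token.
apiVersion: v1
kind: Secret
metadata:
  name: registry-puller-token
  namespace: my-app
  annotations:
    kubernetes.io/service-account.name: registry-puller
type: kubernetes.io/service-account-token
```

Note that long-lived tokens don't expire automatically, so treat them like any other static credential and rotate them regularly.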

Linux control groups version 1 (cgroupv1) deprecated

The default cgroup version has been v2 for the last couple of OpenShift releases. Starting from OpenShift 4.16, cgroup v1 is deprecated and will be removed in a future release. The underlying reason for the pending removal is that Red Hat Enterprise Linux (RHEL) 10, and therefore also Red Hat CoreOS (RHCOS) 10, won't support booting into cgroup v1 anymore.

If you’re running Java applications, we recommend that you make sure that you’re using a Java Runtime version which supports cgroup v2.

Warning for iptables usage

OpenShift 4.16 will generate warning event messages for pods which use the legacy iptables kernel API, since the iptables API will be removed in RHEL 10 and RHCOS 10.

If your software still uses iptables, please make sure to update it to use nftables or eBPF. If you are seeing these events for third-party software that isn't managed by VSHN, please check with your vendor to ensure they will have an nftables or eBPF version available soon.

Other changes

Additionally, we’re highlighting the following changes:

RWOP with SELinux context mount is generally available

OpenShift 4.16 makes the ReadWriteOncePod access mode for PVs and PVCs generally available. In contrast to RWO where a PVC can be used by many pods on a single node, RWOP PVCs can only be used by a single pod on a single node. For CSI drivers which support RWOP, the SELinux context mount from the pod or container is used to mount the volume directly with the correct SELinux labels. This eliminates the need to recursively relabel the volume and can make pod startup significantly faster.

However, please note that VSHN Managed OpenShift doesn’t yet support the ReadWriteOncePod access mode on all supported infrastructure providers. Please reach out to us if you’re interested in this feature.
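For illustration, requesting the new access mode only differs from a regular claim in the `accessModes` field (the name, size, and storage class below are placeholders):

```yaml
# Hypothetical PVC using the ReadWriteOncePod access mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-data          # placeholder name
spec:
  accessModes:
    - ReadWriteOncePod           # instead of ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # placeholder size
  storageClassName: csi-storage  # placeholder; the CSI driver must support RWOP
```

If a second pod tries to use such a claim, it will fail scheduling, which is exactly the guarantee RWOP provides.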

Monitoring stack replaces prometheus-adapter with metrics-server

OpenShift 4.16 removes prometheus-adapter and introduces metrics-server to provide the metrics.k8s.io API. This should reduce load on the cluster monitoring Prometheus stack.

Exciting upcoming features

We’re also excited about multiple upcoming features which aren’t yet generally available in OpenShift 4.16:

Node disruption policies

We’re looking forward to the “Node disruption policy” feature which will allow us to deploy some node-level configuration changes without node reboots. This should reduce the need for scheduling node-level changes to be rolled out during maintenance, and will enable us to say confidently whether a node-level change requires a reboot or not.

Route with externally managed certificates

OpenShift 4.16 introduces support for routes with externally managed certificates as a tech preview feature. We’re planning to evaluate this feature and make it available in VSHN Managed OpenShift once it reaches general availability.

This feature will allow users to request certificates with cert-manager (for example from Let’s Encrypt) and reference the cert-manager managed secret which contains the certificate directly in the Route instead of having to create an Ingress resource (that’s then translated to an OpenShift Route) which references the cert-manager certificate.
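As a rough sketch of what this could look like (based on the tech preview; the exact field names may change before general availability, and all names below are placeholders):

```yaml
# Hypothetical Route referencing a cert-manager managed Secret directly.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                 # placeholder
spec:
  host: app.example.com        # placeholder
  to:
    kind: Service
    name: my-app               # placeholder
  tls:
    termination: edge
    externalCertificate:
      name: my-app-cert        # Secret kept up to date by cert-manager
```

This removes the indirection of creating an Ingress just so the router can pick up the cert-manager certificate.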

Changes not relevant to VSHN customers

There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. In particular, OVN-Kubernetes gains support for AdminNetworkPolicy resources, which provide a mechanism to deploy cluster-wide network policies. Please note that similar results should be achievable with Cilium's CiliumClusterwideNetworkPolicy resources, and Cilium is actively working on implementing support for AdminNetworkPolicy.
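For comparison, a cluster-wide Cilium policy takes roughly this shape (a minimal sketch; the policy name and labels are placeholders):

```yaml
# Hypothetical cluster-wide policy allowing ingress only from the
# "monitoring" namespace to all endpoints.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-monitoring-scrapes   # placeholder
spec:
  endpointSelector: {}             # applies to all endpoints cluster-wide
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: monitoring
```

Unlike a namespaced CiliumNetworkPolicy, this resource is not bound to a single namespace, which is what makes it comparable to AdminNetworkPolicy.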

Summary

OpenShift 4.16 deprecates some features, which may require changes to your applications in order to make future upgrades as smooth as possible. Additionally, OpenShift 4.16 is the last release that supports OpenShift SDN as the network plugin, and it disables support for SHA-1 certificates in the ingress controller. For those interested in the nitty-gritty details of the OpenShift 4.16 release, we refer you to the detailed Red Hat release notes.

VSHN customers will be notified about the upgrades to their specific clusters in the near future.

Interested in VSHN Managed OpenShift?

Head over to our product page VSHN Managed OpenShift to learn more about how VSHN can help you operate your own OpenShift cluster, including setup, 24/7 operation, monitoring, backup and maintenance. Hosted in a public cloud of your choice or on-premises in your own data center.

Simon Gerber

Simon Gerber is a DevOps engineer at VSHN.

Tech

Announcing General Availability of PostgreSQL by VSHN – On OpenShift

3. Okt. 2024

We have some fantastic news – our PostgreSQL service is now generally available on OpenShift in our Application Catalog through the VSHN Application Marketplace. After seeing our container-based database solution work wonders for a few lucky customers, we’re excited to open it up for all to enjoy!

Why You’ll Love It

  • Always On: our high availability setup keeps your data accessible.
  • Safety First: top-notch security features to keep your data safe.
  • Grow As You Go: easily scale with your business needs.
  • Hands-Free Maintenance: automatic updates and backups? Yes, please!
  • Expert Help: our team is always here to support you.

Do you have an application that you run in containers, or are moving to containers, that uses PostgreSQL? Would you rather avoid the complexity of running the database within a Kubernetes cluster yourself?

Then check out PostgreSQL by VSHN to dive into the details and get started today.

Do you want to see all our open documentation for how to create or use any of the services in the marketplace? You can find them all openly available in the VSHN AppCat User Documentation.

Be on the lookout for more services as we continue expanding our marketplace. Let’s keep those containers humming!

Ready to roll?

Contact us now!

Markus Speth

Marketing, Communications, People

Allgemein Tech

Now Available: DevOps in Switzerland Report 2024

12. Sep. 2024

We are pleased to present the fifth edition of our report "DevOps in Switzerland"!

From January to April 2024, we conducted a survey to learn how Swiss companies implement and apply DevOps principles.

We have summarized the results in a PDF (available in English only), and as in the previous edition, the first pages provide a short summary of our findings.

You can download the report here. Enjoy reading, and we look forward to your feedback!

Markus Speth

Marketing, Communications, People

Tech

Announcing Keycloak by VSHN: Your Ultimate Open Source IAM Solution

4. Sep. 2024

Hey there! We’re thrilled to introduce Keycloak by VSHN – your new go-to for robust, open-source identity and access management (IAM). Bringing together the expertise of VSHN and Inventage, designed to simplify authentication and boost security, our managed service is here to make your life easier and your apps more secure. Let’s dive into what makes Keycloak awesome, and why the VSHN managed version is even better!

What’s the Buzz About Keycloak?

Keycloak is an open-source powerhouse for managing identities and access. With it, you can integrate authentication across your services with little fuss. It’s packed with features like user federation, strong authentication, comprehensive user management, and finely tuned authorization controls. In short, it’s all about making secure access as straightforward as possible.

Why Should Your Enterprise Use Keycloak?

Here are just a few reasons:

  • Single Sign-On (SSO) Magic: Log in once and access all your apps without breaking a sweat. Plus, logging out from one logs you out from all – neat, right?
  • Integration Ease: Keycloak plays nicely with OpenID Connect, OAuth 2.0, and SAML 2.0, sliding seamlessly into your existing setup.
  • Empowering User Federation: Whether it’s LDAP or Active Directory, Keycloak connects smoothly, ensuring all your bases are covered.
  • Granular Control: With its intuitive admin console, managing complex authorization scenarios is a breeze.
  • Scalable Performance: From startups to large enterprises, Keycloak scales with your needs without skipping a beat.

Open Source vs. Proprietary: Why Go Open?

Choosing Keycloak means embracing benefits like:

  • Cost Efficiency: Forget about expensive proprietary license fees; open-source is wallet-friendly.
  • Total Transparency: Open-source means anyone can check the code, which helps in keeping things secure and up to snuff.
  • Community Driven: Benefit from the innovations and support of a global community.
  • Ultimate Flexibility: Adapt and extend Keycloak however you see fit. You’re in control!

What Makes Keycloak by VSHN Special?

  • All the Keycloak Goodies: Everything Keycloak offers, we provide, managed and tuned by experts.
  • Swiss-Based Hosting: Enjoy top-tier privacy, security, and adherence to Swiss regulations.
  • Expert Support: The Keycloak wizards from Inventage and the Kubernetes experts from VSHN are here to help you every step of the way. To learn more, visit the Keycloak Competence Center Switzerland, run by Inventage.
  • Transparent Pricing: What you see is what you get. No surprises here!
  • Solid SLAs and High Availability: We promise uptime and smooth operations, come rain or shine.

We Love Open Source!

With VSHN, you’re not just getting a service; you’re tapping into a philosophy. Backed by Red Hat and supported by Inventage, we bring you unparalleled expertise right from the heart of Switzerland.

Why Choose Keycloak?

It's not just stable and feature-rich; it's a part of the open-source legacy of Red Hat, enhanced by VSHN's partnership with Inventage – making us a powerhouse of knowledge and reliability in the container and cloud native IAM domain.

Ready to Jump In?

Dive into seamless identity management with Keycloak by VSHN. Curious for more? Visit our Keycloak product definition page (Keycloak by VSHN) for all the juicy details and kickstart your journey towards streamlined application security.

Ready to roll? Contact us now!

Markus Speth

Marketing, Communications, People

APPUiO Tech

Exploring Namespace-as-a-Service: A Deep Dive into APPUiO’s Implementation

30. Aug. 2024

In the rapidly evolving world of cloud computing and container orchestration, Namespace-as-a-Service (NSaaS) is becoming part of the wide array of different ways you can host your application. Offering NSaaS is possible for companies that have already built years of experience running Kubernetes and container-based platforms. Although many organizations worldwide are still early in their container, Kubernetes, and cloud journeys, those companies that have been doing it for a while are now able to take things to the next level. These experienced organizations can leverage their deep expertise to provide NSaaS with greater stability, maturity, and innovation. As a result, they can offer a robust and reliable cloud service that caters to diverse needs while driving significant advancements in the industry.

This blog post will explore what NSaaS means, delve into its pros and cons, and highlight how its stability and maturity are now enabling its widespread adoption. With the well-established ecosystem and landscape of Kubernetes, NSaaS provides a stable and secure environment for managing application workloads within namespaces efficiently. Could this concept offer cost savings in terms of resources and operational overhead for your organization? What types of organizations and workloads are best suited for a NSaaS platform? Join us as we dive into the details of NSaaS, uncovering why it might be the ideal solution for your cloud computing needs.

What is Namespace-as-a-Service?

Namespace-as-a-Service (NSaaS) is a cloud service model that allows users to create, manage, and utilize namespaces in a Kubernetes environment with ease. In Kubernetes, a namespace is a form of isolation within a Kubernetes cluster, providing a way to divide cluster resources between multiple users. This isolation using namespaces can apply to applications, user access (developers), storage volumes, and network traffic. NSaaS abstracts the complexity of namespace management, offering a simplified and efficient way for users to leverage namespaces without needing in-depth Kubernetes knowledge. To the majority of users, this is no different from having a full Kubernetes cluster or a Heroku-style PaaS. The advantage is that it sits in between the two: you get the familiar Kubernetes API and environment definitions, but without the overhead and complexity of managing a Kubernetes cluster.
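In plain Kubernetes terms, a namespace is just another API object, and resource limits can be attached to it. A minimal sketch with placeholder names and values:

```yaml
# Hypothetical namespace for one tenant, with a quota capping its usage.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                # placeholder tenant namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"         # placeholder limits
    requests.memory: 8Gi
    pods: "20"
```

An NSaaS platform provisions and enforces objects like these on behalf of the user, which is exactly the complexity being abstracted away.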

Pros of Namespace-as-a-Service

  1. Lower operational overhead
    • NSaaS eliminates the need for users to manage the underlying infrastructure and complex Kubernetes configurations. This simplification allows developers to focus on application development rather than infrastructure management.
  2. Scalability
    • With NSaaS, users can easily scale their applications. As namespaces are lightweight, creating and managing multiple namespaces is efficient and allows for better resource utilization.
  3. Isolation and Security
    • Namespaces provide logical isolation within a Kubernetes cluster. NSaaS leverages this feature to ensure that applications running in different namespaces do not interfere with each other, enhancing security and stability. The platform provider running the Kubernetes clusters is responsible for isolating workloads between the different namespaces, both from a security and a workload optimization perspective.
  4. Cost Efficiency
    • By optimizing resource allocation and utilization, NSaaS can reduce costs. Users only pay for the resources they use, and the efficient management of these resources can lead to significant savings.
  5. Simplified billing
    • NSaaS allows billing per namespace, keeping the usage of each namespace separate so that it is clear which departments, application teams, or even customers consume what, which in turn provides the billing breakdown per namespace.
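The isolation mentioned above is typically enforced with quotas and network policies. For example, a default-deny-style policy restricting ingress to traffic from within the same namespace might look like this (a sketch with placeholder names, not a complete setup):

```yaml
# Hypothetical policy: pods in "team-a" only accept traffic
# from other pods in "team-a".
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only   # placeholder
  namespace: team-a                 # placeholder
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # only pods from the same namespace
```

In an NSaaS offering, policies like this are usually owned by the platform provider rather than the tenant.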

Cons of Namespace-as-a-Service

  1. Limited Customization (compared with running separate Kubernetes cluster)
    • While NSaaS simplifies many aspects of namespace management, it may also limit customization options. Advanced users who require fine-tuned control over their Kubernetes environments might find NSaaS restrictive. Fine-tuned control might include deploying your own Kubernetes operators, or defining the maintenance window and Kubernetes version.
  2. Dependency on Cluster Provider (compared with running separate Kubernetes cluster)
    • Users are dependent on the cluster provider for managing not just the underlying infrastructure but also the wider cluster configuration and maintenance. Any issues or downtime at the cluster level managed by the provider can directly impact the user’s applications.
  3. Potential Security Risks (compared with a “Heroku style” PaaS)
    • Although namespaces offer both isolation and some customization options, improper configuration or vulnerabilities in the NSaaS implementation or the namespace configuration can lead to security risks. It is crucial to ensure both that the service provider follows security best practices and that the namespace configuration doesn't open up unnecessary exposure, for example in terms of network traffic.
  4. Learning Curve (compared with a “Heroku style” PaaS)
    • For users unfamiliar with Kubernetes concepts, there might still be a learning curve associated with understanding namespaces and how to utilize NSaaS effectively. Although the Kubernetes CLI and concepts are more complex and require some extra learning from developers, they are considered more of a standard today and therefore mean less lock-in. It is also possible for a platform engineering team to implement fully automated CI/CD so that developers only need to know "git push", similar to the "cf push" popularized by Cloud Foundry (which came out of the team that originally built Heroku). The kubectl CLI, although more complex than the simple CLIs of other providers, offers a standard interface and more power to customize the deployment of applications into the namespace.

Implementation of Namespace-as-a-Service on APPUiO

For APPUiO, we have embraced Namespace-as-a-Service to provide our users with a seamless and efficient cloud experience. Here’s how we’ve implemented it:

  1. User-Friendly Interface
    • Our platform offers a user-friendly interface that abstracts the complexities of Kubernetes, allowing users to create and manage namespaces with just a few clicks.
  2. Automated Provisioning
    • We have automated the provisioning of namespaces, ensuring that users can instantly create namespaces without any delays. This automation extends to resource allocation and configuration management.
  3. Robust Security Measures
    • Security is a top priority at APPUiO. We have implemented stringent security measures to ensure isolation and protect user data. Our NSaaS implementation includes role-based access control (RBAC), network policies, and regular security audits. The tech stack behind APPUiO includes OpenShift, Isovalent Cilium Enterprise, Kyverno policies and our APPUiO agent that ensure security policies at all levels are adhered to.
  4. Scalability and Flexibility
    • Our platform is designed to scale with our users' needs. Whether you are a small startup or a large enterprise, APPUiO can handle your workloads efficiently. Users can easily scale their applications within their namespaces as their requirements grow.
  5. Comprehensive Support
    • We provide comprehensive support to help users get the most out of our NSaaS offering. Our documentation, tutorials, and support team are always available to assist users in navigating any challenges they might encounter.
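As an illustration of the RBAC part (all names below are placeholders), granting a user edit rights limited to a single namespace is a single RoleBinding against one of Kubernetes' built-in aggregated roles:

```yaml
# Hypothetical binding: "alice@example.com" may edit workloads
# in namespace "team-a", and nowhere else.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-editors       # placeholder
  namespace: team-a          # placeholder
subjects:
  - kind: User
    name: alice@example.com  # placeholder
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in aggregated role
  apiGroup: rbac.authorization.k8s.io
```

Because the binding lives inside the namespace, the `edit` ClusterRole only grants permissions within that namespace, which is the core of the NSaaS access model.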

Conclusion

Namespace-as-a-Service represents a significant advancement in cloud computing, simplifying the management of Kubernetes environments and enhancing scalability, security, and cost efficiency. At APPUiO, we are proud to offer a robust NSaaS solution that empowers our users to focus on what they do best—developing great applications. By leveraging the power of namespaces, we provide a flexible, scalable, and secure cloud environment that meets the diverse needs of our users.

Whether you’re new to Kubernetes or an experienced user, APPUiO’s Namespace-as-a-Service can help you achieve your cloud goals with ease and efficiency.

Markus Speth

Marketing, Communications, People

OpenShift Tech

VSHN Managed OpenShift: Upgrade to OpenShift version 4.15

17. Juli 2024

As we prepare to roll out upgrades to OpenShift v4.15 across all our customers' clusters, it is a good opportunity to look again at what was in the Red Hat OpenShift 4.15 release. It brought Kubernetes 1.28 and CRI-O 1.28, and was largely focused on small improvements in the core platform and enhancements to how OpenShift runs on underlying infrastructure, including bare metal and public cloud providers.

The Red Hat infographic highlights some of the key changes:

What’s New in Red Hat OpenShift 4.15 Infographic by Sunil Malagi

For our VSHN Managed OpenShift and APPUiO customers, we want to highlight the key changes in the release that are relevant for them.

Across all VSHN Managed OpenShift clusters – including APPUiO

Our summary of the highlights that apply across the board:

  • OpenShift 4.15 is based on Kubernetes 1.28 and CRI-O 1.28
  • Update to CoreDNS 1.11.1
  • There are some node enhancements (such as faster builds for unprivileged pods, and compatibility of multiple image repository mirroring objects)
  • The release also brings updated versions for the monitoring stack (Alertmanager to 0.26.0, kube-state-metrics to 2.10.1, node-exporter to 1.7.0, Prometheus to 2.48.0, Prometheus Adapter to 0.11.2, Prometheus Operator to 0.70.0, Thanos Querier to 0.32.5)
  • It also includes some additional improvements and fixes to the monitoring stack
  • There are some changes to the Bare-Metal Operator so that it now automatically powers off any host that is removed from the cluster
  • There are some platform fixes including some security related ones like securing the cluster metrics port using TLS
  • OLM (Operator Lifecycle Manager) v1 is being introduced, bringing three new life cycle classifications for cluster operators: Platform Aligned, for operators whose maintenance streams align with the OpenShift version; Platform Agnostic, for operators that use maintenance streams but don't need to align with the OpenShift version; and Rolling Stream, for operators which use a single stream of rolling updates.

On VSHN Managed OpenShift clusters with optional features enabled

The changes that might relate to some VSHN Managed OpenShift customers who have optional features enabled would include:

  • OpenShift Service Mesh 2.5 based on Istio 1.18 and Kiali 1.73
  • Enhancements to RHOS Pipelines
  • Machine API – Defining a VMware vSphere failure domain for a control plane machine set (Technology Preview)
  • Updates to hosted control planes within OpenShift Container Platform
  • Bare-Metal hardware provisioning fixes

Changes not relevant to VSHN customers

There are a number of network related changes in this release, but these are not relevant for VSHN managed clusters as these are mostly running Cilium. It is also interesting to note the deprecation of the OpenShift SDN network plugin, which means no new clusters can leverage that setup. Additionally, there are new features related to specific cloud providers (like Oracle Cloud Infrastructure) or specific hardware stacks (like IBM Z or IBM Power).

The changes to storage handling, and in particular storage appliances, are also not relevant for VSHN customers, as none of the storage features affect how we handle storage on cloud providers or on-prem.

Features in OpenShift open to customer PoCs before we enable them for all VSHN customers

We have an interesting customer PoC with Red Hat OpenShift Virtualization, a feature that continues to mature in OpenShift 4.15. We are excited to see the outcome of this PoC and to potentially make it available to all our customers looking to leverage VMs inside OpenShift. We know that, due to Broadcom's pricing changes, this is an area many companies and organizations are looking at. Moving from OpenShift running on vSphere to running on bare metal with VMs inside OpenShift is an exciting transformation, and we hope to bring an update on this in an upcoming separate blog post.

Likewise, we are open to customers who would like to explore leveraging OpenShift Serverless (now based on Knative 1.11 in OpenShift 4.15), or the new OpenShift Distributed Tracing Platform, now at version 3.2.1 in the OpenShift 4.15 release (this version includes both the new platform based on Tempo and the now deprecated version based on Jaeger). This can also be used together with the Red Hat OpenTelemetry Collector in OpenShift 4.15. There are also new versions of OpenShift Developer Hub (based on Backstage), OpenShift Dev Spaces, and OpenShift Local. These are all interesting tools, part of the Red Hat OpenShift Container Platform.

If any of the various platform features are interesting for any existing or new VSHN customers, we would encourage you to reach out so we can discuss potentially doing a PoC together.

Summary

Overall, OpenShift 4.15 brings lots of small improvements but no major groundbreaking features from the perspective of the clusters run by VSHN customers. For those interested in the nitty gritty details of the OpenShift 4.15 release, we refer you to the detailed Red Hat release notes, which go through everything in detail.

VSHN customers will soon be notified about the upgrades to their specific clusters.

Interested in VSHN Managed OpenShift?

Head over to our product page VSHN Managed OpenShift to learn more about how VSHN can help you operate your own OpenShift cluster, including setup, 24/7 operation, monitoring, backup and maintenance. Hosted in a public cloud of your choice or on-premises in your own data center.

Markus Speth

Marketing, Communications, People

Project Syn Tech

Rewriting a Python Library in Rust

20. März 2024

Earlier this month I presented at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.

We tackled this project to address the longer-than-15-minute CI pipeline runs we needed to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce the CI pipeline runs to under 5 minutes.

The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.

I’m also happy to report that this talk was picked up by the popular newsletter „This Week in Rust“ for its 538th edition! You can find the recording of the talk, courtesy of the Rust Zürich meetup group organizers, on YouTube.

Simon Gerber

Simon Gerber is a DevOps engineer at VSHN.

Event Tech

Watch the Recording of „How to Keep Container Operations Steady and Cost-Effective in 2024“

1. Feb. 2024

Yesterday, the "How to Keep Container Operations Steady and Cost-Effective in 2024" event took place on LinkedIn Live; for those who couldn't attend live, you can watch the recording here.

In a rapidly evolving tech landscape, staying ahead of the curve is crucial. This event equips you with the knowledge and tools needed to navigate container operations effectively while keeping costs in check.

In this session, we’ll explore best practices, industry insights, and practical tips to ensure your containerized applications run smoothly without breaking the bank.

We will cover:

  • Current Trends: Discover the latest trends shaping container operations in 2024.
  • Operational Stability: Learn strategies to keep your containerized applications running seamlessly.
  • Cost-Effective Practices: Explore tips to optimize costs without compromising performance.
  • Industry Insights: Gain valuable insights from real-world experiences and success stories.

Schedule:

17:30 – 17:35 – Welcome and Opening Remarks
17:35 – 17:50 – Navigating the Container Landscape: 2024 Trends & Insights
17:50 – 17:55 – VSHN's Impact: A Spotlight on Our Market Presence
17:55 – 18:10 – Guide to Ensuring Steady Operations in Containerized Environments
18:10 – 18:25 – Optimizing Costs without Compromising Performance: A Practical Guide
18:25 – 18:30 – Taking Action: Implementing Best Practices for Container Operations
18:30 – Q&A

Don’t miss out on this opportunity to set a solid foundation for your containerized applications in 2024.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master's degree in Information Technology from the University of Liverpool.

Tech

Composition Functions in Production

9. Jan. 2024

(This post was originally published on the Crossplane blog.)

Crossplane has recently celebrated its fifth birthday, but at VSHN, we’ve been using it in production for almost three years now. In particular, it has become a crucial component of one of our most popular products. We’ve invested a lot of time and effort on Crossplane, to the point that we’ve developed (and open-sourced) our own custom modules for various technologies and cloud providers, such as Exoscale, cloudscale.ch, or MinIO.

In this blog post, we will provide an introduction to a relatively new feature of Crossplane called Composition Functions, and show how the VSHN team uses it in a very specific product: the VSHN Application Catalog, also known as VSHN AppCat.

Crossplane Compositions

To understand Composition Functions, we need to understand what standard Crossplane Compositions are in the first place. Compositions, available in Crossplane since version 0.10.0, can be understood as templates that can be applied to Kubernetes clusters to modify their configuration. What sets them apart from other template technologies (such as Kustomize, OpenShift Template objects, or Helm charts) is their capacity to perform complex transformations and patch fields on Kubernetes manifests, following more advanced rules and with better reusability and maintainability. Crossplane compositions are usually referred to as "Patch and Transform" compositions, or "PnT" for short.

As powerful as standard Crossplane Compositions are, they have some limitations, which can be summarized in a very geeky yet technically appropriate phrase: they are not „Turing-complete“.

  • Compositions don’t support conditions, meaning that the transformations they provide are applied on an "all or nothing" basis.
  • They also don’t support loops, which means that you cannot apply transformations iteratively.
  • Finally, advanced operations are not supported either, like checking for statuses in other systems, or performing dynamic data lookups at runtime.

To address these limitations, Crossplane 1.11 introduced a new Alpha feature called "Composition Functions". Note that, as of this writing, Composition Functions are Beta in Crossplane 1.14.

Composition Functions

Composition Functions complement, and in some cases entirely replace, Crossplane "PnT" Compositions. Most importantly, DevOps engineers can create Composition Functions in any programming language; this is because they run as standard OCI containers, following a specific set of interface requirements. The result of applying a Composition Function is an updated composite resource applied to the Kubernetes cluster.

Let’s look at an elementary „Hello World“ example of a Composition Function.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-function
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket

The example above shows a standard Crossplane Composition with a new field: "pipeline", which specifies an array of steps, each referencing a function by name.

As stated previously, the function itself can be written in any programming language, like Go.

func (f *Function) RunFunction(_ context.Context, req *fnv1beta1.RunFunctionRequest) (*fnv1beta1.RunFunctionResponse, error) {
    rsp := response.To(req, response.DefaultTTL)
    response.Normal(rsp, "Hello world!")
    return rsp, nil
}

The example above, borrowed from the official documentation, does just one thing: it builds a response from the incoming request and attaches a "Hello world!" result message before returning it to the caller. Needless to say, this example is for illustration purposes only, lacking error checking, logging, security, and more, and should not be used in production. Developers use the Crossplane CLI to create, test, build, and push functions.

Here are a few things to keep in mind when working with Composition Functions:

  • They run in order, as specified in the "pipeline" array of the Composition object, from top to bottom.
  • The output of the previous Composition Function is used as input for the next one.
  • They can be combined with standard "PnT" Compositions by using the function-patch-and-transform function, allowing you to reuse your previous investment in standard Crossplane Compositions.
  • In the Alpha release, if you combined "PnT" Compositions with Composition Functions, the "PnT" Compositions ran first, and the output of the last one was fed to the first function; since the latest release, this is no longer the case, and "PnT" Compositions can now run at any step of the pipeline.
  • Composition Functions must be called using RunFunctionRequest objects, and return RunFunctionResponse objects.
  • In the Alpha release, these two objects were represented by a now deprecated "FunctionIO" structure in YAML format.
  • RunFunctionRequest and RunFunctionResponse objects contain a full and coherent "desired state" for your resources. This means that if an object is not explicitly specified in a request payload, it will be deleted: developers must pass the full desired state of their resources at every invocation.
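The ordering and "full desired state" rules above can be sketched in plain Go. This is an illustration of the pipeline semantics only: real functions exchange RunFunctionRequest and RunFunctionResponse protobuf messages, not plain maps, and the names below are invented for the sketch.

```go
package main

import "fmt"

// State stands in for the full desired state of composed resources.
type State map[string]string

// Step stands in for one Composition Function in the pipeline.
type Step func(State) State

// runPipeline feeds the output of each step into the next, top to bottom.
func runPipeline(steps []Step, desired State) State {
	for _, step := range steps {
		desired = step(desired)
	}
	return desired
}

func main() {
	// First step: add a bucket to the desired state.
	addBucket := func(s State) State {
		s["bucket"] = "provisioned"
		return s
	}
	// Second step: return a state containing only the bucket. Because each
	// response is treated as the complete desired state, everything the
	// step leaves out (here, the "legacy" entry) is effectively deleted.
	keepOnlyBucket := func(s State) State {
		return State{"bucket": s["bucket"]}
	}
	out := runPipeline([]Step{addBucket, keepOnlyBucket}, State{"legacy": "resource"})
	fmt.Println(out)
}
```

Running the sketch shows that only the bucket survives: the "legacy" resource is gone because the second step did not include it in its output.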

Practical Example: VSHN AppCat

Let’s look at a real-world use case for Crossplane Composition Functions: the VSHN Application Catalog, also known as AppCat. The AppCat is an application marketplace allowing DevOps engineers to self-provision different kinds of middleware products, such as databases, message queues, or object storage buckets, on various cloud providers. These products are managed by VSHN, which frees application developers from a considerable burden of maintenance and oversight.

Standard Crossplane "PnT" Compositions proved limited very early in the development of VSHN AppCat, so we started using Composition Functions as soon as they became available. They have allowed us to do the following:

  • Composition Functions enable complex tasks, such as verifying current deployment values and making decisions before deploying services.
  • They can drive the deployment of services involving Helm charts, modifying values on the go as required by our customers, their selected cloud provider, and other parameters.
  • Conditionals allow us to script complex scenarios, involving various environmental decisions, and to reuse that knowledge.
  • Thanks to Composition Functions, the VSHN team can generalize many activities, like backup handling, automated maintenance, etc.

All things considered, it is difficult to overstate the many benefits that Composition Functions have brought to our workflow and to our VSHN AppCat product.

Learnings of the Alpha Version

We’ve learned a lot while experimenting with the Alpha version of Composition Functions, and we’ve documented our findings for everyone to learn from our mistakes.

  • Running Composition Functions in Red Hat OpenShift used to be impossible in Alpha because OpenShift uses crun, but this issue has been solved in the Beta release.
  • In particular, with the Alpha version of Composition Functions we experienced slow execution speeds under crun, but this is no longer the case.
  • We learned the hard way that resources missing from function requests were actually deleted!

Our experience with Composition Functions led us to build our own function runner. This feature uses another capability of Crossplane, which allows functions to specify their runner in the Composition definition:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
[...]
  functions:
  - name: my-function
    type: Container
    container:
      image: my-function
      runner:
        endpoint: grpc-server:9547

Functions run directly on the gRPC server, which, for security reasons, must run as a sidecar to the Crossplane pod. Just like everything we do at VSHN, the Composition Function gRPC server runner (as well as its associated webhook and all of its code) is open source, and you can find it on our GitHub. With the Composition Functions Beta, we replaced the custom gRPC logic with the Go SDK. To improve the developer experience, we have created a proxy and enabled the gRPC server to run locally: the proxy runs in kind and redirects to the local gRPC server, which enables us to debug the code and test changes more efficiently.

Moving to Beta

We recently finished migrating our infrastructure to the most recent Beta version of Composition Functions, released in Crossplane 1.14, and we have been able to do that without incident. This release included various bits and pieces such as Function Pipelines, an ad-hoc gRPC server to execute functions in memory, and a Function CRD to deploy them directly to clusters.

We are also migrating all of our standard "PnT" Crossplane Compositions to pure Composition Functions as we speak, thanks to the functions-go-sdk project, which has proven very helpful, even if we are missing typed objects. Managing the same objects with both "PnT" Compositions and Composition Functions increases complexity dramatically, as it can be difficult to determine where an actual change happens.

Conclusion

In this blog post, we have seen how Crossplane Composition Functions compare to standard "PnT" Crossplane Compositions. We have provided a short example, highlighting their major characteristics and caveats, and we have outlined a real-world use case for them, specifically VSHN’s Application Catalog (or AppCat) product.

Crossplane Composition Functions provide an unprecedented level of flexibility and power to DevOps engineers. They enable the creation of complex transformations, with all the advantages of an Infrastructure as Code approach, and the flexibility of using the preferred programming language of each team.

Check out my talk at Control Plane Day with Crossplane, where I walk you through Composition Functions in Production in 15 minutes.

Tobias Brunner

Tobias Brunner has been working in IT for more than 20 years, and in the Internet business for almost 15. New technologies are there to be tried out and written about.

APPUiO Tech

"Composition Functions in Production" by Tobias Brunner at the Control Plane Day with Crossplane

17. Oct. 2023

VSHN has been using Crossplane’s Composition Functions in production since their release. In this talk, Tobias Brunner, CTO of VSHN AG, explains what Composition Functions are and how they power crucial parts of the VSHN Application Catalog, or AppCat. He also introduces VSHN’s custom open-source gRPC server, which powers the execution of Composition Functions. Learn how to leverage Composition Functions to spice up your Compositions!

Adrian Kosmaczewski


APPUiO Tech

New OpenShift 4.13 Features for APPUiO Users

5. Sep. 2023

We have just upgraded our APPUiO Cloud clusters from version 4.11 to version 4.13 of Red Hat OpenShift, and there are some interesting new features for our APPUiO Cloud and APPUiO Managed users we would like to share with you.

Kubernetes Beta APIs Removal

OpenShift 4.12 and 4.13 updated their Kubernetes versions to 1.25 and 1.26 respectively, which removed various deprecated Beta APIs. If you are using objects with the API versions listed below (removed version → replacement), please make sure to migrate your deployments accordingly.

  • CronJob: batch/v1beta1 → batch/v1
  • EndpointSlice: discovery.k8s.io/v1beta1 → discovery.k8s.io/v1
  • Event: events.k8s.io/v1beta1 → events.k8s.io/v1
  • FlowSchema: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • HorizontalPodAutoscaler: autoscaling/v2beta1 and autoscaling/v2beta2 → autoscaling/v2
  • PodDisruptionBudget: policy/v1beta1 → policy/v1
  • PodSecurityPolicy: policy/v1beta1 → Pod Security Admission
  • PriorityLevelConfiguration: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • RuntimeClass: node.k8s.io/v1beta1 → node.k8s.io/v1
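In most cases the migration only requires switching the apiVersion of your manifests. As an illustration, here is a generic CronJob (the object name and spec are placeholders, not taken from an actual APPUiO workload):

```yaml
# Before (removed in Kubernetes 1.25 / OpenShift 4.12):
#   apiVersion: batch/v1beta1
# After, using the stable API:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-job
spec:
  schedule: "0 3 * * *"   # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            command: ["sh", "-c", "echo hello"]
```

Some removals, such as PodSecurityPolicy, have no drop-in replacement version and require moving to a different mechanism (in that case, Pod Security Admission).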

As a reminder, the next minor revision of Red Hat OpenShift will update Kubernetes to version 1.27.

Web Console

APPUiO users will discover a neat new feature in the web console: resource quota alerts are now displayed on the Topology screen whenever a resource reaches its usage limits. The alert label link takes you directly to the corresponding ResourceQuotas list page.

Have any questions or comments about OpenShift 4.13? Contact us!

Aarno Aukia

Aarno is a co-founder of VSHN AG and, as CTO, is responsible for technical enthusiasm.

Tech

VSHN’s Response to Zenbleed CVE-2023-20593

25. July 2023

Yesterday evening, on Monday, July 24th, 2023, at around 21:15 CEST / 12:15 PDT, our security team received a notification about a critical security vulnerability called "Zenbleed" potentially affecting the cloud providers that VSHN’s customers’ systems run on.

This blog post provides details about Zenbleed and the steps taken to mitigate its risks.

What is Zenbleed?

Zenbleed, also known as CVE-2023-20593, is a speculative execution bug discovered by Google, related to but somewhat different from side channel bugs like Meltdown or Spectre. It is a vulnerability affecting AMD processors based on the Zen2 microarchitecture, ranging from AMD’s EPYC datacenter processors to the Ryzen 3000 CPUs used in desktop & laptop computers. This flaw can be exploited to steal sensitive data stored in the CPU, including encryption keys and login credentials.

VSHN’s Response

VSHN immediately set up a task force to discuss the issue, bringing the team of one of our main cloud providers (cloudscale.ch) into a call to determine the options; the ideas contemplated included isolating VSHN customers on dedicated nodes and patching the affected systems directly.

At around 22:00 CEST, the cloud provider decided after a fruitful discussion with the task force that the best approach was to implement a microcode update. Since Zenbleed is caused by a bug in CPU hardware, the only possible direct fix (apart from the replacement of the CPU) consists of updating the CPU microcode. Such updates can be applied by updating the BIOS on affected systems, or applying an operating system kernel update, like the recently released new Linux kernel version that addresses this vulnerability.

Zenbleed isn’t limited to just one cloud provider, and may affect customers operating their own infrastructure as well. We acknowledged that addressing this vulnerability is primarily a responsibility of the cloud providers, as VSHN doesn’t own any infrastructure that could directly be affected.

The VSHN task force handed monitoring over to VSHN Canada, who tested the update as it rolled out to production systems and stayed in close contact to ensure there were no QoS degradations after the microcode update.

Aftermath

cloudscale.ch successfully finished its work at 01:34 CEST / 16:34 PDT. All VSHN systems running on that provider have been patched accordingly, and the tests carried out show that this specific vulnerability has been fixed as required. VSHN Canada confirmed that all systems were running without any problems.

We will continue to monitor this situation and to inform our customers accordingly. All impacted customers will be contacted by VSHN. Please do not hesitate to contact us for more information.

Adrian Kosmaczewski


Allgemein Tech

Get the "DevOps in Switzerland 2023" Report

12. May 2023

We are delighted to present the fourth edition of our report "DevOps in Switzerland"!

From February to April 2022 we conducted a survey to learn how Swiss companies implement and apply DevOps principles.

We have summarized the results in a PDF file (available in English only), and as in the previous edition, the first pages give a short summary of our findings.

You can download the report here. Enjoy reading, and we look forward to your feedback!

Adrian Kosmaczewski


Kanada Tech

VSHN Canada Hackday: A Tale of Tech Triumphs and Tasty Treats

24. March 2023

The VSHN Canada Hackday turned into an epic two-day adventure, where excitement and productivity went hand in hand. Mätthu, Bigli, and Jay, our stellar team members, joined forces to level up VSHN as a company and expand their skill sets. In this blog post, we’re ecstatic to share the highlights and unforgettable moments from our very own Hackday event.

🏆 Notable Achievements

1️⃣ Revamping Backoffice Tools for VSHN Canada

Mätthu dove deep into several pressing matters, including:

  • Time tracking software that feels like a relic from the 2000s. With the Odoo 16 Project underway, we explored its impressive features and found a sleek solution for HR tasks like time and holiday tracking and expenses management. Now we just need to integrate it as a service for VSHN Canada.
  • Aligning the working environments of VSHN Switzerland and Canada. Although not identical, we documented the similarities and differences in our handbook to provide a seamless experience.
  • Tidying up our document storage in Vancouver. Previously scattered across Google Cloud and Nextcloud, a cleanup session finally brought order to the chaos.

📖 Documentation available in the Handbook.

2️⃣ Mastering SKS GitLab Runners

Bigli and Jay teamed up to craft fully managed SKS GitLab runners using Project Syn, aiming to automate GitLab CI processes and eliminate manual installation and updates. This collaboration also served as an invaluable learning experience for Jay, who delved into Project Syn’s architecture and VSHN’s inner workings. Hackday milestones included:

  • Synthesizing the GitLab-runner cluster
  • Updating the cluster to the latest supported version
  • Scheduling cluster maintenance during maintenance windows
  • Developing a component for the GitLab-runner
  • Implementing proper monitoring, time permitting

📖 Documentation available on our wiki.

🍻 Festive Fun

On Hackday’s opening day, we treated ourselves to a team outing at "Batch", our go-to local haunt nestled in Vancouver’s scenic harbor. Over unique beers and animated chatter, we toasted to our first-ever Canadian Hackday.

🎉 Wrapping Up

VSHN Canada’s Hackday was an exhilarating mix of productivity, learning, and amusement. Our team banded together to confront challenges, develop professionally, and forge lasting memories. We can hardly wait for future Hackday events and the continued growth of VSHN Canada and VSHN.

Jay Sim

Jay Sim is a DevOps engineer at VSHN Canada.
