Project Syn Tech

Rewriting a Python Library in Rust

Mar 20, 2024

Earlier this month I presented at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.

We tackled this project to address the longer-than-15-minute CI pipeline runs needed to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce the CI pipeline runs to under 5 minutes.

The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.

I’m also happy to report that this talk was picked up by the popular newsletter "This Week in Rust" for its 538th edition! You can find the recording of the talk, courtesy of the Rust Zürich meetup group organizers, on YouTube.

Simon Gerber

Simon Gerber is a DevOps engineer at VSHN.

Contact us

Our team of experts is ready for you. In case of emergency, also 24/7.

Contact
Project Syn

Reducing the Cost of Kubernetes Management

Mar 29, 2023

Back in 2019 we started working on Project Syn to streamline, standardize, and reduce the work required from our teams to manage the hundreds of Kubernetes clusters of our customers.

Those management tasks ranged from regular backups (thanks to our K8up operator) to synchronizing security settings, to deploying applications, and storing secrets securely across a huge number of clusters. Doing so manually was not only error-prone, but it also increased the risk of security issues, not to mention being a source of stress for our teams.

We were among the first companies in Europe that faced such a challenge, and Project Syn is now a mature component of our workflows and infrastructure. Thanks to its GitOps-based architecture, Project Syn can be extended via components to deploy and support a variety of tools and systems: from Kyverno policies, to Argo CD pipelines, to storage systems such as CockroachDB or Minio, Project Syn can be extended in almost any direction you can imagine. You can see the almost 100 components available for Project Syn at the Project Syn Commodore Component Hub; all of them are battle tested and free to use!

And even better, Project Syn is compatible with all Kubernetes distributions, including Red Hat OpenShift, SUSE Rancher, AKS, EKS, GKE, ROSA, and more.

Check out this brief introduction to Project Syn for technical teams on our YouTube channel VSHN.TV. As more and more companies run applications on an ever-increasing number of clusters, we’re confident Project Syn can help you manage your Kubernetes deployments and settings from a central location, at lower cost and more efficiently. And if you need help with those clusters or with Project Syn, don’t hesitate to contact us! We will be thrilled to help you.

Oksana Horobinska

Oksana is a Business Development Specialist at VSHN.

Project Syn Tech

Keeping Things Up to Date with Renovate

Jun 28, 2022

Our customers trust us with their most precious resource: their information systems. Our job is to keep the underlying systems running, updated, and most importantly, secure.

Project Syn with its Commodore Components is one of the primary weapons in our arsenal to configure and thus protect those systems. Thanks to its GitOps approach, we can ensure that all Kubernetes clusters are always running the latest and (hopefully) most secure version possible.

But just like any other software package, Project Syn brings its complexity: we must keep it safe and sound, which means watching over its container images, its Helm charts, and all of the Commodore Components we use every day.

As you can imagine, juggling so many different software packages is a considerable task; now, think about all of their upstream dependencies (most of them container images and Helm charts, but Go and Python are part of the mix as well). The complexity of the task increases exponentially.

How do we cope with this? Well, as usual, by standing on the shoulders of giants. In this case: Renovate.

Renovate has been created to manage this complexity, whether container images, Helm charts, or upstream dependencies. But understandably enough, Renovate per se does not know anything about Commodore Components (at least not yet!), and in particular, it does not know about the Project Syn configuration hierarchy and how to find dependencies within that hierarchy.

So, what’s an Open Source developer to do? We forked Renovate, of course, and adapted it to our needs. How?

  1. We added the Project Syn configuration hierarchy as a new Manager.
  2. We reused the existing datasource to detect new versions of our Commodore Components.

Then we configured our own Renovate fork on all the repositories holding our source code and started getting notified via pull requests whenever there was a new dependency version. Voilà!
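In upstream Renovate terms, enabling the fork on a repository looks like an ordinary renovate.json. The "commodore" manager name below is an assumption (that manager only exists in our fork), while "extends" and "enabledManagers" are standard Renovate configuration options:

{
  "extends": ["config:base"],
  "enabledManagers": ["commodore", "helmv3", "dockerfile"]
}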

With this approach, we have been able to automate much work and avoid using outdated software by automatically being notified of new versions. No more forgotten updates!

We also decided to use "golden files" to test our Commodore Components; this, in turn, meant that PRs created by Renovate could not be merged when those tests failed. For those cases, we also taught Renovate how to update the golden files as needed.

The pull request "Update dependency ghcr.io/appuio/control-api to v0.8.1 – autoclosed #29" is a live example of this mechanism in action, and you’re most welcome to check it out.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Project Syn Tech

The New Commodore Components Hub

Jul 28, 2021

We’re very happy to announce a new member of the Project Syn family: the new Commodore Components Hub. This is the new central point of reference and information for all Commodore components available on GitHub.

Commodore Components Hub

Not only does the Commodore Components Hub list all existing components on GitHub, it also automatically imports and indexes the documentation of each and every one of them, written in AsciiDoc and published as an Antora documentation site. This makes it very easy to find the perfect component that suits your needs.

The source code used to generate the Commodore Components Hub was born during our recent VSHN HackDay; it’s written in Python and 100% open source (of course!). Check it out on GitHub.

Get your Component on the Hub!

Adding your own Commodore Component to the Hub is very easy: just add the commodore-component topic to your project on GitHub, and voilà! The Commodore Components Hub is rebuilt every hour from 6 AM to 7 PM (CET).

We recommend that you write the documentation of your component in AsciiDoc in the docs folder of your component. This will ensure that users will be able to find your component and, most importantly, also learn how to use it properly.

We look forward to featuring your Commodore Components on the Hub!

Tobias Brunner

Tobias Brunner has been working in IT for more than 20 years, and for almost 15 years in the Internet business. He loves trying out new technologies and reporting on them.

Presse Project Syn Tech

K8up Version 1.0 Released

Mar 16, 2021

We are thrilled to announce the general availability of K8up version 1.0!

New K8up Logo

K8up (pronounced /keɪtæpp/ or simply "ketchup") is a Kubernetes Operator distributed via a Helm chart, compatible with OpenShift and plain Kubernetes. It allows cluster operators to back up PVCs, perform on-demand backups, or schedule regular backups. K8up is written in Go and is an Open Source project hosted on GitHub.

This new version is a full rewrite of the operator, based on the Operator SDK. This provided us with a more stable code base, with extended testing, paving the way for future improvements and new features.

Speaking of which, some of the new features in version 1.0 are:

  • Support for Kubernetes 1.20.
  • K8up status printed when using kubectl get or oc get.
  • Run multiple replicas of K8up for high availability.
  • Specify Pod resources (CPU, memory, etc.) from different levels.
  • New random schedules (e.g. @daily-random) to help distribute the load on the cluster.
  • New end-to-end and integration tests.
  • Docker image mirroring in Docker Hub and Quay.
  • More and better documentation, including a new logo!
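Two of the features above, random schedules and resource specification, map directly onto the Schedule resource. The sketch below is illustrative only: the backup.appuio.ch/v1alpha1 API group and the exact field names are assumptions based on the K8up 1.x era, so check the K8up documentation for the authoritative schema.

apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-example
spec:
  backup:
    # Random daily schedule to spread backup load across the cluster
    schedule: '@daily-random'
    resources:
      requests:
        cpu: 100m
        memory: 128Mi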

Current K8up users: please check the upgrade guide with all the information you need to start using the latest and greatest version of K8up.

Would you like to know more, or to contribute to the project? Check out the K8up GitHub project and back up your clusters!

Christian Cremer

Christian Cremer is a DevOps engineer at VSHN.

Allgemein Project Syn

Crossplane Community Day December 2020 Recap

Dec 16, 2020

Next-gen DevOps with Crossplane by Tobias Brunner

Lightning talk at the Crossplane Community Day December 2020

I attended the Crossplane Community Day (actually evening for us Europeans) on December 15th, 2020, which was titled "Modernizing with an API-centric Control Plane".

In my lightning talk "Crossplane as a cornerstone in a next-gen hosted DevOps platform" I introduced the project we’re currently working on with Swisscom and revealed the open sourcing of our Crossplane Open Service Broker API integration. This Open Service Broker (OSB) integration with Crossplane allows any OSB-capable client infrastructure, such as Cloud Foundry, to consume Crossplane objects, and makes it possible to offer any kind of service via the many Crossplane providers. More about what we do with this integration will follow in an upcoming blog post. The application is currently in a PoC state and available on GitHub at https://github.com/vshn/crossplane-service-broker-poc. We’re actively working on the production implementation in https://github.com/vshn/crossplane-service-broker.

Crossplane was released in version 1.0 at this event. I had been eagerly awaiting this release for about a year, ever since we first discovered Crossplane and experimented with it. Congratulations to the Crossplane community and contributors; I’m really looking forward to seeing what happens with Crossplane in 2021 and beyond, and to what we’ll do with it at VSHN. Exciting times ahead!

There were many great and very interesting talks to listen to. The recording of the full event will be available in the coming days; check the social media channels to get notified about it.

During the event I was active in the Crossplane Slack channel and was able to capture some very interesting questions and answers around Crossplane and the wider ecosystem.

Here are my highlights, quoted:

Question

Why would one want to use Crossplane rather than the hyperscaler’s operators, like AWS ACK and Azure Service operator directly?

Answer

Good question – I think the key reasons you might choose Crossplane boil down to the XRM and Composition.
XRM is the Crossplane Resource Model – if you’re using more than one cloud, our CRs work the same way across them all. Similar patterns.
(Or even if you’re using providers for things like SQL users and database, or Helm charts.)
Composition is a layer we provide on top of XRM compliant resources that lets you define your own APIs (your own CRs) without writing code. So you can build your own classes of service and opinionated APIs atop those raw low level APIs.

Also bears mentioning that we are working with them on code generating crossplane controllers from the same codegen pipeline, so below a certain level of abstraction we will be sharing code for interacting with provider SDKs.

https://blog.crossplane.io/accelerating-crossplane-provider-coverage-with-ack-and-azure-code-generation-towards-100-percent-coverage-of-all-cloud-services/

Question

How about advantages if any for on-prem private clouds?

Answer

The answer is about the same there, as compared to something that might operate databases etc on-prem. Admittedly though our provider support for on-prem is lighter on the ground. Definitely appreciate contributions there!

The other thing I’ll add here, is Crossplane can give you a cloud-like provisioning experience in your on-premises environment which can be a big win for developers.

Question

Is it correct that external resources are not namespaced in Crossplane? If so, what is the rationale? If there’s a design doc that covers it, that would be great

Answer

With the whole separation of concerns thing we treat managed resources (our CRs that represent ERs) as a platform / infra concern, so they’re cluster scoped like a node or a PV. The claims that represent them are namespaced. This is kind of handy in a few ways:

  • If you imagine an API server that’s dedicated to Crossplane, the platform team can view all the managed resources in one big global view, but see the claims that represent those resources broken down by namespace (i.e. often by team).
  • Sometimes we don’t want to offer a claim for an XR – e.g. a VPC XR is probably only something the platform operators want to control.
  • The big one – sometimes we want cross resource refs that would violate namespace boundaries. Imagine for example the platform folks create a VPC XR, and folks making claims down in namespaces can make a claim for a database that they want to be connected to that VPC. If the VPC was off in the “platform-infra” namespace or whatever they’d need to reference it across namespaces.

An alternative answer is that we designed for a world where we can partition concerns just like PV/PVC

Question

Re the current Terraform talk – is the idea to use Terraform providers to generate Crossplane CRDs and controllers that run independent of Terraform… or is the idea to proxy the CRDs through an in-cluster Terraform controller?

Answer

More the former. Terraform is actually a couple of processes running together – each provider is a process that has a gRPC API, and the terraform CLI tool sits in front of that. We run those provider binaries, but we put a Kubernetes controller in (i.e. a Crossplane provider) in front of them instead of the terraform CLI.

Furthermore I discovered some new tools:

  • Kubernetes External Secrets: "Kubernetes External Secrets allows you to use external secret management systems, like AWS Secrets Manager or HashiCorp Vault, to securely add secrets in Kubernetes"
  • CDK for Kubernetes: "Define Kubernetes apps and components using familiar languages", with integration for Crossplane discussed in Crossplane issue 1955.

Tobias Brunner


Allgemein Event Project Syn

Tobias Brunner holds Lightning Talk about Project Syn at Crossplane Community Day 2020

Nov 25, 2020

The Future of Cloud Engineering is a Universal API

Tuesday, December 15, 2020, 10:00 am PST (which is 7:00 pm CET) • Virtual Event


Tobias will be sharing the stage with Bassam Tabbara (CEO, Upbound), Kelsey Hightower (Developer Advocate, Google), Joe Beda (Principal Engineer, VMware), and many more.

Tobias Brunner’s Lightning Talk about how VSHN uses Crossplane

When? On Tuesday, December 15, 2020, 12:50 pm PST (which is 9:50 pm CET)
What? Lightning Talk: Crossplane as a cornerstone in a next-gen hosted DevOps platform
Who? Tobias Brunner, Head of DevOps & Partner of VSHN – The DevOps Company

What is Crossplane Community Day

Join us for the second Crossplane Community Day to hear about how using a Universal Cloud API can help you eliminate infrastructure bottlenecks, avoid security pitfalls, and deliver apps faster. Come hear from industry experts on how the Kubernetes-style API is modernizing application and infrastructure management beyond traditional infrastructure as code approaches.

Speakers include:

  • Bassam Tabbara, CEO, Upbound
  • Kelsey Hightower, Developer Advocate, Google
  • Joe Beda, Principal Engineer, VMware
  • Brendan Burns, Distinguished Engineer, Microsoft
  • Jay Pipes, Principal Open Source Engineer, Amazon
  • Daniel Mangum, Software Engineer, Upbound

Partnership VSHN & Crossplane

VSHN has always been cloud agnostic and will further enhance this paradigm by partnering with Crossplane, "the open source multicloud control plane". We are using Crossplane, for example, as a cornerstone in our next-gen hosted DevOps product and Project Syn.
Crossplane is an extensible open-source platform that adds declarative cloud service provisioning and management to the Kubernetes API, with excellent support for GitOps-style continuous deployment of cloud-native apps, and it is at the heart of VSHN’s next-gen offering.
Project Syn is designed to run on all Kubernetes distributions and clouds. It’s prepared to support all the specific features of any given cloud and Kubernetes distribution by abstracting the specifics. This means Project Syn will run on OpenShift with APPUiO.ch, Rancher Kubernetes, and all managed Kubernetes offerings. Support for even more Kubernetes flavors and clouds is added on demand.
By leveraging Crossplane, the user of Project Syn can specify the backend services needed in a completely cloud-independent way. Provisioning of these services happens fully automatically, handled by the tooling in the optimal way.
You can find out more about the VSHN & Crossplane partnership here and on Netzwoche.

About Crossplane

Crossplane is an open source multicloud control plane to manage your cloud-native applications and infrastructure across environments, clusters, regions and clouds. It enables provisioning and full-lifecycle management of applications and managed services from your choice of cloud using kubectl. Crossplane can be installed into an existing Kubernetes cluster to add managed service provisioning or deployed as a dedicated control plane for multi-cluster management and workload scheduling. Crossplane enables the community to build and publish Stacks to add more clouds and cloud services to Crossplane with support for out-of-tree extensibility and independent release schedules. Crossplane includes Stacks for GCP, AWS, and Azure today.

About VSHN – The DevOps Company

VSHN (pronounced ˈvɪʒn like “vision”) is Switzerland’s leading DevOps, Docker, Kubernetes, OpenShift, and 24/7 cloud operations partner.
VSHN was founded with the intention to fundamentally shake up the hosting market. As a lean startup, we have focused on operating IT platforms through automation, agility, and a continuous improvement process. Completely location-independent and without our own hardware, we operate extensive applications, agile and 24/7, on any infrastructure, following the DevOps principle, so that software developers can concentrate on their business and IT operations are relieved.

Markus Speth

Markus is VSHN's CEO and one of the General Managers.

Allgemein Project Syn

New introduction video to Project Syn

Oct 30, 2020

Tobias Brunner gave a brand-new introduction to #ProjectSyn at the Cloud Native Bern Meetup on Oct 29, 2020.

Watch the recording and let us know what you think.

Markus Speth


Allgemein Event Project Syn

Project Syn Is Nominated for the DINAcon Award 2020

Oct 22, 2020

DINAcon 2020

DINAcon, the conference for digital sustainability, takes place virtually this year via BigBlueButton. With inspiring keynote inputs and selected sessions, the conference brings together opinion leaders from politics, business, the public sector, and the digitalization scene to discuss how digital transformation can be implemented with sustainable added value for society and the environment.
DINAcon is organized by the Research Center for Digital Sustainability of the University of Bern and is supported by the location partner Welle7 Workspace and numerous partner organizations.

DINAcon Awards on Friday, October 23, 2020

Every year at the virtual DINAcon conference, the DINAcon Awards honor open projects by communities, companies, public administrations, organizations, and individuals. Participation is free of charge, and projects from all over Europe are eligible. The winner of the DINAcon Business Award 2020 is automatically nominated for the "Digital Economy Awards" of swissICT.
The organizers are delighted to present the DINAcon Awards to 6 great projects even in this extraordinary year 2020.
The award ceremony takes place virtually via BigBlueButton on Friday, October 23, starting at 3 pm.

Project Syn Is on the Award Shortlist

Our Project Syn is one of six projects shortlisted for the DINAcon Awards 2020, in the Newcomer category.
Keep your fingers crossed for us and watch our preview video with Tobias Brunner:

Attend the Virtual DINAcon 2020 for Free

If you’d like to attend spontaneously, you can order a free ticket.
Have fun at DINAcon and see you soon!

Markus Speth

Markus is VSHN's CEO and one of the General Managers.

Project Syn Tech

Second Beta Release of Project Syn Tools

Jul 23, 2020

Without further ado, we’re announcing release 0.2 of the Project Syn tools.
Since the first public release in mid-March this year (read more about it in First Pre-Release of Project Syn Tools) we have used the tools on a daily basis, in particular for the development of our new product "VSHN Syn Support". And of course we have incorporated all of that experience into the source code. The main features are now in place, and they are getting better and better on a daily basis.

New Features and Improvements

When reading the announcement of a new version, engineers are always interested in new features and improvements. These are the most important additions since 0.1:

  • Everything required for setting up a new cluster (GitOps repository, cluster config file in Tenant configuration, Vault secrets, and more) is now fully automated. One API call to register a new cluster and you’re done.
  • In parallel to the creation of clusters, we have also automated all the steps required to decommission them (repo deletion, Vault secret cleanup, and more). Just delete a cluster and everything is gone (of course, there are preventive measures in place to keep this from becoming an uh-oh moment).
  • Commodore got a lot of improvements, both for local development and for developing new components with a comprehensive cookiecutter template.

Document All The Things

Besides implementing new features and fixing bugs, we put a lot of effort into the documentation. The main documentation page https://syn.tools/ got a completely new structure and a huge amount of new content. We’re adding new pages frequently, so make sure to check it out every so often.
Before 0.2 it was hard to get started with Project Syn and to understand what it was all about. To solve that issue we wrote the following introductions:

Our next goal is to document the concepts behind configuration management with Commodore in detail.

Commodore Components on GitHub

Commodore Components are an important building block of Project Syn. Over the past months we’ve written and open sourced more than 10 Commodore Components on GitHub. They offer the flexibility to install and configure Kubernetes system services, adapted to the respective distribution and infrastructure.
These Commodore Components can be found by searching for the "commodore-component" topic on GitHub.
We are writing and refining more Components every day. We are going to publish guidelines on how to write Commodore Components (one specifically for OpenShift 4 Components is already available) and eventually enforce them via CI jobs and policies.
An upcoming Component Writing Tutorial will help beginners start writing their own Components or contribute to existing ones.

The Road to 1.0 and Contributions

What we learnt while working on Project Syn over the last few months gave us a very clear picture of what we want to achieve in version 1.0. The roadmap contains the most important topics:

  • Documentation! We have to and will put a lot of effort into documentation, be it tutorials, how-to guides, or explanations.
  • Full Commodore automation to automate and decentralize the cluster catalog compilation process.
  • Developer experience improvements for simplifying the development of Commodore Components even further.
  • Engineering of a new tool helping users to launch managed services on any Kubernetes cluster.
  • Cluster provisioning automation integration, to leverage third party tools for automatically bootstrapping Kubernetes clusters.

This is not all; check the more detailed roadmap on the Project Syn page for more. The GitHub project will grow with new issues over the next few weeks.
If this sounds interesting and you would like to contribute, we now have an initial Contribution Guide available, and we are very open to suggestions and pull requests. Just get in contact with us if you’re interested.

Our Product: VSHN Syn Support

Besides the Open Source project, we were also working on defining what added value you can get from VSHN. We call this product "VSHN Syn Support". If you’re interested in getting commercial support from VSHN for Project Syn on a Managed Kubernetes cluster based on OpenShift 4 or Rancher, get in touch with us. More information about VSHN Syn Support can be found here.

Tobias Brunner


Project Syn Tech

Tutorial: Backing up Kubernetes Clusters with K8up

Jun 23, 2020

One of the most common questions we get from companies moving to Kubernetes has always had to do with backups: how can we ensure that the information in our pods and services can be quickly and safely restored in case of problems?
This situation is so common that we at VSHN decided to tackle it with our own Kubernetes operator for backups, which we called K8up.
Note: This tutorial is available in three versions, each in its own branch of the GitHub repository bundled with this text:

1. What is K8up?

K8up (pronounced "/keɪtæpp/" or simply "ketchup") is a Kubernetes operator distributed via a Helm chart, compatible with OpenShift and plain Kubernetes. It allows cluster operators to:

  • Back up all PVCs marked as ReadWriteMany or carrying a specific annotation.
  • Perform individual, on-demand backups.
  • Schedule backups to be executed on a regular basis.
  • Schedule archivals (for example to AWS Glacier), usually executed at longer intervals.
  • Perform "Application Aware" backups, containing the output of any tool capable of writing to stdout.
  • Check the backup repository for its integrity.
  • Prune old backups from a repository.
  • Built on top of Restic, K8up can save backups in Amazon S3 buckets and Minio (as we’ll see in this tutorial).

K8up is written in Go and is an open source project hosted on GitHub.

2. Introduction

This tutorial will show you how to back up a small Minikube cluster running on your laptop. We are going to deploy Minio, MariaDB, and WordPress on this cluster, and create a blog post on our new website. Then we’re going to "deface" it, so that we can safely restore it afterwards. Through this process, you are going to learn more about K8up and its capabilities.
Note: All the scripts and YAML files are available in GitHub: github.com/vshn/k8up-tutorial.

2.1 Requirements

This tutorial has been tested on both Linux (Ubuntu 18.04) and macOS (10.15 Catalina). Please install the following software packages before starting:

  • Make sure PyYAML 5.1 or later is installed: pip install "PyYAML>=5.1"
  • The kubectl command.
  • The Restic backup application.
  • The latest version of Minikube (1.9 at the time of this writing).
  • Helm, required to install K8up in your cluster.
  • k9s to display the contents of our clusters on the terminal.
  • jq, a lightweight and flexible command-line JSON processor.

3. Tutorial

The tutorial consists of six steps, to be executed in sequence:

  1. Setting up the cluster.
  2. Creating a blog.
  3. Backing up the blog.
  4. Restoring the contents of the backup.
  5. Scheduling regular backups.
  6. Cleaning up.

Let’s get started!

3.1 Setting up the cluster

Note: The operations of this step can be executed at once using the scripts/1_setup.sh script.

  1. Start your minikube instance with a configuration slightly more powerful than the default one:
    • minikube start --memory 4096 --disk-size 60g --cpus 4
      Note: On some laptops, running Minikube on battery power severely undermines its performance, and pods can take really long to start. Make sure to be plugged in to power before starting this tutorial.
  2. Copy all required secrets and passwords into the cluster:
    • kubectl apply -k secrets
  3. Install and run Minio in your cluster:
    • kubectl apply -k minio
  4. Install MariaDB in your cluster:
    • kubectl apply -k mariadb
  5. Install WordPress:
    • kubectl apply -k wordpress
  6. Install K8up in Minikube:
    • helm repo add appuio https://charts.appuio.ch
    • helm repo update
    • helm install appuio/k8up --generate-name --set k8up.backupImage.tag=v0.1.8-root

After finishing all these steps, check that everything is running; the easiest way is to launch k9s and leave it running in its own terminal window, and of course you can use the usual kubectl get pods.
Tip: In k9s you can easily delete a pod by going to the "Pods" view (type :, write pods at the prompt, and hit Enter), selecting the pod to delete with the arrow keys, and hitting the CTRL+D key shortcut.

The asciinema movie below shows all of these steps in real time.


3.2 Viewing Minio and WordPress on a browser

Note: The operations of this step can be executed at once using the scripts/2_browser.sh script.

  1. Open WordPress in your default browser using the minikube service wordpress command. You should see the WordPress installation wizard appearing on your browser window.
  2. Open Minio in your default browser with the minikube service minio command.
    • You can log in to Minio with these credentials: access key minio, secret key minio123.

3.2.1 Setting up the new blog

Follow these instructions in the WordPress installation wizard to create your blog:

  1. Select your language from the list and click the Continue button.
  2. Fill in the form to create your new blog.
  3. Create a user admin.
  4. Copy the random password shown, or use your own password.
  5. Click the Install WordPress button.
  6. Log in to the WordPress console using the user and password.
    • Create one or many new blog posts, for example using pictures from Unsplash.
  7. Enter some text or generate some random text using a Lorem ipsum generator.
  8. Click on the „Document“ tab.
  9. Add the image as „Featured image“.
  10. Click „Publish“ and see the new blog post on the site.

3.3 Backing up the blog

Note: The operations of this step can be executed at once using the scripts/3_backup.sh script.
To trigger a backup, use the command kubectl apply -f k8up/backup.yaml. You can see the job in the „Jobs“ section of k9s.
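The k8up/backup.yaml file itself is not reproduced in this walkthrough. A minimal sketch of what such a Backup object could look like, assuming the backup.appuio.ch/v1alpha1 API used by the Restore object in section 3.4 and reusing the backend object described in section 3.3.1 (the object name is a placeholder, not taken from the tutorial repository), might be:

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Backup
metadata:
  name: backup-test        # placeholder name, not from the tutorial
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://minio:9000
      bucket: backups
      accessKeyIDSecretRef:
        name: minio-credentials
        key: username
      secretAccessKeySecretRef:
        name: minio-credentials
        key: password
```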
Running the kubectl logs command on a backup pod shows the following information:

$ kubectl logs backupjob-1564752600-6rcb4
No repository available, initialising...
created restic repository edaea22006 at s3:http://minio:9000/backups
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
Removing locks...
created new cache in /root/.cache/restic
successfully removed locks
Listing all pods with annotation appuio.ch/backupcommand in namespace default
Adding default/mariadb-9588f5d7d-xmbc7 to backuplist
Listing snapshots
snapshots command:
0 Snapshots
backing up via mariadb stdin...
Backup command: /bin/bash, -c, mysqldump -uroot -p"${MARIADB_ROOT_PASSWORD}" --all-databases
done: 0.00%
backup finished! new files: 1 changed files: 0 bytes added: 4184711
Listing snapshots
snapshots command:
1 Snapshots
sending webhook Listing snapshots
snapshots command:
1 Snapshots
backing up...
Starting backup for folder wordpress-pvc
done: 0.00%
backup finished! new files: 1932 changed files: 0 bytes added: 44716176
Listing snapshots
snapshots command:
2 Snapshots
sending webhook Listing snapshots
snapshots command:
2 Snapshots
Removing locks...
successfully removed locks
Listing snapshots
snapshots command:
2 Snapshots

If you look at the Minio browser window, there should now be a set of folders that appeared out of nowhere. That’s your backup in Restic format!

3.3.1 How does K8up work?

K8up runs Restic in the background to perform its job. It automatically backs up the following:

  1. All PVCs in the cluster with the ReadWriteMany attribute.
  2. All PVCs in the cluster with the k8up.syn.tools/backup: "true" annotation.

The PVC definition below shows how to add the required annotation for K8up to do its job.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
  labels:
    app: wordpress
  annotations:
    k8up.syn.tools/backup: "true"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Just as with any other Kubernetes object, K8up uses YAML files to describe every single action: backups, restores, archivals, etc. The most important part of the YAML files used by K8up is the backend object:

backend:
  repoPasswordSecretRef:
    name: backup-repo
    key: password
  s3:
    endpoint: http://minio:9000
    bucket: backups
    accessKeyIDSecretRef:
      name: minio-credentials
      key: username
    secretAccessKeySecretRef:
      name: minio-credentials
      key: password

This object specifies two major keys:

  • repoPasswordSecretRef contains the reference to the secret that contains the Restic password. This is used to open, read and write to the backup repository.
  • s3 specifies the location and credentials of the storage where the Restic backup is located. The only valid option at this moment is an AWS S3-compatible location, such as a Minio server in our case.
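The two Secrets referenced by the backend object are not shown in this walkthrough. A minimal sketch of how they could be defined, using the Minio credentials (minio / minio123) and the Restic password (p@ssw0rd) that appear elsewhere in this tutorial, might look like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backup-repo
type: Opaque
stringData:
  password: p@ssw0rd     # the Restic repository password
---
apiVersion: v1
kind: Secret
metadata:
  name: minio-credentials
type: Opaque
stringData:
  username: minio        # Minio access key
  password: minio123     # Minio secret key
```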

3.4 Restoring a backup

Note: The operations of this step can be executed at once using the scripts/4_restore.sh script.
Let’s pretend now that an attacker has gained access to your blog: we will remove all blog posts and images from the WordPress installation and empty the trash.

Oh noes! But don’t worry: thanks to K8up you can bring your old blog back in a few minutes.
There are many ways to restore Restic backups, for example locally (useful for debugging or inspection) or remotely (on PVCs or S3 buckets, for example).

3.4.1 Restoring locally

To restore using Restic, set these variables (on a Unix-based system; on Windows the commands are different):

export KUBECONFIG=""
export RESTIC_REPOSITORY=s3:$(minikube service minio --url)/backups/
export RESTIC_PASSWORD=p@ssw0rd
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123

Note: You can create these variables by simply running source scripts/environment.sh.
With these variables in your environment, run the command restic snapshots to see the list of backups, and restic restore XXXXX --target ~/restore to trigger a restore, where XXXXX is one of the IDs appearing in the results of the snapshots command.

3.4.2 Restoring the WordPress PVC

K8up is able to restore data directly on specified PVCs. This requires some manual steps.

  • Using the steps in the previous section, „Restoring locally“, check the ID of the snapshot you would like to restore:
$ source scripts/environment.sh
$ restic snapshots
$ restic snapshots XXXXXXXX --json | jq -r '.[0].id'
  • Use that long ID in your restore YAML file k8up/restore/wordpress.yaml:
    • Make sure the restoreMethod:folder:claimName: value corresponds to the Paths value of the snapshot you want to restore.
    • Replace the snapshot key with the long ID you just found:
apiVersion: backup.appuio.ch/v1alpha1
kind: Restore
metadata:
  name: restore-wordpress
spec:
  snapshot: 00e168245753439689922c6dff985b117b00ca0e859cc69cc062ac48bf8df8a3
  restoreMethod:
    folder:
      claimName: wordpress-pvc
  backend:
  • Apply the changes:
    • kubectl apply -f k8up/restore/wordpress.yaml
    • Use the kubectl get pods command to see when your restore job is done.

Tip: Use the kubectl get pods --sort-by=.metadata.creationTimestamp command to order the pods by creation time; at the bottom of the list you will see the restore job pod.
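For reference, this is how the complete Restore object could look once the empty backend: key above is filled in with the same backend object used for backups in section 3.3.1 (a sketch, not the literal file from the repository):

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Restore
metadata:
  name: restore-wordpress
spec:
  snapshot: 00e168245753439689922c6dff985b117b00ca0e859cc69cc062ac48bf8df8a3
  restoreMethod:
    folder:
      claimName: wordpress-pvc
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://minio:9000
      bucket: backups
      accessKeyIDSecretRef:
        name: minio-credentials
        key: username
      secretAccessKeySecretRef:
        name: minio-credentials
        key: password
```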

3.4.3 Restoring the MariaDB pod

In the case of the MariaDB pod, we have used a backupcommand annotation. This means that we have to „pipe“ the contents of the backup into the mysql command of the pod, so that the information can be restored.
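The annotation itself can be seen in the backup log of section 3.3 (appuio.ch/backupcommand). On the MariaDB deployment’s pod template it could look roughly like this (a sketch; the command is taken verbatim from the log output, which shows K8up wrapping it in /bin/bash -c when executing it):

```yaml
# Fragment of the MariaDB pod template (sketch)
metadata:
  annotations:
    appuio.ch/backupcommand: mysqldump -uroot -p"${MARIADB_ROOT_PASSWORD}" --all-databases
```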
Follow these steps to restore the database:

  1. Retrieve the ID of the MariaDB snapshot:
    • restic snapshots --json --last --path /default-mariadb | jq -r '.[0].id'
  2. Save the contents of the backup locally:
    • restic dump SNAPSHOT_ID /default-mariadb > backup.sql
  3. Get the name of the MariaDB pod:
    • kubectl get pods | grep mariadb | awk '{print $1}'
  4. Copy the backup into the MariaDB pod:
    • kubectl cp backup.sql MARIADB_POD:/
  5. Get a shell to the MariaDB pod:
    • kubectl exec -it MARIADB_POD -- /bin/bash
  6. Execute the mysql command in the MariaDB pod to restore the database:
    • mysql -uroot -p"${MARIADB_ROOT_PASSWORD}" < /backup.sql

Now refresh your WordPress browser window and you should see the previous state of the WordPress installation restored, working and looking as expected!

3.5 Scheduling regular backups

Note: The operations of this step can be executed at once using the scripts/5_schedule.sh script.
Instead of performing backups manually, you can also set a schedule for backups. This requires specifying the schedule in cron format.

backup:
  schedule: '*/2 * * * *'    # backup every 2 minutes
  keepJobs: 4
  promURL: http://minio:9000

Tip: Use crontab.guru to help you set up complex schedule formats in cron syntax.
The schedule can also specify archive and check tasks to be executed regularly.

archive:
  schedule: '0 0 1 * *'       # archive on the 1st of every month
  restoreMethod:
    s3:
      endpoint: http://minio:9000
      bucket: archive
      accessKeyIDSecretRef:
        name: minio-credentials
        key: username
      secretAccessKeySecretRef:
        name: minio-credentials
        key: password
check:
  schedule: '0 1 * * 1'      # check every Monday at 01:00
  promURL: http://minio:9000

Run the kubectl apply -f k8up/schedule.yaml command. This will set up an automatic schedule to back up the PVCs every 2 minutes (at minutes divisible by 2).
Wait for at most 2 minutes, and run restic snapshots to see more backups piling up in the repository.
Tip: Running the watch restic snapshots command will give you a live console with your current snapshots on a terminal window, updated every 2 seconds.
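Putting the fragments above together, the k8up/schedule.yaml file could look roughly like this (a sketch assuming a v1alpha1 Schedule object; the object name and the exact nesting under spec are assumptions, while the backend, backup, and check fragments are taken from this tutorial):

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test      # placeholder name, not from the tutorial
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://minio:9000
      bucket: backups
      accessKeyIDSecretRef:
        name: minio-credentials
        key: username
      secretAccessKeySecretRef:
        name: minio-credentials
        key: password
  backup:
    schedule: '*/2 * * * *'    # backup every 2 minutes
    keepJobs: 4
  check:
    schedule: '0 1 * * 1'      # check every Monday at 01:00
```

The archive block shown above fits into the same object, alongside backup and check.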

3.6 Cleaning up the cluster

Note: The operations of this step can be executed at once using the scripts/6_stop.sh script.
When you are done with this tutorial, just execute the minikube stop command to shut the cluster down. You can also minikube delete it, if you would like to get rid of it completely.

4. Conclusion

We hope that this walkthrough has given you a good overview of K8up and its capabilities. But it can do much more than that! We haven’t talked about the archive, prune, and check commands, or about the backup of any data piped to stdout (so-called „Application Aware“ backups). You can learn about these features on the K8up documentation website, where they are described in detail.
K8up is still a work in progress, but it is already being used in production in many clusters. It is also an open source project, and everybody is welcome to use it freely, and even better, to contribute to it!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is a trainer and published author. Adrian holds a Master’s degree in Information Technology from the University of Liverpool.


First Pre-Release of Project Syn Tools

10 Mar 2020

We have been working hard since the initial announcement of Project Syn back in November 2019, and are proud to announce version 0.1.0, the first pre-release of a set of Project Syn tools.
Quick reminder about what Project Syn is about:

Project Syn is a pre-integrated set of tools to provision, update, backup, observe and react/alert production applications on Kubernetes and in the cloud. It supports DevOps through full self-service and automation using containers, Kubernetes and GitOps. And best of all: it is Open Source.

TL;DR: The code is on GitHub, under its own organization: https://github.com/projectsyn. The official documentation is at https://docs.syn.tools/ (the documentation is open source too!)

What does Project Syn do?

Short answer: it enables the management of many Kubernetes clusters, and provides a set of services to the users of those clusters. Project Syn is composed of many tools; some specially developed for the project, some already existing, all Open Source. It’s not only about tooling, it’s also about processes and best practices.
The actual story is a bit longer.

Features of version 0.1.0

To manage a big fleet of Kubernetes clusters, we need an inventory with the following information:

  • The cloud providers they are running on;
  • Locations;
  • Tenants each cluster belongs to;
  • Kubernetes versions deployed;
  • Kubernetes flavor / distribution used;
  • …and a lot more!

This is what the Project Syn tool Lieutenant (written in Go) gives us: an inventory application to register clusters, to assign them to a tenant and to store inventory data. It consists of a REST API (based on the OpenAPI 3 specification) and a Kubernetes Operator, to store data directly in the underlying Kubernetes cluster (in CRDs) and to act on events.
Knowing about clusters is just one part. Another important element is to continuously deploy and monitor system applications (like K8up, Prometheus, …) on Project Syn enabled Kubernetes clusters. This is all done with the GitOps pattern, managed by Argo CD, which is deployed to every cluster. Thanks to Argo CD we can make sure that the applications deployed to the cluster are exactly configured as specified in the corresponding Git repository, and that they are running just fine.
Each Project Syn enabled Kubernetes Cluster has its own so-called Catalog Git Repository. This contains a set of YAML files specifically crafted for each cluster, containing the system tools to operate the cluster, and to give access to well configured self-service tooling to the user of the cluster.
The generation of these YAML files is the responsibility of the Project Syn tool Commodore (written in Python). Commodore builds upon the Open Source tool Kapitan, leveraging inventory data from Lieutenant. After gathering all needed data about a cluster from the inventory, Commodore can fetch all defined components, parameterize them with configuration data from a hierarchical Git data structure, and generate the final YAML files, ready to be applied by Argo CD to the Kubernetes cluster. The Lieutenant API also knows where the catalog Git repository is located, and Commodore is therefore able to automatically push the catalog to the matching Git repository.
Secrets are never stored in GitOps repositories. They are instead stored securely in HashiCorp Vault, and only retrieved during the „apply“ phase, directly on the destination Kubernetes cluster. This process is supported by the Kapitan secret management feature and by Commodore, which prepares the secret references during the catalog generation. Argo CD calls kapitan secrets --reveal during the manifest apply phase, which then actually connects to Vault to retrieve the secrets and stores them in the Kubernetes cluster, ready to be consumed by the application.
The management of all these Git repositories is the responsibility of the Lieutenant Operator (written in Go, based on Red Hat’s Operator SDK). It is able to manage remote Git repositories (GitLab, GitHub, Bitbucket, etc.) and prepare them for Commodore and Argo CD, for example by configuring an SSH deploy key.
The Project Syn tool Steward (written in Go) is responsible for enabling Project Syn in a Kubernetes cluster: it communicates with the Lieutenant API to perform the initial bootstrapping of Argo CD. This bootstrapping includes basic maintenance tasks: should Argo CD be removed from the cluster inadvertently, Steward will automatically reinstall it. An SSH deploy key is generated during bootstrapping and transmitted back to the API. With this procedure it is possible to bootstrap the whole GitOps workflow without any manual interaction.

Analogies with Puppet

For those familiar with Puppet, there are some similarities with the design of Project Syn:

  • Puppet Server: Commodore and Kapitan to generate the catalog, matching the facts from the cluster.
  • Puppet DB: Lieutenant acting as inventory / facts registry.
  • Hiera: Kapitan with its hierarchical configuration model.
  • Puppet Agent: Steward and Argo CD on the cluster. Steward to communicate with the API and Argo CD to apply the catalog.
  • Puppet Modules: Commodore Components, bringing modularity into Kubernetes application deployment.

Many of these concepts are documented in the Project Syn documentation pages, specifically the Syn Design Documents, documenting all the design decisions (even though they are still in „work-in-progress“ stages).

What are the next steps for Project Syn?

This is really just the beginning! There are a lot of plans and ideas for the future evolution of Project Syn. We have crafted an initial roadmap, and we published it as part of the official Project Syn documentation.
This initial pre-release is just the tip of the iceberg. Under the surface there is a lot more brewing, to be released as soon as possible. To reiterate: It’s not only about tools, but also about concepts and processes, which also means a lot of documentation will emerge over the next months.
One focus of this initial pre-release was to lay the foundation for future development, with a strong emphasis on the operations side. Future milestones will broaden the focus to include more and more self-service possibilities for the user, including tight integration of Crossplane for easy and fully automated cloud service provisioning.
We at VSHN are now starting to use Project Syn for an initial set of managed Kubernetes clusters, and will continue to develop the concept, tools and processes while we learn about more use cases and with the real-life experience we gather.

How can I contribute?

Project Syn is a young project taking its first steps in the open world. Many things are just getting started, like the documentation and the contribution guidelines. Testing and giving feedback through GitHub issues is certainly a great way to start contributing. And of course, if you are looking for a Managed Kubernetes or Managed OpenShift cluster, get in touch with us with the form at the bottom of this page!


Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, and in the Internet business for almost 15. New technologies are there to be tried out and written about.


Partnership between VSHN & Crossplane

21 Nov 2019

VSHN enters into a partnership with Crossplane

Zurich, 21 November 2019: VSHN is proud to announce a strategic partnership with Crossplane.io for our Project Syn – VSHN’s next generation managed services framework.

Cloud Agnostic with Crossplane

VSHN has always been cloud agnostic and will further enhance this paradigm by partnering with Crossplane – „The open source multicloud control plane“.
By leveraging Crossplane, the user of Project Syn can specify the backend services needed in a completely cloud-independent way. Provisioning of these services is fully automated, handled by the tooling in the most optimal way. As an example: when a MySQL service is requested, Crossplane would provision a cloud service if the cloud provides one, or deploy it inside the Kubernetes cluster leveraging a service operator. This way the user doesn’t have to care about the implementation and can fully focus on the application.
Project Syn is designed to run on all Kubernetes distributions and clouds. It’s prepared to support all the specific features of any given cloud and Kubernetes distribution by abstracting the specifics. This means Project Syn will run on OpenShift with APPUiO.ch, Rancher Kubernetes, and all managed Kubernetes offerings. Support for even more Kubernetes flavors and clouds is added on demand. Plans exist to support single-node Kubernetes clusters using Rancher k3s.

Why Crossplane?

Tobias Brunner, Head of DevOps & Partner of VSHN – The DevOps Company:

„After several weeks of evaluation, we’re excited to be using Crossplane as a cornerstone in our next-gen hosted DevOps product. Crossplane is an extensible open-source platform that adds declarative cloud service provisioning and management to the Kubernetes API with excellent support for GitOps-style continuous deployments for cloud-native apps that is at the heart of the next-gen offering of VSHN.“

About Crossplane

Welcome to Crossplane! Crossplane is an open source multicloud control plane to manage your cloud-native applications and infrastructure across environments, clusters, regions and clouds. It enables provisioning and full-lifecycle management of applications and managed services from your choice of cloud using kubectl. Crossplane can be installed into an existing Kubernetes cluster to add managed service provisioning or deployed as a dedicated control plane for multi-cluster management and workload scheduling. Crossplane enables the community to build and publish Stacks to add more clouds and cloud services to Crossplane with support for out-of-tree extensibility and independent release schedules. Crossplane includes Stacks for GCP, AWS, and Azure today.

About VSHN – The DevOps Company

VSHN (pronounced ˈvɪʒn, like „vision“) is the leading Swiss partner for DevOps, Docker, Kubernetes, OpenShift & 24/7 cloud operations.
VSHN was founded with the intention of fundamentally shaking up the hosting market. As a lean startup, we have focused on the operation of IT platforms through automation, agility, and a continuous improvement process. Completely location-independent and without hardware of our own, we run comprehensive applications agilely and 24/7 on any infrastructure, following the DevOps principle, so that software developers can concentrate on their business and IT operations are relieved.

Markus Speth

Markus is VSHN's CEO and one of the General Managers.


Announcing Project Syn – The Next Generation Managed Services

20 Nov 2019

VSHN announces Project Syn

VSHN is proud to announce Project Syn, the next generation Open Source managed services framework for DevOps and application operations on any infrastructure based on Kubernetes.

Project Syn is a pre-integrated set of tools to provision, update, backup, observe and react/alert production applications on Kubernetes and in the cloud. It supports DevOps through full self-service and automation using containers, Kubernetes and GitOps. And best of all: it is Open Source.

Project Syn combines tools and processes to make the best out of containers, Kubernetes and Cloud Services

VSHN’s mission is to automate all aspects of software operations to help software developers run their applications on any infrastructure. Since 2014, we have been using Puppet and Ansible to automate monitoring, backups, logs, metrics, service checks, and alerts. Project Syn is the next generation of application operations tooling, packaged as containers and orchestrated on any Kubernetes service.
Project Syn provides an opinionated set of integrated tools and processes on any Kubernetes service and cloud infrastructure provider:

  • GitOps and infrastructure as code: declare the application environment requirements in Git and let the tooling take care of creation/changes
  • Observability and insights: service checks, metrics, logs, thresholds, alert rules and paging
  • Service provisioning: declare backends and other service dependencies as portable Kubernetes Objects (CRD) and let the tooling create the infrastructure-specific service (e.g. database service, S3 storage service, etc) with best-practice default configuration
  • Backup: regularly back up all user data from each service and persistent volume
  • Application container deployment, automatically integrating the topics above
  • Works on any Kubernetes service and cloud provider

The Project Syn tooling is a fundamental part in your DevOps journey and provides you with production quality Ops.

Cloud Agnostic with Crossplane

VSHN has always been cloud agnostic and will further enhance this paradigm by partnering with Crossplane – „The open source multicloud control plane“. By leveraging Crossplane, the user of Project Syn can specify the backend services needed in a completely cloud-independent way. Provisioning of these services is fully automated, handled by the tooling in the most optimal way. As an example: when a MySQL service is requested, Crossplane would provision a cloud service if the cloud provides one, or deploy it inside the Kubernetes cluster leveraging a service operator. This way the user doesn’t have to care about the implementation and can fully focus on the application.
Project Syn is designed to run on all Kubernetes distributions and clouds. It’s prepared to support all the specific features of any given cloud and Kubernetes distribution by abstracting the specifics. This means Project Syn will run on OpenShift with APPUiO.ch, Rancher Kubernetes, and all managed Kubernetes offerings. Support for even more Kubernetes flavors and clouds is added on demand. Plans exist to support single-node Kubernetes clusters using Rancher k3s.

Details of Project Syn

Project Syn will become an Open Source project in the near future. It consists of several components working together to bring the necessary features for running applications in production on Kubernetes, acting as an operations framework. Multiple Kubernetes distributions are supported, and it can be installed on an already existing Kubernetes cluster, or it can even provision a new one. Taking care of what is running inside a Kubernetes cluster (including the Kubernetes cluster itself) is at the heart of Syn.

Production readiness
Syn is made for production. It brings all aspects needed to run an application in production, like monitoring of all important services and backup of data.
Self-service
All parts of Syn are engineered for self-service. Define what you need – declaratively, in code – and the platform does it for you. Be it provisioning of services inside or outside of the cluster, consistent backups including monitoring, or setting the matching monitoring and alerting rules, the platform automatically takes care of it.
Developer happiness
By being able to work with the platform without external dependencies, the developer can express the needs for the application in code (e.g. „a Postgres database is needed“) and do this individually.
Service provisioning
Provisioning services like databases outside of the cluster (e.g. in the cloud) or inside the cluster is completely automated by Project Syn, leveraging the endless possibilities of Crossplane. It is a key part of the platform and fully integrated with all the important production readiness features.
Crossplane abstracts the specifics of the service to be provisioned. As a user of the platform you just tell Crossplane what you want, e.g. a MySQL server, and Crossplane then takes care of deploying the best matching service, depending on which cloud it runs on. On AWS, Crossplane would provision an RDS instance; on a cloud without a managed database offering, it would provision an in-cluster MySQL instance managed by a matching database operator, installed and configured by the Project Syn platform.
The reconciliation process of Crossplane ensures that the provisioned services are configured as intended at all times, and takes measures should the configuration drift.
Best-practices configuration
Project Syn makes use of best-practice configuration, learned from running Kubernetes and applications on top of it in production for many years, and applies it continuously. As best practices evolve over time, they are integrated as they are learned.
Data protection
Data safety is key. Project Syn makes sure to continuously back up the important data, both at the filesystem level and at an application-consistent level. All data is stored encrypted at rest and in transit by leveraging the possibilities of modern application offerings.
Security
No secrets are stored in plain text; they all live in protected key stores. By applying best-practice configuration we ensure a secure default configuration of all components. Only TLS-secured connections are used.
Configuration auditing
All configuration is stored in Git and applied using the GitOps pattern. This allows to have full auditability and history of the full configuration. By signing the generated configuration data we ensure that only trusted configuration is applied to the cluster.
In-cluster configuration reconciliation ensures that the configuration is up-to-date all the time and matches the intended state.
Regular maintenance
Project Syn components are regularly maintained in a fully automated way. This ensures that the latest patches are installed and no vulnerable components are part of the system.
Decentralization
A key part of Project Syn is a decentralized approach. All parts are designed to work without relying on a central management service.
Open Source
One of the goals of Project Syn is to make use of existing, fantastic Open Source applications and glue them together to form a unified whole. To name a few – the most important ones:
  • Kapitan
  • Jsonnet
  • Crossplane
  • Argo CD
  • Prometheus – Alertmanager
  • Grafana
  • Loki
  • Vault
  • Alerta
  • Renovate
All Project Syn components specifically written for tying all these tools together will become Open Source as well. Contributions from Project Syn are continuously brought upstream to support these tools.

Project Syn as Managed Service by VSHN

Project Syn is an Open Source project and can be used by anyone for free. In addition, VSHN offers Project Syn as a managed service. Taking care of the Project Syn platform with engineering, 24/7 operations, and maintenance is a key part of the offering. By adding additional services, VSHN ensures that the platform can be trusted to run business-critical application workloads.

Alert handling
Reacting to alerts and handling them according to a specified SLA, including 24/7 operations and continuous improvement of alert rules based on day-to-day experience.
Expert pool
The Project Syn experts at VSHN are available to help the users of the platform.
Developer support
Supporting the users of the platform by actively participating as part of the development team enables the user to get the best out of the platform. We provide the Ops part in the DevOps chain.
SLA
Specific SLAs are available for applications running on the Project Syn platform.
Best-practice curation
Delivery of best-practice configuration learned by operating many Project Syn enabled clusters in production all over the world.
Container image curation
Only VSHN-tested and approved images run on the platform, which ensures stability and security.
Regular maintenance
VSHN carries out regular maintenance on all involved components, keeping them up to date with the latest bugfix and security updates.
Active Project Syn development
Customer needs are actively developed by VSHN engineers and brought into the Project Syn platform.
Assisting services
Assisting Project Syn platform services are provided, like:

  • Customer portal with self-service capability and deep insights
  • Service desk
  • Image registry with curated and tested images
  • Inventory
  • and more

Early Access for Project Syn

The foundation for Project Syn is already prepared. We are actively looking for early access users of the platform, helping to test it and shape the future of Project Syn. If you are interested in getting a glimpse at our next generation managed services platform, please fill in the form below and let us know.

Tobias Brunner

