Press

VSHN: a Groundbreaking Crossplane Partner

Jan 30, 2024

Exciting news for our team and the Crossplane community! We are delighted to announce that VSHN is now one of only two official Crossplane partners, as listed on the project’s community page, and the only one in Europe.

Crossplane is an open-source project that has changed the way we think about cloud native computing. It is not just a tool, but an entirely new approach to managing cloud resources, focused on Infrastructure as Code and Kubernetes.

Here at VSHN, we were early adopters and advocates of Crossplane. Our journey with Crossplane has been incredible, full of challenges, learnings, and plenty of successes. Our team’s expertise in running Crossplane in live environments is well recognized, and we are proud to be part of this vibrant community.

In particular, we are avid Crossplane users for our product AppCat, which lets our customers self-provision databases, message brokers, and storage buckets. For one of our private customers in the telecommunications sector, we operate and manage 1,500 database clusters with Crossplane.

This partnership is not only a milestone for us, but also a testament to our commitment to the Crossplane project and the broader cloud native ecosystem. We look forward to contributing our knowledge and experience to support the growth of the Crossplane community.

We look forward to further collaboration, innovation, and, of course, to helping you reap the benefits of Crossplane in your own deployments and applications.

See also the VSHN Support Plan for Crossplane!

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, almost 15 of them in the Internet industry. New technologies want to be tried out and written about.

Tech

Composition Functions in Production

Jan 9, 2024

(This post was originally published on the Crossplane blog.)

Crossplane has recently celebrated its fifth birthday, but at VSHN, we’ve been using it in production for almost three years now. In particular, it has become a crucial component of one of our most popular products. We’ve invested a lot of time and effort in Crossplane, to the point that we’ve developed (and open-sourced) our own custom modules for various technologies and cloud providers, such as Exoscale, cloudscale.ch, or MinIO.

In this blog post, we will provide an introduction to a relatively new feature of Crossplane called Composition Functions, and show how the VSHN team uses it in a very specific product: the VSHN Application Catalog, also known as VSHN AppCat.

Crossplane Compositions

To understand Composition Functions, we need to understand what standard Crossplane Compositions are in the first place. Compositions, available in Crossplane since version 0.10.0, can be understood as templates that can be applied to Kubernetes clusters to modify their configuration. What sets them apart from other templating technologies (such as Kustomize, OpenShift Template objects, or Helm charts) is their capacity to perform complex transformations and patch fields on Kubernetes manifests, following more advanced rules and with better reusability and maintainability. Crossplane Compositions are usually referred to as "Patch and Transform" compositions, or "PnT" for short.

As powerful as standard Crossplane Compositions are, they have some limitations, which can be summarized in a very geeky yet technically appropriate phrase: they are not "Turing-complete".

  • Compositions don’t support conditions, meaning that the transformations they provide are applied on an "all or nothing" basis.
  • They also don’t support loops, which means that you cannot apply transformations iteratively.
  • Finally, advanced operations are not supported either, like checking for statuses in other systems, or performing dynamic data lookups at runtime.

To address these limitations, Crossplane 1.11 introduced a new Alpha feature called "Composition Functions". Note that as of writing, Composition Functions are in Beta in 1.14.

Composition Functions

Composition Functions complement, and in some cases entirely replace, Crossplane "PnT" Compositions. Most importantly, DevOps engineers can create Composition Functions in any programming language, because functions run as standard OCI containers that follow a specific set of interface requirements. The result of applying a Composition Function is a new composite resource applied to a Kubernetes cluster.

Let’s look at an elementary „Hello World“ example of a Composition Function.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-function
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket

The example above shows a standard Crossplane Composition with a new field: "pipeline", an array of steps, each referencing a function by name.

As stated previously, the function itself can be written in any programming language, like Go. The following example uses Crossplane’s Go function SDK; the imports shown are from the function-sdk-go project.

import (
    "context"
    fnv1beta1 "github.com/crossplane/function-sdk-go/proto/v1beta1"
    "github.com/crossplane/function-sdk-go/response"
)

func (f *Function) RunFunction(_ context.Context, req *fnv1beta1.RunFunctionRequest) (*fnv1beta1.RunFunctionResponse, error) {
    rsp := response.To(req, response.DefaultTTL) // bootstrap a response to this request, with a default TTL
    response.Normal(rsp, "Hello world!")         // attach an informational result for the caller
    return rsp, nil
}

The example above, borrowed from the official documentation, does just one thing: it builds a response for the incoming request and attaches a "Hello world!" result for the caller. Needless to say, this example is for illustration purposes only, lacking error checking, logging, security, and more, and should not be used in production. Developers use the Crossplane CLI to create, test, build, and push functions.

Here are a few things to keep in mind when working with Composition Functions:

  • They run in order, as specified in the "pipeline" array of the Composition object, from top to bottom.
  • The output of the previous Composition Function is used as input for the following one.
  • They can be combined with standard "PnT" compositions by using the function-patch-and-transform function, allowing you to reuse your previous investment in standard Crossplane compositions (see the sketch after this list).
  • In the Alpha release, if you combined "PnT" compositions with Composition Functions, the "PnT" compositions ran first, and the output of the last one was fed to the first function; since the latest release this is no longer the case, and "PnT" compositions can run at any step of the pipeline.
  • Composition Functions must be called using RunFunctionRequest objects, and return RunFunctionResponse objects.
  • In the Alpha release, these two objects were represented by a now-deprecated "FunctionIO" structure in YAML format.
  • RunFunctionRequest and RunFunctionResponse objects contain a full and coherent „desired state“ for your resources. This means that if an object is not explicitly specified in a request payload, it will be deleted. Developers must pass the full desired state of their resources at every invocation.
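To illustrate combining both approaches, here is a minimal sketch of a pipeline that first runs a "PnT" step via function-patch-and-transform and then a custom function. The bucket resource and its fields are illustrative assumptions, not taken from the original example:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-mixed
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  # Step 1: reuse an existing "PnT" composition as a pipeline step.
  - step: patch-and-transform
    functionRef:
      name: function-patch-and-transform
    input:
      apiVersion: pt.fn.crossplane.io/v1beta1
      kind: Resources
      resources:
      - name: bucket
        base:
          apiVersion: s3.aws.upbound.io/v1beta1
          kind: Bucket
          spec:
            forProvider:
              region: eu-central-1
  # Step 2: a custom function that receives the desired state from step 1.
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket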

Practical Example: VSHN AppCat

Let’s look at a real-world use case for Crossplane Composition Functions: the VSHN Application Catalog, also known as AppCat. The AppCat is an application marketplace allowing DevOps engineers to self-provision different kinds of middleware products, such as databases, message queues, or object storage buckets, in various cloud providers. These products are managed by VSHN, which frees application developers from a non-negligible burden of maintenance and oversight.
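As a sketch of what this self-provisioning looks like in practice, a developer could create a claim like the following; the API group, kind, and fields shown here are illustrative assumptions and may differ from the actual AppCat CRDs:

apiVersion: vshn.appcat.vshn.io/v1
kind: VSHNPostgreSQL
metadata:
  name: my-database
  namespace: my-team
spec:
  parameters:
    service:
      majorVersion: "15"   # requested PostgreSQL major version
    size:
      plan: standard-2     # hypothetical service plan
  writeConnectionSecretToRef:
    name: my-database-credentials  # Secret that will receive the connection details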

Standard Crossplane "PnT" Compositions proved limited very early in the development of VSHN AppCat, so we started using Composition Functions as soon as they became available. They have allowed us to do the following:

  • Composition Functions enable complex tasks, such as verifying current deployment values and making decisions before deploying services.
  • They can drive the deployment of services involving Helm charts, modifying values on-the-go as required by our customers, their selected cloud provider, and other parameters.
  • Conditionals allow us to script complex scenarios, involving various environmental decisions, and to reuse that knowledge.
  • Thanks to Composition Functions, the VSHN team can generalize many activities, like backup handling, automated maintenance, etc.

All things considered, it is difficult to overstate the many benefits that Composition Functions have brought to our workflow and to our VSHN AppCat product.

Learnings of the Alpha Version

We’ve learned a lot while experimenting with the Alpha version of Composition Functions, and we’ve documented our findings for everyone to learn from our mistakes.

  • Running Composition Functions on Red Hat OpenShift used to be impossible in Alpha because OpenShift uses crun, but this issue has been solved in the Beta release.
  • In particular, when using the Alpha version of Composition Functions, we experienced slow execution speeds with crun, but this is no longer the case.
  • We learned the hard way that resources missing from function requests were actually deleted!

Our experience with Composition Functions led us to build our own function runner. This feature uses another capability of Crossplane, which allows functions to specify their runner in the Composition definition:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
[...]
  functions:
  - name: my-function
    type: Container
    container:
      image: my-function
      runner:
        endpoint: grpc-server:9547

Functions run directly on the gRPC server, which, for security reasons, must run as a sidecar to the Crossplane pod. Just like everything we do at VSHN, the Composition Function gRPC server runner (as well as its associated webhook and all of its code) is open source, and you can find it on our GitHub. As of the Composition Functions Beta, we replaced the custom gRPC logic with the Go function SDK. To improve the developer experience, we created a proxy and enabled the gRPC server to run locally: the proxy runs in a kind cluster and redirects requests to the local gRPC server, which lets us debug the code and test changes more efficiently.

Moving to Beta

We recently finished migrating our infrastructure to the most recent Beta version of Composition Functions, released in Crossplane 1.14, and we were able to do so without incident. This release included various bits and pieces, such as Function Pipelines, an ad-hoc gRPC server to execute functions in memory, and a Function CRD to deploy them directly to clusters.

We are also migrating all of our standard "PnT" Crossplane Compositions to pure Composition Functions as we speak, thanks to the function-sdk-go project, which has proven very helpful, even if we are missing typed objects. Managing the same objects with both "PnT" and Composition Functions increases complexity dramatically, as it can be difficult to determine where a given change actually happens.

Conclusion

In this blog post, we have seen how Crossplane Composition Functions compare to standard "PnT" Crossplane Compositions. We have provided a short example, highlighting their major characteristics and caveats, and we have outlined a real-world use case for them, specifically VSHN’s Application Catalog (or AppCat) product.

Crossplane Composition Functions provide an unprecedented level of flexibility and power to DevOps engineers. They enable the creation of complex transformations, with all the advantages of an Infrastructure as Code approach, and the flexibility of using the preferred programming language of each team.

Check out my talk at Control Plane Day with Crossplane, where I walk you through Composition Functions in Production in 15 minutes.

Tobias Brunner

VSHN.timer

VSHN.timer #179: Attending KubeCon + CloudNativeCon Europe 2023

Apr 17, 2023

Welcome to another VSHN.timer! Every Monday, 5 links related to Kubernetes, OpenShift, CI / CD, and DevOps; all stuff coming out of our own chat system, making us think, laugh, or simply work better.

This week we’re going to talk about the event of the week: KubeCon + CloudNativeCon Europe 2023.

1. This year a record 8 VSHNeers (including yours truly) will attend KubeCon + CloudNativeCon Europe 2023, not only to learn spectacular new things, but also to meet new people and get to know new products at the sponsors’ booths. And guess what? We’ll also host a K8up kiosk, where you can meet VSHNeers every day to learn more about K8up, our great Kubernetes backup operator. Let’s meet at kiosk K25 in the Project Pavilion!

https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/project-engagement/#project-pavilion

2. I’m interested in Adam Wolfe Gordon’s talk "What Happened to the Service Catalog?", because we’ve been building our own service catalog with Crossplane, and I’m curious about what others are doing with it.

https://kccnceu2023.sched.com/event/1HyVZ/what-happened-to-the-service-catalog-adam-wolfe-gordon-digitalocean?iframe=no

3. How do projects graduate from the CNCF? Will K8up graduate one day, too? The talk "Going for Graduation: Crossing the Chasm" by Katie Gamanji and Bill Mulligan probably has the answer.

https://kccnceu2023.sched.com/event/1HyWR/going-for-graduation-crossing-the-chasm-bill-mulligan-isovalent-katie-gamanji-apple?iframe=no

4. Here’s a talk dear to my heart: Alanna Burke, one of our dear friends at amazee.io, will elaborate on "Creating a Culture of Documentation", and I can’t wait to listen to what she’s got to say.

https://kccnceu2023.sched.com/event/1HyZy/creating-a-culture-of-documentation-alanna-burke-amazeeio?iframe=no

5. The talk "Kubernetes Database Operators Landscape" is particularly interesting for us because StackGres is a core component of our VSHN Application Catalog.

https://kccnceu2023.sched.com/event/1HyZU/kubernetes-database-operators-landscape-xing-yang-vmware-melissa-logan-constantiaio-sergey-pronin-percona-alvaro-hernandez-ongres?iframe=no

Are you attending KubeCon + CloudNativeCon Europe 2023 too? Which talks are you most curious about? Would you like to recommend some other conferences to our readers? Get in touch with us, let’s meet at KubeCon (here’s my schedule, by the way!), and see you next week for another edition of VSHN.timer.

PS: check out our previous VSHN.timer editions about conferences: #19, #20, #56, #57, #90, #91, and #170.

PS2: do you prefer reading VSHN.timer in your favorite RSS reader? Subscribe to this feed.

PS3: would you like to receive VSHN.timer every Monday in your inbox? Sign up for our weekly VSHN.timer newsletter.

Tobias Brunner

Event

VSHN at KubeCon Europe 2023!

Apr 12, 2023

KubeCon + CloudNativeCon Europe 2023 is next week! And no less than 8 VSHNeers (around 15% of our company!) are going to attend the event in Amsterdam this year.

The complete list of VSHNeers attending KubeCon includes: Annie Talvasto, Elia Ponzio, Erik Harder, Liene Luksika, Robin Scherrer, Sebastian Widmer, Stephan Feurer, and myself, Tobias Brunner.

Not only are we eager to learn about the latest Cloud Native technologies, we’re also thrilled to be part of the Project Pavilion, where we’ll be talking about K8up, our beloved Kubernetes and OpenShift backup operator. Come join us at kiosk number K25 to learn how to use K8up to preserve your data in a convenient and secure way.

We can’t wait to attend KubeCon + CloudNativeCon next week and please, if you see the VSHN logo on our t-shirts, don’t hesitate to come and say hi! We’d love to meet you.

Tobias Brunner

VSHN.timer

VSHN.timer #170: News From FOSDEM 2023

Feb 6, 2023

Welcome to another VSHN.timer! Every Monday, 5 links related to Kubernetes, OpenShift, CI / CD, and DevOps; all stuff coming out of our own chat system, making us think, laugh, or simply work better.

This week we’re going to talk about last weekend’s FOSDEM 2023 conference in Brussels, where our CTO and Product Manager, Tobias Brunner, was dazzled and bewildered by what he saw. Here are his top picks, including links to the videos and/or slides (if available at the time of publication of this article).

https://mstdn.social/@tobru/109812927540684014

1. Phil Estes, Principal Engineer at Amazon Web Services (AWS) presented a tour of the landscape of container developer tooling in use today: podman, nerdctl, and non-Linux platform support for Rancher Desktop, Lima/colima, Finch, and Podman Desktop (video and slides)

https://fosdem.org/2023/schedule/event/container_developer_tooling/

2. Portia Burton, author of technical content for Dapper Labs, Honeycomb, and Linode, explored in her talk how to use a CI/CD workflow to encourage teams to write and maintain documentation. One of the best talks! (Video)

https://fosdem.org/2023/schedule/event/how_to_automate_documentation_workflow_for_developers/

3. Dmitriy Kostiuk, free and libre software activist from Belarus, explained how to properly diagram Kubernetes clusters, looking at how various authors usually draw Kubernetes clusters and showing how auxiliary tools such as color coding, grouping, and eye anchoring can make the cluster diagram more understandable (slides)

https://fosdem.org/2023/schedule/event/container_kubernetes_cluster_right_way/

4. Raymond de Jong, CTO at Isovalent, showed how Cilium monitors Service to Service communication, Golden Signals, detects transient network layer issues, and identifies problematic API requests with transparent tracing (slides)

https://fosdem.org/2023/schedule/event/network_cilium_and_grafana/

5. As this article hits the web, the FOSDEM 2023 team is working as fast as possible to process and publish the remaining videos of the conference. All videos will be available on this website, and we can’t recommend them enough.

https://video.fosdem.org/2023/

BONUS: Matthew Hodgson, the technical co-founder of Matrix and co-founder of Element, explained in his talk the fundamental changes which are landing in Matrix 2.0, which speeds up Matrix to be at least as snappy as the fastest proprietary messaging apps – all while handling thousands of rooms spanning millions of users.

https://fosdem.org/2023/schedule/event/matrix20/

Did you attend FOSDEM this year? Did you enjoy the talks? Would you like to recommend some to our readers? Get in touch with us, and see you next week for another edition of VSHN.timer.

PS: check out our previous VSHN.timer editions about open source: #152 and conferences: #19, #20, #56, #57, #90, and #91.

PS2: do you prefer reading VSHN.timer in your favorite RSS reader? Subscribe to this feed.

PS3: would you like to receive VSHN.timer every Monday in your inbox? Sign up for our weekly VSHN.timer newsletter.

Tobias Brunner

APPUiO

Announcing DBaaS by Exoscale on AppCat

Jan 17, 2023

Last September, we announced our new AppCat service. Since then, we’ve expanded it to offer object storage provisioning in both cloudscale.ch and Exoscale. Today, we’re thrilled to announce the immediate availability of a new AppCat service: DBaaS by Exoscale, available in the APPUiO Cloud Zone on Exoscale or, on request, on APPUiO Managed clusters on Exoscale.

Getting an Exoscale DBaaS instance is as simple as applying a Kubernetes object in YAML. A few minutes later, you get access to a freshly provisioned Exoscale DBaaS instance, with the credentials available in a Kubernetes Secret object. This lets you specify the dependency on an Exoscale DBaaS instance directly within your application deployment.
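As a sketch of how ordering such an instance could look (the API group, kind, and parameter names below are illustrative assumptions and may differ from the actual AppCat CRDs):

apiVersion: exoscale.appcat.vshn.io/v1
kind: ExoscalePostgreSQL
metadata:
  name: my-db
  namespace: my-project
spec:
  parameters:
    service:
      majorVersion: "14"  # requested PostgreSQL major version
      zone: ch-dk-2       # any Exoscale zone
    size:
      plan: hobbyist-2    # one of the available plans
  writeConnectionSecretToRef:
    name: my-db-credentials  # Kubernetes Secret receiving the credentials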

DBaaS by Exoscale on AppCat has the following outstanding features:

  • Available services: Create one or many instances of PostgreSQL, MySQL, Redis, Apache Kafka, or OpenSearch.
  • Multiple plans available: Find a variety of plans with different upscale and downscale possibilities, as well as redundancy options.
  • Available in all Exoscale zones: DBaaS plans are available in all current Exoscale zones, enabling multi-zone setups and geo-replication.
  • Termination protection: Termination protection against accidental deletion is in place for all DBaaS services to keep your databases safe.
  • Automatic backup policy: Backups run automatically every day; backup frequency and retention depend on the chosen plan.
  • Your data stays in Europe: All data is stored in the country of your chosen zone, fully GDPR-compliant. DBaaS is available in every zone across Europe.

Some crunchy technical details:

  • The DBaaS offering by Exoscale is powered by Aiven, one of the leading European companies for managing open-source data infrastructure in the cloud. This partnership offers Exoscale and VSHN customers an integrated environment for their complete cloud infrastructure – without any security compromise. All involved companies are GDPR-compliant to ensure the highest data protection standards.
  • Using open-source projects ensures customers are not locked into a vendor and always enjoy the latest technology standards.
  • This service is made available by our Crossplane providers for cloudscale.ch and Exoscale. It uses a Project Syn Commodore Component to deploy the Crossplane XRDs and Compositions to the APPUiO clusters.

Learn more about AppCat DBaaS by Exoscale, including its pricing and availability, on our product website, and then learn how to use it on our AppCat documentation website.

Tobias Brunner

APPUiO Tech

VSHN HackDay – Tailscale on APPUiO Cloud

Oct 21, 2022

As part of the fourth VSHN HackDay taking place yesterday and today (October 20th and 21st, 2022), Simon Gerber and I (Tobias Brunner) worked on the idea to get Tailscale VPN running on APPUiO Cloud.


Tailscale is a VPN service that makes the devices and applications you own accessible anywhere in the world, securely and effortlessly. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other.

The use cases we wanted to make possible are:

  • Access Kubernetes services easily from your laptop without the hassle of "[kubectl|oc] port-forward". Engineers in charge of development or debugging need to securely access services running on APPUiO Cloud but not exposed to the Internet. That’s the job of a VPN, and Tailscale makes this scenario very easy.
  • Connect pods running on APPUiO Cloud to services that are not directly accessible, for example, behind a firewall or a NAT. Routing outbound connections from a Pod through a VPN on APPUiO Cloud is more complex because of the restricted multi-tenant environment.

We took the challenge and found a solution for both use cases. The result is an OpenShift template on APPUiO Cloud that deploys a pre-configured Tailscale pod and all needed settings into your namespace. You only need a Tailscale account and a Tailscale authorization key. Check the APPUiO Cloud documentation to know how to use this feature.

We developed two new utilities to make it easier to work with Tailscale on APPUiO Cloud (and on any other Kubernetes cluster):

  • tailscale-service-observer: A tool that lists Kubernetes services and posts updates to the Tailscale client HTTP API to expose Kubernetes services as routes in the VPN dynamically.
  • TCP over SOCKS5: A middleman to transport TCP packets over the Tailscale SOCKS5 proxy.

Let us know your use cases for Tailscale on APPUiO Cloud via our product board! Are you already a Tailscale user? Do you want to see deeper integration into APPUiO Cloud?

Tobias Brunner

APPUiO

Announcing AppCat on APPUiO Cloud

Sep 14, 2022

We are pleased to announce the immediate availability of the VSHN Application Catalog (or "AppCat" for short) on APPUiO Cloud.

The VSHN AppCat is a cloud native marketplace offering services from providers such as Amazon AWS, Google Cloud, Microsoft Azure, Aiven.io, Exoscale, or cloudscale.ch, as well as managed services by VSHN. These services can be requested simply as Kubernetes resources and integrated into any GitOps or CI/CD workflow. Order services the same way you would deploy your application on Kubernetes! With the VSHN Application Catalog, you focus on your application, and we take care of bootstrapping and operating the required services.

The first AppCat service, available right now, is the Object Storage Bucket service in the APPUiO Cloud zone LPG 2. It enables developers to quickly provision S3-compatible storage buckets on cloudscale.ch. We will soon enable the service in the Exoscale CH-GVA-2 zone as well.
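As a sketch of how ordering a bucket could look (the API group, kind, and fields are illustrative assumptions and may differ from the actual AppCat CRDs):

apiVersion: appcat.vshn.io/v1
kind: ObjectBucket
metadata:
  name: my-bucket
  namespace: my-project
spec:
  parameters:
    bucketName: my-unique-bucket-name
    region: lpg  # cloudscale.ch region of the APPUiO Cloud zone
  writeConnectionSecretToRef:
    name: my-bucket-credentials  # Secret receiving the S3 endpoint and keys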

We will introduce more AppCat services shortly. Let us know which other services you would like to see in AppCat, and whether you would be interested in running AppCat in your own APPUiO Managed cluster.

You can find more information about AppCat on our product website and in the APPUiO Cloud documentation.

AppCat is based on Crossplane and Project Syn and is 100% open source! Check out the provider-cloudscale, provider-exoscale, and component-appcat projects on GitHub.

Tobias Brunner

APPUiO

Behind the Scenes at APPUiO Cloud

May 3, 2022

Since our original announcement of APPUiO Cloud last September, we’ve been very busy collecting feedback from our customers and working on improving the product with many new features and capabilities. And we are very pleased to see our platform grow more and more every day!

This article contains a summary of everything that has happened in the first six months of APPUiO Cloud, with some hints about what’s coming next.

Security

We have placed a strong focus on securing APPUiO Cloud to the maximum, while still offering flexibility and manageability to our customers.

All security policies are managed through the Kyverno policy engine, a CNCF Sandbox project. The management of organizations and teams, in turn, is based on Keycloak groups, with syncing to OpenShift provided by an ad hoc, open-source Kubernetes operator.

Management

APPUiO Cloud has been possible thanks to the incredible experience of the VSHN team and their investment in tooling. In particular, Project Syn has made it possible to automate many different tasks in APPUiO Cloud. And of course it is all 100% open source, as the APPUiO Cloud Commodore Component.

The APPUiO Cloud billing system is also a fine piece of automation, helping us process resource consumption and generate invoices for our customers in a timely manner. Christian Häusler, APPUiO Cloud’s Product Owner, described it in a previous blog post.

API and Portal

We’ve created an extensive APPUiO Cloud Control API, fully open source, based on the OpenShift and Kubernetes APIs, and running on top of vcluster. Another recent blog post also from our Product Owner Christian Häusler gives more information about this topic.

We have also recently launched the APPUiO Cloud Portal, fully open source, offering our users a self-service mechanism to manage their organizations and users, and to quickly access useful information about their APPUiO Cloud usage. The Portal uses the same Kubernetes-based APPUiO Cloud Control API mentioned above and is under heavy development, so you can expect new features to be released regularly.

Cilium

Following our partnership with Isovalent, APPUiO Cloud is currently using Cilium Enterprise as its CNI of choice, positioning it at the forefront of technology and opening fantastic opportunities for the future.

New Zone

After the first APPUiO Cloud zone opened in December (running on cloudscale.ch in Kanton Aargau), we quickly opened the second in February, this time on Exoscale in the Canton de Genève.

Documentation

And none of this effort could go into production without a great deal of documentation. Not only is our user documentation completely open source, we have also been updating our APPUiO Cloud for System Engineers documentation regularly, with lots of juicy details for your technical teams to learn about how APPUiO Cloud works.

Interested?

Would you like to know more about APPUiO Cloud?

Tobias Brunner

Press Tech

K8up Accepted for CNCF Project Onboarding

Nov 19, 2021

Update: K8up is now officially a CNCF Sandbox project!

We are very pleased to announce that K8up, the Kubernetes backup operator developed by VSHN, has been accepted into the CNCF project onboarding process!

We will now work with the CNCF to complete the onboarding process and provide all the information required to transfer project ownership to the CNCF.

During this phase, we at VSHN will continue our work and improve K8up with new features and capabilities. The GitHub project is the mainstay of this work, and you are warmly invited to check out our Getting Started guide and learn more about K8up.

Many thanks to the CNCF for their trust in K8up! We know this project will be a great addition to the ever-growing world of cloud native projects, and we look forward to its future!

Tobias Brunner

Tech

Happy 30th Birthday, Linux!

Aug 25, 2021

On the evening of August 25th, 1991, Linus Torvalds wrote a short message on the comp.os.minix Usenet newsgroup:

I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones.

Fast-forward 30 years, and to make a long story short, well: Linux is now both big and professional. It’s nearly everywhere, hidden behind the shiny graphical user interface of your smartphone or tablet (well, except for iOS users), in your gaming box, in your car, and even on Mars.

I discovered Linux as a young boy still at school, in 1998. It was in a bookstore where I was attracted by a shiny package (I had no idea what Linux was) called "SuSE Linux 6.0" (nowadays available for download from archive.org), and since then I couldn’t stop working with it.

Over time, I got more and more into Linux, and it became an important hobby, with all my PCs running Linux, including the home server under my desk. Many years later, in 2011, I could finally start working full-time as a Linux administrator. My desktop computer has been powered exclusively by Linux since around 2004, and I’ve been a big fan of KDE ever since.

The author’s original SuSE Linux 6.0 CDs, originally released in December 1998.
More memorabilia from early Linux versions.

Today, the Linux Kernel is more present, yet less visible than ever. We interact with it on containers and in VMs on the cloud, and it gets more and more abstracted away, deep down in serverless architectures, making the Kernel even more invisible than ever before. Albeit out of sight of most users, Linux has become much more solid, mature, and pervasive, and does its great job behind the scenes without interruption.

Linux empowers VSHN at all levels; not only is it the dominant technology we use every day, powering Kubernetes, containers, and cloud VMs, but it is also the operating system that the majority of VSHNeers (around 66%, according to an internal poll) use for their daily job, the remaining third using macOS or Windows.

Some numbers: of those two-thirds of VSHNeers who use Linux every day on their laptops, 61% chose Ubuntu (or one of its various derivatives); 17% use Arch, 11% Fedora, and the rest use Debian, Mint, and other distributions. Some even contemplate switching to Qubes OS soon! As for desktop environments, around 35% use GNOME, 25% use KDE, 20% use i3, and 6% use Cinnamon.

Each one of us VSHNeers has a unique feeling about Linux; here are some thoughts about what Linux means to us:

Before using Linux, I was primarily focused on how to use and work with computer systems. With the switch to Linux I started to understand how they actually work.

What I really appreciate about Linux is that it’s (still relatively) lightweight, powerful, transparent and adaptable. I do heavyweight gaming and video livestreaming on the same OS that runs my file servers and backup systems (not all on the same machine, don’t worry). Even my car and my television run Linux! This universality combined with the permissive licenses means that whenever one industry improves Linux (the kernel), every other industry profits.

I originally just wanted to play Minecraft with my friends. Suddenly I had to learn how to host this on a Linux server, which then turned into a fascination on how things work on the backstage. It’s the urge to see how our modern society works!

Linux is the operating system of our software-driven world.

On to the next 30 years of Linux!

Tobias Brunner

Project Syn Tech

The New Commodore Components Hub

Jul 28, 2021

We’re very happy to announce a new member of the Project Syn family: the new Commodore Components Hub. This is the new central point of reference and information for all Commodore components available on GitHub.

Commodore Components Hub

Not only does the Commodore Components Hub list all existing components on GitHub, it also automatically imports and indexes the documentation of each and every one, written in Asciidoc and rendered as an Antora documentation site. This makes it very easy to find the perfect component that suits your needs.

The source code used to generate the Commodore Components Hub was born during our recent VSHN HackDay; it’s written in Python and 100% open source (of course!). Check it out on GitHub.

Get your Component on the Hub!

Adding your own Commodore Component to the Hub is very easy: just add the commodore-component topic to your project on GitHub, and voilà! The Commodore Components Hub is rebuilt every hour from 6 AM to 7 PM (CET).

We recommend that you write the documentation of your component with Asciidoc in the docs folder of your component. This ensures that users will be able to find your component and, most importantly, learn how to use it properly.

We look forward to featuring your Commodore Components on the Hub!

Tobias Brunner

General Tech

Crossplane – The Control-Plane of the future

Apr 29, 2021

VSHN has been a fan of Crossplane since its very early days in 2019. Since then, the project has matured a lot and is now used in production by VSHN and many others. In mid-2020, Crossplane became a CNCF Sandbox project, and it recently applied for promotion to CNCF Incubation. It’s time for an introduction to Crossplane, a look at why it matters to VSHN, and a report on our production usage.

This blog article is also available as a video talk (with corresponding slides).

Use Case: Self-Service Marketplace with Crossplane Service Broker

The very first use case we fulfilled with Crossplane was a project for a VSHN customer who provides a self-service marketplace to their internal customers (developer and service teams). This marketplace is available in their internal Cloud Foundry environment, presented as a one-click service-provisioning web interface. Via this interface, the end user can order a MariaDB Galera cluster or a Redis cluster with different characteristics (e.g. available storage or memory), called Service Plans, with one click. This infrastructure runs in an on-premises data center which doesn’t provide any of the well-known hyperscaler services and APIs. We were able to use the Crossplane Helm provider to deploy services which are specified by Crossplane Composite Resources and Compositions.
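As a sketch of this pattern, a Composition can template a provider-helm Release object like the following; the chart, values, and names are illustrative assumptions, not the customer’s actual configuration:

apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-mariadb-galera
spec:
  forProvider:
    chart:
      repository: https://charts.bitnami.com/bitnami
      name: mariadb-galera
      version: "5.6.5"
    namespace: svc-my-mariadb   # namespace the chart is installed into
    values:
      db:
        name: mydb              # plan-specific values are patched in by the Composition
  providerConfigRef:
    name: helm-provider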

In the world of Cloud Foundry, the Open Service Broker API is used to provision and manage services. To integrate natively into the marketplace, we developed a Crossplane Service Broker, which maps the concepts of the Open Service Broker API specification to the concepts of Crossplane. As the two match very well, the integration and translation between these APIs is straightforward.

The Crossplane Service Broker is Open Source and available under https://github.com/vshn/crossplane-service-broker.

This concept lays the foundation for many upcoming new services from VSHN, under the name "Application Catalog". Watch this space for more articles about this topic!

What is Crossplane?

In short:

Crossplane is an open source Kubernetes add-on that enables platform teams to assemble infrastructure from multiple vendors, and expose higher level self-service APIs for application teams to consume, without having to write any code. 

(Source: crossplane.io)

To achieve this promise, Crossplane brings three main features with it:

  • Providers: These are the pluggable building blocks to provision and manage infrastructure. Each provider enables the use of an upstream or third-party API, for example that of a cloud provider, and manages the abstraction to it by bringing Kubernetes custom resources (CRDs) with it. These custom resources are called "managed resources" in the Crossplane world and resemble the upstream API as closely as possible. As each upstream provider has its own opinionated API, Crossplane aligns these interfaces by providing its own opinionated structure, the Crossplane Resource Model (XRM). This allows, for example, a unified API for things like status conditions and references.
    Crossplane itself already ships with a lot of providers out of the box, and many third-party providers are being developed under the crossplane-contrib GitHub organization.
  • Compositions: This is a Crossplane-specific construct which makes it possible to define new custom APIs – called "composite resources" (XRs) – which provide a pre-configured set of managed resources. By predefining a set of managed resources – called a "composition" – the end user of the platform (e.g. the developer) can obtain infrastructure in an actually usable, self-service way. The user doesn’t have to care about the inner details of, for example, an AWS RDS instance, which most of the time needs a lot of configuration and other objects (VPC, networking, firewalling, access control). This work is done by the platform team (see the sketch after this list).
  • Packages: Opinionated infrastructures are shared by packaging up all the resources (XRDs, Providers, Compositions) into a package and redistributing it as a standard OCI image.
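To make these concepts concrete, here is a minimal sketch of an XRD that defines a composite resource and offers a claim; the group, kind, and parameters are illustrative, loosely following the upstream documentation:

apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.example.org
spec:
  group: example.org
  names:
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:                  # offer a namespaced claim for this XR
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  storageGB:
                    type: integer
                required:
                - storageGB

A developer can then request an instance with a short, namespaced claim:

apiVersion: example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-db
  namespace: team-a
spec:
  parameters:
    storageGB: 20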

The Crossplane developer Nic Cope has written a very good overview in Crossplane vs Cloud Provider Infrastructure Addons and the Crossplane documentation gives another good introduction.

The project overview slides Crossplane – CNCF Project Overview walk the reader through the project (as of March 2021).

Why do we care?

The three core features described above are already very cool in themselves, but we feel that there is much more behind them.

As Crossplane leverages the Kubernetes API and concepts, it enables a lot of possibilities:

  • Usage of a well-known API. There is no need to learn a completely new API.
  • This allows reusing tooling like GitOps to declaratively manage infrastructure.
  • The infrastructure is defined in the same language (Kubernetes API style) as the application deployment is described. No need to learn a new domain-specific language.
  • With that, there can be one place which describes everything needed to run an application, including all the infrastructure needs (databases, caches, indexes, queues, …).
  • As Crossplane is a Kubernetes operator, it has reconciliation built into its heart and therefore continuously makes sure that the infrastructure adheres to the defined state. Manual changes to the infrastructure are rolled back immediately; no configuration drift is possible this way.
  • Battle-tested Kubernetes RBAC rules help to control access to infrastructure provisioning and management.

Kubernetes is much more than „just“ container orchestration. It’s the platform aspect that counts, the well-defined API and the concepts of the control-loop. Crossplane brings this to the next level, making Kubernetes more independent of containers than ever before.

VSHN lately updated its company beliefs in which we set out to bet on Kubernetes as the technical foundation for everything we do in the future. With Crossplane we can now even provision all the infrastructure we need straight out of Kubernetes, no need to use another, disconnected tool anymore.

One API to rule them all:

  • Core Kubernetes APIs to orchestrate application containers
  • Crossplane APIs to orchestrate infrastructure

Comparison to other Infrastructure as Code tools

The above description might already shed some light on how Crossplane differs from other Infrastructure as Code tools like Terraform, Pulumi, or Ansible.

Why not Terraform?

When directly comparing Crossplane with Terraform, the most important difference is that Terraform is a command-line (CLI) tool acting on control planes, whereas Crossplane is a control plane itself, active all the time.

To configure infrastructure with Terraform, the user declares the intended architecture in the domain-specific HashiCorp Configuration Language (HCL). The CLI then has to be invoked manually, and Terraform plans and applies the configuration, storing the result in a state file that represents the current infrastructure. When something changes in the infrastructure without Terraform being told about it, the stored state differs from the actual state, and nobody knows what will happen on the next CLI invocation. Also, when you only want to change one of the many provisioned services, Terraform always configures all of them, which can take a long time and unintentionally affect other services. These and many other issues make automating Terraform very hard; many more aspects are discussed in Crossplane vs Terraform.

That doesn’t mean Terraform is bad, it’s just a completely different approach to manage infrastructure.

Why not cloud specific Kubernetes operators?

Many, if not all, of the big cloud providers (hyperscalers) offer a native Kubernetes operator to manage their cloud out of Kubernetes: Google Cloud has its Config Connector, Azure the Service Operator, and AWS the Controllers for Kubernetes. All these operators are specific to the cloud they are engineered for and provide low-level access to its services. While it’s perfectly fine to use them, Crossplane provides an abstraction layer where the same APIs can be used across clouds, presenting the platform user the same API regardless of which cloud the cluster is running on and how the individual services are named. By leveraging the Composition feature of Crossplane, the user doesn’t have to care about everything needed to properly provision a service: for example, a production-grade AWS RDS instance has hundreds of configuration values and needs networking, a security group, an API connection secret, a user, a schema, and grants. All of this can be easily abstracted by Crossplane Compositions. An in-depth discussion can be found in Crossplane vs Cloud Provider Infrastructure Addons.

Keeping up with infrastructure providers

Cloud providers are constantly adding and changing services, and the Crossplane providers somehow have to keep up. As it’s nearly impossible, and also impractical, to keep up with the changes manually, the Crossplane developers have engineered tools to generate Crossplane providers out of already existing and well-maintained SDKs.

Examples:

Crossplane Terminology

To better understand Crossplane, a few Crossplane specific terms will need an explanation:

  • Providers: Extend Crossplane to enable infrastructure resource provisioning. In order to provision a resource, a Custom Resource Definition (CRD) needs to be registered in the Kubernetes cluster, and its controller should be watching the Custom Resources those CRDs define.
  • Managed Resources: The Crossplane representation of cloud provider resources. These are primitive, low-level custom resources that can be used directly to provision external cloud resources for an application, or as part of an infrastructure composition.
  • Crossplane Resource Model (XRM): Standardization between all providers, defining for example the status properties and resource references.
  • Composite Resource (XR): A special kind of custom resource that is composed of other resources. Its schema is user-defined.
  • Composite Resource Definition (XRD): Defines a new kind of composite resource, and optionally the claim it offers.
  • Composite Resource Claim (XRC): Declares that an application requires a particular kind of infrastructure, as well as specifying how to configure it.
  • Composition: Specifies how Crossplane should reconcile a composite infrastructure resource – i.e. what infrastructure resources it should compose (see the sketch after this list). It can be used to build a catalog of custom resources and classes of configuration that fit the needs and opinions of an organization.
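Continuing the illustrative example from above, a Composition tells Crossplane which managed resources to create for an XPostgreSQLInstance; the GCP CloudSQLInstance base shown here is an assumption for illustration:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xpostgresqlinstances.gcp.example.org
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XPostgreSQLInstance
  resources:
  - name: cloudsqlinstance
    base:
      apiVersion: database.gcp.crossplane.io/v1beta1
      kind: CloudSQLInstance
      spec:
        forProvider:
          databaseVersion: POSTGRES_12
          region: europe-west6
          settings:
            tier: db-custom-1-3840
            dataDiskSizeGb: 20
    patches:
    # Map the user-facing parameter onto the provider-specific field.
    - fromFieldPath: spec.parameters.storageGB
      toFieldPath: spec.forProvider.settings.dataDiskSizeGb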

Video introduction

Tobias Brunner

General Project Syn

Crossplane Community Day December 2020 Recap

Dec 16, 2020

Next-gen DevOps with Crossplane by Tobias Brunner

Lightning talk at the Crossplane Community Day December 2020

I attended the Crossplane Community Day (actually evening for Europeans) on December 15th, 2020, which was titled "Modernizing with an API-centric Control Plane".

In my lightning talk "Crossplane as a cornerstone in a next-gen hosted DevOps platform", I introduced the project which we’re currently working on with Swisscom and revealed the open sourcing of our Crossplane Open Service Broker API integration. This Open Service Broker (OSB) integration with Crossplane allows any OSB-capable client infrastructure, like Cloud Foundry, to consume Crossplane objects, and makes it possible to offer any kind of service via the many Crossplane providers. More about what we do with this integration will follow in an upcoming blog post. The application is currently in a PoC state and available on GitHub under https://github.com/vshn/crossplane-service-broker-poc. We’re actively working on the production implementation in https://github.com/vshn/crossplane-service-broker.

Crossplane was released in version 1.0 at this event. I had been eagerly awaiting this release for about a year, ever since we first discovered Crossplane and experimented with it. Congratulations to the Crossplane community and contributors; I’m really looking forward to seeing what happens with Crossplane in 2021 and beyond – and what we’ll do with it at VSHN. Exciting times ahead!

There were many great and very interesting talks to listen to. The recording of the full event will be available in the coming days; check the social media channels to get notified about it.

During the event I was active in the Crossplane Slack channel and was able to capture some very interesting questions and answers around Crossplane and the wider ecosystem.

Here are my highlights, quoted:

Question

Why would one want to use Crossplane rather than the hyperscaler’s operators, like AWS ACK and Azure Service operator directly?

Answer

Good question – I think the key reasons you might choose Crossplane boil down to the XRM and Composition.
XRM is the Crossplane Resource Model – if you’re using more than one cloud, our CRs work the same way across them all. Similar patterns.
(Or even if you’re using providers for things like SQL users and database, or Helm charts.)
Composition is a layer we provide on top of XRM compliant resources that let you define your own APIs (your own CRs) without writing code. So you can build your own classes of service and opinionated APIs atop those raw low level APIs.

Also bears mentioning that we are working with them on code generating crossplane controllers from the same codegen pipeline, so below a certain level of abstraction we will be sharing code for interacting with provider SDKs.

https://blog.crossplane.io/accelerating-crossplane-provider-coverage-with-ack-and-azure-code-generation-towards-100-percent-coverage-of-all-cloud-services/

Question

How about advantages if any for on-prem private clouds?

Answer

The answer is about the same there, as compared to something that might operate databases etc on-prem. Admittedly though our provider support for on-prem is lighter on the ground. Definitely appreciate contributions there!

The other thing I’ll add here, is Crossplane can give you a cloud-like provisioning experience in your on-premises environment which can be a big win for developers.

Question

Is it correct that external resources are not namespaced in Crossplane? If so, what is the rationale? If there’s a design doc that covers it, that would be great

Answer

With the whole separation of concerns thing we treat managed resources (our CRs that represent ERs) as a platform / infra concern, so they’re cluster scoped like a node or a PV. The claims that represent them are namespaced. This is kind of handy in two ways:

  • If you imagine an API server that’s dedicated to Crossplane, the platform team can view all the managed resources in one big global view, but see the claims that represent those resources broken down by namespace (i.e. often by team).
  • Sometimes we don’t want to offer a claim for an XR – e.g. a VPC XR is probably only something the platform operators want to control.
  • The big one – sometimes we want cross resource refs that would violate namespace boundaries. Imagine for example the platform folks create a VPC XR, and folks making claims down in namespaces can make a claim for a database that they want to be connected to that VPC. If the VPC was off in the “platform-infra” namespace or whatever they’d need to reference it across namespaces.

An alternative answer is that we designed for a world where we can partition concerns just like PV/PVC

Question

Re the current Terraform talk – is the idea to use Terraform providers to generate Crossplane CRDs and controllers that run independent of Terraform… or is the idea to proxy the CRDs through an in-cluster Terraform controller?

Answer

More the former. Terraform is actually a couple of processes running together – each provider is a process that has a gRPC API, and the terraform CLI tool sits in front of that. We run those provider binaries, but we put a Kubernetes controller in (i.e. a Crossplane provider) in front of them instead of the terraform CLI.

Furthermore I discovered some new tools:

  • Kubernetes External Secrets: "Kubernetes External Secrets allows you to use external secret management systems, like AWS Secrets Manager or HashiCorp Vault, to securely add secrets in Kubernetes."
  • CDK for Kubernetes: "Define Kubernetes apps and components using familiar languages", with integration for Crossplane discussed here: Crossplane issue 1955.

Tobias Brunner

Project Syn Tech

Second Beta Release of Project Syn Tools

Jul 23, 2020

Without further ado, we’re announcing release 0.2 of the Project Syn tools.
Since the first public release in mid-March this year (read more about it in First Pre-Release of Project Syn Tools), we have used the tools on a daily basis, in particular for the development of our new product "VSHN Syn Support". And of course we have incorporated all of that experience into the source code. The main features are now in place, and they are getting better and better on a daily basis.

New Features and Improvements

When reading the announcement of a new version, engineers are always interested in new features and improvements. These are the most important additions since 0.1:

  • Everything required for setting up a new cluster (GitOps repository, cluster config file in Tenant configuration, Vault secrets, and more) is now fully automated. One API call to register a new cluster and you’re done.
  • In parallel to the creation of clusters, we have also automated all steps required to decommission them (Repo deletion, Vault secret cleanup, and more). Just delete it and everything is gone (of course, there are preventive measures in place to not make this an Uh-oh moment).
  • Commodore got a lot of improvements: for local development, and for developing new components with a comprehensive cookiecutter template.

Document All The Things

Besides implementing new features and fixing bugs, we put a lot of effort into the documentation. The main documentation page https://syn.tools/ got a completely new structure and a huge amount of new content. We’re in the process of adding new pages frequently, so make sure to check it out every so often.
Before 0.2, it was hard to get started with Project Syn and to understand what it was all about. To solve that issue, we wrote the following introductions:

Our next goal is to document the concepts behind configuration management with Commodore in detail.

Commodore Components on GitHub

An important building block of Project Syn are Commodore Components. Over the past months we’ve written and open sourced more than 10 Commodore Components on GitHub. They offer the flexibility to install and configure Kubernetes system services, adapted to their respective distribution and infrastructure.
These Commodore Components can be found by searching for the „commodore-component“ topic on GitHub.
We are writing and refining more and more Components every day. We are going to publish some guidelines about how to write Commodore Components (one specifically for OpenShift 4 Components is already available) and eventually enforce them via CI jobs and policies.
An upcoming Component Writing Tutorial will help beginners start writing their own Components or contribute to existing ones.
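To give a feeling for what a Commodore Component looks like from the configuration side, here is a minimal, illustrative sketch of a component’s default parameters (typically a class/defaults.yml file in the component repository). The component name and all keys below are made up for this example.

```yaml
# Illustrative sketch of a Commodore Component's default parameters
# (class/defaults.yml). Component name and keys are made up.
parameters:
  my_component:
    namespace: syn-my-component
    images:
      my_service:
        registry: quay.io
        repository: example/my-service
        tag: "1.2.3"
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
```

Such defaults sit at the bottom of the configuration hierarchy and can be overridden per distribution, cloud, tenant, or cluster.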

The Road to 1.0 and Contributions

What we learnt while working on Project Syn over the last few months gave us a very clear picture of what we want to achieve in version 1.0. The roadmap contains the most important topics:

  • Documentation! We have to and will put a lot of effort into documentation, be it tutorials, how-to guides, or explanations.
  • Full Commodore automation to automate and decentralize the cluster catalog compilation process.
  • Developer experience improvements for simplifying the development of Commodore Components even further.
  • Engineering of a new tool helping users to launch managed services on any Kubernetes cluster.
  • Cluster provisioning automation integration, to leverage third party tools for automatically bootstrapping Kubernetes clusters.

This is not all; check the more detailed roadmap on the Project Syn page for more. The GitHub project will grow with issues over the next few weeks.
If you think that this sounds interesting and you would like to contribute, we now have an initial Contribution Guide available and are very open to suggestions and pull requests. Just get in contact with us if you’re interested.

Our Product: VSHN Syn Support

Besides the open source project, we have also been working on defining what added value you can get from VSHN. We call this product „VSHN Syn Support.“ If you’re interested in getting commercial support from VSHN for Project Syn on a Managed Kubernetes cluster based on OpenShift 4 or Rancher, get in touch with us. More information about VSHN Syn Support can be found here.

Tobias Brunner


First Pre-Release of Project Syn Tools

10. March 2020

We have been working hard since the initial announcement of Project Syn back in November 2019, and are proud to announce version 0.1.0, the first pre-release of a set of Project Syn tools.
A quick reminder of what Project Syn is about:

Project Syn is a pre-integrated set of tools to provision, update, backup, observe and react/alert production applications on Kubernetes and in the cloud. It supports DevOps through full self-service and automation using containers, Kubernetes and GitOps. And best of all: it is Open Source.

TL;DR: The code is on GitHub, under its own organization: https://github.com/projectsyn. The official documentation is at https://docs.syn.tools/ (the documentation is open source, too!).

What does Project Syn do?

Short answer: it enables the management of many Kubernetes clusters and provides a set of services to the users of those clusters. Project Syn is composed of many tools: some specially developed for the project, some already existing, all open source. It’s not only about tooling, it’s also about processes and best practices.
The actual story is a bit longer.

Features of version 0.1.0

To manage a big fleet of Kubernetes clusters, we need an inventory with the following information:

  • The cloud providers they are running on;
  • Locations;
  • Tenants each cluster belongs to;
  • Kubernetes versions deployed;
  • Kubernetes flavor / distribution used;
  • …and a lot more!

This is what the Project Syn tool Lieutenant (written in Go) gives us: an inventory application to register clusters, assign them to a tenant, and store inventory data. It consists of a REST API (based on the OpenAPI 3 specification) and a Kubernetes Operator which stores data directly in the underlying Kubernetes cluster (in CRDs) and acts on events.
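As a rough illustration, registering a cluster boils down to creating an object like the following, either through the REST API or directly as a custom resource handled by the operator. This is a minimal sketch; the exact field names of the Lieutenant Operator CRDs may differ.

```yaml
# Minimal sketch of a cluster registration handled by Lieutenant.
# Field names follow the Lieutenant Operator CRDs of the time and
# may differ in detail.
apiVersion: syn.tools/v1alpha1
kind: Cluster
metadata:
  name: c-mycluster-1234
  namespace: lieutenant
spec:
  displayName: Production cluster of Tenant A
  tenantRef:
    name: t-tenant-a        # the tenant this cluster belongs to
  facts:                    # inventory data consumed by Commodore
    cloud: cloudscale
    distribution: openshift4
    region: ch-1
```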
Knowing about clusters is just one part. Another important element is to continuously deploy and monitor system applications (like K8up, Prometheus, …) on Project Syn enabled Kubernetes clusters. This is all done with the GitOps pattern, managed by Argo CD, which is deployed to every cluster. Thanks to Argo CD, we can make sure that the applications deployed to the cluster are configured exactly as specified in the corresponding Git repository, and that they are running just fine.
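The GitOps wiring can be pictured as an Argo CD Application that points the cluster at its Git repository. A minimal sketch, with repository URL and path as placeholders:

```yaml
# Minimal sketch of the GitOps pattern: Argo CD keeps the cluster in
# sync with a Git repository. URL and path are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-catalog
  namespace: argocd
spec:
  project: default
  source:
    repoURL: ssh://git@git.example.com/syn/c-mycluster-1234.git
    targetRevision: master
    path: manifests
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true          # remove resources deleted from the repository
      selfHeal: true       # revert manual changes on the cluster
```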
Each Project Syn enabled Kubernetes cluster has its own so-called catalog Git repository. It contains a set of YAML files specifically crafted for that cluster: the system tools to operate the cluster, plus well-configured self-service tooling for the cluster’s users.
The generation of these YAML files is the responsibility of the Project Syn tool Commodore (written in Python). Commodore builds upon the open source tool Kapitan and leverages inventory data from Lieutenant. After gathering all the needed data about a cluster from the inventory, Commodore fetches all defined components, parameterizes them with configuration data from a hierarchical Git data structure, and generates the final YAML files, ready to be applied by Argo CD to the Kubernetes cluster. The Lieutenant API also knows where the catalog Git repository is located, so Commodore can automatically push the generated catalog to the matching Git repository.
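The hierarchical configuration data mentioned above follows Kapitan’s class/parameters model. The sketch below is illustrative rather than taken from an actual repository: a cluster-level file pulls in more general classes and overrides individual parameters.

```yaml
# Illustrative sketch of the hierarchical configuration model
# (Kapitan-style classes); names and keys are made up.
classes:
  - global.common                    # defaults for all clusters
  - global.distribution.openshift4   # distribution-specific settings
  - global.cloud.cloudscale          # cloud-specific settings
parameters:
  cluster:
    name: c-mycluster-1234
  my_component:
    replicas: 3                      # overrides a higher-level default
```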
Secrets are never stored in GitOps repositories. They are instead stored securely in HashiCorp Vault and only retrieved during the „apply“ phase, directly on the destination Kubernetes cluster. This process is supported by the Kapitan secret management feature and by Commodore, which prepares the secret references during catalog generation. Argo CD calls kapitan secrets --reveal during the manifest apply phase, which then connects to Vault to retrieve the secrets and stores them in the Kubernetes cluster, ready to be consumed by the application.
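In practice this means the catalog only ever contains a secret reference, which the reveal step resolves against Vault at apply time. A minimal sketch, with the Vault path as a placeholder:

```yaml
# Sketch: the catalog stores a Kapitan reference instead of the secret
# value; the reveal step resolves it against Vault at apply time.
apiVersion: v1
kind: Secret
metadata:
  name: my-database
type: Opaque
stringData:
  password: ?{vaultkv:clusters/c-mycluster-1234/my-database/password}
```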
The management of all these Git repositories is the responsibility of the Lieutenant Operator (written in Go, based on Red Hat’s Operator SDK). It manages remote Git repositories (GitLab, GitHub, Bitbucket, etc.) and prepares them for Commodore and Argo CD, for example by configuring an SSH deploy key.
The Project Syn tool Steward (written in Go) is responsible for enabling Project Syn on a Kubernetes cluster: it communicates with the Lieutenant API and performs the initial bootstrapping of Argo CD. This bootstrapping includes basic maintenance tasks: should Argo CD be removed from the cluster inadvertently, Steward will automatically reinstall it. An SSH deploy key is generated during bootstrapping and transmitted back to the API. With this procedure, the whole GitOps workflow can be bootstrapped without any manual interaction.

Analogies with Puppet

For those familiar with Puppet, there are some similarities with the design of Project Syn:

  • Puppet Server: Commodore and Kapitan to generate the catalog, matching the facts from the cluster.
  • Puppet DB: Lieutenant acting as inventory / facts registry.
  • Hiera: Kapitan with its hierarchical configuration model.
  • Puppet Agent: Steward and Argo CD on the cluster. Steward to communicate with the API and Argo CD to apply the catalog.
  • Puppet Modules: Commodore Components, bringing modularity into Kubernetes application deployment.

Many of these concepts are documented in the Project Syn documentation pages, specifically the Syn Design Documents, which record all the design decisions (even though many are still works in progress).

What are the next steps for Project Syn?

This is really just the beginning! There are a lot of plans and ideas for the future evolution of Project Syn. We have crafted an initial roadmap and published it as part of the official Project Syn documentation.
This initial pre-release is just the tip of the iceberg. Under the surface, a lot more is brewing, to be released as soon as possible. To reiterate: it’s not only about tools, but also about concepts and processes, which also means a lot of documentation will emerge over the coming months.
One focus of this initial pre-release was to lay the foundation for future development, with a strong emphasis on the operations side. Future milestones will broaden the scope to include more and more self-service possibilities for the user, including tight integration of Crossplane for easy and fully automated cloud service provisioning.
We at VSHN are now starting to use Project Syn for an initial set of managed Kubernetes clusters, and will continue to develop the concepts, tools, and processes as we learn about more use cases and gather real-life experience.

How can I contribute?

Project Syn is a young project taking its first steps in the open. Many things are just getting started, including the documentation and the contribution guidelines. Testing and giving feedback through GitHub issues is certainly a great way to start contributing. And of course, if you are looking for a Managed Kubernetes or Managed OpenShift cluster, get in touch with us!

Learn more

Second Beta Release of Project Syn Tools

Tobias Brunner


Announcing Project Syn – The Next Generation Managed Services

20. Nov. 2019

VSHN announces Project Syn

VSHN is proud to announce Project Syn, the next generation Open Source managed services framework for DevOps and application operations on any infrastructure based on Kubernetes.

Project Syn is a pre-integrated set of tools to provision, update, backup, observe and react/alert production applications on Kubernetes and in the cloud. It supports DevOps through full self-service and automation using containers, Kubernetes and GitOps. And best of all: it is Open Source.

Project Syn combines tools and processes to make the best out of containers, Kubernetes and Cloud Services

VSHN’s mission is to automate all aspects of software operations to help software developers run their applications on any infrastructure. Since 2014, we have been using Puppet and Ansible to automate monitoring, backups, logs, metrics, service checks and alerts. Project Syn is the next generation of application operations tooling, packaged as containers and orchestrated on any Kubernetes service.
Project Syn provides an opinionated set of integrated tools and processes on any Kubernetes service and cloud infrastructure provider:

  • GitOps and infrastructure as code: declare the application environment requirements in Git and let the tooling take care of creation/changes
  • Observability and insights: service checks, metrics, logs, thresholds, alert rules and paging
  • Service provisioning: declare backends and other service dependencies as portable Kubernetes objects (CRDs) and let the tooling create the infrastructure-specific service (e.g. database service, S3 storage service, etc.) with best-practice default configuration
  • Backup: regularly back up all user data from each service and persistent volume
  • Application container deployment, automatically integrating the topics above
  • Works on any Kubernetes service and cloud provider

The Project Syn tooling is a fundamental part of your DevOps journey and provides you with production-quality Ops.

Cloud Agnostic with Crossplane

VSHN has always been cloud agnostic and will further strengthen this paradigm by partnering with Crossplane – „the open source multicloud control plane“. By leveraging Crossplane, a Project Syn user can specify the backend services they need in a completely cloud-independent way. Provisioning of these services is fully automated and handled by the tooling in the most suitable way. As an example: when a MySQL service is requested, Crossplane either provisions a cloud service, if the cloud provides one, or deploys it inside the Kubernetes cluster leveraging a service operator. This way, the user doesn’t have to care about the implementation and can fully focus on the application.
Project Syn is designed to run on all Kubernetes distributions and clouds. It is prepared to support the specific features of any given cloud and Kubernetes distribution by abstracting the specifics. This means Project Syn will run on OpenShift with APPUiO.ch, on Rancher Kubernetes, and on all managed Kubernetes offerings. Support for more Kubernetes flavors and clouds is added on demand. Plans exist to support single-node Kubernetes clusters using Rancher k3s.

Details of Project Syn

Project Syn will become an open source project in the near future. It consists of several components working together to provide the features needed for running applications in production on Kubernetes, acting as an operations framework. Multiple Kubernetes distributions are supported, and it can be installed on an already existing Kubernetes cluster or even provision a new one. Taking care of what is running inside a Kubernetes cluster (including the Kubernetes cluster itself) is at the heart of Syn.

Production readiness
Syn is made for production. It brings everything needed to run an application in production, such as monitoring of all important services and backups of data.
Self-service
All parts of Syn are engineered for self-service. Define what you need, declaratively in code, and the platform does it for you. Be it provisioning services inside or outside of the cluster, configuring consistent backups including their monitoring, or setting up the matching monitoring and alerting rules: the platform automatically takes care of it.
Developer happiness
By being able to work with the platform without external dependencies, developers can express the needs of their application in code (e.g. „a PostgreSQL database is needed“) and do so independently.
Service provisioning
Provisioning services like databases outside of the cluster (e.g. in the cloud) or inside the cluster is completely automated by Project Syn, leveraging the endless possibilities of Crossplane. It is a key part of the platform and fully integrated with all the important production readiness features.
Crossplane abstracts the specifics of the service to be provisioned. As a user of the platform, you just tell Crossplane what you want, e.g. a MySQL server, and Crossplane takes care of deploying the best matching service, depending on which cloud it runs on. On AWS, Crossplane would provision an RDS instance; on a cloud without a managed database offering, it would provision an in-cluster MySQL instance managed by a matching database operator, installed and configured by the Project Syn platform.
The reconciliation process of Crossplane ensures that the provisioned services are configured as intended at all times, and takes measures should the configuration drift.
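To make this concrete, such a request could look roughly like the following resource claim, in the style of Crossplane’s early claim model. This is a sketch: the class name is a placeholder, and the exact fields differ between Crossplane versions.

```yaml
# Sketch of a cloud-independent database request in the style of
# Crossplane's early resource claims; the class name is a placeholder.
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: app-database
  namespace: my-app
spec:
  classRef:
    name: standard-mysql        # admin-provided resource class
  engineVersion: "5.7"
  writeConnectionSecretToRef:
    name: app-database-conn     # credentials handed to the application
```

The same claim can be satisfied by an RDS instance on AWS or by an in-cluster operator-managed MySQL, without the application noticing the difference.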
Best-practices configuration
Project Syn makes use of best-practice configuration, learned from running Kubernetes and applications on top of it in production for many years, and applies it continuously. As best practices evolve over time, they are integrated as we learn them.
Data protection
Data safety is key. Project Syn makes sure to continuously back up the important data, both at the filesystem level and in an application-consistent way. All data is stored encrypted at rest and in transit, leveraging the capabilities of modern application offerings.
Security
No secrets are stored in plain text; they all live in protected key stores. By applying best-practice configuration, we ensure that all components are configured securely by default. Only TLS-secured connections are used.
Configuration auditing
All configuration is stored in Git and applied using the GitOps pattern. This provides full auditability and a complete history of the configuration. By signing the generated configuration data, we ensure that only trusted configuration is applied to the cluster.
In-cluster configuration reconciliation ensures that the configuration is up to date at all times and matches the intended state.
Regular maintenance
Project Syn components are regularly maintained in a fully automated way, ensuring that the latest patches are installed and no vulnerable components are part of the system.
Decentralization
A key part of Project Syn is its decentralized approach. All parts are designed to work without relying on a central management service.
Open Source
One of the goals of Project Syn is to make use of existing, fantastic open source applications and glue them together into a unified whole. To name the most important ones:
  • Kapitan
  • Jsonnet
  • Crossplane
  • Argo CD
  • Prometheus – Alertmanager
  • Grafana
  • Loki
  • Vault
  • Alerta
  • Renovate
All Project Syn components specifically written to tie all these tools together will become open source as well. Contributions from Project Syn are continuously brought upstream to support these tools.

Project Syn as Managed Service by VSHN

Project Syn is an Open Source project and can be used by anyone for free. In addition, VSHN offers Project Syn as a managed service. Taking care of the Project Syn platform with engineering, 24/7 operations, and maintenance is a key part of the offering. With additional services, VSHN ensures that the platform can be trusted to run business-critical application workloads.

Alert handling
Reacting to alerts and handling them according to a specified SLA, including 24/7 operations and continuous improvement of alert rules based on day-to-day experience.
Expert pool
The Project Syn experts at VSHN are available to help the users of the platform.
Developer support
We support the users of the platform by actively participating as part of the development team, enabling them to get the best out of the platform. We provide the Ops part of the DevOps chain.
SLA
Specific SLAs are available for applications running on the Project Syn platform.
Best-practice curation
Delivery of best-practice configuration, learned by operating many Project Syn enabled clusters in production all over the world.
Container image curation
Only VSHN-tested and approved images run on the platform, which ensures stability and security.
Regular maintenance
VSHN carries out regular maintenance on all involved components, keeping them up to date with the latest bugfix and security updates.
Active Project Syn development
Customer needs are actively developed by VSHN engineers and brought into the Project Syn platform.
Assisting services
Supporting services for the Project Syn platform are provided, such as:

  • Customer portal with self-service capability and deep insights
  • Service desk
  • Image registry with curated and tested images
  • Inventory
  • and more

Early Access for Project Syn

The foundation for Project Syn is already in place. We are actively looking for early access users of the platform to help test it and shape the future of Project Syn. If you are interested in getting a glimpse of our next generation managed services platform, please get in touch and let us know.

Tobias Brunner


Our Road to Managed OpenShift 4

23. Sep. 2019

Red Hat OpenShift 4

This summer, Red Hat released OpenShift 4. At first glance, the new major version is a steady evolution of OpenShift 3, with relatively modest changes for the user. Looking under the hood, however, you quickly discover a completely reworked OpenShift. Benjamin Affolter’s post on the APPUiO blog examines the changes in OpenShift 4 and describes them in detail.

In the following article, we’d like to look behind the scenes of our Managed OpenShift offering and explain what we have to do in order to offer our managed service with OpenShift 4.

Advantages of OpenShift 4

With version 4 of OpenShift, Red Hat promises, among other things, the following improvements:

  • New installer
  • Fully automated operations, maintenance, and configuration via Operators
  • Integration of the Operator Hub
  • Up-to-date Kubernetes versions

To understand the advantages as well as their implications, we have to take a step back and look at OpenShift 3.

Managed OpenShift 3 – What Does It Include?

For context, here is a short overview of what our existing Managed OpenShift 3 service includes (non-exhaustive):

  • Architecture engineering and setup of the OpenShift cluster on almost any infrastructure (cloud, on-premises)
  • Monitoring of all cluster-relevant components to ensure reliable operations
  • Regular backups of the cluster configuration, including verification of backup integrity
  • Weekly maintenance of all systems, applying software patches and configuration improvements to all clusters
  • Automation of all work with Ansible (configuration, maintenance, updates, upgrades, installation, sanity checks, and much more)
  • Integration into our central customer portal for an overview of the cluster’s health and further functions
  • Extensive dashboards in Grafana
  • Close collaboration with Red Hat Support, among other things to resolve bugs in OpenShift
  • Maintenance of various internal lab clusters so that changes can be tested before they reach production clusters
  • Provisioning of persistent storage with Gluster
  • Management and maintenance of the Red Hat Enterprise Linux operating system for the OpenShift masters and nodes
  • Training of system engineers to operate OpenShift

All of the points listed above have been developed since the very first version of OpenShift 3 and are refined daily by our VSHNeers.

Status Quo of VSHN’s Systems

From a technical point of view, our current system landscape looks roughly like this (short overview):

  • Puppet for local operating system management of all VMs (system configuration, enforcement of the defined state) and for the inventory of all systems and services
  • Icinga2 for monitoring all operating system parameters inside the VMs, but also for very extensive checks of all OpenShift cluster components; Icinga2 is configured and orchestrated by Puppet
  • Ansible for installing and configuring OpenShift, for regular maintenance, and much more
  • BURP for consistent data backups including the cluster configuration, configured and orchestrated by Puppet
  • Gluster for persistent storage, managed with Ansible

Over the years, countless Ansible playbooks have accumulated; all of our knowledge and automation lives in these playbooks. We maintain our own fork of the official OpenShift Ansible repository to be able to react quickly to bugs, and we regularly sync this fork with upstream.
Puppet not only takes care of local operating system configuration but also drives many important components such as the monitoring and backup systems. Through PuppetDB, we also have an always up-to-date inventory of all systems we manage, including detailed version information about the installed components. This is integrated into our customer portal as well and is used for the automated billing of our managed services.
The monitoring plugins we developed for Icinga2 cover almost every OpenShift problem we have discovered so far and alert us whenever something is wrong with a cluster or one of its components.
Our system documentation and the runbooks for operating OpenShift span several dozen wiki articles.

Managed OpenShift 4 – What Does VSHN Have to Do?

From a system engineering perspective, OpenShift 4 is a completely new product. For VSHN, this means that a large part of the points mentioned above has to be redeveloped from scratch.
A few examples:

  • The installation and configuration of OpenShift 4 is no longer based on Ansible but on a dedicated installer (which uses Terraform in the background), and configuration is handled by in-cluster Operators. Most of our Ansible playbooks for OpenShift 3 can no longer be used with OpenShift 4.
  • The operating system is no longer Red Hat Enterprise Linux but Red Hat CoreOS, which behaves completely differently. Puppet can no longer be used as before, and as described above, we have to find other approaches for the inventory, orchestration, and billing of the surrounding systems.
  • Our monitoring plugins for Icinga2 are not compatible with OpenShift 4, and the Icinga2-based monitoring concept no longer fits the platform’s reworked architecture. For us, this means redeveloping our monitoring concept.
  • The backup system BURP can no longer be used in its current form; a new backup solution has to be devised.

This list is not exhaustive; there are many more details in our system landscape that need to be adapted.

The Road to Production

For us as a managed service provider, stability and scalability are paramount and non-negotiable. This means we have to take the time needed to learn all the changes and peculiarities of running OpenShift 4 in production. Adapting and building the tools and processes required to operate dozens of clusters takes a lot of time and engineering effort. We started early, though, and have already gathered first experiences with OpenShift 4. These experiences make us very confident that OpenShift 4 can deliver on its promise of greatly simplified operations.
The current version, OpenShift 4.1, still has a few limitations, however. Here is a small selection of what we noticed:

  • No support for proxies
  • AWS and VMware are the only supported IaaS providers with OpenShift 4.1 (the current version at the time of writing)
  • Installation on unsupported and non-cloud platforms is very fragile
  • Container storage only via CSI

Many IaaS providers are not yet ready for OpenShift 4 at this point. However, we are in close contact with our IaaS and cloud partners such as cloudscale.ch, Exoscale, Swisscom, and AWS to establish compatibility, so that we can continue to offer smooth operations with OpenShift 4 in the future.
OpenShift 4.1 partly reminds us of the early days of OpenShift 3; back then, it also took some time until OpenShift 3 was ready for production.
We are very confident that the open points will be resolved, and we are looking forward to the 4th generation of Red Hat OpenShift!

Further Information

Our friends at Adfinis SyGroup have described their first experiences with OpenShift 4 in their (German-language) blog post „OpenShift 4 – Learnings aus der ersten produktiven Umgebung“; their findings match our observations very well.
If you want to learn more about OpenShift and Kubernetes, we recommend our article „Was ist eine Kubernetes Distribution und was sind die Unterschiede zwischen Kubernetes und OpenShift“, or have a look at the impressions from Red Hat Forum Zürich 2019, where we were once again on site as a sponsor with an APPUiO booth.

APPUiO – Swiss Container Platform

With APPUiO.ch, we have created a Swiss container platform based on Red Hat OpenShift, on which we offer managed services as a PaaS (Platform-as-a-Service) solution on any infrastructure: public, dedicated, private, and on-premises. Based on proven open source concepts such as Docker and Kubernetes, you develop, operate, and scale an application according to your needs. With APPUiO, your applications can run both on public clouds and on-premises. The platform was originally developed in 2015 by the two IT specialists Puzzle ITC and VSHN AG to professionalize their internal IT. Today, APPUiO is used in production by numerous customers and is backed by a strong community.

How Can We Help?

Thanks to our experience operating OpenShift clusters around the world, we offer managed OpenShift clusters on almost any public, private, or on-premises cloud. We are happy to help with evaluation, integration, and operations, supporting you with our long-standing Kubernetes experience. Contact us, subscribe to our newsletter, follow us on Twitter (@vshn_ch and @APPUiO), or have a look at our services.
We look forward to your feedback!

Tobias Brunner


VSHN Releases K8up, a Backup Operator for Kubernetes, at KubeCon 2019

21. May 2019

VSHN – The DevOps Company is pleased to announce the release of K8up (pronounced /keɪtæpp/) at KubeCon & CloudNativeCon 2019: our open source backup operator for Kubernetes and OpenShift.
You can find all information about K8up here:
https://vshn.ch/k8up

Tobias Brunner


Four Weeks of Paternity Leave at VSHN – A Personal Account

18. Jan. 2019

In Switzerland, a father is legally entitled to just one day off for the birth of his child, which is exactly as much as for moving house. That’s why VSHN introduced four weeks of paternity leave last year.
I had the great fortune of being the first VSHNeer to benefit from it. At the end of November, our son Lias was born:
https://twitter.com/tobruzh/status/1069218379414806528
This first time with him was absolutely fantastic. Together with my wife, I got to know Lias during his first month and was able to build a good relationship with him right away. I could not have imagined continuing to work as usual during this first, very intense phase after the birth, if only because my thoughts were with my wife and child.
Every father should have the right to be there for his wife and the newborn after the birth.
My wife agrees:

That my husband could be at home with us during the first four weeks was worth its weight in gold to me. I don’t know how I would have managed otherwise. Now I also know that my husband knows our son well and can take care of him.

Here’s to many more VSHN fathers and mothers!

Tobias Brunner
