Project Syn Tech

Rewriting a Python Library in Rust

20. Mar 2024

Earlier this month I gave a talk at the Rust Zürich meetup group about how we re-implemented a critical piece of code used in our workflows. In this presentation I walked the audience through the migration of a key component of Project Syn (our Kubernetes configuration management framework) from Python to Rust.

We tackled this project to address the longer-than-15-minute CI pipeline runs we needed to roll out changes to our Kubernetes clusters. Thanks to this rewrite (and some other improvements) we’ve been able to reduce the CI pipeline runs to under 5 minutes.

The related pull request, available on GitHub, was merged 5 days ago, and includes the mandatory documentation describing its functionality.

I’m also happy to report that this talk was picked up by the popular newsletter “This Week in Rust” for its 538th edition! You can find the recording of the talk, courtesy of the Rust Zürich meetup group organizers, on YouTube.

Simon Gerber

Simon Gerber is a DevOps engineer at VSHN.

Events Tech

Watch the Recording of “How to Keep Container Operations Steady and Cost-Effective in 2024”

1. Feb 2024

The “How to Keep Container Operations Steady and Cost-Effective in 2024” event took place yesterday on LinkedIn Live, and for those who couldn’t attend live, you can watch the recording here.

In a rapidly evolving tech landscape, staying ahead of the curve is crucial. This session will equip you with the knowledge and tools needed to navigate container operations effectively while keeping costs in check.

In this session, we’ll explore best practices, industry insights, and practical tips to ensure your containerized applications run smoothly without breaking the bank.

We will cover:

  • Current Trends: Discover the latest trends shaping container operations in 2024.
  • Operational Stability: Learn strategies to keep your containerized applications running seamlessly.
  • Cost-Effective Practices: Explore tips to optimize costs without compromising performance.
  • Industry Insights: Gain valuable insights from real-world experiences and success stories.

Schedule:

17:30 – 17:35 – Welcome and Opening Remarks
17:35 – 17:50 – Navigating the Container Landscape: 2024 Trends & Insights
17:50 – 17:55 – VSHN’s Impact: A Spotlight on Our Market Presence
17:55 – 18:10 – Guide to Ensuring Steady Operations in Containerized Environments
18:10 – 18:25 – Optimizing Costs without Compromising Performance: A Practical Guide
18:25 – 18:30 – Taking Action: Implementing Best Practices for Container Operations
18:30 – Q&A

Don’t miss out on this opportunity to set a solid foundation for your containerized applications in 2024.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Tech

Composition Functions in Production

9. Jan 2024

(This post was originally published on the Crossplane blog.)

Crossplane has recently celebrated its fifth birthday, but at VSHN, we’ve been using it in production for almost three years now. In particular, it has become a crucial component of one of our most popular products. We’ve invested a lot of time and effort in Crossplane, to the point that we’ve developed (and open-sourced) our own custom modules for various technologies and cloud providers, such as Exoscale, cloudscale.ch, and MinIO.

In this blog post, we will provide an introduction to a relatively new feature of Crossplane called Composition Functions, and show how the VSHN team uses it in a very specific product: the VSHN Application Catalog, also known as VSHN AppCat.

Crossplane Compositions

To understand Composition Functions, we need to understand what standard Crossplane Compositions are in the first place. Compositions, available in Crossplane since version 0.10.0, can be understood as templates that can be applied to Kubernetes clusters to modify their configuration. What sets them apart from other template technologies (such as Kustomize, OpenShift Template objects, or Helm charts) is their capacity to perform complex transformations and patch fields on Kubernetes manifests, following more advanced rules and offering better reusability and maintainability. Crossplane Compositions are usually referred to as “Patch and Transform” compositions, or “PnT” for short.

As powerful as standard Crossplane Compositions are, they have some limitations, which can be summarized in a very geeky yet technically appropriate phrase: they are not “Turing-complete”.

  • Compositions don’t support conditions, meaning that the transformations they provide are applied on an “all or nothing” basis.
  • They also don’t support loops, which means that you cannot apply transformations iteratively.
  • Finally, advanced operations are not supported either, like checking for statuses in other systems, or performing dynamic data lookups at runtime.

To address these limitations, Crossplane 1.11 introduced a new Alpha feature called “Composition Functions”. Note that as of writing, Composition Functions are in Beta in 1.14.

Composition Functions

Composition functions complement and in some cases replace Crossplane “PnT” Compositions entirely. Most importantly, DevOps engineers can create Composition Functions using any programming language; this is because they run as standard OCI containers, following a specific set of interface requirements. The result of applying a Composition Function is a new composite resource applied to a Kubernetes cluster.

Let’s look at an elementary “Hello World” example of a Composition Function.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-function
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket

The example above shows a standard Crossplane Composition with a new field, “pipeline”, which specifies an array of steps, each referencing a function by name.

As stated previously, the function itself can be written in any programming language, like Go.

// Assumes the packages from the official function template, e.g.
// "github.com/crossplane/function-sdk-go/response" and the generated
// fnv1beta1 protobuf types.
func (f *Function) RunFunction(_ context.Context, req *fnv1beta1.RunFunctionRequest) (*fnv1beta1.RunFunctionResponse, error) {
    // Build a response that carries over the request's desired state.
    rsp := response.To(req, response.DefaultTTL)
    // Attach a "Normal" (informational) result message to the response.
    response.Normal(rsp, "Hello world!")
    return rsp, nil
}

The example above, borrowed from the official documentation, does just one thing: it reads a request object, attaches a result message, and returns a response to the caller. Needless to say, this example is for illustration purposes only, lacking error checking, logging, security, and more, and should not be used in production. Developers use the Crossplane CLI to create, test, build, and push functions.

Here are a few things to keep in mind when working with Composition Functions:

  • They run in order, as specified in the “pipeline” array of the Composition object, from top to bottom.
  • The output of the previous Composition Function is used as input for the following one.
  • They can be combined with standard “PnT” compositions by using the function-patch-and-transform function, allowing you to reuse your previous investment in standard Crossplane compositions (see the sketch after this list).
  • In the Alpha release, if you combined “PnT” compositions with Composition Functions, the “PnT” compositions ran first, and the output of the last one was fed to the first function; since the latest release, this is no longer the case, and “PnT” compositions can now run at any step of the pipeline.
  • Composition Functions must be called using RunFunctionRequest objects, and return RunFunctionResponse objects.
  • In the Alpha release, these two objects were represented by a now deprecated “FunctionIO” structure in YAML format.
  • RunFunctionRequest and RunFunctionResponse objects contain a full and coherent “desired state” for your resources. This means that if an object is not explicitly specified in a request payload, it will be deleted. Developers must pass the full desired state of their resources at every invocation.
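To make the combination concrete, here is a minimal, hypothetical sketch of a pipeline that runs a “PnT” step through function-patch-and-transform before handing off to the custom function from the earlier example; the composed resource and the input schema version are assumptions based on the function’s documentation, not code from our repositories.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example-bucket-mixed           # hypothetical name
spec:
  compositeTypeRef:
    apiVersion: example.crossplane.io/v1
    kind: XBucket
  mode: Pipeline
  pipeline:
  # Step 1: classic "Patch and Transform" logic, wrapped in a function.
  - step: patch-and-transform
    functionRef:
      name: function-patch-and-transform
    input:
      apiVersion: pt.fn.crossplane.io/v1beta1
      kind: Resources
      resources:
      - name: bucket                   # hypothetical composed resource
        base:
          apiVersion: example.org/v1
          kind: Bucket
  # Step 2: the custom function from the earlier example.
  - step: handle-xbucket-xr
    functionRef:
      name: function-xr-xbucket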

Practical Example: VSHN AppCat

Let’s look at a real-world use case for Crossplane Composition Functions: the VSHN Application Catalog, also known as AppCat. The AppCat is an application marketplace allowing DevOps engineers to self-provision different kinds of middleware products, such as databases, message queues, or object storage buckets, in various cloud providers. These products are managed by VSHN, which frees application developers from a non-negligible burden of maintenance and oversight.

Standard Crossplane “PnT” Compositions proved limited very early in the development of VSHN AppCat, so we started using Composition Functions as soon as they became available. They have allowed us to do the following:

  • Composition Functions enable complex tasks, such as verifying current deployment values and making decisions before deploying services.
  • They can drive the deployment of services involving Helm charts, modifying values on-the-go as required by our customers, their selected cloud provider, and other parameters.
  • Conditionals allow us to script complex scenarios, involving various environmental decisions, and to reuse that knowledge.
  • Thanks to Composition Functions, the VSHN team can generalize many activities, like backup handling, automated maintenance, etc.

All things considered, it is difficult to overstate the many benefits that Composition Functions have brought to our workflow and to our VSHN AppCat product.

Learnings of the Alpha Version

We’ve learned a lot while experimenting with the Alpha version of Composition Functions, and we’ve documented our findings for everyone to learn from our mistakes.

  • Running Composition Functions in Red Hat OpenShift used to be impossible in Alpha because OpenShift uses crun, but this issue has now been solved in the Beta release.
  • In particular, when using the Alpha version of Composition Functions, we experienced slow execution speeds with crun, but this is no longer the case.
  • We learned the hard way that missing resources on function requests were actually deleted!

Our experience with Composition Functions led us to build our own function runner. This feature uses another capability of Crossplane, which allows functions to specify their runner in the Composition definition:

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
[...]
  functions:
  - name: my-function
    type: Container
    container:
      image: my-function
      runner:
        endpoint: grpc-server:9547

Functions run directly on the gRPC server, which, for security reasons, must run as a sidecar to the Crossplane pod. Just like everything we do at VSHN, the Composition Function gRPC server runner (as well as its associated webhook and all of its code) is open-source, and you can find it on our GitHub. As of the Composition Functions Beta, we replaced the custom gRPC logic with the go-sdk. To improve the developer experience, we have created a proxy and enabled the gRPC server to run locally: the proxy runs in kind and redirects to the local gRPC server, which enables us to debug the code and test changes more efficiently.

Moving to Beta

We recently finished migrating our infrastructure to the most recent Beta version of Composition Functions, released in Crossplane 1.14, and we have been able to do that without incidents. This release included various bits and pieces such as Function Pipelines, an ad-hoc gRPC server to execute functions in memory, and a Function CRD to deploy them directly to clusters.

We are also migrating all of our standard “PnT” Crossplane Compositions to pure Composition Functions as we speak, thanks to the functions-go-sdk project, which has proven very helpful, even if we are missing typed objects. Managing the same objects with both “PnT” and Composition Functions increases complexity dramatically, as it can be difficult to determine where an actual change happens.

Conclusion

In this blog post, we have seen how Crossplane Composition Functions compare to standard “PnT” Crossplane compositions. We have provided a short example, highlighting their major characteristics and caveats, and we have outlined a real-world use case for them, specifically VSHN’s Application Catalog (or AppCat) product.

Crossplane Composition Functions provide an unprecedented level of flexibility and power to DevOps engineers. They enable the creation of complex transformations, with all the advantages of an Infrastructure as Code approach, and the flexibility of using the preferred programming language of each team.

Check out my talk at Control Plane Day with Crossplane, where I walk you through Composition Functions in Production in 15 minutes.

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, and with Internet technology for more than 15. New technology has to be tried and written about.

APPUiO Cloud Tech

“Composition Functions in Production” by Tobias Brunner at the Control Plane Day with Crossplane

17. Oct 2023

VSHN has been using Crossplane’s Composition Functions in production since their release. In this talk, Tobias Brunner, CTO of VSHN AG, explains what Composition Functions are and how they are used to power crucial parts of the VSHN Application Catalog, or AppCat. He also introduces VSHN’s custom open-source gRPC server which powers the execution of Composition Functions. Learn how to leverage Composition Functions to spice up your Compositions!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

APPUiO Cloud Tech

New OpenShift 4.13 Features for APPUiO Users

5. Sep 2023

We have just upgraded our APPUiO Cloud clusters from version 4.11 to version 4.13 of Red Hat OpenShift, and there are some interesting new features for our APPUiO Cloud and APPUiO Managed users we would like to share with you.

Kubernetes Beta APIs Removal

OpenShift 4.12 and 4.13 updated their Kubernetes versions to 1.25 and 1.26 respectively, cumulatively removing various Beta APIs. If you are using objects with the APIs below (removed API → replacement), please make sure to migrate your deployments accordingly.

  • CronJob: batch/v1beta1 → batch/v1
  • EndpointSlice: discovery.k8s.io/v1beta1 → discovery.k8s.io/v1
  • Event: events.k8s.io/v1beta1 → events.k8s.io/v1
  • FlowSchema: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • HorizontalPodAutoscaler: autoscaling/v2beta1 and autoscaling/v2beta2 → autoscaling/v2
  • PodDisruptionBudget: policy/v1beta1 → policy/v1
  • PodSecurityPolicy: policy/v1beta1 → Pod Security Admission
  • PriorityLevelConfiguration: flowcontrol.apiserver.k8s.io/v1beta1 → flowcontrol.apiserver.k8s.io/v1beta3
  • RuntimeClass: node.k8s.io/v1beta1 → node.k8s.io/v1
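In most cases the migration is mechanical; for a CronJob, for example, only the apiVersion changes while the job definition stays the same. A minimal sketch with hypothetical names:

apiVersion: batch/v1                   # was batch/v1beta1 before Kubernetes 1.25
kind: CronJob
metadata:
  name: nightly-backup                 # hypothetical name
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.example.com/backup:1.0   # hypothetical image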

As a reminder, the next minor revision of Red Hat OpenShift will update Kubernetes to version 1.27.

Web Console

APPUiO users will discover a neat new feature on the web console: resource quota alerts are now displayed on the Topology screen whenever any resource reaches its usage limits. The alert label link will take you directly to the corresponding ResourceQuotas list page.

Have any questions or comments about OpenShift 4.13? Contact us!

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Tech

VSHN’s Response to Zenbleed CVE-2023-20593

25. Jul 2023

Yesterday evening, on Monday, July 24th, 2023, at around 21:15 CEST / 12:15 PDT, our security team received a notification about a critical security vulnerability called “Zenbleed”, potentially affecting the cloud providers on which VSHN’s customers’ systems run.

This blog post provides details about Zenbleed and the steps taken to mitigate its risks.

What is Zenbleed?

Zenbleed, also known as CVE-2023-20593, is a speculative execution bug discovered by Google, related to but somewhat different from side channel bugs like Meltdown or Spectre. It is a vulnerability affecting AMD processors based on the Zen2 microarchitecture, ranging from AMD’s EPYC datacenter processors to the Ryzen 3000 CPUs used in desktop & laptop computers. This flaw can be exploited to steal sensitive data stored in the CPU, including encryption keys and login credentials.

VSHN’s Response

VSHN immediately set up a task force to discuss this issue, bringing the team of one of our main cloud providers (cloudscale.ch) into a call to determine courses of action; options contemplated included isolating VSHN customers on dedicated nodes and patching the affected systems directly.

At around 22:00 CEST, the cloud provider decided after a fruitful discussion with the task force that the best approach was to implement a microcode update. Since Zenbleed is caused by a bug in CPU hardware, the only possible direct fix (apart from the replacement of the CPU) consists of updating the CPU microcode. Such updates can be applied by updating the BIOS on affected systems, or applying an operating system kernel update, like the recently released new Linux kernel version that addresses this vulnerability.

Zenbleed isn’t limited to just one cloud provider, and may affect customers operating their own infrastructure as well. We acknowledged that addressing this vulnerability is primarily a responsibility of the cloud providers, as VSHN doesn’t own any infrastructure that could directly be affected.

The VSHN task force handed monitoring over to VSHN Canada, who tested the update as it rolled out to production systems and stayed in close contact to ensure there were no QoS degradations after the microcode update.

Aftermath

cloudscale.ch successfully finished its work at 01:34 CEST / 16:34 PDT. All VSHN systems running on that provider have been patched accordingly, and the tests carried out show that this specific vulnerability has been fixed as required. VSHN Canada confirmed that all systems were running without any problems.

We will continue to monitor this situation and to inform our customers accordingly. All impacted customers will be contacted by VSHN. Please do not hesitate to contact us for more information.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

General Tech

Get the “DevOps in Switzerland 2023” Report

12. May 2023

We are thrilled to announce the fourth edition of our “DevOps in Switzerland” report!

From February to April 2023 we conducted a study to learn how Swiss companies implement and apply DevOps principles.

We compiled the results into a PDF file, and just like in the previous edition, we provided a short summary of our findings in the first pages.

You can get the report here. Enjoy reading and we look forward to your feedback!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Canada Tech

VSHN Canada Hackday: A Tale of Tech Triumphs and Tasty Treats

24. Mar 2023

The VSHN Canada Hackday turned into an epic two-day adventure, where excitement and productivity went hand in hand. Mätthu, Bigli, and Jay, our stellar team members, joined forces to level up VSHN as a company and expand their skill sets. In this blog post, we’re ecstatic to share the highlights and unforgettable moments from our very own Hackday event.

🏆 Notable Achievements

1️⃣ Revamping Backoffice Tools for VSHN Canada

Mätthu dove deep into several pressing matters, including:

  • Time tracking software that feels like a relic from the 2000s. With the Odoo 16 Project underway, we explored its impressive features and found a sleek solution for HR tasks like time and holiday tracking and expenses management. Now we just need to integrate it as a service for VSHN Canada.
  • Aligning the working environments of VSHN Switzerland and Canada. Although the two environments are not identical, we documented the similarities and differences in our handbook to provide a seamless experience.
  • Tidying up our document storage in Vancouver. Previously scattered across Google Cloud and Nextcloud, a cleanup session finally brought order to the chaos.

📖 Documentation available in the Handbook.

2️⃣ Mastering SKS GitLab Runners

Bigli and Jay teamed up to craft fully managed SKS GitLab runners using Project Syn, aiming to automate GitLab CI processes and eliminate manual installation and updates. This collaboration also served as an invaluable learning experience for Jay, who delved into Project Syn’s architecture and VSHN’s inner workings. Hackday milestones included:

  • Synthesizing the GitLab-runner cluster
  • Updating the cluster to the latest supported version
  • Scheduling cluster maintenance during maintenance windows
  • Developing a component for the GitLab-runner
  • Implementing proper monitoring, time permitting

📖 Documentation available on our wiki.

🍻 Festive Fun

On Hackday’s opening day, we treated ourselves to a team outing at “Batch,” our go-to local haunt nestled in Vancouver’s scenic harbor. Over unique beers and animated chatter, we toasted to our first-ever Canadian Hackday.

🎉 Wrapping Up

VSHN Canada’s Hackday was an exhilarating mix of productivity, learning, and amusement. Our team banded together to confront challenges, develop professionally, and forge lasting memories. We can hardly wait for future Hackday events and the continued growth of VSHN Canada and VSHN.

Jay Sim

Jay Sim is a DevOps engineer at VSHN Canada.

Tech

Stay Ahead of the Game with Kubernetes

13. Jan 2023

Kubernetes is a powerful platform for deploying and managing containerized applications at scale, and it has become increasingly popular in Switzerland in recent years.

One way to approach it is outsourcing. This can be a strategic and cost-effective option for organizations that do not have the in-house DevOps expertise, know-how, or resources to manage their infrastructure and application operations efficiently.

Not every tech company is in the business of building platforms and operating Kubernetes clusters. Thus, by teaming up with an experienced partner, companies can tap into a wealth of knowledge and expertise to help them run their applications.

Some companies adopt Kubernetes and look to leverage its capabilities themselves. It’s essential to consider time, effort, and possible implications while utilizing the latest developments and continually adding value to the core business.

In all cases, it will be helpful to align with fundamentals. For this reason, I have compiled a quick guide to Kubernetes in 2023 and best practices in Switzerland.

  1. Understand the basics: Before diving into Kubernetes, have a solid understanding of the reasoning and concepts. This could include cloud infrastructure, networking, containers, how they liaise with each other, and how they can be managed with Kubernetes.
  2. Plan your deployment carefully: When deploying applications with Kubernetes, you must plan thoroughly and consider your workloads’ specific needs and requirements. This includes but is not limited to resource requirements, network connectivity, scalability, latency, and security considerations.
  3. Use appropriate resource limits: One of the critical benefits of Kubernetes is its ability to manage resources dynamically based on the needs of your applications. To take advantage of this, set appropriate resource requests and limits for your applications (see the sketch after this list). This will help ensure that your applications have the resources they need to run effectively while preventing them from consuming too many resources and impacting other applications.
  4. Monitor your application: It’s essential to monitor your applications and the underlying Kubernetes cluster to ensure they are running smoothly and to identify any issues that may arise. You want to analyze the alerts and react accordingly. You can use several tools and practices to monitor your applications, including log analysis, monitoring with tools like Prometheus and Grafana, and alerting systems.
  5. Use appropriate networking configurations: Networking is critical to any Kubernetes deployment, and choosing the proper configuration matters: think about load balancing, service discovery, and network segmentation.
  6. Secure your application: Security is a top concern for many companies and organizations in Switzerland. You cannot proceed without ensuring that your Kubernetes deployment is secure. This includes implementing network segmentation, using secure container runtime environments, and implementing advanced authentication and authorization systems.
  7. Consider using a managed Kubernetes service: For companies without the resources or DevOps expertise to manage their own clusters, managed Kubernetes services can be a business-saving solution. With managed services, you get a production-ready, fully-managed Kubernetes environment, allowing teams and software engineers to focus on developing new features and deploying their applications rather than managing the underlying infrastructure.
  8. Stay up-to-date with the latest developments: The Kubernetes ecosystem is constantly evolving, and it’s better to stay up-to-date with the latest developments and best practices. This may involve subscribing to newsletters like VSHN, VSHN.timer, or Digests, attending conferences and CNCF meetups, and following key players in the Kubernetes community.

By following best practices, IT leaders, stakeholders, and decision-makers can ensure that they use Kubernetes constructively and get the most out of the platform technology.

Oksana Horobinska

Oksana Horobinska is a Business Development Specialist at VSHN.

APPUiO Cloud Tech

VSHN HackDay – Tailscale on APPUiO Cloud

21. Oct 2022

As part of the fourth VSHN HackDay taking place yesterday and today (October 20th and 21st, 2022), Simon Gerber and I (Tobias Brunner) worked on the idea to get Tailscale VPN running on APPUiO Cloud.

tailscale logo

Tailscale is a VPN service that makes the devices and applications you own accessible anywhere in the world, securely and effortlessly. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other.

The use cases we wanted to make possible are:

  • Access Kubernetes services easily from your laptop without the hassle of “[kubectl|oc] port-forward”. Engineers in charge of development or debugging need to securely access services running on APPUiO Cloud but not exposed to the Internet. That’s the job of a VPN, and Tailscale makes this scenario very easy.
  • Connect pods running on APPUiO Cloud to services that are not directly accessible, for example, behind a firewall or a NAT. Routing outbound connections from a Pod through a VPN on APPUiO Cloud is more complex because of the restricted multi-tenant environment.

We took the challenge and found a solution for both use cases. The result is an OpenShift template on APPUiO Cloud that deploys a pre-configured Tailscale pod and all needed settings into your namespace. You only need a Tailscale account and a Tailscale authorization key. Check the APPUiO Cloud documentation to learn how to use this feature.

We developed two new utilities to make it easier to work with Tailscale on APPUiO Cloud (and on any other Kubernetes cluster):

  • tailscale-service-observer: A tool that lists Kubernetes services and posts updates to the Tailscale client HTTP API to expose Kubernetes services as routes in the VPN dynamically.
  • TCP over SOCKS5: A middleman to transport TCP packets over the Tailscale SOCKS5 proxy.

Let us know your use cases for Tailscale on APPUiO Cloud via our product board! Are you already a Tailscale user? Do you want to see deeper integration into APPUiO Cloud?

Tobias Brunner

Tobias Brunner has been working in IT for over 20 years, and with Internet technology for more than 15. New technology has to be tried and written about.

APPUiO Cloud Tech

Most Interesting New Features of OpenShift 4.11

13. Oct 2022

Red Hat OpenShift 4.11 brings a substantial number of new features. We’ve teased a few of them in our latest VSHN.timer, but in this article, we’re going to dive deeper into those that have the highest impact on our work and on our customers.

Support for CSI generic ephemeral volumes

Container Storage Interface (CSI) generic ephemeral volumes are a cool new feature. We foresee two important use cases for them:

  • When users need an ephemeral volume that exceeds what the node’s file system provides;
  • When users need an ephemeral volume with prepopulated data: this could be done, for example, by creating the volume from a snapshot.

Route Subdomains

The Route API now provides subdomain support, something that was not possible before 4.11:

You can now specify the spec.subdomain field and omit the spec.host field of a route. The router deployment that exposes the route will use the spec.subdomain value to determine the host name.

Pod Security Admissions

Pod security admission now runs globally with restricted audit logging and API warnings. This means that, while everything should still run as it did before, you will most likely encounter warnings like these if you relied on security contexts being set by OpenShift’s Security Context Constraints:

Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false…

To avoid these warnings, users now need to set security contexts explicitly in their manifests.
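For reference, here is a minimal sketch of a pod spec that satisfies the restricted profile; the names and image are hypothetical, and your workload may need additional settings.

apiVersion: v1
kind: Pod
metadata:
  name: example                        # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                 # required by the "restricted" profile
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]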

Developer Features

The Developer Perspective provides improved integration with GitHub Actions, allowing developers to trigger pipelines and run tasks following Git events such as pushes or tags. And not only that, but the OpenShift console now has a dark mode, too.

CLI Features

The following OpenShift CLI (oc) commands and flags for requesting tokens are deprecated:

  • oc serviceaccounts create-kubeconfig
  • oc serviceaccounts get-token
  • oc serviceaccounts new-token
  • The --service-account/-z flag for the oc registry login command

Moreover, the oc create token command generates tokens with a limited lifetime, which can be controlled with the --duration command line argument; the API server can return a token with a lifetime that is shorter or longer than the requested lifetime. By default, the command generates tokens with a lifetime of one hour. If users need a token that doesn’t expire (for example, for a CI/CD pipeline), they should create a ServiceAccount API token secret instead.
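Such a long-lived token secret can be requested declaratively; a minimal sketch, assuming a service account named pipeline-bot (hypothetical) already exists in the namespace:

apiVersion: v1
kind: Secret
metadata:
  name: pipeline-bot-token             # hypothetical name
  annotations:
    kubernetes.io/service-account.name: pipeline-bot
type: kubernetes.io/service-account-token
# After creation, the control plane populates the token under .data.token.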

OpenShift 4.11 on APPUiO Cloud

At the time of this writing, Red Hat has not yet decided to promote 4.11 as an upgrade target, and for that reason, we have not upgraded APPUiO Cloud clusters yet. As soon as Red Hat enables this, we will update our APPUiO Cloud zones accordingly.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Project Syn Tech

Keeping Things Up to Date with Renovate

28. Jun 2022

Our customers trust us with their most precious resource: their information systems. Our job is to keep the underlying systems running, updated, and most importantly, secure.

Project Syn with its Commodore Components is one of the primary weapons in our arsenal to configure and thus protect those systems. Thanks to its GitOps approach, we can ensure that all Kubernetes clusters are always running the latest and (hopefully) most secure version possible.

But just like any other software package, Project Syn brings its own complexity: we must keep it safe and sound, which means watching over its container images, its Helm charts, and all of the Commodore Components we use every day.

As you can imagine, juggling so many different software packages is a considerable task; now, think about all of their upstream dependencies (most of them are container images and Helm charts, but Go and Python are also part of the mix), and the complexity of the task increases exponentially.

How do we cope with this? Well, as usual, by standing on the shoulders of giants. In this case: Renovate.

Renovate has been created to manage this complexity, whether container images, Helm charts, or upstream dependencies. But understandably enough, Renovate per se does not know anything about Commodore Components (at least not yet!), and in particular, it does not know about the Project Syn configuration hierarchy and how to find dependencies within that hierarchy.

So, what’s an Open Source developer to do? We forked Renovate, of course, and adapted it to our needs. How?

  1. We added the Project Syn configuration hierarchy as a new Manager.
  2. We reused the existing datasource to detect new versions of our Commodore Components.

Then we configured our own Renovate fork on all the repositories holding our source code and started getting notified via pull requests whenever there was a new dependency version. Voilà!

With this approach, we have been able to automate much work and avoid using outdated software by automatically being notified of new versions. No more forgotten updates!

We also decided to use “golden files” to test our Commodore Components; this, in turn, meant that we could not merge PRs created by Renovate in case of failure. For those cases, we also taught Renovate how to update those golden files if needed.

The pull request “Update dependency ghcr.io/appuio/control-api to v0.8.1 – autoclosed #29” is a live example of this mechanism in action, and you’re most welcome to check it out.

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

General Tech

Get the “DevOps in Switzerland 2022” Report

25. May 2022

We are thrilled to announce the third edition of our “DevOps in Switzerland” report!

From January to March 2022 we conducted a study to learn how Swiss companies implement and apply DevOps principles.

We compiled the results into a PDF file (only available in English), and just like in the previous edition, we provided a short summary of our findings in the first pages.

You can get the report on our website. Enjoy reading and we look forward to your feedback!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Tech

How to Restrict Container Registries per Namespace with Kyverno

24. May 2022

We have recently received a request from a customer, asking us to restrict the container registries from which images could be deployed in their OpenShift 4 cluster.

We could have added such configuration directly at node level, as explained in Red Hat’s documentation; it’s indeed possible to whitelist registries at repository and tag level, but that would have forced us to keep all those whitelists updated with the registries our Project Syn components regularly use.

We have instead chosen to use Kyverno for this task: it allows us to enforce the limitations on a per-namespace level, with much more flexibility and maintainability.

This is a ClusterPolicy object for Kyverno, adapted from the solution we provided to our customer, showing how we can scope the limitation to specific namespaces, so that container images can be pulled only from whitelisted registries.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-registries
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Restrict image pulling only to whitelisted registries
spec:
  validationFailureAction: enforce
  background: true
  rules:
  - name: validate-registries
    match:
      all:
      - resources:
          kinds:
          - Pod
          namespaces:
          - "namespace-wildcard-*"
    validate:
      message: "Image registry not whitelisted"
      pattern:
        spec:
          containers:
          - image: "registry.example.com/* | hub.docker.com/some-username/*"

Andreas Tellenbach

Andreas Tellenbach is a DevOps engineer at VSHN.

Tech

Agent-less GitLab integration with OpenShift

20. Apr 2022

As you know (and if you didn’t, now you do) GitLab has deprecated the certificate-based integration with Kubernetes in version 14.5, and it is expected that version 15 will disable it completely.

The official replacement for the (now legacy) certificate-based integration mechanism is the GitLab Agent, which is installed in your Kubernetes cluster and provides a tighter integration between our two beloved platforms.

Well, hold on; things aren’t that easy. For example in our case, if we wanted to offer the GitLab Agent to our APPUiO Cloud users, we would run into two issues:

  • On one side, installing the GitLab Agent is more complicated and expensive, because it would run as another pod in the same namespace, consuming resources.
  • And on the other, we cannot install it cluster-wide, because of the admin permissions we have configured in this multi-tenant shared platform. Users would have to manage each and every GitLab Agent on their own, possibly having multiple agents deployed in several namespaces and GitLab repositories.

So, what’s a DevOps engineer to do? Well, here’s a simple, very simple solution; so simple that you might have never thought of it before: create your own KUBECONFIG variable in GitLab with a dedicated service account!

Here’s how it works using the OpenShift oc tool in your own system:

Create a service account in your OpenShift project:

oc create serviceaccount gitlab-ci

Add an elevated role to this service account so it can manage your deployments:

oc policy add-role-to-user admin -z gitlab-ci --rolebinding-name gitlab-ci

Create a local KUBECONFIG variable and login to your OpenShift cluster using the gitlab-ci service account:

TOKEN=$(oc sa get-token gitlab-ci)
export KUBECONFIG=gitlab-ci.kubeconfig
oc login --server=$OPENSHIFT_API_URL --token=$TOKEN
unset KUBECONFIG

You should now have a file named gitlab-ci.kubeconfig in your current working directory; copy its contents and create a variable named KUBECONFIG in the GitLab settings with the value of the file (that’s under “Settings” > “CI/CD” > “Variables” > “Expand” > “Add variable”). Remember to set the “environment” scope for the variable, and to disable the old Kubernetes integration, as its KUBECONFIG variable might collide with this new one.
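A hypothetical .gitlab-ci.yml job using that variable could look like the sketch below; it assumes KUBECONFIG was created as a variable of type “File”, so the runner writes the contents to a temporary file and exposes its path, which oc picks up automatically.

deploy:
  stage: deploy
  image: quay.io/openshift/origin-cli:latest   # any image that ships `oc` works
  environment: production                      # must match the variable's environment scope
  script:
    # KUBECONFIG points at the file materialized by GitLab; `oc` reads it directly.
    - oc apply -f deploy/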

Tada! There are a few advantages to this approach:

  • It is certainly faster and simpler to set up for simple push-based deployments than using the GitLab Agent.
  • It is also easier to get rid of deprecated features without having to change the pipeline or migrate to the GitLab Agent.

But there are a few drawbacks as well:

  • You don’t get all the bells and whistles. It is reasonable to think that at some point the GitLab Agent will offer advanced features, such as access to pod logs from GitLab, monitoring alerts directly from the GitLab user interface, and many other things the old Kubernetes certificate-based integration could do. This approach does not provide anything like that.
  • The cluster’s API endpoint has to be publicly accessible, not behind any firewall or VPN; conveniently enough, the GitLab Agent solves exactly this problem.

If you are interested in GitLab and Kubernetes, join us next week in our GitLab Switzerland Meetup for more information about the GitLab Agent for Kubernetes, and afterwards for a nice apéro with like-minded GitLab enthusiasts!

Oh, and a final tip: if you find yourself having to log in to the OpenShift integrated registry but you don’t have yq installed in the job’s image, you can use sed to extract the token from the $KUBECONFIG file:

sed -n 's/^\s*token:\s*\(.*\)/\1/ p' "${KUBECONFIG}" | docker login -u gitlab-ci --password-stdin "${OPENSHIFT_REGISTRY}"

Christian Cremer

Christian Cremer is a DevOps engineer at VSHN.

Tech

Serverless on Kubernetes: Knative

9. Mar 2022

Back in 2019 we published a review of the most relevant serverless frameworks available for Kubernetes. That article became one of the most visited on our blog over the past two years, so we decided to return to this subject and provide an update for our readers.

TL;DR: Serverless in Kubernetes in 2022 means, to a large extent, Knative.

What’s in a Word

The “Serverless” word is polarizing.

Robert Scoble, one of the first tech influencers, uttered it for the first time fourteen years ago, as James Potter reported recently. In those times, “Serverless” just meant “not having servers and being an AWS EC2 customer”. Because, yes, companies used to have their own physical servers back then. Amazing, isn’t it?

Fast forward to 2022, and the CNCF Serverless Landscape has grown to such an extent that it can be very hard to figure out what “serverless” truly means.

Even though for some it just represents the eternal return of 1970s style batch computing, in the past five years the “Serverless” word has taken a different and very specific meaning.

The winning definition caters for the complete abstraction of the infrastructure required to run individual pieces of software, at scale, on a cloud infrastructure provider.

Or, in less buzzword-y terms, just upload your code, and let the platform figure out how to run it for you: the famous “FaaS”, also known as Function as a Service.

The IaaS Front

The three major Infrastructure as a Service providers in the world offer their own, more-or-less-proprietary version of the Serverless paradigm: AWS Lambda, Azure Functions, and Google Cloud Run (which complemented the previous Google Cloud Functions service). These are three different approaches to the subject of FaaS, each with its advantages and caveats.

Some companies, like A Cloud Guru, have successfully embraced the Serverless architecture from the very start (in this case, based on AWS Lambda), creating cost-effective solutions with incredible scalability.

But one of the aforementioned caveats, and not a small one for that matter, is platform lock-in. Portability has always been a major concern for enterprise computing: if building apps on AWS Lambda is an interesting prospect, could we move them to a different provider later on?

Well, we now have an answer to that question, thanks to our good old friend: the container.

The Debate Is Over

Almost exactly three years ago, Anthony Skipper wrote:

We will say it again… packaging code into containers should be considered a FaaS anti-pattern!

Containers or not? This was still a big debate at the time of our original article in 2019.

Some frameworks like Fission and IaaS services such as AWS Lambda and Google Cloud Functions did not require developers to package their apps as containers; just upload your code and watch it run. On the other hand, OpenFaaS and Knative-based offerings did require containers. Who would win this fight?

The world of Serverless solutions in 2022 has decided that wrapping functions in containers is the way to go. Even AWS Lambda started offering this option in December 2020. This is a huge move, allowing enterprises to run their code in whichever infrastructure they would like to.

In retrospect, the market has chosen wisely. Containers are now a common standard, allowing the same code to run unchanged, from a Raspberry Pi to your laptop to an IBM Mainframe. It is a natural choice, and it turned out that it was a matter of time before this happened.

Even better, with increased industry experience, container images got smaller and smaller, thanks to Alpine-based, scratch-based, and distroless-based images. Being lightweight allows containers to start and stop almost instantly, and makes scaling applications faster and easier than ever.

And this choice turned out to benefit one specific framework among all: Knative.

The Age of Knative

In the Kubernetes galaxy, Knative has slowly but steadily made its mark as the core infrastructure for serverless workloads.

In 2019, our article compared six different mechanisms to run serverless payloads on Kubernetes:

  1. OpenFaaS
  2. Fn Project
  3. Fission
  4. OpenWhisk
  5. Kubeless
  6. TriggerMesh

Of those, Fn Project and Kubeless have been simply abandoned. Other frameworks suffered the same fate: Riff has disappeared, just like Gestalt, along with its parent company Galactic Fog. IronFunctions moved away from Kubernetes into its own PaaS product. Funktion has been sandboxed and abandoned; Leveros is abandoned too; BlueNimble does not show much activity.

On the other hand, new players have appeared in the serverless market: Rainbond, for example; or Nuclio, targeting the scientific computation market.

But many new contenders are based on Knative: apart from TriggerMesh, which we mentioned in 2019 already, we have now Kyma, Knix, and Red Hat’s OpenShift 4 serverless, all powered by Knative.

The interest in Knative is steadily growing these days: CERN uses it. IBM is talking about it. The Serverless Framework has a provider for it. Even Google Cloud Run is based on it! Which shouldn’t be surprising, knowing that Knative was originally created by Google, just like Kubernetes.

And now Knative has just been accepted as a CNCF incubating project!

Even though Knative is not exactly a FaaS per se, it deserves the top spot in our review of 2022 FaaS-on-K8s technologies, being the platform upon which other serverless services are built, receiving huge support from the major names of the cloud-native industry.

Getting Started with Knative

Want to see Knative in action? Getting started with Knative on your laptop is now easier than ever.

  1. Install kind.
  2. Run the following command on your terminal:

$ curl -sL install.konk.dev | bash

To work with Knative objects on your cluster, install the kn command-line tool. Once you have launched your new Knative-powered Kind cluster, create a file called knative-service.yaml with the contents below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"

And then just apply it: kubectl apply -f knative-service.yaml.

The kn service list command should now display your “hello” service, which should become available after a few seconds. Once it’s ready, you can call it with curl:

$ curl http://hello.default.127.0.0.1.sslip.io

(If you prefer to use Minikube, you can follow this tutorial instead.)

Thanks to Knative, developers can roll out new versions of their services (called “revisions” in Knative terminology) while the old ones are still running, and even distribute traffic among them, as sketched below. This can be very useful for A/B testing sessions, for example. Knative services can also be triggered by a large array of events, with great flexibility.
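Here is a minimal sketch of such a traffic split, reusing the service above; the second revision name, the TARGET value, and the percentages are hypothetical.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world-v2             # new revision, hypothetical name
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "Knative"
  traffic:
    - revisionName: hello-world        # the revision created earlier
      percent: 80
    - revisionName: hello-world-v2     # the new revision receives a trickle
      percent: 20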

A full introduction to Knative is outside of the scope of this review, so here are some resources we recommend to learn everything about Knative serving and eventing:

  • The excellent and funny “Knative in Action” (2021) book by Jacques Chester, available for free courtesy of VMware.
  • A free, full introduction to Knative (July 2021) by Sébastien Goasguen, the founder of TriggerMesh; a video and its related source code are provided as well.
  • And to top it off, the “Knative Cookbook” (April 2020) by Burr Suter and Kamesh Sampath, also available for free, courtesy of Red Hat.

Interested in Knative and Red Hat OpenShift Serverless? Get in touch with us and let us help you in your FaaS journey!

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, and is also a trainer and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Tech

Changes in OpenShift 4.9

23. Nov 2021

As part of our ongoing improvement process, we evaluate the requirements and new features of each version of Red Hat OpenShift, not only for our customers, but also for our internal use. Version 4.9 of Red Hat’s flagship container platform was announced on October 18th, 2021, and it includes some very important changes, some of them potentially breaking.

In this article, targeted towards DevOps engineers and maintenance crews, I will provide a short overview of the most important points to take care of before upgrading to this new version.

Kubernetes 1.22

The most important change in OpenShift 4.9 is the update to Kubernetes 1.22. This release of Kubernetes has completely removed APIs marked as v1beta1. The complete list is available in the Red Hat documentation website, but suffice to say that common objects such as Role or RoleBinding (rbac.authorization.k8s.io/v1beta1) and even Ingress (networking.k8s.io/v1beta1) are no more.

This is a major change that affects all users of your clusters: before an upgrade takes place, all manifest files using those resources should be updated accordingly. Red Hat’s Knowledge Base includes a special article explaining all the steps required for upgrading to OpenShift 4.9.
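Some of these migrations involve more than bumping the apiVersion; an Ingress moving from networking.k8s.io/v1beta1 to networking.k8s.io/v1, for instance, also needs its backend fields restructured and an explicit pathType. A minimal sketch with hypothetical names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                        # hypothetical name
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix               # now mandatory
        backend:
          service:                     # replaces the v1beta1 serviceName/servicePort fields
            name: app
            port:
              number: 8080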

And of course, if you need any help regarding the upgrade process towards OpenShift 4.9, just contact us, we will be glad to help you and your teams.

Mutual TLS Authentication

Mutual TLS is a strong way to secure an application running in the cloud, allowing a server application to authenticate any client connecting to it. It comes with some complexity to set up and manage, as it requires a certificate authority. But thanks to its inclusion as a feature in OpenShift 4.9, its usage is much simpler now.

Please check the release notes section related to Mutual TLS for more information.

Registry Multiple Logins

In previous versions of OpenShift, you could only list one repository from a given registry per project. OpenShift 4.9 includes multiple logins to the same registry, which allows pods to pull images from specific repositories in the same registry, each with different credentials. You can even define a registry with a specific namespace. The documentation contains examples of manifests to use this feature.
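A rough, hypothetical sketch of what such a pull secret could look like, with per-repository credentials on the same registry; the registry paths and placeholder auth values are assumptions, so please refer to the official documentation for the exact format.

apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials           # hypothetical name
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "registry.example.com/team-a": { "auth": "<base64 of user-a:password-a>" },
        "registry.example.com/team-b": { "auth": "<base64 of user-b:password-b>" }
      }
    }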

Changes to lastTriggeredImageID Field Update

And finally, here’s a change that can cause unforeseen headaches for your teams: OpenShift 4.9 no longer updates the buildConfig.spec.triggers[].imageChange.lastTriggeredImageID field when the ImageStreamTag changes and references a new image. This subtle change in behavior is easy to overlook; if your team depends on this feature, beware of trouble.

Learn More about OpenShift 4.9

If you’re interested in knowing what else changed in OpenShift 4.9, here are some selected resources published by Red Hat to help you:

Christian Häusler

Christian Häusler is a Product Owner at VSHN.

Press Tech

K8up Accepted for CNCF Project Onboarding

19. Nov 2021

Update: k8up is now officially a CNCF sandbox project!

We are thrilled and honored to announce that K8up, the Kubernetes backup operator created by VSHN, has entered the CNCF Project Onboarding process!

We will now work together with the CNCF to complete the onboarding process, providing all the required information geared towards the transfer of the project stewardship to the CNCF.

During this phase, we at VSHN will continue our work, improving K8up with new features and possibilities. The GitHub project remains the primary home for this work, and you’re very welcome to check our getting started guide and learn more about K8up.

Thanks to the CNCF for their vote of confidence in K8up! We know this project will be a great addition to the ever-growing world of Cloud Native projects, and we look forward to its future!

Tobias Brunner

Tobias Brunner is working since over 20 years in IT and more than 15 years with Internet technology. New technology has to be tried and written about.

Tech

Microservices or Not? Your Team has Already Decided

10. Nov 2021

Let’s take a somewhat tangential approach to the subject of the Microservices architecture. Most discussions about it are centered around technological aspects: which language to choose, how to create the most RESTful services, which service mesh is the most performant, etc.

However, at VSHN we figured out long ago that the most important factor of success for software projects is people. And the thesis of this article is that the choice of Microservices as an architectural pattern has more to do with the structure of your organization than with the technological constraints and features of the final deliverable.

Or, put differently: your team has already chosen an architecture for your software, even if they are not fully aware of it. And lo and behold, Microservices might or might not be it.

Definition

First of all we must define the Microservices architecture. What is it?

“Microservices” is an architectural pattern in which the functionality of the whole system is decomposed into completely independent components, communicating with each other over the network.

The counterpart of the Microservices architecture is what is commonly referred to as the “Monolith”, which has been the most common approach for web applications and services in the past 25 years. In a Monolith, all functions of the application, from data management to the UI, are all contained within the same binary or package.

On the opposite side of the street we find the Microservices architecture, where each service is responsible for its own implementation and data storage.

By definition, Microservices are fine-grained, and have a single purpose. They have strong cohesion, that is, the operations they encapsulate have a high degree of relatedness. They are an example of the “Single Responsibility Principle”. They are also deployed separately, with visibly bounded contexts, and they require DevOps approaches to their deployment and management, such as automation and CI/CD pipelines.

Very importantly, the Microservices architecture is a “share nothing architecture”, in which individual services never share common code through libraries, instead restricting all interaction and communication through the network connecting them.

And last but not least, Microservices (as the name implies) should be as small as possible, have low requirements of memory, and should be able to start and stop in seconds.

Given all of these characteristics, Microservices are, by far, the most complex architectural pattern ever created. It is difficult to plan, it can dramatically increase the complexity of projects, and experience has shown that some teams find it impossible to cope with.

History

This idea of “components sending messages to one another” is absolutely not new. Back in 1966, one of the greatest computer scientists of all time, Alan Kay, coined the term “Object Oriented Programming”. The industry co-opted and deformed his original definition, which was the following:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

Alan Kay, source.

This text is from 2003; the following is from 1998:

The big idea is “messaging” — that is what the kernal of Smalltalk/Squeak is all about (…) The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.

Alan Kay, source.

In the 1970s, Alan Kay designed the Smalltalk programming language based on these concepts. And after reading the texts above, it becomes clear that, to a large extent, the Microservices architecture is an implementation of Alan Kay’s ideas of messaging, decoupling, and abstraction, taken to the extreme.

To achieve the current state of Microservices, though, many other breakthroughs were required. During the 2000s, the emergence of XML, the SOAP protocol, and its related web services made the term “Service Oriented Architecture” a common staple in architectural discussions. The rise of Agile methodologies made the industry pivot towards the REST approach instead of SOAP. During the last decade, the rise of DevOps and of containerization through Docker and Kubernetes finally enabled teams to deploy thousands of containers as a microservice architecture, thanks to the whole catalog of Cloud Native technologies.

Pros and Cons

If Microservice architectures are so complex, why use them? It turns out, they can bring many benefits:

  • Since each component can be completely isolated from the others, once teams agree on their interfaces, each component can be developed, documented, and tested thoroughly, 100% independently of the others. This allows teams to move forward in parallel, implementing features that have 0% chance of collision with one another.
  • Teams are also encouraged to choose the programming language that best fits the particular task their microservice must fulfill.
  • Since they are by definition “micro” services, they are designed to be quickly launched and dismissed, so that they only intervene when required, making the whole system more efficient and responsive.
  • The size of microservices also allows for higher availability, since it is possible to have many of them behind a load balancer; should any instance of a microservice fail, it can be quickly dismissed and a new one instantiated in its place, without loss of availability.
  • Systems can be updated progressively, with each team fixing bugs and adding functionality without disturbing the others. As long as interfaces are respected (and eventually versioned) there is no reason for the system to suffer from updates.
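
As a sketch of that last point, here is one (hypothetical) way of versioning an interface so that updates do not disturb existing consumers: the /v1/ route keeps its original contract, while /v2/ adds a field. Both versions are served side by side until the last consumer of /v1/ has migrated.

    # Illustrative only: serving two versions of the same interface side by side.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VersionedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/v1/status":
                payload = {"status": "ok"}                    # original contract
            elif self.path == "/v2/status":
                payload = {"status": "ok", "uptime_s": 1234}  # additive change
            else:
                self.send_response(404)
                self.end_headers()
                return
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8081), VersionedHandler).serve_forever()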

But there are many reasons not to choose the Microservices architectural approach; among the most important:

  • Performance: a system built with Microservices must take into account the latency between its services; in this respect, Monolithic applications are faster, since a function call is always faster than a network call (see the sketch after this list).
  • Team maturity: Microservices demand a certain degree of experience and expertise from teams; those new to Microservices have a higher chance of project failures.
  • Cost: creating a Microservices system will be costlier, if only because of the overhead created by each individual service.
  • Feasibility: sometimes it is simply not possible to use Microservices, for example when dealing with legacy systems.
  • Team structure: this is a decisive factor we will talk about extensively later.
  • Complexity: it is not uncommon to end up with systems composed of thousands of concurrent services, and this creates challenges that we will also discuss later.
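
To give a feel for the performance point above, this small, self-contained measurement compares an in-process function call with the same operation performed over a local HTTP round-trip. The absolute numbers depend entirely on your machine (the port is arbitrary), but the gap is typically several orders of magnitude, and that is before any real network is involved.

    # Illustrative micro-benchmark: in-process call vs. local network call.
    import threading
    import timeit
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def add(a, b):
        # The "monolith" path: a plain function call.
        return a + b

    class AddHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The "microservice" path: the same answer, over HTTP.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"3")

        def log_message(self, *args):
            pass  # silence per-request logging

    server = HTTPServer(("127.0.0.1", 8099), AddHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    local = timeit.timeit(lambda: add(1, 2), number=1000)
    remote = timeit.timeit(
        lambda: urllib.request.urlopen("http://127.0.0.1:8099/").read(),
        number=1000)
    print(f"1000 function calls: {local:.6f}s, 1000 HTTP calls: {remote:.6f}s")
    server.shutdown()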

I would now like to talk about the last two points, which are in our experience the biggest issues in Microservice implementations: team structure, and the perception and management of complexity.

Conway’s Law

One of the decisive factors that constrain teams’ ability to implement microservices is often invisible and overlooked: their own structure. Again, this phenomenon is not new. In 1968, Melvin E. Conway wrote an article for Datamation magazine called “How Do Committees Invent?”, in which the following idea stands out:

The basic thesis of this article is that organizations which design systems (…) are constrained to produce designs which are copies of the communication structures of these organizations.

Melvin Conway, source.

There is extensive evidence of this fact, both anecdotal and empirical, throughout the research literature.

The corollary of this principle is the following: the choice of a Monolithic versus a Microservices architecture is already ingrained in the very organizational chart of any organization.

One of the services we offer at VSHN tackles precisely this issue. In our “DevOps enablement workshop” we evaluate the degree of agility of organizations and the improvements that DevOps could bring. Based on that information, we reverse-engineer their structure through Conway’s Law in order to find a starting point for their digital transformation.

Complex vs. Complicated

Another important point is the distinction between “Complex” and “Complicated”, as these two words are sometimes confused with one another in everyday language; to make things more difficult, the word “Simple” can be used as an antonym of both.

“Complex” is borrowed from the Latin complexus, meaning “made of intertwined elements”. Complexus is itself derived from plectere (“to bend, to intertwine”). The word has been used since the 16th century to qualify that which is made of heterogeneous elements. It shares the same root (plectere) with “plexus”, meaning “interlacing”, used since the 16th century as a medical term for a network of nerves or blood vessels.

(Source: Dictionnaire historique de la langue française by Alain Rey)

On the other hand, “Complicated” has a similar origin but a different construction: it comes from the Latin complicare, literally meaning “to fold by rolling up”, which figuratively came to evoke entanglement and embarrassment. The word is built on plicare, which means “to fold”. Watches commonly known as “Complications” (such as the Patek Philippe Calibre 89, the Franck Muller Aeternitas Mega, and the “Référence 57260” by Vacheron Constantin) are, well, complicated machines by definition.

In short, “Complex” and “Complicated” stem from slightly different roots: the Latin plectere (“to intertwine”) for the former, and plicare (“to fold”) for the latter. “Complex” conveys the idea of a network of intertwined objects whose state and behavior are continuously altered by interaction with their peers in said network. “Complicated” implies an intrinsic, apparent “obscurity”, a folding onto itself that invites an “unfolding” process of discovery.

Or, put another way: individual Microservices should not be complicated, but a Microservice architecture is complex by definition. Monoliths, on the other hand, tend to become very complicated as time passes. And of course, neither is simple.

History shows that software developers have a passionate relationship with complication; complicated systems are great to brag about on Hacker News, even as their maintainers cry about them in private.

A “best practice” in this context has one and only one basic job: to help engineers translate the complicated into the complex. Most software-related disasters are caused by a simple fact: because of deadlines, organization, tooling, or just plain ignorance, software developers have a tendency to build complicated, instead of complex, systems.

This is another point we take care of in our DevOps workshop, through the evaluation of current assets (not only source code, but also database schemas, security requirements, network topologies, deployment procedures and rhythms, etc.).

Migrate or Rewrite? Equilibrium

The complication of Monoliths is not problematic by itself; it makes for tightly bound systems, which tend to be very fast because, as we said, a function call is faster than a network call. After all, we have built very successful monoliths in the past. But experience shows that they tend to present problems regarding availability and scalability. Microservices represent a diametrically opposed approach, based on complexity rather than complication, but one that solves those issues.

There is a tension, then, between complexity and complication on one side, and organizational forces on the other. In other words, there is a tension between monolithic and microservices systems on one side, and more or less hierarchical structures on the other. Achieving equilibrium between these forces is the engineering challenge faced by software architects today.

Many teams today face the task, requested either internally (by their management) or externally (through customer demand or vendor requirements), of migrating their monoliths to Microservice-based architectures. Thankfully, architects can apply a few techniques to find an equilibrium:

  1. Start your migration path knowing that very often one does not need to migrate the whole application to Microservices. Some parts can and should remain monolithic, and in particular, proven older systems, even if written in COBOL or older technologies, can still deliver value, and can play a very important role in the success of the transition.
  2. Identify components correctly, so that when isolated they will be neither only functionality-driven, nor only data-driven, nor only request-driven, but rather driven by these three factors (functionality, data, and request) at the same time. Pay attention to the organizational chart, and use that as a basis for the decomposition in microservices.
  3. Remember that network bandwidth is not infinite. Some interfaces should be “chunky”, while others should be “chatty”. Plan for latency issues from the start.
  4. Reduce inter-service communication as much as possible, which can be done in many ways:
    1. Consolidating services together
    2. Consolidating data domains (combining database schemas or using common caches)
    3. Using technologies such as GraphQL to reduce network bandwidth
    4. Using messaging queues, such as RabbitMQ (see the sketch after this list).
  5. Adopt Microservice-friendly technologies, such as Docker containers, Kubernetes, Knative, or Red Hat’s OpenShift and Quarkus.
  6. Implement an automated testing strategy for each and every microservice.
  7. Standardize technology stacks around containers & Kubernetes, and create common ground for a true microservice ecosystem within organizations.
  8. Automate all of your work as much as possible, knowing that the effort for automation (DevOps, CI/CD) can be amortized over many services, thus becoming a net investment in the long run.
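
As a sketch of point 4.4, here is how one service might hand work to another asynchronously through a queue instead of calling it synchronously. This assumes a RabbitMQ broker running on localhost and the pika client library (pip install pika); the queue name and payload are made up for illustration.

    # Illustrative only: decoupling two services through a RabbitMQ queue.
    import pika

    QUEUE = "invoice-tasks"  # hypothetical queue name

    def publish(task: bytes) -> None:
        # Producer side: enqueue the task and return immediately.
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue=QUEUE, durable=True)
        channel.basic_publish(exchange="", routing_key=QUEUE, body=task)
        connection.close()

    def consume() -> None:
        # Consumer side: process tasks whenever they arrive.
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue=QUEUE, durable=True)

        def handle(ch, method, properties, body):
            print(f"processing {body!r}")
            ch.basic_ack(delivery_tag=method.delivery_tag)

        channel.basic_consume(queue=QUEUE, on_message_callback=handle)
        channel.start_consuming()

    if __name__ == "__main__":
        publish(b"generate invoice 42")

The producer returns as soon as the message is queued, so a slow or temporarily unavailable consumer no longer translates into a slow caller; the queue absorbs the latency that a synchronous network call would expose.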

As mentioned previously, we regularly help organizations in their digital transformation towards microservices, Kubernetes, OpenShift, CI/CD, GitLab, and DevOps in general, supporting teams with the tooling they will need in the future. Borrowing the “Team Topologies” concept by Matthew Skelton and Manuel Pais, VSHN can support you both as an “Enabling Team” (DevOps workshop, consulting) and as a “Platform Team” (Kubernetes/OpenShift) to build microservices on, ensuring stability and peace of mind.

Conclusion

Going beyond the hype, Microservice architectures bring great benefits, but they can also pose huge challenges to software teams.

The best way to tackle those challenges consists in reverse-engineering Conway’s Law, starting with an analysis of the human organization of your teams. Make them independent, agile, and free to choose the best tools for their job. Encourage them to negotiate with one another the interfaces of their respective components.

Let us create and run complex, not complicated, systems. We cannot get away from complexity; that is our job as engineers. But we can get rid of the complicated part.

As a closing thought, I will quote my former colleague and lifelong friend Adam Jones, an independent IT consultant from Geneva: in order to achieve success with the Microservice architecture, you must embed structure in your activities, and remove it from your hierarchy. It is not about making the structure go away, but about moving it to where it does the most good.

Adrian Kosmaczewski

Adrian Kosmaczewski is in charge of Developer Relations at VSHN. He has been a software developer since 1996, a trainer, and a published author. Adrian holds a Master in Information Technology from the University of Liverpool.

Guests Tech

The 5 Most Persistent Myths about Container Technologies

28. Sep 2021

Guest article by Richard Zobrist, Head of Partners & Alliances Switzerland, Red Hat.

Open-source container technologies are an important means of protecting data. Despite this, some companies consider container solutions too insecure, too difficult to integrate, too slow, or completely unnecessary. It’s high time to dispel these persistent myths.

1. Too little security

Security teams are finding it increasingly difficult to keep up with the changing risks, compliance requirements, tools, and architectural changes introduced by new technologies such as containers, Kubernetes, software-defined infrastructure, and cloud technologies.

However, to be successful in the long term, security professionals need to change the way they think about containers: they are not virtual machines (VMs) or hosts, and they bring with them a changing set of security challenges that cannot be addressed with traditional tools. Red Hat OpenShift Container Platform enables users to combine open-source benefits with the stability and security of a managed product. In addition, Red Hat OpenShift provides more consistent security over the long term, integrated monitoring, centralized policy management, and compatibility with Kubernetes workloads. Red Hat OpenShift can increase security by providing faster software updates that close security gaps, without you having to be actively involved.

2. Too difficult to integrate

Another myth that stubbornly persists is the “difficult integration” of open-source applications. The explanation for this is that today’s IT landscapes often offer few interfaces to which open-source platforms can connect. However, in this year’s “Open Source Study Switzerland”, published by the University of Bern, the main reason cited for using open-source software is precisely the open standards on which open source is based. This shows that interoperability is central today and that monolithic IT systems have had their day. Business applications are expected to offer open interfaces (APIs), which can be used, for example, to exchange data between microservices.

3. Lack of expertise

Many companies have a common concern: How can they benefit from open source technologies even if they don’t have their own specialist staff? What solutions are there in concrete terms? And what are the hurdles to overcome? To migrate your applications to Red Hat OpenShift, you don’t need additional staff. You can either work directly with Red Hat or let a certified partner – such as VSHN – do the migration.

Red Hat OpenShift gives you the added benefit of an energetic and supportive community behind the scenes, with which to share knowledge and experience. In the dynamic world of IT, this access to knowledgeable professionals is one of the most important reasons for using open-source software. In addition, the dissemination of open-source know-how creates the basis for professional support and, ultimately, the possibility of hiring experienced open-source specialists directly. Thus, open source becomes a trump card in the battle for IT talent, because the technology makes companies attractive to that very talent.

4. All applications must be based on open source

More and more companies are transforming their business by adopting DevOps principles, microservices, and container technologies like Kubernetes. Red Hat OpenShift is nevertheless often accused of having too few interfaces to other systems and of only being successful if all applications are based on open-source technologies. The “Open Source Study Switzerland”, on the other hand, shows that an important reason for using Red Hat OpenShift is the enormous selection of freely available components and tools. In recent years, a significant ecosystem has formed around OpenShift, from which customers can benefit in the simplest way.

Since IT decisions are often based on what others are doing, the popularity of open-source software multiplies as it becomes more widely used.

5. Unclear business model of the providers

To put an end to this myth as well, it is worth taking a look back at the beginnings of open source technologies. A milestone in open source history was the publication of the first version of the Linux kernel by the Finnish computer scientist Linus Torvalds. With the invention of Linux, Torvalds succeeded in developing the first completely free operating system for computers and thus laid the foundation not only for a large developer community but also for numerous projects and distributions based on the Linux kernel, such as “Red Hat Enterprise Linux”. As a result, the spread and popularity of Linux and other free software in the corporate world grew steadily, whether it was software for servers, office programs for desktop PCs, or virtualization tools for cloud platforms.

Meanwhile, open-source has solidified its reputation as a driver of innovation for the software industry. The trends that are now increasingly driving our work are all based on open source. These include Red Hat Enterprise Linux, cloud computing, edge technology and Internet of Things (IoT), containers, artificial intelligence (AI) and machine learning (ML), and DevOps. Today, Red Hat is the world’s leading provider of enterprise open source solutions. The company employs approximately 18,000 people worldwide and has 105 offices located on every continent.

Cooperation with VSHN

With the “Leading with Containers” initiative, Red Hat supports both its own customers and its partners in the introduction of “Red Hat OpenShift”. Customers who want to benefit from Red Hat’s container technology via a partner, such as VSHN, receive the same advantages as Red Hat’s direct customers: VSHN has been a recognized Red Hat Advanced Partner for over 3 years, specializing in the “Certified Cloud & Service Provider” (CCSP) area.

Richard Zobrist

Head of Partners & Alliances Switzerland and (interim) Country Manager Austria at Red Hat
