
What is DevOps – what does VSHN do?

16 February 2016

DevOps is a common term but, unfortunately, as vague as “Cloud”: everyone knows that they want or need it, and yet it is not something you can simply order and have delivered.
By DevOps we mean the interdisciplinary cooperation between the developers and the operators of software, with the goal of deploying applications quickly and systematically.

Similar to agile software development (e.g. Scrum), where the “product owner” specifies the next development steps together with the software developers and accepts finished work, this promotes communication between the parties involved and reduces misunderstandings and thus expensive errors.
The promotion of cooperation between developers and operators contrasts with the previous practice of strictly separating these teams, whether for reasons of separation of duties (no access by developers to production data) or because developers and operators had to fulfill different skill profiles (programming skills, on-call duty).
By now, however, a number of insights and proven methods from software development have also found their way into operating processes:

  1. Infrastructure as Code: the description and configuration of infrastructure components in scripts, so that recurring tasks (e.g. setting up a server or installing/upgrading an application) can be automated quickly and reliably. Depending on the application and environment there are different tools (Docker, Ansible, Puppet, SaltStack, etc.), each with its own framework and ecosystem of ready-made building blocks for standard components; a minimal sketch of the idea follows this list.
  2. Test systems: if the setup of a server is fully automated, the effort to create one or more test servers shrinks to almost nothing. If developers can use a test server that is identical to the production server, they can find errors before those errors occur in production.
  3. Versioning: if the infrastructure, or at least parts of it, is mapped in code, it can be managed with well-known version control tools (Git, SVN, etc.). This makes it possible to track changes to the infrastructure (“Who changed what, and when?”, “Why does it suddenly no longer work even though the software hasn’t changed?”) and to roll back changes completely if an error occurs.
  4. Continuous integration of the infrastructure code: just as the actual application is automatically compiled and functionally tested, both component by component and as a whole, with each change, the requirements for the infrastructure can be verified with automated tests. Detecting an error as early as possible minimizes its impact; publishing a change can be blocked, for example, if the tests fail (a sketch of such a check follows below).
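
To make the “Infrastructure as Code” idea concrete, here is a minimal, hand-rolled sketch in Python: the desired state is described as data and applied idempotently, so running the script a second time changes nothing. The package name and file path are assumptions for illustration; real setups would use one of the tools named above.

```python
#!/usr/bin/env python3
"""Toy Infrastructure as Code: declare the desired state, apply it
idempotently. Assumes a Debian/Ubuntu system with apt/dpkg."""
import subprocess
from pathlib import Path

# Desired state, described as data rather than as manual steps.
DESIRED_PACKAGES = ["nginx"]  # hypothetical example package
DESIRED_FILES = {
    Path("/etc/motd"): "Managed by infrastructure code - do not edit.\n",
}

def package_installed(name: str) -> bool:
    """Ask dpkg whether a package is already installed."""
    result = subprocess.run(["dpkg", "-s", name], capture_output=True)
    return result.returncode == 0

def ensure_packages() -> None:
    for name in DESIRED_PACKAGES:
        if package_installed(name):
            print(f"ok: {name} already installed")  # idempotent: no change
        else:
            subprocess.run(["apt-get", "install", "-y", name], check=True)
            print(f"changed: installed {name}")

def ensure_files() -> None:
    for path, content in DESIRED_FILES.items():
        if path.exists() and path.read_text() == content:
            print(f"ok: {path} up to date")
        else:
            path.write_text(content)
            print(f"changed: wrote {path}")

if __name__ == "__main__":
    ensure_packages()
    ensure_files()
```

Run it once and the server converges to the described state; run it again and every step reports “ok”. Tools like Puppet or Ansible generalize exactly this pattern.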
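
And as a sketch of point 4, a check that a CI system could run on every change to the infrastructure code before it is published. The file name and the required keys are assumptions; a real pipeline would add linting and the validation commands of the actual tools in use.

```python
#!/usr/bin/env python3
"""Pre-publication check for a JSON config in the infrastructure repo.
A non-zero exit code makes the CI pipeline block the change."""
import json
import sys

REQUIRED_KEYS = {"db_host", "db_port", "log_level"}  # hypothetical

def check_config(path: str) -> list:
    """Return a list of problems found in the given config file."""
    try:
        with open(path) as fh:
            config = json.load(fh)
    except (OSError, json.JSONDecodeError) as exc:
        return [f"{path}: cannot read or parse: {exc}"]
    problems = []
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        problems.append(f"{path}: missing keys: {sorted(missing)}")
    if "db_port" in config and not isinstance(config["db_port"], int):
        problems.append(f"{path}: db_port must be an integer")
    return problems

if __name__ == "__main__":
    issues = check_config("config/production.json")
    for issue in issues:
        print("FAIL:", issue)
    sys.exit(1 if issues else 0)
```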

Conversely, experience gained during operation also flows into modern software architectures:

  1. Packaging and version management: the software is stored in a versioned package, so that everyone involved is talking about the same version of the software throughout the entire quality assurance process, from the test/development server, acceptance by the product owner, possible external testing/validation (beta, user and UX tests) and integration with external interfaces (backends, APIs) all the way to production. The type of packaging can be dictated by the development environment (e.g. JAR for Java, WAR for Tomcat) or by the operating environment (e.g. DEB/RPM for Linux, MSI for Windows), or it can be independent of both, as with Docker. This ensures that the software can be installed and updated completely, with all required libraries, and automates these steps as far as possible; see the version-pinning sketch after this list.
  2. Service-oriented architectures (SOA) and microservices: as soon as an application becomes so extensive and/or complex in development that more than a handful of teams take care of it, it is easier to split it into smaller sub-projects (“microservices”) and explicitly define the interfaces between them than to coordinate all teams within the same “project” on technology, development progress and internal responsibilities. This not only lets the teams develop in a decoupled way, it also lets them choose the technologies best suited to their purpose, provided the interface to the other teams does not change. Ideally, the components/services are fault-tolerant towards each other: if a sub-component fails, the others continue to function with limited functionality, which makes the overall system more robust (see the degradation sketch after this list).
  3. Configuration management: most applications have interfaces to other applications, for example to a database or other APIs/services, and write log files. During development, quality assurance and production, different endpoints (addresses, credentials, etc.) are used. This isolates test data from production data, so a test of a new version cannot accidentally delete production customer data. For this reason the access data is not managed directly in the code but in configuration files, which in turn can be generated automatically for each environment or read from environment variables; a small example follows this list. A modern formulation of this approach is the twelve-factor methodology (http://12factor.net/de/).
  4. Scalability: applications and services with clearly defined interfaces can easily be scaled horizontally, i.e. distributed across several servers. This makes it possible to run the service redundantly, and thus highly available, and to react to varying load by adding or removing servers. Even these steps can be automated: based on the current load, additional server resources can be obtained or released automatically, so that, depending on the billing model of the individual resources, costs are only incurred when the service is actually being used (see the scaling sketch after this list).
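
A minimal Python sketch of point 1: the version is fixed once at build time, and the identical, immutable artifact moves through all stages. The artifact name, version and stages are invented for illustration.

```python
"""Version pinning: one immutable, versioned artifact for every stage,
so "1.4.2" means exactly the same bits on test and in production."""
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    name: str
    version: str  # set once at build time, never changed afterwards

    @property
    def filename(self) -> str:
        # DEB chosen as an example format, e.g. "shop-backend-1.4.2.deb"
        return f"{self.name}-{self.version}.deb"

build = Artifact("shop-backend", "1.4.2")  # hypothetical application

# The same artifact is promoted unchanged from stage to stage; in reality
# it would be fetched from a package repository and installed via apt/dpkg.
for stage in ["test", "acceptance", "production"]:
    print(f"deploying {build.filename} to {stage}")
```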
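
For the fault tolerance mentioned in point 2, a sketch of graceful degradation: if a non-essential sub-service does not answer, the caller continues with limited functionality instead of failing as a whole. The service URL and timeout are made up.

```python
"""Graceful degradation when calling a hypothetical microservice."""
import urllib.error
import urllib.request

def fetch_recommendations(user_id: int) -> list:
    """Ask a recommendation service; fall back to an empty list on failure."""
    url = f"http://recommender.internal/users/{user_id}"  # hypothetical
    try:
        with urllib.request.urlopen(url, timeout=0.5) as response:
            return response.read().decode().splitlines()
    except (urllib.error.URLError, TimeoutError):
        # Limited functionality instead of a full outage: the shop still
        # works, just without personalized recommendations.
        return []

print(fetch_recommendations(42) or "no recommendations right now")
```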
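
Point 3 in code, following the twelve-factor approach: the code contains no credentials; each environment supplies its own values via environment variables. The variable names are examples.

```python
"""Twelve-factor-style configuration read from the environment."""
import os

class Config:
    def __init__(self) -> None:
        # The same code runs in every environment; only the variables differ.
        self.db_host = os.environ["DB_HOST"]
        self.db_user = os.environ["DB_USER"]
        self.db_password = os.environ["DB_PASSWORD"]
        # Optional settings get safe defaults.
        self.log_level = os.environ.get("LOG_LEVEL", "INFO")

if __name__ == "__main__":
    cfg = Config()
    print(f"connecting to {cfg.db_host} as {cfg.db_user}")
```

A test server sets DB_HOST to the test database, production to the production database; the test can therefore never touch production data by accident.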
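
And a sketch of the automated scaling from point 4: a simple reconciliation loop that derives the number of servers from the current load. The thresholds, capacity figure and minimum of two servers (for redundancy) are assumptions; the actual provisioning would go through a cloud provider's API.

```python
"""Load-based scaling decision with headroom and a redundancy minimum."""
import math

def desired_server_count(requests_per_second: float,
                         capacity_per_server: float = 100.0) -> int:
    """Servers needed for the current load, with ~30% headroom."""
    needed = math.ceil(requests_per_second * 1.3 / capacity_per_server)
    return max(needed, 2)  # at least two servers for redundancy

def reconcile(current: int, requests_per_second: float) -> int:
    target = desired_server_count(requests_per_second)
    if target > current:
        print(f"scale up: {current} -> {target} servers")
    elif target < current:
        print(f"scale down: {current} -> {target} servers")
    else:
        print(f"no change: {current} servers")
    return target

# Example: traffic triples, then drops off again.
servers = 2
for load in [150.0, 450.0, 80.0]:
    servers = reconcile(servers, load)
```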

What does the fusion of development methods and operating processes bring in concrete terms?

  1. The automation of the infrastructure (see “Infrastructure as Code” above) makes changes to the infrastructure faster and more reliable and prevents inconsistencies caused by (missing) manual steps on different systems. It enables developers and product owners to effectively test their results under the same conditions as in production.
  2. Automating the software lifecycle from development to production makes the whole process faster and more reliable, and ideally lets the product owner trigger the deployment of a new release themselves. After the developers, the operators too hand the reins for the application over to the business and remain available for further development. The product owner can thus determine both the scope and the frequency of deployments. The more frequently a product is rolled out, the smaller the scope of each change, and the smaller the risk of undesirable side effects and errors. If errors occur nevertheless, the product owner can reverse the last change themselves and call on the developers to remedy the situation, without the error penalizing the company.
  3. Both together prevent IT from becoming an end in itself that blocks the critical path of the project, and enable the developers and the business to help themselves (“self-service”). Of course, this also means a cultural change within a company: if a deployment fails or problems occur in production, developers and business people have to solve the problem together and make sure that it does not happen again (e.g. by means of automated tests). It doesn’t matter why or because of whom the problem occurred: no “culprit” has to be found; instead, the whole process has to be continuously improved.

We at VSHN do nothing all day but automate different development processes, different technologies and different backends (databases, cache servers, proxies, WAFs, etc.) and operate them according to the requirements of our customers and/or development partners on any infrastructure, be it public clouds such as Amazon, Azure, Cloudscale.ch, Cloudsigma, Exoscale.ch, Safe Swiss Cloud or Swisscom Cloud, or private, i.e. company-internal, infrastructures based on VMware or Hyper-V.
We advise our customers on the location of data storage (Switzerland, EU, international), will soon be ISO 27001-certified ourselves and, together with our partners, can offer hosting in accordance with the FINMA standard.
Our core values are trustworthiness, availability and professional competence. Trustworthiness and security through transparency: transparent communication of processes, transparent order definitions and billing models. We work agilely with our customers and communicate regularly. We are available around the clock, 24×7, and proactively take care of “our” applications.
We are VSHNeers.

Markus Speth

Marketing, People, Strategy

Contact us

Our team of experts is available for you, in an emergency also 24/7.
