Over the past few years, Kubernetes has taken the industry by storm, changing how we build, test, deploy, and scale services. Kubernetes (K8s) automates the resource-intensive manual processes involved in managing containerized services.

However, in most organizations, K8s is used only for the QA, Pre-Prod, and Prod environments. Developers can be left behind.

The Quest to Improve Developers’ Experience and Agility to Adapt:

  • When an engineer joins a team, it typically takes 2-5 days to fully set up the workstation with the software and applications required for day-to-day activities and for their first code check-in.
  • We also often hear from engineers, “It worked locally! Need to check why it is not working in QA/Pre-Prod/Prod, since those environments run in the cloud!”
  • This is where a self-service internal Kubernetes platform comes to the rescue: it minimizes the difference between how code is tested during development and how it is executed in production, which is key to reliability.
  • Most importantly, it helps us adopt open-source tools more easily and work with other teams’ services without worrying about setup, allowing developers to focus on development rather than getting stuck on environment issues.

Self-Service Internal Kubernetes Platform using Loft

Loft is an advanced control plane that runs on top of your existing Kubernetes clusters, adding multi-tenancy and self-service capabilities so you get the full value out of Kubernetes beyond cluster management.

I know, that’s a big sentence!

Let’s break it down; these are the core concepts you would want to master:

  • Advanced Control Plane:

First, what is a control plane? It is the container orchestration layer that exposes the APIs and interfaces to define, deploy, and manage the lifecycle of containers. In simple terms, it manages the worker nodes and the pods in the cluster, whether the cluster runs on a single node or many.
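To make this concrete: on a self-managed (kubeadm-style) cluster you can actually see these control-plane components running as pods, while managed offerings like EKS or GKE hide them from you:

    # On a kubeadm-style cluster, the control-plane components
    # (kube-apiserver, etcd, kube-controller-manager, kube-scheduler)
    # run as static pods labelled tier=control-plane:
    kubectl get pods --namespace kube-system --selector tier=control-plane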

  • Multi-Tenancy:

This allows multiple teams or applications, the so-called “tenants”, to use the same cluster without interfering with each other.
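The simplest form of this is namespace-based isolation with quotas. A minimal sketch (the namespace name and limits below are ours, purely illustrative):

    # quota.yaml -- apply with:
    #   kubectl create namespace team-payments
    #   kubectl apply -n team-payments -f quota.yaml
    # A per-tenant quota stops one team from starving another.
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-quota
    spec:
      hard:
        requests.cpu: "8"
        requests.memory: 16Gi
        pods: "50"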

  • Self-Service Capability:

A user should be able to create resources on demand without the IT team’s involvement, deploy and test the application there, and destroy the instances at the end if need be. This can feel scary from an audit perspective. This is where virtual clusters provide a clean way to do all of it.

What is a Virtual Cluster?

Imagine a virtual Kubernetes cluster as a fully functional Kubernetes cluster that runs on top of another cluster – contained in a single namespace of the underlying host cluster. A virtual Kubernetes cluster can be used just like any other Kubernetes cluster with kubectl, helm or whatever client tool has access to a valid kube-context for this virtual cluster.
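For example, with the open-source vcluster CLI that Loft builds on (the cluster and namespace names below are hypothetical), the whole flow is just:

    # Provision a virtual cluster inside one namespace of the host cluster,
    # then get a kube-context for it and use kubectl as usual.
    vcluster create dev-1 --namespace team-payments
    vcluster connect dev-1 --namespace team-payments
    kubectl get namespaces   # now talking to the virtual cluster's API server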

The virtual cluster roughly consists of two things:

  • A control plane, i.e., an API server, a data store such as etcd, and a controller manager.
  • A syncer, which actually starts and manages the containers in the underlying host cluster.
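Both pieces ship together; installing via Helm makes it obvious that the entire virtual cluster is just a workload in one host namespace (chart coordinates as documented for vcluster, names ours):

    # The control plane and syncer are deployed as a single chart
    # into one namespace of the host cluster:
    helm upgrade --install dev-1 vcluster \
      --repo https://charts.loft.sh \
      --namespace team-payments --create-namespace

    # Seen from the host, everything lives in that namespace:
    kubectl get pods --namespace team-payments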

This means that if a user is given access to a namespace, that user is an admin of that environment. No more admin access requests, and no more debates about whether to grant them.
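In plain Kubernetes terms, this boils down to a single RoleBinding against the built-in admin ClusterRole (the user and namespace names are illustrative):

    # Make alice an admin of her namespace -- and only her namespace:
    kubectl create rolebinding alice-admin \
      --clusterrole=admin \
      --user=alice@example.com \
      --namespace=team-payments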

But Why Do We Need This?

Independence, velocity and ease of experimentation, without the fear of bringing down anyone else’s work. 

There’s also less pressure on cluster admins to create and maintain all the requirements of the end users. They can focus on bigger aspects like stability and security of the applications.

What it Provides:

  • Cluster Provisioning
  • Template Creation
  • User Management
  • Auditability

Now that we know why we need this, let’s see how we did it and what we learnt.

The overall deployment architecture is as follows:

We have broadly divided the environments into 2 categories:

  1. Connect to real resources
  2. Connect to Mocked/Self-Service applications

1. Connect to real resources

This environment is mainly used for QA or Nightly Build environments, where we create the required resources the same way production resources are created, e.g., managed services for MySQL, Kafka, Redis, etc.
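One simple way to wire an application to such a real, managed resource without changing its config is an ExternalName service; the hostname below is a placeholder, not a real endpoint:

    # mysql.yaml -- apply with: kubectl apply -f mysql.yaml
    # The app keeps connecting to the in-cluster name "mysql",
    # which DNS-aliases to the managed database endpoint.
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      type: ExternalName
      externalName: qa-mysql.example.ap-south-1.rds.amazonaws.com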

2. Connect to Mocked/Self-Service applications

For development, we don’t always need real resources to work with. Instead, we rely on self-service Docker images that the application connects to.
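A sketch of such a mock, standing in for the managed MySQL from Env1. Because it exposes the same service name, the application config does not change (image and credentials are illustrative and dev-only):

    # mysql-mock.yaml -- a throwaway in-cluster MySQL for development.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:8.0
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: dev-only          # acceptable only in a disposable dev sandbox
            ports:
            - containerPort: 3306
    ---
    # Same service name "mysql" as the ExternalName variant in Env1,
    # so application config stays identical across environments.
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      selector:
        app: mysql
      ports:
      - port: 3306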

We identified the interdependencies of the services and grouped them into a single template per team. This gives us the flexibility to onboard a new engineer to a specific team with ease: all we need to do is create a new virtual cluster using the team template. In the case of Env1, it would essentially ask for the resource names; in the case of Env2, we would essentially spin up all the required resources and pre-seed the required data using init containers, as sketched below.
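A rough sketch of that Env2 pattern: an init container waits for the mock database, then pre-seeds it before the service starts (every name, image, and credential below is hypothetical):

    # orders-service.yaml -- pre-seeding via an init container.
    apiVersion: v1
    kind: Pod
    metadata:
      name: orders-service
    spec:
      initContainers:
      - name: seed-db
        image: mysql:8.0      # reuses the mysql client shipped in the image
        command:
        - sh
        - -c
        - |
          until mysql -h mysql -uroot -pdev-only -e 'SELECT 1'; do sleep 2; done
          mysql -h mysql -uroot -pdev-only < /seed/schema.sql
        volumeMounts:
        - name: seed-sql
          mountPath: /seed
      containers:
      - name: orders-service
        image: registry.example.com/orders-service:dev
      volumes:
      - name: seed-sql
        configMap:
          name: orders-seed-sql    # ConfigMap holding schema.sql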

We moved all applications from local systems to the cloud… but wait a minute! How do engineers access the cloud environment to debug from their local machines, given that we practically develop a lot in debug mode?

That’s a topic for another blog: how we use DevSpace to remotely debug our code.

Did this post stimulate all the right brain cells? Well then, we have some exciting opportunities in store for excellent engineers just like you!

Jobs at Cashfree Payments