Do you want to build cloud native applications?

Do you want to avoid human errors by automating the deployment of your app?

Do you want to reduce downtime when publishing a new version?

Enter Kubernetes.

What is Kubernetes or K8s?

The journey from monoliths to microservices has allowed organizations to deploy applications independently. Modern teams no longer package an entire application into a single virtual machine. Instead, each individual microservice runs as an autonomous process and communicates with the others via APIs. In this world of microservices, Kubernetes and containerization work hand in hand.

Kubernetes, or K8s, is the standard open source orchestration platform for cloud native applications. In the 2021 Agora DSI&CIO DevSecOps Survey, we found that Kubernetes use in production has grown along with the rise in container usage. Moreover, Kubernetes offers faster deployment and more automation than comparable technologies.


However, Kubernetes is not the only management or orchestration tool for containerized applications. For example, Docker Swarm is Docker's own orchestration tool with a comparable feature set. OpenShift is yet another platform, marketed as “Enterprise Kubernetes” by its vendor Red Hat.

Kubernetes describes a four-layered approach to cloud native security known as the 4C’s: code, container, cluster, and cloud. Each layer depends on the layer surrounding it for its security, so if the outer cloud layer is not secure, the components built on top of it are likely to be vulnerable too. You therefore need to apply security best practices at every layer.

What features do container orchestration platforms like Kubernetes offer?
Managing multiple containers across different environments using different scripts and tools quickly becomes complex and tedious. This is where container orchestration technologies like Kubernetes come in, with features like:


  • High availability to make sure the application is always accessible.
  • Scalability of the containers, so the application keeps responding quickly as user traffic grows (see the Deployment sketch after this list).
  • Disaster recovery, so that the infrastructure can back up and restore data after a failure.
  • A simplified process for all your deployment operations.
  • Strong interoperability and integration with other common external tools.
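
As a minimal sketch of how these features translate into configuration, the hypothetical Deployment below asks Kubernetes to keep three replicas of an application running, which gives you both availability and a simple scaling knob. The names and image are placeholders, not part of any real setup:

    # Hypothetical Deployment: names and image are placeholders
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
    spec:
      replicas: 3                     # several replicas keep the app available if one pod or node fails
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
            - name: demo-app
              image: registry.example.com/demo-app:1.0   # placeholder container image
              ports:
                - containerPort: 8080

Changing replicas (or letting an autoscaler change it) is all it takes to scale the application up or down.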

What are some basic components in Kubernetes?

Node:

It is the machine (physical or virtual) where Kubernetes is installed. Each node hosts one or more pods, with the containers running inside them. These machines are called worker nodes because they are the ones that run the applications. One or more master nodes are responsible for managing and monitoring them all.

Cluster: 
A group of nodes pools its resources to make up a cluster, so that the nodes share the computing load between themselves. In addition, if a node fails, you can still access the application from another node, which gives you high availability for your applications.

Container:
A container packages the application with all its dependencies. The idea is to isolate each application so that you can control it better. A Dockerfile is the standard way to describe how such a container image is built, and you then specify that Docker image for your pods.


Pod:
It is the smallest deployable unit in Kubernetes. A pod can hold multiple containers to separate the different aspects of your application, but ideally you should run only one application container per pod. The pod acts like a wrapper around the container, with a unique IP address used to communicate with the other pods. Depending on your requirements, there are different multi-container patterns for pods – sidecar, adapter, ambassador (a type of sidecar), etc. Kubernetes manages pods rather than managing the containers directly.
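
As a minimal, hypothetical sketch (all names and the image are placeholders), a single-container pod can be described like this:

    # Hypothetical pod manifest with a single application container
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0   # the Docker image mentioned above (placeholder)
          ports:
            - containerPort: 8080

In practice you rarely create pods by hand; a Deployment, like the one sketched earlier, creates and replaces them for you.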

Ingress:
It is the entry point into the cluster: it routes requests from clients to the services and pods running inside the cluster. An ingress controller, usually combined with a load balancer, is a good way to supervise the external traffic that comes into your application.
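
For illustration only, assuming a hypothetical hostname and a Service called demo-app in front of the application pods, an Ingress resource could look like this:

    # Hypothetical Ingress routing external traffic to an internal Service
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
    spec:
      rules:
        - host: demo.example.com          # placeholder hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: demo-app        # Service that fronts the application pods
                    port:
                      number: 8080

The resource itself only describes the routing rules; an ingress controller running in the cluster is what actually enforces them.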


A key challenge of cloud native microservices in Kubernetes
East-west or API traffic flow is rising

Cloud native applications built on container technologies are gaining popularity, but their security remains a key challenge. When data packets are exchanged between microservices, this kind of internal communication is called east-west traffic, and APIs are the primary means of that data exchange. It is important to secure this lateral traffic in order to reduce the attack surface inside our internal systems: if attackers gain access to one container, they can gradually compromise the entire internal system. Since this kind of traffic is growing rapidly, it needs to be evaluated and controlled.
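
One generic Kubernetes building block for limiting east-west traffic at the network level is a NetworkPolicy. The sketch below is a hypothetical example (namespace, labels and port are placeholders) that only lets one specific microservice call another; it complements, but does not replace, application-layer protection such as a WAF:

    # Hypothetical NetworkPolicy: only pods labelled app=orders may reach the payments pods
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-east-west
      namespace: demo                     # placeholder namespace
    spec:
      podSelector:
        matchLabels:
          app: payments                   # the microservice being protected (placeholder)
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: orders             # the only peer allowed to call it (placeholder)
          ports:
            - protocol: TCP
              port: 8080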


WAF gateway vs. micro-WAF: centralized vs. distributed strategy

Since no traffic can be trusted, we need to build a first line of defense. A WAF gateway, acting as the entry point into the system, enables safe access to the microservices and their APIs. However, integrating a WAF gateway into a microservices-based application helps you handle north-south traffic but not east-west traffic, because the many microservices and their APIs scale independently behind the gateway. Our strategy for microservices is therefore to take the functionality of the WAF gateway and deploy it as a micro-WAF, close to each application.


R&S®Trusted Application Factory

Cloud native protection for your cloud native applications

R&S®Trusted Application Factory can be seamlessly deployed into the Kubernetes orchestration platform to further secure your microservices-based applications. In fact, being an agnostic solution, it can also be deployed on many other orchestrators such as Red Hat OpenShift, Docker Swarm, etc.

To address this shift in the philosophy of application protection, our idea is to reside inside the Kubernetes cluster as a micro-WAF for each application instead of acting as a WAF gateway outside it. This way it can scale automatically with the application load and ensure that security always stays up to date with the application version.
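
A generic Kubernetes mechanism for this kind of elasticity is the HorizontalPodAutoscaler. The sketch below is purely illustrative (names and thresholds are assumptions, not the product's actual configuration); it scales a Deployment up and down with CPU load:

    # Hypothetical autoscaler that follows the application load
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: demo-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: demo-app                    # the Deployment to scale (placeholder)
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70      # add replicas when average CPU usage passes 70%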

A simple architecture – intended for Kubernetes

R&S®Trusted Application Factory has a distributed architecture and can run close to the application, with two possibilities:

  • It could be deployed inline, next to each application.
  • Alternatively, it could be deployed as a sidecar in front of each application.

A sidecar extends the main container in a pod and shares the same network and storage, which makes it a good pattern for handling east-west traffic. North-south traffic, i.e. the traffic flowing in and out of the Kubernetes cluster, is watched over by your ingress controller.
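
To make the sidecar option concrete, here is a minimal, hypothetical pod sketch: the container names and images are placeholders and do not reflect the product's actual packaging. The security container sits next to the application container and filters the traffic addressed to it:

    # Hypothetical pod combining an application container and a micro-WAF sidecar
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-app-with-waf
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app                  # main application container (placeholder)
          image: registry.example.com/demo-app:1.0
          ports:
            - containerPort: 8080
        - name: micro-waf                 # hypothetical sidecar inspecting requests before they reach the app
          image: registry.example.com/micro-waf:1.0
          ports:
            - containerPort: 8443         # traffic enters via the sidecar, then reaches the app over localhost

Because the two containers share the pod's network namespace, the sidecar can reach the application on localhost without exposing it directly to the rest of the cluster.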


Benefits that your cross-functional teams will appreciate:

  • Reduced complexity with simple configuration file formats like YAML and JSON, already familiar to Kubernetes and API developers.
  • Simplified API protection with OpenAPI enforcement and proactive security engines (see the OpenAPI sketch after this list).
  • Increased ROI by scaling automatically to adapt to the application traffic.
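
OpenAPI enforcement generally means validating incoming requests against the API's published contract. The fragment below is a hypothetical OpenAPI 3.0 excerpt, with a made-up path and parameter, of the kind such an engine could enforce:

    # Hypothetical OpenAPI excerpt: requests that do not match the contract can be rejected
    openapi: 3.0.3
    info:
      title: Demo orders API              # placeholder API
      version: "1.0"
    paths:
      /orders/{id}:
        get:
          parameters:
            - name: id
              in: path
              required: true
              schema:
                type: integer             # e.g. a non-integer id never reaches the application
          responses:
            "200":
              description: The requested order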

Cloud native application protection solutions leveraging containers could be your answer to the Kubernetes mystery. After all, you can have the finest Kubernetes cluster, but if your CI/CD pipeline is not automated and secure enough, your applications will not be able to deliver their full capabilities. Micro-WAF solutions enable an agnostic and secure approach for your modern microservices-based applications.
