
ELK Cluster on Kubernetes with SearchGuard using Helm

Many users who have set up ELK and SearchGuard on Docker or VMs know that it's a huge pain to live with, especially when it's time to upgrade and scale the clusters manually. I used to have many branches in my deployment repo for different clusters, different versions, different environments and so on. It was a huge mess to deal with, and I was supposed to spend only about 2% of my working time managing all of that.

Elastic has an official repo for distributing all the components of the ELK stack, but if you are going for the open source free versions, you will need to set up your own stuff on top of those images, such as SearchGuard, and enable only the free features of X-Pack. And it's not just setting up the correct compatible versions of SG: you will also need to change your configuration files for each upgrade, to make sure the configuration keys are compatible with the new versions of all the components. Also, if you are just using Docker hosts or VMs, and maybe you are managing 10 clusters of 50 nodes each, it's firefighting for sure, all the time. VMs and standalone Docker hosts also lack dynamic scaling of the cluster up or down.

Luckily, there are some new tools we can use to get rid of all that pain. Here are the steps to set up a test environment that you can then promote to production.

  • Set up Minikube
  • Set up the Helm client and install Tiller
  • Grab the Helm chart from the official SearchGuard repo
  • Deploy the Helm chart
  • Access Kibana and load sample data and dashboards
  • Scale the number of data/master/client nodes up or down
  • Rolling-upgrade the cluster to a new version without manual intervention

### Setup Minikube

## macOS
## Install VirtualBox: https://www.virtualbox.org/wiki/Downloads
brew install kubectl kubernetes-helm
brew cask install minikube

## Linux
## Install VirtualBox (https://www.virtualbox.org/wiki/Downloads) or KVM (https://www.linux-kvm.org/page/Main_Page)

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo cp kubectl /usr/local/bin/ && rm kubectl

## Set enough resources

minikube config set memory 8192
minikube config set cpus 4
minikube delete
minikube start
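
## optional sanity check (not part of the original steps): confirm the single-node cluster is up
minikube status
kubectl get nodes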

### Setup Helm

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --wait --service-account tiller --upgrade
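
## optional: confirm Tiller is up before installing any charts
helm version
kubectl -n kube-system get pods -l app=helm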

## Install the Helm chart; this will set up everything for you.
## Optionally read the comments in sg-helm/values.yaml and customize them to suit your needs.
## In particular, make the Kibana service of type NodePort so that you can access it from outside the cluster.

git clone https://github.com/floragunncom/search-guard-helm.git
helm install search-guard-helm/sg-helm

## After deploying the chart, check that the pods, services, etc. are up and running; it takes a few minutes.
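
## for example, watch the pods come up and list the services:
kubectl get pods -w
kubectl get svc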

### The cluster admin username is admin. You can get the password from the secret named passwd-secret: run kubectl get secret -o yaml and base64-decode the value. Then you are ready to log in to Kibana and load the sample data.
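
## for example (the key name inside the secret may differ between chart versions):
kubectl get secret passwd-secret -o yaml
echo '<BASE64_VALUE_FROM_THE_SECRET>' | base64 --decode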

### To scale the data nodes or change other configuration, just edit the values.yaml file and then do a helm upgrade.
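
## a minimal sketch; after editing sg-helm/values.yaml, replace <RELEASE_NAME> with the release name helm printed at install time:
helm upgrade <RELEASE_NAME> search-guard-helm/sg-helm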

Since the image used in the chart is already packaged, configured and tested by the SearchGuard team, you don't need to worry about compatibility issues at all. You also don't need multiple branches in your deployment repo, because the chart can be deployed to different environments with different versions and different values that you inject through your CI/CD system at runtime. And scaling the cluster up or down is no longer much work either: you just change the replica count.


Free Online Lab Environments for Learning DevOps Tools

Docker ClassRoom

This is an online virtual lab environment along with instructions on how to perform the labs. They cover docker from basic to advanced: docker basics, docker swarm, containerising example applications such as Node.js and Java, and packaging applications in docker and then deploying them. Highly recommended for getting started and making sense of the docker workflow. The environment also includes docker bash completion, so you don't have to type long commands, and the instructions include some docker concepts along with how to run the labs. https://training.play-with-docker.com/alacart/

PlayWithDocker

Again, this is an online playground lab environment from Docker, but this one just lets you spin up one or more docker hosts, in case you already have a book or lab guide to practice with. It also includes bash completion for docker. https://labs.play-with-docker.com/

Katacoda Docker Course

This is a complete free online course with a built-in lab environment that takes you from start to finish, from beginner to advanced docker practice. The instructions are just enough to get through the labs, though; they don't include detailed Docker theory. https://www.katacoda.com/courses/docker

Docker Enterprise Hosted Trial

You can subscribe to spin up a Docker Enterprise Edition cluster in the cloud to play with what Docker EE offers; it includes docker swarm, kubernetes, and Docker Trusted Registry. Useful for those who are considering Docker EE as an enterprise production swarm/kubernetes solution. https://trial.docker.com/

Play with Kubernetes

Similar to Play with Docker, this playground environment lets you practice your kubernetes labs in a free virtual lab environment. You can add one or more instances to your playground. https://labs.play-with-k8s.com/

Kubernetes ClassRoom

A free online lab environment, just like the Docker ClassRoom, that contains instructions on how to do things in kubernetes, plus the ability to practice at the same time. Very useful for getting started with Kubernetes. https://training.play-with-kubernetes.com/kubernetes-workshop/

Katacoda Kubernetes Course

A beginner-to-advanced, interactive kubernetes course with a lab environment; it also includes how-tos related to Kubernetes operations. https://www.katacoda.com/courses/kubernetes

Hello Minikube

An online Minikube environment to run example workloads and see them in the browser. https://kubernetes.io/docs/tutorials/hello-minikube/

CI/CD with Jenkins and Docker

A scenario-based, interactive course with a lab environment from Katacoda. https://www.katacoda.com/courses/cicd

Other useful courses with online labs from Katacoda

  • Terraform
  • Docker Security
  • Running Java in Docker
  • Git Version Control
  • Running .Net in Docker
  • Service Meshes
  • Prometheus
  • Nomad
  • OpenTracing
  • Consul

Automated Image Signing Workflow for Docker Trusted Registry (DTR)

Just in case you are using Docker Trusted Registry (or any image registry that supports docker content trust) to establish a secure supply chain of docker images to your swarm or kubernetes cluster, this post may be of some help, because when we were doing this we could not find a ready-made recipe to automate the docker image signing process. There were lots and lots of documentation pages to read, and those were just adding to the confusion about how the whole process works and how to automate it.

Docker image signing is part of docker content trust, and may be needed for a couple of reasons: allowing only signed images to run on the cluster, making sure that an image belongs to a trusted source, and restricting the registry in use to your own trusted registry instead of allowing images from public registries on the public internet.
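
For example, with content trust enabled on the client side, docker simply refuses to pull a tag that has no trust data (the image name below is hypothetical):

export DOCKER_CONTENT_TRUST=1
docker pull registry.example.com/foo/app:latest   ## fails if this tag is unsigned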

OK, so you found out that docker image signing is awesome. But you would not like to manually sign every image that you, or your CI build, produce. Your build pipeline may be producing hundreds of images a day, and it's not practical to sign each of those images manually.

There are many ways of signing images, and image signing makes use of the Notary client, which is not easy to use. Luckily, you will find that docker has made it easy for you with the docker trust command. Before using docker trust, though, you will need to set up the trust metadata for the docker image repository. This only needs to be done once per project/repo.

So, your image build pipeline will be using some docker client running on a Jenkins node, a GitLab CI runner, or any other such CI/CD tool. You just need to configure that client once per project/repo for docker content trust (DCT); then you can use it in your build pipeline to push signed images to that repo forever.

Client setup (one-time setup per project; this client could be a GitLab runner, a Jenkins node, or any node running your image build)

#### Note that the actual values of secrets such as the passwords and keys used here can come from the secret variables of your CI environment, such as GitLab or Jenkins.

export DTR=registry.example.com
export USER=<YOUR_USERNAME>
export PROJECT=$USER/<PROJECT_REPO_NAME>
export DTR_TOKEN=$YOUR_DTR_TOKEN
export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE=$SECRET_PASS_01
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=$SECRET_PASS_02
docker login -u $USER -p $DTR_TOKEN $DTR
## key.pem and cert.pub come from your Docker UCP client cert bundle, under your UCP profile

docker trust key load key.pem --name $USER
docker trust signer add --key cert.pub $USER $DTR/$PROJECT

Once the client is set up, you just keep running your builds on that client and use the docker push command, which will auto-sign and push your image to DTR.

export CONTAINER_IMAGE=$DTR/$PROJECT

docker pull $CONTAINER_IMAGE:latest || true
docker build --cache-from $CONTAINER_IMAGE:latest --tag $CONTAINER_IMAGE:latest .

export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE=$SECRET_PASS_01
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=$SECRET_PASS_02
export DOCKER_CONTENT_TRUST=1
docker push $CONTAINER_IMAGE:latest
docker logout $DTR
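
## optional: verify that the pushed tag is actually signed
docker trust inspect --pretty $CONTAINER_IMAGE:latest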

Once you are done setting this up for your first project, you can just repeat these two lines for each new project, and auto-signing will work. (You can use the same secrets and keys as trust metadata for all of your projects.)

docker trust key load key.pem --name $USER
docker trust signer add --key cert.pub $USER $DTR/$PROJECT

Questions and issues? Please ask in the comments.

CI/CD for Spring Boot Application on Kubernetes

  • Get the code Repo from Version Control
  • Run/test the App
  • Dockerize the App
  • Test using simple Docker Run
  • Deploy to Kubernetes
  • Setup CI/CD
## Note: for quick testing you can run all these commands on the free online Minikube
## https://kubernetes.io/docs/tutorials/hello-minikube/
## In the online Minikube there is a plus button to access ports in the browser.
## Use that feature to see the app in the browser after running it.

## Deploy Spring Boot App

git clone https://github.com/spring-guides/gs-spring-boot.git

cd gs-spring-boot/complete


## create the JAR deployable for the application

./mvnw -DskipTests package
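
## optionally run the jar locally first to make sure it starts (assumes a single jar in target/):
java -jar target/*.jar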


## create the container image and push it to the Container Registry.
## you can skip exporting these if you are just using dockerhub

export DTR=registry.example.com
export USER=foo
export DTR_TOKEN=bar

## skip this if you are logging in to dockerhub instead
docker login -u $USER -p $DTR_TOKEN $DTR


./mvnw -DskipTests com.google.cloud.tools:jib-maven-plugin:build \
  -Dimage=$DTR/$USER/hello-java:v1


## test the image locally with the following command:

docker run -ti --rm -p 8080:8080 \
  $DTR/$USER/hello-java:v1


### Once you verify that the app is running fine locally in a Docker container
### you can stop the running container by pressing Ctrl+C.


### Deploy on K8s
## note: make your image public or use an imagePullSecret for the k8s deployment

kubectl run hello-java \
  --image=$DTR/$USER/hello-java:v1 \
  --port=8080

## check your app is running

kubectl get deployments
kubectl get pods

## allow external traffic
## must use a NodePort service

kubectl expose deployment hello-java --type=NodePort


kubectl get svc

### test access to the service

http://<CLUSTER_NODE_IP>:<nodePort>
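
## for example, on minikube you can derive the URL like this (assumes the hello-java service created above):
NODE_PORT=$(kubectl get svc hello-java -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$(minikube ip):$NODE_PORT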

### you can do some further testing by scaling the number of replicas up and down.

kubectl scale deployment hello-java --replicas=3

At this point your app is up and running on Kubernetes. Next we will take this and put it into a CI tool to set up a CI/CD pipeline, so that you can roll out new versions and roll back when needed.
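
As a rough sketch, a CI job for a new version could run something like the following (the v2 tag is hypothetical; the deployment and container names are the hello-java ones created above):

./mvnw -DskipTests com.google.cloud.tools:jib-maven-plugin:build \
  -Dimage=$DTR/$USER/hello-java:v2
kubectl set image deployment/hello-java hello-java=$DTR/$USER/hello-java:v2
kubectl rollout status deployment/hello-java

## and if the new version misbehaves:
kubectl rollout undo deployment/hello-java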