ELK Cluster on Kubernetes with SearchGuard using Helm

Anyone who has set up ELK and SearchGuard on Docker hosts or VMs knows how painful it is to live with, especially when it is time to upgrade or to scale the clusters manually. I used to keep many branches in my deployment repo for different clusters, different versions, and different environments: a huge mess to deal with, and I was supposed to spend only about 2% of my working time managing all of it.

Elastic has official repos for distributing all the components of the ELK stack, but if you stick to the open source, free versions, you need to set up your own additions on top of those images, such as SearchGuard, and enable only the free features of X-Pack. And it is not just a matter of picking the compatible SG version: for each upgrade you also need to adjust your configuration files so that the configuration keys stay compatible with the new versions of all the components. On top of that, if you are running on plain Docker hosts or VMs and managing, say, 10 clusters of 50 nodes each, it is constant firefighting. VMs and standalone Docker hosts also offer no dynamic way to scale a cluster up or down.

Luckily, there are newer tools that let us get rid of all that pain. Here are the steps to set up a test environment that you can later promote to production:

  • Set up Minikube
  • Set up the Helm client and install Tiller
  • Grab the Helm chart from the official SearchGuard repo
  • Deploy the Helm chart
  • Access Kibana and load sample data and dashboards
  • Scale the number of data / master / client nodes up or down
  • Perform a rolling upgrade of the cluster to a new version with almost no manual work

### Setup minikube

## macOS
## Install VirtualBox (https://www.virtualbox.org/wiki/Downloads)
brew install kubectl kubernetes-helm
brew cask install minikube

## Linux
## Install VirtualBox (https://www.virtualbox.org/wiki/Downloads) or KVM (https://www.linux-kvm.org/page/Main_Page)

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo cp kubectl /usr/local/bin/ && rm kubectl

## Set enough resources

minikube config set memory 8192
minikube config set cpus 4
minikube delete
minikube start
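
Once minikube start finishes, a quick sanity check confirms that the single-node cluster is up and that kubectl points at it; both are standard commands:

minikube status
kubectl get nodes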

### Setup Helm

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --wait --service-account tiller --upgrade
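
Once helm init returns, it is worth confirming that the Tiller pod is running and that the Helm client can talk to it; both commands are standard:

kubectl get pods --namespace kube-system | grep tiller
helm version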

## Install the Helm chart; this will set up everything for you
## Optionally, read the comments in sg-helm/values.yaml and customize them to suit your needs. In particular, set the Kibana service type to NodePort so that you can access it from outside the cluster.

git clone https://github.com/floragunncom/search-guard-helm.git
helm install search-guard-helm/sg-helm
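
If you want to apply a customization at install time, for example the NodePort change mentioned above, you can pass an override with --set. The key name kibana.service.type below is an assumption on my part, so confirm the actual key in sg-helm/values.yaml before relying on it:

## the key name is an assumption, check sg-helm/values.yaml for the real one
helm install --set kibana.service.type=NodePort search-guard-helm/sg-helm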

## After deploying the chart, check the pods, services, etc.; everything should be up and running within a few minutes.
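
Two standard kubectl commands are enough to watch the rollout:

kubectl get pods -w
kubectl get svc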

### Access Kibana

The cluster admin username is admin, and the password lives in the secret passwd-secret: dump it with kubectl get secret -o yaml and base64-decode the value. With those credentials you can log in to Kibana and load the sample data and dashboards.
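
A minimal sketch of that lookup; the key inside the secret may be named differently, so inspect the whole secret first and use whatever key it actually contains:

## list the keys stored in the secret
kubectl get secret passwd-secret -o yaml
## decode a single field; the key name "admin" is an assumption based on the output above
kubectl get secret passwd-secret -o jsonpath='{.data.admin}' | base64 --decode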

### Scale the cluster

To scale the data nodes, or to change any other configuration, just edit the values.yaml file and run helm upgrade.
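
For example, assuming your release is named my-elk and the chart exposes the data node count under a key like data.replicas (both are assumptions here; check sg-helm/values.yaml for the real key and helm list for your release name), the upgrade could look like this:

## release name and key name are placeholders, adjust them to your setup
helm upgrade my-elk search-guard-helm/sg-helm --set data.replicas=5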

Since the image used in the chart is already packaged, configured, and tested by the SearchGuard team, you don't need to worry about compatibility issues at all. You also don't need multiple branches in your deployment repo: the same chart can be deployed to different environments and different versions, with the values injected by your CI/CD system at deploy time. And scaling the cluster up or down is no longer an event, since you just change the replica count.
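
As a sketch of what such a CI/CD step could look like, one helm command per environment; the release name, value keys, and image tag below are placeholders, not the chart's real names:

## release name, key names and tag are placeholders, map them to your values.yaml
helm upgrade --install elk-staging search-guard-helm/sg-helm --set data.replicas=3 --set imageTag=6.5.4-oss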
