
Kubernetes 101 Series: How to Build Kubernetes Clusters

Contents:

  1. Using Minikube, Run Kubernetes Locally
  2. Using KubeAdm, Basic Multi-Node Cluster
  3. Using Rancher (Open Source), Production Grade
  4. Using Kontena Pharos (Open Source, Apache License), Production Grade
  5. Cloud Options
    1. GKE, Google Kubernetes Engine
    2. Azure Kubernetes Service (AKS)
    3. IBM Kubernetes Service
  6. Other Options for Enterprise Clusters:
    1. Using Kops, Production Grade
    2. Using Docker EE, Production Grade
    3. Using PKS, Production Grade
    4. Using OpenShift (PaaS), Production Grade
    5. SUSE CaaS, Production Grade

Using Minikube

Run Kubernetes Locally

Official Link:  https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/

Environment: Dev/Test/Local

OS: Linux/Mac/Windows

Stackoverflow Tag: https://stackoverflow.com/search?q=user:4550110+[minikube]

Github: https://github.com/kubernetes/minikube

Setup Instructions: https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/

Key Points:

  • Use it for local dev/testing purposes on a workstation/laptop
  • A default StorageClass is created automatically
  • In case of issues, wipe it out and re-create it from scratch (see the sketch below)
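
A minimal sketch of that workflow, assuming Minikube and kubectl are already installed:

$ minikube start        # create the local single-node cluster
$ kubectl get nodes     # verify the node is Ready
$ minikube stop         # stop the cluster, keeping its state
$ minikube delete       # wipe it out to start again from scratch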

Using KubeAdm

Multi Node Cluster with Single Master/Control Plane

Official Link: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Recommended Environment: PoC/Staging

OS: Linux

Stackoverflow Tag: https://stackoverflow.com/questions/tagged/kubeadm

Github: https://github.com/kubernetes/kubeadm

Setup Instructions:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Key Points:

  • Great for beginners who want to play around and do some testing
  • Use it only for PoC/testing environments (a bootstrap sketch follows this list)
  • The control plane runs on a single node
  • Follow the Stack Overflow tag for questions/answers posted
  • Contribute to the project on GitHub
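
As a rough sketch (the pod CIDR is an example, and the exact join command with its token and hash is printed by kubeadm init):

# On the master node
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ kubectl apply -f <cni-plugin-manifest.yml>   # install a pod network add-on of your choice

# On each worker node, using the values printed by kubeadm init
$ sudo kubeadm join <master-ip>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>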

Using Kontena Pharos, Production Grade Kubernetes

The simple, solid, certified Kubernetes distribution that just works

Official Link: https://www.kontena.io/pharos

Environment: PoC/Testing/Staging/Production

OS: Linux

Github: https://github.com/kontena/pharos-cluster/

Slack: https://slack.kontena.io/

Setup Instructions: https://pharos.sh/docs/

Blog Post: https://devopsexamples.com/2019/04/04/setup-multi-node-secure-kubernetes-clusters-using-opensource-tools/

Key Points:

  • It is open source; support and some extra features can be purchased if needed
  • Very easy to use, and it can manage multiple clusters
  • Use a YAML file to compose the cluster, then create it using the Pharos CLI
  • Version control your cluster YAML for change management (IaC)
  • Use templates for everything common to all your Kubernetes clusters
  • Post questions/issues to Slack and GitHub
  • Follow the Stack Overflow tag for questions posted
  • Contribute to the project on GitHub
  • Be aware of environment-specific settings such as proxies
  • Use a network plugin that supports the features you need
  • Create separate clusters to achieve hard multi-tenancy across teams

Using Rancher, Production Grade Kubernetes

Complete container management platform

Official Link: http://rancher.com

Environment: PoC/Testing/Staging/Production

OS: Linux

Github: https://github.com/rancher/rancher

Stackoverflow Tag: https://stackoverflow.com/questions/tagged/rancher

Slack: https://slack.rancher.io/

Setup Instructions: https://rancher.com/docs/rke/latest/en/example-yamls/

Key Points:

  • It is open source; support can be purchased if needed
  • Easy to use, and it can manage a large number of clusters
  • Use a YAML file to compose the cluster, then create it with the CLI (see the sketch after this list)
  • Version control your cluster YAML for change management (IaC)
  • Use templates for everything common to all your Kubernetes clusters in Rancher
  • Post questions/issues to Slack and GitHub
  • Follow the Stack Overflow tag for questions posted
  • Contribute to the project on GitHub
  • Be aware of environment-specific settings such as proxies
  • Use a network plugin that supports the features you need
  • Create separate clusters to achieve hard multi-tenancy across teams
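
The setup link above points to the RKE example YAMLs; as a rough sketch (addresses, user, and key path are placeholders), a minimal cluster.yml looks something like this:

nodes:
  - address: 192.168.110.100
    user: ubuntu
    role: [controlplane, etcd]
    ssh_key_path: ~/.ssh/my_key
  - address: 192.168.110.101
    user: ubuntu
    role: [worker]
    ssh_key_path: ~/.ssh/my_key

Then run rke up in the same directory to bring the cluster up.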

Cloud Options, Running Kubernetes Reliably in the Cloud

Kubernetes is available on major cloud providers such as:

  1. GKE, Google Kubernetes Engine
  2. Azure Kubernetes Service (AKS)
  3. IBM Kubernetes Service

Key Points:

  • Ease of use and speed of creating clusters (see the GKE example after this list)
  • Free trials and free-tier accounts are available
  • The cloud provider takes care of running the cluster control plane
  • Integrated with cloud services such as external load balancers, cloud storage, and DNS
  • More expensive compared to on-prem options
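
For example, on GKE a cluster can be created and accessed with a couple of gcloud commands (the cluster name and zone below are placeholders):

$ gcloud container clusters create demo-cluster --zone europe-west1-b --num-nodes 3
$ gcloud container clusters get-credentials demo-cluster --zone europe-west1-b
$ kubectl get nodes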

Other Options for On-Prem Enterprise Clusters

  • Kubernetes Operations (kops) – Production Grade K8s Installation, Upgrades, and Management
  • Docker EE – The Modern Platform for High-Velocity Innovation
  • PKS – Highly Available, Built for Day 2 Operations
  • OpenShift – The Kubernetes Platform for Big Ideas
  • SUSE CaaS – Kubernetes, Ready for the Enterprise


Free Online Lab Environments for Learning DevOps Tools

Docker ClassRoom

This is an online virtual lab environment with instructions for performing the labs. They range from basic to advanced: Docker basics, Docker Swarm, example application containerisation (such as Node.js and Java), and packaging applications in Docker and then deploying them. Highly recommended for getting started and making sense of the Docker workflow. The environment also includes Docker bash completion, so you don't have to type long commands, and the instructions cover some Docker concepts alongside the labs. https://training.play-with-docker.com/alacart/

PlayWithDocker

Again, this is an online playground environment from Docker, but it simply lets you spin up one or more Docker hosts, in case you already have a book or lab guide to practice with. It also includes bash completion for Docker. https://labs.play-with-docker.com/

Katacoda Docker Course

This is a complete free online course with a built-in lab environment that takes you from beginner to advanced Docker practice. The instructions are just enough to get through the labs; detailed Docker theory is not included. https://www.katacoda.com/courses/docker

Docker Enterprise Hosted Trial

You can subscribe to spin up a Docker Enterprise Edition cluster in the cloud to explore what Docker EE offers; it includes Docker Swarm, Kubernetes, and Docker Trusted Registry. Useful for those planning to use Docker EE as an enterprise production Swarm/Kubernetes solution. https://trial.docker.com/

Play with Kubernetes

Similar to Play with Docker, this playground environment lets you practice your Kubernetes labs in a free virtual lab environment. You can add one or more instances to your playground. https://labs.play-with-k8s.com/

Kubernetes ClassRoom

A free online lab environment, just like the Docker ClassRoom, that contains instructions on how to do things in Kubernetes, plus the ability to practice at the same time. Very useful for getting started with Kubernetes. https://training.play-with-kubernetes.com/kubernetes-workshop/

Katacoda Kubernetes Course

A beginner-to-advanced, interactive Kubernetes course with a lab environment; it also includes how-tos related to Kubernetes operations. https://www.katacoda.com/courses/kubernetes

Hello Minikube

An online Minikube environment to run example workloads and see them in the browser. https://kubernetes.io/docs/tutorials/hello-minikube/

CI/CD with Jenkins and Docker

A scenario-based, interactive course with a lab environment from Katacoda. https://www.katacoda.com/courses/cicd

Other useful courses with online labs from Katacoda

  • Terraform
  • Docker Security
  • Running Java in Docker
  • Git Version Control
  • Running .Net in Docker
  • Service Meshes
  • Prometheus
  • Nomad
  • OpenTracing
  • Consul

Setup Multi-Node Secure Kubernetes Clusters in Minutes, Using Open Source Tools

Setting up multi-node, secure, production Kubernetes clusters can be tricky and hard to manage, or you have to use expensive enterprise tools from different vendors. In this short post, I will go through a few simple steps to set up and manage the life cycle of Kubernetes clusters using open source tools.

Get some nodes and set up password-less SSH access with a sudo user (a key setup sketch follows the node list below).

## Example Nodes with IPs

 node1.example.com    192.168.110.100
 node2.example.com    192.168.110.101
 node3.example.com    192.168.110.102
 node4.example.com    192.168.110.103
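
For example, assuming the vagrant user that appears in the cluster.yml examples below, key-based access could be set up like this:

ssh-keygen -t rsa -f ~/.ssh/my_key
for host in node1 node2 node3 node4; do
  ssh-copy-id -i ~/.ssh/my_key.pub vagrant@${host}.example.com
done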

On your client machine, set up the Kontena Pharos CLI toolchain:

$ curl -s https://get.pharos.sh | bash
$ chpharos login
 Log in using your Kontena Account credentials
 Visit https://account.kontena.io/ to register a new account.
 Username: adam
 Password:
 Logged in.

Once logged in, you can install pharos CLI tool binaries.
Install the latest open source version of Kontena Pharos like this:

$ chpharos install 2.3.3+oss --use

Create the Cluster Configuration File

cluster.yml

hosts:
  - address: 192.168.110.100
    user: vagrant
    role: master
    ssh_key_path: ~/.ssh/my_key
  - address: 192.168.110.101
    user: vagrant
    role: worker
    ssh_key_path: ~/.ssh/my_key
network: {} # Use Weave networking with default config
addons:
  ingress-nginx:
    enabled: true # Enable Nginx ingress controller

Bootstrap your First Pharos Kubernetes Cluster

In the same directory where you created the cluster.yml file, run:

$ pharos up

Interact with the Cluster:

pharos kubeconfig > kubeconfig
export KUBECONFIG="${PWD}/kubeconfig"

To verify everything worked, run:

$ kubectl get nodes

Adding more nodes to the cluster:

Edit the cluster.yml file to add more nodes, addons, and other options.

cluster.yml

hosts:
  - address: 192.168.110.100
    user: vagrant
    role: master
    ssh_key_path: ~/.ssh/my_key
  - address: 192.168.110.101
    user: vagrant
    role: worker
    ssh_key_path: ~/.ssh/my_key
  - address: 192.168.110.102
    user: vagrant
    role: worker
    ssh_key_path: ~/.ssh/my_key

network:
  provider: calico
  pod_network_cidr: 172.31.0.0/16
  service_cidr: 172.32.0.0/16
  calico:
    mtu: 1430

addons:
  helm:
    enabled: true
  ingress-nginx:
    enabled: true # Enable Nginx ingress controller

Apply the new changes:

$ pharos up

Running clusters on machines behind a proxy:

Update the cluster.yml and apply the changes:

hosts:
  - address: 192.168.110.100
    user: vagrant
    role: master
    ssh_key_path: ~/.ssh/my_key
    environment:
      http_proxy: http://example.com:3128
      https_proxy: http://example.com:3128                 
      HTTP_PROXY: http://example.com:3128                 
      no_proxy: localhost,127.0.0.1,172.31.0.0/16,172.32.0.0/16,.example.com,.cluster.local
  - address: 192.168.110.101
    user: vagrant
    role: worker
    ssh_key_path: ~/.ssh/my_key
    environment:
      http_proxy: http://example.com:3128
      https_proxy: http://example.com:3128                 
      HTTP_PROXY: http://example.com:3128                 
      no_proxy: localhost,127.0.0.1,172.31.0.0/16,172.32.0.0/16,.example.com,.cluster.local

  - address: 192.168.110.102
    user: vagrant
    role: worker
    ssh_key_path: ~/.ssh/my_key
    environment:
      http_proxy: http://example.com:3128
      https_proxy: http://example.com:3128                 
      HTTP_PROXY: http://example.com:3128                 
      no_proxy: localhost,127.0.0.1,172.31.0.0/16,172.32.0.0/16,.example.com,.cluster.local

network:
  provider: calico
  pod_network_cidr: 172.31.0.0/16
  service_cidr: 172.32.0.0/16
  calico:
    mtu: 1430

addons:
  helm:
    enabled: true
  ingress-nginx:
    enabled: true # Enable Nginx ingress controller

Apply the new changes:

$ pharos up

Get Nodes

$ kubectl get nodes

NAME        STATUS   ROLES    AGE   VERSION
node-01   Ready    master   46h   v1.13.5
node-02   Ready    worker   46h   v1.13.5
node-03   Ready    worker   46h   v1.13.5
node-04   Ready    worker   22h   v1.13.5

Get Pods

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
calico-kube-controllers-88f69c58f-8xqw8   1/1     Running     1          46h
calico-node-2vq7s                         1/1     Running     1          46h
calico-node-nv6l6                         1/1     Running     1          46h
calico-node-vnm2v                         1/1     Running     1          46h
calico-node-x8p2w                         1/1     Running     0          22h
coredns-7f5dfb7945-dddbx                  1/1     Running     1          46h
coredns-7f5dfb7945-f642g                  1/1     Running     1          46h
etcd                                      1/1     Running     6          46h
kube-apiserver                            1/1     Running     0          22h
kube-controller-manager                   1/1     Running     0          22h
kube-proxy-frr6r                          1/1     Running     0          22h
kube-proxy-grtlq                          1/1     Running     1          46h
kube-proxy-l7cxc                          1/1     Running     1          46h
kube-proxy-ldxwh                          1/1     Running     1          46h
kube-scheduler                            1/1     Running     0          22h
kubelet-rubber-stamp-6f58f64794-275wn     1/1     Running     1          46h
metrics-server-5467b89f97-7f24s           1/1     Running     1          46h
node-local-dns-96xdr                      1/1     Running     1          46h
node-local-dns-c8ngq                      1/1     Running     0          22h
node-local-dns-jxkzm                      1/1     Running     1          46h
node-local-dns-ps52f                      1/1     Running     1          46h
pharos-telemetry-1554386400-b4qxg         0/1     Completed   0          68m
pharos-telemetry-1554390000-n2fc8         0/1     Completed   0          9m2s
tiller-deploy-6cd646b684-2jmnj            1/1     Running     1          46h

Get API resources

$ kubectl api-resources

NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
controllerrevisions                            apps                           true         ControllerRevision
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
replicasets                       rs           apps                           true         ReplicaSet
statefulsets                      sts          apps                           true         StatefulSet
tokenreviews                                   authentication.k8s.io          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch                          true         CronJob
jobs                                           batch                          true         Job
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
leases                                         coordination.k8s.io            true         Lease
bgpconfigurations                              crd.projectcalico.org          false        BGPConfiguration
bgppeers                                       crd.projectcalico.org          false        BGPPeer
blockaffinities                                crd.projectcalico.org          false        BlockAffinity
clusterinformations                            crd.projectcalico.org          false        ClusterInformation
felixconfigurations                            crd.projectcalico.org          false        FelixConfiguration
globalnetworkpolicies                          crd.projectcalico.org          false        GlobalNetworkPolicy
globalnetworksets                              crd.projectcalico.org          false        GlobalNetworkSet
hostendpoints                                  crd.projectcalico.org          false        HostEndpoint
ipamblocks                                     crd.projectcalico.org          false        IPAMBlock
ipamconfigs                                    crd.projectcalico.org          false        IPAMConfig
ipamhandles                                    crd.projectcalico.org          false        IPAMHandle
ippools                                        crd.projectcalico.org          false        IPPool
networkpolicies                                crd.projectcalico.org          true         NetworkPolicy
events                            ev           events.k8s.io                  true         Event
daemonsets                        ds           extensions                     true         DaemonSet
deployments                       deploy       extensions                     true         Deployment
ingresses                         ing          extensions                     true         Ingress
networkpolicies                   netpol       extensions                     true         NetworkPolicy
podsecuritypolicies               psp          extensions                     false        PodSecurityPolicy
replicasets                       rs           extensions                     true         ReplicaSet
nodes                                          metrics.k8s.io                 false        NodeMetrics
pods                                           metrics.k8s.io                 true         PodMetrics
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
roles                                          rbac.authorization.k8s.io      true         Role
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment

Check that the metrics server is working:

$ kubectl top pod -n kube-system
 NAME                                      CPU(cores)   MEMORY(bytes)   
 calico-kube-controllers-88f69c58f-8xqw8   1m           15Mi            
 calico-node-2vq7s                         13m          51Mi            
 calico-node-nv6l6                         15m          53Mi            
 calico-node-vnm2v                         12m          53Mi            
 calico-node-x8p2w                         12m          34Mi            
 coredns-7f5dfb7945-dddbx                  2m           12Mi            
 coredns-7f5dfb7945-f642g                  2m           12Mi            

Next up, we will set up local persistent volumes, a storage class, and volume claims; a rough preview of what that looks like follows.
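
Purely as a sketch, with placeholder names, node, and path, a local StorageClass and PersistentVolume would look something like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-02
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1          # pre-created directory/disk on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-02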

Deploy Atlassian Jira on Docker and Kubernetes

  • Dockerfile to start from
  • Build and push the image to a Docker registry
  • Run on a single Docker host using docker run
  • Run using docker-compose
  • Deploy on Kubernetes

Recently we started deploying Jira on Docker as part of an automated deployment pipeline, where we deploy the Docker image to a test environment, followed by deploying to production. You can do the deployments on standalone Docker hosts, deploy to a Kubernetes cluster when one is available, or both. The workflow is simple and straightforward: build your image on top of either OpenJDK or Oracle JDK, push it to a registry, and then use it in your Docker/Kubernetes deployments.

Get the source and Dockerfile

C02W84XMHTD5:repos iahmad$ git clone https://github.com/cptactionhank/docker-atlassian-jira
Cloning into 'docker-atlassian-jira'...
remote: Enumerating objects: 2946, done.
remote: Total 2946 (delta 0), reused 0 (delta 0), pack-reused 2946
Receiving objects: 100% (2946/2946), 549.08 KiB | 1.33 MiB/s, done.
Resolving deltas: 100% (1834/1834), done.


C02W84XMHTD5:repos iahmad$ cd docker-atlassian-jira/
C02W84XMHTD5:docker-atlassian-jira iahmad$ 

C02W84XMHTD5:docker-atlassian-jira iahmad$ cat Dockerfile 
FROM openjdk:8-alpine

# Configuration variables.
ENV JIRA_HOME     /var/atlassian/jira
ENV JIRA_INSTALL  /opt/atlassian/jira
ENV JIRA_VERSION  8.0.2

# Install Atlassian JIRA and helper tools and setup initial home
# directory structure.
RUN set -x \
    && apk add --no-cache curl xmlstarlet bash ttf-dejavu libc6-compat \
    && mkdir -p                "${JIRA_HOME}" \
    && mkdir -p                "${JIRA_HOME}/caches/indexes" \
    && chmod -R 700            "${JIRA_HOME}" \
    && chown -R daemon:daemon  "${JIRA_HOME}" \
    && mkdir -p                "${JIRA_INSTALL}/conf/Catalina" \
    && curl -Ls                "https://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-core-${JIRA_VERSION}.tar.gz" | tar -xz --directory "${JIRA_INSTALL}" --strip-components=1 --no-same-owner \
    && curl -Ls                "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz" | tar -xz --directory "${JIRA_INSTALL}/lib" --strip-components=1 --no-same-owner "mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar" \
    && rm -f                   "${JIRA_INSTALL}/lib/postgresql-9.1-903.jdbc4-atlassian-hosted.jar" \
    && curl -Ls                "https://jdbc.postgresql.org/download/postgresql-42.2.1.jar" -o "${JIRA_INSTALL}/lib/postgresql-42.2.1.jar" \
    && chmod -R 700            "${JIRA_INSTALL}/conf" \
    && chmod -R 700            "${JIRA_INSTALL}/logs" \
    && chmod -R 700            "${JIRA_INSTALL}/temp" \
    && chmod -R 700            "${JIRA_INSTALL}/work" \
    && chown -R daemon:daemon  "${JIRA_INSTALL}/conf" \
    && chown -R daemon:daemon  "${JIRA_INSTALL}/logs" \
    && chown -R daemon:daemon  "${JIRA_INSTALL}/temp" \
    && chown -R daemon:daemon  "${JIRA_INSTALL}/work" \
    && sed --in-place          "s/java version/openjdk version/g" "${JIRA_INSTALL}/bin/check-java.sh" \
    && echo -e                 "\njira.home=$JIRA_HOME" >> "${JIRA_INSTALL}/atlassian-jira/WEB-INF/classes/jira-application.properties" \
    && touch -d "@0"           "${JIRA_INSTALL}/conf/server.xml"

# Use the default unprivileged account. This could be considered bad practice
# on systems where multiple processes end up being executed by 'daemon' but
# here we only ever run one process anyway.
USER daemon:daemon

# Expose default HTTP connector port.
EXPOSE 8080

# Set volume mount points for installation and home directory. Changes to the
# home directory needs to be persisted as well as parts of the installation
# directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]

# Set the default working directory as the installation directory.
WORKDIR /var/atlassian/jira

COPY "docker-entrypoint.sh" "/"
ENTRYPOINT ["/docker-entrypoint.sh"]

# Run Atlassian JIRA as a foreground process by default.
CMD ["/opt/atlassian/jira/bin/start-jira.sh", "-fg"]

You can change the FROM line to use Oracle JDK, for example:

FROM oracle-serverjre:8

Build the image:

docker build -t jira:8.0 .
Sending build context to Docker daemon  796.2kB
Step 1/12 : FROM openjdk:8-alpine
8-alpine: Pulling from library/openjdk
8e402f1a9c57: Pull complete 
4866c822999c: Pull complete 
0b0a34324bda: Pull complete 
5da56be406b8: Pull complete 
Digest: sha256:9209ebd07f09f2e0f0e8da0884573d852b2fccb05d176852aea94923aaeffbfe
Status: Downloaded newer image for openjdk:8-alpine
 ---> 6eb8392704ff

.....
.....
.....


C02W84XMHTD5:docker-atlassian-jira iahmad$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
jira                8.0                 25c09ad91f96        About a minute ago   508MB

Deploy on test Docker Host

docker run -d --restart=always --name jira-test \
    -p 8080:8080 \
    -p 8443:8443 \
    -v /jira_home:/var/atlassian/jira \
    -v /logs:/logs \
    -e CATALINA_OPTS="-Xms8g -Xmx8g -Datlassian.plugins.enable.wait=600 \
    -e X_PROXY_NAME=test.example.com \
    -e X_PROXY_PORT=443 \
    -e X_PROXY_SCHEME=https \
    -e X_PATH=/panda/jira \
       jira:8.0

Deploy to Production Host

docker run -d --restart=always --name jira-prod \
-p 8080:8080 \
-p 8443:8443 \
-v /jira_home/data:/var/atlassian/jira \
-v /jira/prod/logs:/logs \
-e CATALINA_OPTS="-Xms8g -Xmx8g -Datlassian.plugins.enable.wait=600" \
-e X_PROXY_NAME=production.example.com \
-e X_PROXY_PORT=443 \
-e X_PROXY_SCHEME=https \
-e X_PATH=/panda/jira \
   jira:8.0

You can easily convert the docker run command to docker-compose by using an online tool (https://composerize.com/) to generate the compose file, and then run docker-compose up -d.

docker-compose.yml file

version: '3'
services:
    jira:
        restart: always
        container_name: jira-prod
        ports:
            - '8080:8080'
            - '8443:8443'
        volumes:
            - '/jira_home/data:/var/atlassian/jira'
            - '/jira/prod/logs:/logs'
        environment:
            - 'CATALINA_OPTS=-Xms8g -Xmx8g -Datlassian.plugins.enable.wait=600'
            - X_PROXY_NAME=production.example.com
            - X_PROXY_PORT=443
            - X_PROXY_SCHEME=https
            - X_PATH=/panda/jira
        image: 'jira:8.0'

Once you are ready to migrate to Kubernetes, you just need to provision persistent volumes instead of using the bind mounts.

The Kubernetes manifests should look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: jira-prod-claim0
  name: jira-prod-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: jira-prod-claim1
  name: jira-prod-claim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: jira-prod
  name: jira-prod
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: jira-prod
    spec:
      containers:
      - env:
        - name: CATALINA_OPTS
          value: -Xms8g -Xmx8g -Datlassian.plugins.enable.wait=600 -Datlassian.mail.senddisabled=true
            -Datlassian.mail.fetchdisabled=true
        - name: X_PATH
          value: /panda/jira
        - name: X_PROXY_NAME
          value: production.example.com
        - name: X_PROXY_PORT
          value: "443"
        - name: X_PROXY_SCHEME
          value: https
        image: jira:8.0
        name: jira
        ports:
        - containerPort: 8080
        - containerPort: 8443
        resources: {}
        volumeMounts:
        - mountPath: /var/atlassian/jira
          name: jira-prod-claim0
        - mountPath: /logs
          name: jira-prod-claim1
      restartPolicy: Always
      volumes:
      - name: jira-prod-claim0
        persistentVolumeClaim:
          claimName: jira-prod-claim0
      - name: jira-prod-claim1
        persistentVolumeClaim:
          claimName: jira-prod-claim1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: jira-prod
  name: jira-prod
spec:
  type: NodePort
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  - name: "8443"
    port: 8443
    targetPort: 8443
  selector:
    io.kompose.service: jira-prod

There might be some changes to the above configuration according to your environment, such as the name/address of your external proxy (if any) and the context path on which you need to serve the app.

For new version upgrades, you just need to change the Jira version in the Dockerfile, rebuild and push the image, and trigger your Docker/Kubernetes deployment.
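
A rough sketch of that upgrade step (the registry host is a placeholder; the container is named jira in the deployment above):

# Bump JIRA_VERSION in the Dockerfile, then rebuild and push
docker build -t registry.example.com/jira:8.1 .
docker push registry.example.com/jira:8.1

# Roll the new image out to the Kubernetes deployment
kubectl set image deployment/jira-prod jira=registry.example.com/jira:8.1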

Finally, you can put all of the above into a Jenkins or GitLab CI pipeline.
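
As a minimal GitLab CI sketch of such a pipeline ($CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab-provided variables; the deploy job assumes a runner with kubectl access to the cluster):

stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/jira:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/jira:$CI_COMMIT_SHORT_SHA"

deploy-prod:
  stage: deploy
  when: manual
  script:
    - kubectl set image deployment/jira-prod jira="$CI_REGISTRY_IMAGE/jira:$CI_COMMIT_SHORT_SHA"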

Looking for helm charts? Here is an example: https://itnext.io/jira-on-kubernetes-by-helm-8a38357da4e

Running Docker Engine behind Proxy on RHEL

Running Docker on systems that are behind a corporate proxy can be a pain and can consume a lot of time trying and testing different options and searching for how to make it work. In this short article, I will describe the exact settings that work on RHEL flavours of Linux.

Note that there are two package types on RHEL systems when it comes to installing Docker: docker and docker-latest. The latter is going to be deprecated, so it's recommended to install Docker using the package name docker. However, I will describe the proxy settings for both; there is just a minor difference to be aware of.

The proxy settings for Docker on RHEL that reliably work are set by creating a file under the systemd configuration. Note that you will need to create the parent directory if it doesn't exist already (you will either need to be root or use sudo to run all these commands):

/etc/systemd/system/docker.service.d/http-proxy.conf
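
For the standard docker package, that means creating the directory first if needed; a minimal sketch:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf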

In case of the docker-latest package, it should be:

/etc/systemd/system/docker-latest.service.d/http-proxy.conf

The content of the file should be:

[Service]
Environment=HTTP_PROXY=http://<your-proxy-fqdn>:3128

Once the file is created with the correct proxy settings, you need to reload systemd and restart the Docker service for the changes to take effect.

systemctl daemon-reload
systemctl restart docker.service

If the docker-latest package is installed, the restart commands are:

systemctl daemon-reload
systemctl restart docker-latest.service

After this configuration is done, you will be able to pull and push Docker images to public registries, which you can quickly verify as shown below.
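
For example, a quick check that the daemon can reach a public registry through the proxy (hello-world is just a small public test image):

$ docker pull hello-world
$ docker run --rm hello-world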

DevOps Troubleshooting, Web Stack

A user reports that a web page started to load very slowly and then finally timed out, and you started getting alerts about the same from your monitoring system. How do you troubleshoot this issue, and what could the possible causes be?

  • You reproduced the issue on several web clients, and it turned out there was no client-side issue; something was going on on the server side.
  • You confirmed from the external load balancer settings that the maximum reply time is 30 seconds, and the page is timing out after 30 seconds. But why did the application start responding so slowly that the load balancer timed the requests out? Maybe it is just the new version of the app that was rolled out and is a bit slower than the previous version, possibly due to a bug or some other issue. So you increased the timeout to 60 seconds; it still didn't work. Then you set it to 300, but still the same issue. At this point you go straight to the backend servers to dig deeper.
  • On the server side, you noticed that the Apache httpd services are running, and your monitoring dashboard also shows that Apache and the other services in the stack are up (the ports are listening).
  • On the machines, you ran some commands such as top, ps aux, and free -mh (see the sketch after this list) and noted that system resources such as memory and CPU looked normal. But you also noticed very high load averages.
  • You started thinking about what might be contributing to those high load averages despite resource utilisation otherwise looking fine.
  • While checking process state using top and ps, you found that some application processes are in the uninterruptible D state, meaning they are waiting for I/O, and that is what is driving the high load average numbers. (In the uninterruptible D state, a process is executing a portion of kernel code in which it will not respond to any kind of signal.)
  • By running lsof or looking at the app configuration, you noticed that the application processes waiting for I/O are using an NFS mount point.
  • You had a look at the mount points and ran df, which hung; or maybe df succeeded, but when you ran ls on the mount point it reported a stale mount.
  • In both cases, you found that the issue is actually on the NFS server side. You fixed the NFS server, remounted the mounts, and the application resumed its normal state.
  • Finally, you set the timeout back to 30 seconds on the load balancer and confirmed that everything works normally again.
  • The above troubleshooting scenario can be repeated for all kinds of dependencies that can put the app into a bad state, such as waiting for database connections, waiting for network I/O, running out of file descriptors or process descriptors, or waiting for any other such resource.
  • After doing this kind of troubleshooting, you put all of the above resources/dependencies into your monitoring dashboard so that next time you can see directly where the problem is instead of searching for it.
  • Note: if you repeatedly observe this issue for short intervals, for example when your monitoring shows it in the middle of the night for about 3-5 minutes, then maybe your VMware team has scheduled a job that backs up the disks, and during the backup/snapshot the disk is paused for read/write operations. At that point you either stop snapshotting those disks and do the backups at the application level, do it some other way, or just live with it because it isn't really a problem. (This has been observed on Elasticsearch and Solr Cloud clusters, where every morning some shards/indices go red and need to be restored, and this keeps happening until you stop snapshotting the disks and use Elasticsearch's own snapshot feature instead.)
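
As a reference, a minimal sketch of the checks described above (the mount path and PID are placeholders):

# CPU/memory look normal, but load averages are high
top
free -mh

# Find processes stuck in uninterruptible sleep (D state), i.e. waiting on I/O
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

# See which files/mounts a stuck process is holding open
lsof -p <pid>

# A hanging df, or a "Stale file handle" from ls, points at the NFS server
df -h /mnt/nfs
ls /mnt/nfs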