Deploy Elasticsearch Cluster & Kibana On Kubernetes¶
Nowadays perhaps the most advanced and widely used log management and analysis system is the ELK stack. I have to mention Graylog and Grafana Loki, which are also great and advanced tools for monitoring your environments and collecting their log files.
There is another enterprise-ready and feature-rich log management system based on Elasticsearch and Kibana: OpenSearch. If you are looking for a free alternative to Elasticsearch, you may want to give OpenSearch a try. I'm going to post about OpenSearch as well, but this time I want to show you a method to install Elasticsearch & Kibana on your Kubernetes cluster.
- A working Kubernetes cluster. The current version of my cluster: v1.24.4
- kubectl CLI tool
- An installed and ready-to-use Persistent Volume solution (e.g. Longhorn, OpenEBS, Rook)
- At least 2GB of free memory for Elasticsearch instances.
Set vm.max_map_count To At Least 262144¶
This is a strict requirement of Elasticsearch. You have to set this value on every node where you plan to run Elasticsearch. You can select the nodes that run Elasticsearch with node selectors and node labels.
Add the following line to
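The usual place is /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/):

```
vm.max_map_count=262144
```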
To apply the setting on a live system, run:
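A typical command, assuming root privileges:

```bash
sysctl -w vm.max_map_count=262144
```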
The first and most important thing is to choose the names of your Elasticsearch cluster and instances. We will deploy the Elasticsearch cluster as a StatefulSet, so the instance names will be sequential.
- Create a directory for your certificates:
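For example, using the /tmp/es-certs path that shows up in the listing below:

```bash
mkdir -p /tmp/es-certs
cd /tmp/es-certs
```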
- Create the instances.yml file (an example sketch follows these notes):
- name: elastic-0 must match the StatefulSet name plus the sequence number, appended with a dash.
- The DNS names (for elastic-0 ... elastic-n) must match the name of the StatefulSet (metadata.name: elastic) and the headless Service name: [STATEFULSET_NAME]-[NUMBER].[HEADLESS_SERVICE_NAME].
- The third DNS record is the name of the Kubernetes (headless) Service itself. This one is used inside Kubernetes, for example by Kibana.
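A minimal instances.yml sketch following the naming rules above, assuming three instances (elastic-0 ... elastic-2) and the headless Service es-cluster. The commented ip entry is a placeholder; add whatever external addresses or extra DNS names you will actually use to reach the cluster:

```yaml
instances:
  - name: elastic-0
    dns:
      - elastic-0
      - elastic-0.es-cluster
      - es-cluster
    # ip:
    #   - 192.0.2.10        # placeholder: e.g. a MetalLB address
  - name: elastic-1
    dns:
      - elastic-1
      - elastic-1.es-cluster
      - es-cluster
  - name: elastic-2
    dns:
      - elastic-2
      - elastic-2.es-cluster
      - es-cluster
```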
- Generate the certificates
Run a temporary container to work in:
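For example, with Docker and the official Elasticsearch image (any image that ships elasticsearch-certutil will do; mounting the directory keeps the generated files on the host):

```bash
docker run -it --rm \
  -v /tmp/es-certs:/tmp/es-certs \
  docker.elastic.co/elasticsearch/elasticsearch:8.5.1 bash
```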
Run the following commands inside the container:
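A sketch of the elasticsearch-certutil steps that would produce the layout shown below (the exact flags the original post used may differ; unzip may have to be installed, or the archives extracted on the host):

```bash
cd /tmp/es-certs

# Create a CA in PEM format (ca/ca.crt and ca/ca.key end up in ca.zip)
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem --out /tmp/es-certs/ca.zip
unzip ca.zip

# Issue one certificate + key per instance listed in instances.yml, signed by that CA
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem \
  --ca-cert ca/ca.crt --ca-key ca/ca.key \
  --in instances.yml --out /tmp/es-certs/certs.zip
unzip certs.zip
```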
Exit from the container.
After the certificate generation your folders and files should look like this:
```
/tmp/es-certs/
/tmp/es-certs/certs.zip
/tmp/es-certs/elastic-2
/tmp/es-certs/elastic-2/elastic-2.key
/tmp/es-certs/elastic-2/elastic-2.crt
/tmp/es-certs/ca.zip
/tmp/es-certs/elastic-0
/tmp/es-certs/elastic-0/elastic-0.key
/tmp/es-certs/elastic-0/elastic-0.crt
/tmp/es-certs/instances.yml
/tmp/es-certs/elastic-1
/tmp/es-certs/elastic-1/elastic-1.crt
/tmp/es-certs/elastic-1/elastic-1.key
/tmp/es-certs/ca
/tmp/es-certs/ca/ca.key
/tmp/es-certs/ca/ca.crt
```
- Move all the certificate files into a single directory:
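One possible way to do it, assuming everything is flattened into a certs/ sub-directory so a secret can later be created straight from it:

```bash
mkdir -p /tmp/es-certs/certs
cp /tmp/es-certs/ca/ca.crt /tmp/es-certs/certs/
cp /tmp/es-certs/elastic-*/* /tmp/es-certs/certs/
```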
Now your folder should be similar to this:
Create Kubernetes Secrets & Namespace¶
- Elastic Password
You will use this username/password to log in to Kibana.
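A sketch of the namespace and the secrets referenced later (the secret names elastic-password and es-certs, the key elastic, and the password Admin1234 all match the manifests below; the certificate directory is the one prepared above):

```bash
kubectl create namespace logging

# Password of the built-in elastic user
kubectl -n logging create secret generic elastic-password \
  --from-literal=elastic=Admin1234

# One secret key per certificate file (ca.crt, elastic-0.crt, elastic-0.key, ...)
kubectl -n logging create secret generic es-certs \
  --from-file=/tmp/es-certs/certs/
```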
Elasticsearch StatefulSet & Service¶
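Here is a condensed sketch of the StatefulSet and the headless Service, reusing the namespace, secrets and certificate names from the previous steps. The replica count, cluster name, storage class, Java heap and storage size are assumptions; adjust them to your environment. The parts explained in the next section are marked with comments:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: es-cluster              # headless Service; must match serviceName below
  namespace: logging
spec:
  clusterIP: None
  selector:
    k8s-app: elastic
  ports:
    - name: rest
      port: 9200
    - name: inter-node
      port: 9300
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elastic                 # pod names become elastic-0, elastic-1, ...
  namespace: logging
spec:
  serviceName: es-cluster       # must match the headless Service name
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elastic
  template:
    metadata:
      labels:
        k8s-app: elastic
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: es-certs
          secret:
            secretName: es-certs            # certificate secret created earlier
      containers:
        - name: elastic
          image: docker.elastic.co/elasticsearch/elasticsearch:8.5.1
          env:
            - name: NODENAME                # pod name (elastic-0, elastic-1, ...), reused below
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: cluster.name
              value: es-cluster             # assumption: pick any cluster name
            - name: node.name
              value: $(NODENAME).es-cluster # [POD_NAME].[HEADLESS_SERVICE_NAME]
            - name: discovery.seed_hosts
              value: es-cluster
            - name: cluster.initial_master_nodes
              value: elastic-0.es-cluster,elastic-1.es-cluster
            - name: ES_JAVA_OPTS
              value: '-Xms2g -Xmx2g'
            - name: xpack.security.enabled
              value: 'true'
            - name: xpack.security.http.ssl.enabled
              value: 'true'
            - name: xpack.security.http.ssl.key
              value: certs/$(NODENAME).key
            - name: xpack.security.http.ssl.certificate
              value: certs/$(NODENAME).crt
            - name: xpack.security.http.ssl.certificate_authorities
              value: certs/ca.crt
            - name: xpack.security.transport.ssl.enabled
              value: 'true'
            - name: xpack.security.transport.ssl.key
              value: certs/$(NODENAME).key
            - name: xpack.security.transport.ssl.certificate
              value: certs/$(NODENAME).crt
            - name: xpack.security.transport.ssl.certificate_authorities
              value: certs/ca.crt
            - name: ELASTIC_PASSWORD        # password of the built-in elastic user
              valueFrom:
                secretKeyRef:
                  name: elastic-password
                  key: elastic
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: es-certs
              mountPath: /usr/share/elasticsearch/config/certs
  volumeClaimTemplates:
    - metadata:
        name: es-data
      spec:
        accessModes: ['ReadWriteOnce']
        storageClassName: openebs-hostpath  # assumption: any RWO storage class works
        resources:
          requests:
            storage: 20Gi
```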
Important Parts Of The Manifests¶
I really recommend using some kind of hostPath volume, for example OpenEBS, since Elasticsearch operations can be I/O heavy. If you decide to use OpenEBS hostPath, each pod will always be scheduled to the same host.
This variable is not used directly by the pod itself; it is only there for this manifest. Its value is the name of the StatefulSet, and its purpose is to be referenced in other variables (metadata.name cannot be nested).
This must match the serviceName: es-cluster in this manifest and the name of the headless Service.
Each Elasticsearch instance created by the StatefulSet gets a node name like elastic-0.es-cluster, elastic-1.es-cluster, etc. This is really important for the next parameters.
Now you can see how important it is to decide the names of each component. As I wrote above, the DNS names in instances.yml must match these names.
elastic-0.es-cluster means [POD_NAME].[HEADLESS_SERVICE:metadata.name]. In our case the pod name is always the StatefulSet name plus a sequence number (because of the StatefulSet). This way elastic-[n].es-cluster always points to the current IP address of the pods created by the StatefulSet.
You can increase or decrease the number of Elasticsearch instances, but keep in mind to modify these values:
- Certificate generation: modify instances.yml and regenerate the certificates, but only certs.zip, not the CA! Don't forget to update the Kubernetes secret (a sketch follows this list).
- cluster.initial_master_nodes: update it according to the new set of node names.
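A sketch of the re-generation with the existing CA, followed by refreshing the secret (paths and secret name as in the earlier steps):

```bash
# Re-issue only the node certificates; the CA stays the same
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem \
  --ca-cert /tmp/es-certs/ca/ca.crt --ca-key /tmp/es-certs/ca/ca.key \
  --in /tmp/es-certs/instances.yml --out /tmp/es-certs/certs.zip

# unzip and flatten the new files into /tmp/es-certs/certs/ as before, then:
kubectl -n logging delete secret es-certs
kubectl -n logging create secret generic es-certs --from-file=/tmp/es-certs/certs/
```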
Every node has its own certificate; that's why we need the $(NODENAME) variable. This way certs/$(NODENAME).crt becomes certs/elastic-0.crt for the first pod, certs/elastic-1.crt for the second one, and so on.
You could create a single certificate that holds all of the DNS records for all nodes, but that is an antipattern and not recommended for security reasons.
This is the password for the built-in elastic user.
Here we mount the previously created Kubernetes secret, which contains all of the necessary certificates.
Get into one of the Elasticsearch pods and run the following commands:
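For example (pod name, user and password come from the earlier steps; -k skips certificate verification because we connect to localhost, -i prints the response headers):

```bash
kubectl -n logging exec -it elastic-0 -- bash

# Inside the pod:
curl -ik -u elastic:Admin1234 'https://localhost:9200/_cat/nodes?v'
curl -ik -u elastic:Admin1234 'https://localhost:9200/_cat/allocation?v'
```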
```
HTTP/1.1 200 OK
X-elastic-product: Elasticsearch
content-type: text/plain; charset=UTF-8
content-length: 302

ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
10.26.6.107           14          83   1    1.57    1.83     1.59 cdfhilmrstw -      elastic-0.es-cluster
10.26.4.230           37          83   2    0.58    0.76     0.62 cdfhilmrstw *      elastic-1.es-cluster
```
```
HTTP/1.1 200 OK
X-elastic-product: Elasticsearch
content-type: text/plain; charset=UTF-8
content-length: 314

shards disk.indices disk.used disk.avail disk.total disk.percent host        ip          node
     4       39.9mb    19.2gb     89.3gb    108.5gb           17 10.26.4.230 10.26.4.230 elastic-1.es-cluster
     4       39.8mb    65.8gb     50.2gb    116.1gb           56 10.26.6.107 10.26.6.107 elastic-0.es-cluster
```
As you can see, I have only two nodes at the moment, but everything looks fine.
First, prepare the kibana_system built-in user's password.
Run the following command inside one of your Elasticsearch pods!
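A sketch using the change-password API (KibanaPass123 is just a placeholder; pick your own value):

```bash
curl -k -u elastic:Admin1234 -X POST \
  'https://localhost:9200/_security/user/kibana_system/_password' \
  -H 'Content-Type: application/json' \
  -d '{"password":"KibanaPass123"}'
```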
Do not use the CLI tools (/usr/share/elasticsearch/bin/elasticsearch-*) to update/reset passwords. They create a file inside the /usr/share/elasticsearch/config directory, and after a pod restart that file will be gone.
Please note that the password (elastic:Admin1234) comes from the ELASTIC_PASSWORD environment variable (the pre-created secret).
Create a Kubernetes secret:
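For example (the secret and key names are assumptions; they only have to match what the Kibana Deployment references):

```bash
kubectl -n logging create secret generic kibana-password \
  --from-literal=kibana_system=KibanaPass123
```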
- Kibana uses the same secret to mount the certificates as Elasticsearch (volume es-certs), but with a different mountPath: /usr/share/kibana/config/certs.
- SERVER_PUBLICBASEURL: set this to the hostname you will use in your Ingress. If you skip this step, Kibana will warn you to correct it.
- ELASTICSEARCH_HOSTS: this value points to the headless service. That's why we needed to add es-cluster as a DNS record in instances.yml.
- ELASTICSEARCH_USERNAME: do NOT modify this value. Older versions of Elasticsearch used kibana, but it is deprecated; the username should be kibana_system.
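To make the list above concrete, here is a condensed sketch of a Kibana Deployment under these assumptions (image tag, hostname and secret names are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      volumes:
        - name: es-certs
          secret:
            secretName: es-certs
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:8.5.1
          ports:
            - containerPort: 5601
          env:
            - name: SERVER_PUBLICBASEURL
              value: https://kibana.example.com     # hostname you will use in the Ingress
            - name: ELASTICSEARCH_HOSTS
              value: https://es-cluster:9200        # the headless service
            - name: ELASTICSEARCH_USERNAME
              value: kibana_system
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kibana-password
                  key: kibana_system
            - name: ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES
              value: /usr/share/kibana/config/certs/ca.crt
          volumeMounts:
            - name: es-certs
              mountPath: /usr/share/kibana/config/certs
```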
This is only an example Ingress, so modify it according to your needs.
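A possible sketch, assuming an NGINX ingress controller and a ClusterIP Service named kibana in front of the Kibana pod on port 5601:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: logging
spec:
  ingressClassName: nginx              # assumption: NGINX ingress controller
  rules:
    - host: kibana.example.com         # must match SERVER_PUBLICBASEURL
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana           # assumption: Service exposing the Kibana pod
                port:
                  number: 5601
```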
Send Logs To The Elasticsearch Cluster¶
From inside the Kubernetes cluster it is really simple: just create a headless service:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: es-cluster
  namespace: logging
spec:
  ports:
    - name: rest
      protocol: TCP
      port: 9200
      targetPort: 9200
    - name: inter-node
      protocol: TCP
      port: 9300
      targetPort: 9300
  selector:
    k8s-app: elastic
  clusterIP: None
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
```
Now you can use the es-cluster service name (es-cluster.logging.svc from other namespaces) to send logs to the cluster.
Accessing Elasticsearch from outside the Kubernetes cluster is a bit more complicated and highly depends on your environment. I have never tried it, but you may create an Ingress, since port 9200 serves API calls over HTTP: https://discuss.elastic.co/t/what-are-ports-9200-and-9300-used-for/238578
This way your Elasticsearch cluster may be exposed to the public Internet.
Another way can be using a NodePort service or a MetalLB LoadBalancer service. Example MetalLB service:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  ports:
    - name: tcp-9200
      protocol: TCP
      port: 9200
      targetPort: 9200
  selector:
    k8s-app: elastic
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
```
MetalLB will create NodePort(s) as well.
Remember the DNS config in instances.yml! When you access your Elasticsearch cluster, the DNS name or IP address must match the entries in instances.yml. So if you create a DNS entry for the es.example.com domain, it must be present among the DNS entries. And if you access the ES cluster over the MetalLB service, the IP address of that service must be added to the ip sections.
Because of the Kubernetes Service you don't know which pod will get the request; that's why every node certificate should contain all possible domain names and/or IP addresses.
Bonus - Single Node Deployment¶
If you want to test Elasticsearch or you don't need a multi-node environment, you can deploy Elasticsearch as a single-node instance.
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elastic
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elastic
  template:
    metadata:
      name: elastic
      creationTimestamp: null
      labels:
        k8s-app: elastic
    spec:
      volumes:
        - name: es-data
          persistentVolumeClaim:
            claimName: es-data
      containers:
        - name: elastic
          image: docker.elastic.co/elasticsearch/elasticsearch:8.5.1
          env:
            - name: discovery.type
              value: single-node
            - name: cluster.name
              value: es-single
            - name: node.name
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: ES_JAVA_OPTS
              value: '-Xms2g -Xmx2g'
            - name: xpack.security.enabled
              value: 'true'
            - name: xpack.security.http.ssl.enabled
              value: 'false'
            - name: xpack.security.transport.ssl.enabled
              value: 'false'
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elastic-password
                  key: elastic
          resources:
            limits:
              cpu: 1500m
              memory: 3Gi
            requests:
              cpu: 250m
              memory: 2Gi
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
              subPath: data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext:
        fsGroup: 1000
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
This is very similar to the StatefulSet, but notice the following parameters:
- discovery.type: single-node --> this indicates that only one ES node will be present.
- xpack.security.enabled: true --> without this you won't be able to create users, and you must find another way to protect Kibana (for example Ingress basic auth).
- xpack.security.*.ssl.enabled: false --> use plain HTTP. If you set these to true, you have to generate certificates and set them up as in the StatefulSet.
- The PersistentVolumeClaim must be pre-created.
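For example (storage class and size are assumptions; the claim name must be es-data, as referenced by the Deployment above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data
  namespace: logging
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath   # assumption: any RWO storage class works
  resources:
    requests:
      storage: 20Gi
```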