Redis™ Chart packaged by Bitnami

Redis™ is an advanced key-value cache and store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, sorted sets, bitmaps and hyperloglogs.

Disclaimer: REDIS® is a registered trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by Bitnami is for referential purposes only and does not indicate any sponsorship, endorsement, or affiliation between Redis Labs Ltd. and Bitnami.

TL;DR

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/redis

Introduction

This chart bootstraps a Redis™ deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the BKPR.

Choose between Redis™ Helm Chart and Redis™ Cluster Helm Chart

You can choose either of the two Redis™ Helm charts for deploying a Redis™ cluster. While the Redis™ Helm Chart deploys a master-slave cluster using Redis™ Sentinel, the Redis™ Cluster Helm Chart deploys a Redis™ Cluster topology with sharding. The main features of each chart are the following:

| Redis™ | Redis™ Cluster |
|--------|----------------|
| Supports multiple databases | Supports only one database. Better if you have a big dataset |
| Single write point (single master) | Multiple write points (multiple masters) |
| Redis™ Topology diagram | Redis™ Cluster Topology diagram |

Prerequisites

  • Kubernetes 1.12+
  • Helm 3.1.0
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

$ helm install my-release bitnami/redis

The command deploys Redis™ on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Parameters

The following table lists the configurable parameters of the Redis™ chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| global.imageRegistry | Global Docker image registry | nil |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| global.storageClass | Global storage class for dynamic provisioning | nil |
| global.redis.password | Redis™ password (overrides password) | nil |
| image.registry | Redis™ Image registry | docker.io |
| image.repository | Redis™ Image name | bitnami/redis |
| image.tag | Redis™ Image tag | {TAG_NAME} |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.pullSecrets | Specify docker-registry secret names as an array | nil |
| nameOverride | String to partially override redis.fullname template with a string (will prepend the release name) | nil |
| fullnameOverride | String to fully override redis.fullname template with a string | nil |
| cluster.enabled | Use master-slave topology | true |
| cluster.slaveCount | Number of slaves | 2 |
| existingSecret | Name of existing secret object (for password authentication) | nil |
| existingSecretPasswordKey | Name of key containing password to be retrieved from the existing secret | nil |
| usePassword | Use password | true |
| usePasswordFile | Mount passwords as files instead of environment variables | false |
| password | Redis™ password (ignored if existingSecret set) | Randomly generated |
| configmap | Additional common Redis™ node configuration (this value is evaluated as a template) | See values.yaml |
| clusterDomain | Kubernetes DNS Domain name to use | cluster.local |
| networkPolicy.enabled | Enable NetworkPolicy | false |
| networkPolicy.allowExternal | Don't require client label for connections | true |
| networkPolicy.ingressNSMatchLabels | Allow connections from other namespaces | {} |
| networkPolicy.ingressNSPodMatchLabels | For other namespaces match by pod labels and namespace labels | {} |
| securityContext.* | Other pod security context to be included as-is in the pod spec | {} |
| securityContext.enabled | Enable security context (both redis master and slave pods) | true |
| securityContext.fsGroup | Group ID for the container (both redis master and slave pods) | 1001 |
| containerSecurityContext.* | Other container security context to be included as-is in the container spec | {} |
| containerSecurityContext.enabled | Enable security context (both redis master and slave containers) | true |
| containerSecurityContext.runAsUser | User ID for the container (both redis master and slave containers) | 1001 |
| serviceAccount.create | Specifies whether a ServiceAccount should be created | false |
| serviceAccount.name | The name of the ServiceAccount to create | Generated using the fullname template |
| serviceAccount.annotations | Specifies annotations to add to the ServiceAccount | nil |
| rbac.create | Specifies whether RBAC resources should be created | false |
| rbac.role.rules | Rules to create | [] |
| metrics.enabled | Start a side-car Prometheus exporter | false |
| metrics.image.registry | Redis™ exporter image registry | docker.io |
| metrics.image.repository | Redis™ exporter image name | bitnami/redis-exporter |
| metrics.image.tag | Redis™ exporter image tag | {TAG_NAME} |
| metrics.image.pullPolicy | Image pull policy | IfNotPresent |
| metrics.image.pullSecrets | Specify docker-registry secret names as an array | nil |
| metrics.extraArgs | Extra arguments for the binary; possible values here | {} |
| metrics.podLabels | Additional labels for the metrics exporter pod | {} |
| metrics.podAnnotations | Additional annotations for the metrics exporter pod | {} |
| metrics.resources | Exporter resource requests/limits | Memory: 256Mi, CPU: 100m |
| metrics.serviceMonitor.enabled | If true, creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true) | false |
| metrics.serviceMonitor.namespace | Optional namespace in which Prometheus is running | nil |
| metrics.serviceMonitor.interval | How frequently to scrape metrics (use by default, falling back to Prometheus' default) | nil |
| metrics.serviceMonitor.selector | Defaults to kube-prometheus install (CoreOS recommended), but should be set according to the Prometheus install | { prometheus: kube-prometheus } |
| metrics.serviceMonitor.relabelings | ServiceMonitor relabelings. Value is evaluated as a template | [] |
| metrics.serviceMonitor.metricRelabelings | ServiceMonitor metricRelabelings. Value is evaluated as a template | [] |
| metrics.service.type | Kubernetes Service type (redis metrics) | ClusterIP |
| metrics.service.externalTrafficPolicy | External traffic policy (when service type is LoadBalancer) | Cluster |
| metrics.service.annotations | Annotations for the services to monitor (redis master and redis slave service) | {} |
| metrics.service.labels | Additional labels for the metrics service | {} |
| metrics.service.loadBalancerIP | loadBalancerIP if redis metrics service type is LoadBalancer | nil |
| metrics.priorityClassName | Metrics exporter pod priorityClassName | nil |
| metrics.prometheusRule.enabled | Set this to true to create prometheusRules for Prometheus operator | false |
| metrics.prometheusRule.additionalLabels | Additional labels that can be used so prometheusRules will be discovered by Prometheus | {} |
| metrics.prometheusRule.namespace | Namespace where the prometheusRules resource should be created | Same namespace as redis |
| metrics.prometheusRule.rules | Rules to be created, check values for an example | [] |
| persistence.existingClaim | Provide an existing PersistentVolumeClaim | nil |
| master.persistence.enabled | Use a PVC to persist data (master node) | true |
| master.hostAliases | Add deployment host aliases | [] |
| master.persistence.path | Path to mount the volume at, to use other images | /data |
| master.persistence.subPath | Subdirectory of the volume to mount at | "" |
| master.persistence.storageClass | Storage class of backing PVC | generic |
| master.persistence.accessModes | Persistent Volume Access Modes | [ReadWriteOnce] |
| master.persistence.size | Size of data volume | 8Gi |
| master.persistence.matchLabels | matchLabels persistent volume selector | {} |
| master.persistence.matchExpressions | matchExpressions persistent volume selector | {} |
| master.persistence.volumes | Additional volumes without creating PVC | {} |
| master.statefulset.labels | Additional labels for redis master StatefulSet | {} |
| master.statefulset.annotations | Additional annotations for redis master StatefulSet | {} |
| master.statefulset.updateStrategy | Update strategy for StatefulSet | onDelete |
| master.statefulset.rollingUpdatePartition | Partition update strategy | nil |
| master.statefulset.volumeClaimTemplates.labels | Additional labels for redis master StatefulSet volumeClaimTemplates | {} |
| master.statefulset.volumeClaimTemplates.annotations | Additional annotations for redis master StatefulSet volumeClaimTemplates | {} |
| master.podLabels | Additional labels for Redis™ master pod | {} |
| master.podAnnotations | Additional annotations for Redis™ master pod | {} |
| master.extraEnvVars | Additional Environment Variables passed to the pod of the master's StatefulSet | [] |
| master.extraEnvVarCMs | Additional Environment Variables ConfigMap passed to the pod of the master's StatefulSet | [] |
| master.extraEnvVarsSecret | Additional Environment Variables Secret passed to the master's StatefulSet | [] |
| podDisruptionBudget.enabled | Pod Disruption Budget toggle | false |
| podDisruptionBudget.minAvailable | Minimum available pods | 1 |
| podDisruptionBudget.maxUnavailable | Maximum unavailable pods | nil |
| redisPort | Redis™ port (in both master and slaves) | 6379 |
| tls.enabled | Enable TLS support for replication traffic | false |
| tls.authClients | Require clients to authenticate or not | true |
| tls.certificatesSecret | Name of the secret that contains the certificates | nil |
| tls.certFilename | Certificate filename | nil |
| tls.certKeyFilename | Certificate key filename | nil |
| tls.certCAFilename | CA Certificate filename | nil |
| tls.dhParamsFilename | DH params (in order to support DH based ciphers) | nil |
| master.command | Redis™ master entrypoint string. The command redis-server is executed if this is not provided. Note this is prepended with exec | /run.sh |
| master.preExecCmds | Text to insert into the startup script immediately prior to master.command. Use this if you need to run other ad-hoc commands as part of startup | nil |
| master.configmap | Additional Redis™ configuration for the master nodes (this value is evaluated as a template) | nil |
| master.disableCommands | Array of Redis™ commands to disable (master) | ["FLUSHDB", "FLUSHALL"] |
| master.extraFlags | Redis™ master additional command line flags | [] |
| master.nodeSelector | Redis™ master Node labels for pod assignment | {"beta.kubernetes.io/arch": "amd64"} |
| master.tolerations | Toleration labels for Redis™ master pod assignment | [] |
| master.affinity | Affinity settings for Redis™ master pod assignment | {} |
| master.schedulerName | Name of an alternate scheduler | nil |
| master.service.type | Kubernetes Service type (redis master) | ClusterIP |
| master.service.externalTrafficPolicy | External traffic policy (when service type is LoadBalancer) | Cluster |
| master.service.port | Kubernetes Service port (redis master) | 6379 |
| master.service.nodePort | Kubernetes Service nodePort (redis master) | nil |
| master.service.annotations | Annotations for redis master service | {} |
| master.service.labels | Additional labels for redis master service | {} |
| master.service.loadBalancerIP | loadBalancerIP if redis master service type is LoadBalancer | nil |
| master.service.loadBalancerSourceRanges | loadBalancerSourceRanges if redis master service type is LoadBalancer | nil |
| master.resources | Redis™ master CPU/Memory resource requests/limits | Memory: 256Mi, CPU: 100m |
| master.livenessProbe.enabled | Turn on and off liveness probe (redis master pod) | true |
| master.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated (redis master pod) | 5 |
| master.livenessProbe.periodSeconds | How often to perform the probe (redis master pod) | 5 |
| master.livenessProbe.timeoutSeconds | When the probe times out (redis master pod) | 5 |
| master.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (redis master pod) | 1 |
| master.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded (redis master pod) | 5 |
| master.readinessProbe.enabled | Turn on and off readiness probe (redis master pod) | true |
| master.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated (redis master pod) | 5 |
| master.readinessProbe.periodSeconds | How often to perform the probe (redis master pod) | 5 |
| master.readinessProbe.timeoutSeconds | When the probe times out (redis master pod) | 1 |
| master.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (redis master pod) | 1 |
| master.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded (redis master pod) | 5 |
| master.shareProcessNamespace | Redis™ Master pod shareProcessNamespace option. Enables /pause reap zombie PIDs | false |
| master.priorityClassName | Redis™ Master pod priorityClassName | nil |
| volumePermissions.enabled | Enable init container that changes volume permissions in the registry (for cases where the default k8s runAsUser and fsUser values do not work) | false |
| volumePermissions.image.registry | Init container volume-permissions image registry | docker.io |
| volumePermissions.image.repository | Init container volume-permissions image name | bitnami/minideb |
| volumePermissions.image.tag | Init container volume-permissions image tag | buster |
| volumePermissions.image.pullPolicy | Init container volume-permissions image pull policy | Always |
| volumePermissions.resources | Init container volume-permissions CPU/Memory resource requests/limits | {} |
| volumePermissions.securityContext.* | Security context of the init container | {} |
| volumePermissions.securityContext.runAsUser | UserID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | 0 |
| slave.hostAliases | Add deployment host aliases | [] |
| slave.service.type | Kubernetes Service type (redis slave) | ClusterIP |
| slave.service.externalTrafficPolicy | External traffic policy (when service type is LoadBalancer) | Cluster |
| slave.service.nodePort | Kubernetes Service nodePort (redis slave) | nil |
| slave.service.annotations | Annotations for redis slave service | {} |
| slave.service.labels | Additional labels for redis slave service | {} |
| slave.service.port | Kubernetes Service port (redis slave) | 6379 |
| slave.service.loadBalancerIP | loadBalancerIP if Redis™ slave service type is LoadBalancer | nil |
| slave.service.loadBalancerSourceRanges | loadBalancerSourceRanges if Redis™ slave service type is LoadBalancer | nil |
| slave.command | Redis™ slave entrypoint string. The command redis-server is executed if this is not provided. Note this is prepended with exec | /run.sh |
| slave.preExecCmds | Text to insert into the startup script immediately prior to slave.command. Use this if you need to run other ad-hoc commands as part of startup | nil |
| slave.configmap | Additional Redis™ configuration for the slave nodes (this value is evaluated as a template) | nil |
| slave.disableCommands | Array of Redis™ commands to disable (slave) | [FLUSHDB, FLUSHALL] |
| slave.extraFlags | Redis™ slave additional command line flags | [] |
| slave.livenessProbe.enabled | Turn on and off liveness probe (redis slave pod) | true |
| slave.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated (redis slave pod) | 5 |
| slave.livenessProbe.periodSeconds | How often to perform the probe (redis slave pod) | 5 |
| slave.livenessProbe.timeoutSeconds | When the probe times out (redis slave pod) | 5 |
| slave.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (redis slave pod) | 1 |
| slave.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded (redis slave pod) | 5 |
| slave.readinessProbe.enabled | Turn on and off readiness probe (redis slave pod) | true |
| slave.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated (redis slave pod) | 5 |
| slave.readinessProbe.periodSeconds | How often to perform the probe (redis slave pod) | 5 |
| slave.readinessProbe.timeoutSeconds | When the probe times out (redis slave pod) | 1 |
| slave.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (redis slave pod) | 1 |
| slave.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded (redis slave pod) | 5 |
| slave.shareProcessNamespace | Redis™ slave pod shareProcessNamespace option. Enables /pause reap zombie PIDs | false |
| slave.persistence.enabled | Use a PVC to persist data (slave node) | true |
| slave.persistence.path | Path to mount the volume at, to use other images | /data |
| slave.persistence.subPath | Subdirectory of the volume to mount at | "" |
| slave.persistence.storageClass | Storage class of backing PVC | generic |
| slave.persistence.accessModes | Persistent Volume Access Modes | [ReadWriteOnce] |
| slave.persistence.size | Size of data volume | 8Gi |
| slave.persistence.matchLabels | matchLabels persistent volume selector | {} |
| slave.persistence.matchExpressions | matchExpressions persistent volume selector | {} |
| slave.statefulset.labels | Additional labels for redis slave StatefulSet | {} |
| slave.statefulset.annotations | Additional annotations for redis slave StatefulSet | {} |
| slave.statefulset.updateStrategy | Update strategy for StatefulSet | onDelete |
| slave.statefulset.rollingUpdatePartition | Partition update strategy | nil |
| slave.statefulset.volumeClaimTemplates.labels | Additional labels for redis slave StatefulSet volumeClaimTemplates | {} |
| slave.statefulset.volumeClaimTemplates.annotations | Additional annotations for redis slave StatefulSet volumeClaimTemplates | {} |
| slave.extraEnvVars | Additional Environment Variables passed to the pod of the slave's StatefulSet | [] |
| slave.extraEnvVarCMs | Additional Environment Variables ConfigMap passed to the pod of the slave's StatefulSet | [] |
| slave.extraEnvVarsSecret | Additional Environment Variables Secret passed to the slave's StatefulSet | [] |
| slave.podLabels | Additional labels for Redis™ slave pod | master.podLabels |
| slave.podAnnotations | Additional annotations for Redis™ slave pod | master.podAnnotations |
| slave.schedulerName | Name of an alternate scheduler | nil |
| slave.resources | Redis™ slave CPU/Memory resource requests/limits | {} |
| slave.affinity | Enable node/pod affinity for slaves | {} |
| slave.tolerations | Toleration labels for Redis™ slave pod assignment | [] |
| slave.spreadConstraints | Topology Spread Constraints for Redis™ slave pod | {} |
| slave.priorityClassName | Redis™ Slave pod priorityClassName | nil |
| sentinel.enabled | Enable sentinel containers | false |
| sentinel.usePassword | Use password for sentinel containers | true |
| sentinel.masterSet | Name of the sentinel master set | mymaster |
| sentinel.initialCheckTimeout | Timeout for querying the redis sentinel service for the active sentinel list | 5 |
| sentinel.quorum | Quorum for electing a new master | 2 |
| sentinel.downAfterMilliseconds | Timeout for detecting a Redis™ node is down | 60000 |
| sentinel.failoverTimeout | Timeout for performing an election failover | 18000 |
| sentinel.parallelSyncs | Number of parallel syncs in the cluster | 1 |
| sentinel.port | Redis™ Sentinel port | 26379 |
| sentinel.configmap | Additional Redis™ configuration for the sentinel nodes (this value is evaluated as a template) | nil |
| sentinel.staticID | Enable static IDs for sentinel replicas (if disabled, IDs will be randomly generated on startup) | false |
| sentinel.service.type | Kubernetes Service type (redis sentinel) | ClusterIP |
| sentinel.service.externalTrafficPolicy | External traffic policy (when service type is LoadBalancer) | Cluster |
| sentinel.service.nodePort | Kubernetes Service nodePort (redis sentinel) | nil |
| sentinel.service.annotations | Annotations for redis sentinel service | {} |
| sentinel.service.labels | Additional labels for redis sentinel service | {} |
| sentinel.service.redisPort | Kubernetes Service port for Redis™ read only operations | 6379 |
| sentinel.service.sentinelPort | Kubernetes Service port for Redis™ sentinel | 26379 |
| sentinel.service.redisNodePort | Kubernetes Service node port for Redis™ read only operations | "" |
| sentinel.service.sentinelNodePort | Kubernetes Service node port for Redis™ sentinel | "" |
| sentinel.service.loadBalancerIP | loadBalancerIP if Redis™ sentinel service type is LoadBalancer | nil |
| sentinel.livenessProbe.enabled | Turn on and off liveness probe (redis sentinel pod) | true |
| sentinel.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated (redis sentinel pod) | 5 |
| sentinel.livenessProbe.periodSeconds | How often to perform the probe (redis sentinel container) | 5 |
| sentinel.livenessProbe.timeoutSeconds | When the probe times out (redis sentinel container) | 5 |
| sentinel.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (redis sentinel container) | 1 |
| sentinel.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded (redis sentinel container) | 5 |
| sentinel.readinessProbe.enabled | Turn on and off readiness probe (redis sentinel pod) | true |
| sentinel.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated (redis sentinel pod) | 5 |
| sentinel.readinessProbe.periodSeconds | How often to perform the probe (redis sentinel pod) | 5 |
| sentinel.readinessProbe.timeoutSeconds | When the probe times out (redis sentinel container) | 1 |
| sentinel.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (redis sentinel container) | 1 |
| sentinel.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded (redis sentinel container) | 5 |
| sentinel.resources | Redis™ sentinel CPU/Memory resource requests/limits | {} |
| sentinel.image.registry | Redis™ Sentinel Image registry | docker.io |
| sentinel.image.repository | Redis™ Sentinel Image name | bitnami/redis-sentinel |
| sentinel.image.tag | Redis™ Sentinel Image tag | {TAG_NAME} |
| sentinel.image.pullPolicy | Image pull policy | IfNotPresent |
| sentinel.image.pullSecrets | Specify docker-registry secret names as an array | nil |
| sentinel.extraEnvVars | Additional Environment Variables passed to the pod of the sentinel node StatefulSet | [] |
| sentinel.extraEnvVarCMs | Additional Environment Variables ConfigMap passed to the pod of the sentinel node StatefulSet | [] |
| sentinel.extraEnvVarsSecret | Additional Environment Variables Secret passed to the sentinel node StatefulSet | [] |
| sentinel.preExecCmds | Text to insert into the startup script immediately prior to sentinel.command. Use this if you need to run other ad-hoc commands as part of startup | nil |
| sysctlImage.enabled | Enable an init container to modify Kernel settings | false |
| sysctlImage.command | sysctlImage command to execute | [] |
| sysctlImage.registry | sysctlImage Init container registry | docker.io |
| sysctlImage.repository | sysctlImage Init container name | bitnami/minideb |
| sysctlImage.tag | sysctlImage Init container tag | buster |
| sysctlImage.pullPolicy | sysctlImage Init container pull policy | Always |
| sysctlImage.mountHostSys | Mount the host /sys folder to /host-sys | false |
| sysctlImage.resources | sysctlImage Init container CPU/Memory resource requests/limits | {} |
| podSecurityPolicy.create | Specifies whether a PodSecurityPolicy should be created | false |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install my-release \
  --set password=secretpassword \
  bitnami/redis

The above command sets the Redis™ server password to secretpassword.

NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install my-release -f values.yaml bitnami/redis

Tip: You can use the default values.yaml
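
For instance, a minimal values.yaml sketch (the values shown are illustrative; every key maps to a parameter in the table above):

password: secretpassword
cluster:
  enabled: true
  slaveCount: 3
metrics:
  enabled: true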

Note for minikube users: Current versions of minikube (v0.24.1 at the time of writing) provision hostPath persistent volumes that are only writable by root. Using the chart defaults will cause the Redis™ pod to fail, as it attempts to write to the /bitnami directory. Consider installing Redis™ with --set persistence.enabled=false. See minikube issue 1990 for more information.

Configuration and installation details

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.

Change Redis™ version

To modify the Redis™ version used in this chart you can specify a valid image tag using the image.tag parameter. For example, image.tag=X.Y.Z. This approach is also applicable to other images like exporters.
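
For instance, a sketch pinning a specific version (the tag shown is illustrative; use any valid tag published for the bitnami/redis image):

$ helm install my-release bitnami/redis --set image.tag=6.0.9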

Cluster topologies

Default: Master-Slave

When installing the chart with cluster.enabled=true, it will deploy a Redis™ master StatefulSet (only one master node allowed) and a Redis™ slave StatefulSet. The slaves will be read-replicas of the master. Two services will be exposed:

  • Redis™ Master service: Points to the master, where read-write operations can be performed.
  • Redis™ Slave service: Points to the slaves, where only read operations are allowed.

In case the master crashes, the slaves will wait until the master node is respawned again by the Kubernetes Controller Manager.
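
As a sketch, clients inside the cluster can reach each service by its DNS name (the names below assume a release called my-release in the default namespace):

$ redis-cli -h my-release-redis-master -p 6379   # read-write operations
$ redis-cli -h my-release-redis-slave -p 6379    # read-only operations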

Master-Slave with Sentinel

When installing the chart with cluster.enabled=true and sentinel.enabled=true, it will deploy a Redis™ master StatefulSet (only one master allowed) and a Redis™ slave StatefulSet. In this case, the pods will contain an extra container with Redis™ Sentinel. This container will form a cluster of Redis™ Sentinel nodes, which will promote a new master in case the current one fails. In addition to this, only one service is exposed:

  • Redis™ service: Exposes port 6379 for Redis™ read-only operations and port 26379 for accessing Redis™ Sentinel.

For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Redis™ Sentinel cluster and query the current master using the command below (using redis-cli or similar):

SENTINEL get-master-addr-by-name <name of your MasterSet. Example: mymaster>

This command will return the address of the current master, which can be accessed from inside the cluster.

In case the current master crashes, the Sentinel containers will elect a new master node.
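
A minimal sketch of the full flow from inside the cluster, assuming a release named my-release and the default masterSet name mymaster:

$ redis-cli -h my-release-redis -p 26379 SENTINEL get-master-addr-by-name mymaster   # returns <master-ip> <port>
$ redis-cli -h <master-ip> -p 6379                                                   # connect here for write operations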

Using password file

To use a password file for Redis™ you need to create a secret containing the password.

NOTE: The file containing the password must be called redis-password.
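
For example, a sketch that creates such a secret from a local file (the secret name redis-password-file is an assumption matching the existingSecret value used below; the key inside the secret must be redis-password):

$ kubectl create secret generic redis-password-file --from-file=redis-password=./redis-password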

And then deploy the Helm Chart using the secret name as parameter:

usePassword=true
usePasswordFile=true
existingSecret=redis-password-file
sentinel.enabled=true
metrics.enabled=true
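
Put together, a sketch of the corresponding install command:

$ helm install my-release bitnami/redis \
  --set usePassword=true \
  --set usePasswordFile=true \
  --set existingSecret=redis-password-file \
  --set sentinel.enabled=true \
  --set metrics.enabled=true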

Securing traffic using TLS

TLS support can be enabled in the chart by specifying the tls.* parameters when creating a release. The following parameters should be configured to properly enable TLS support in the chart:

  • tls.enabled: Enable TLS support. Defaults to false
  • tls.certificatesSecret: Name of the secret that contains the certificates. No defaults.
  • tls.certFilename: Certificate filename. No defaults.
  • tls.certKeyFilename: Certificate key filename. No defaults.
  • tls.certCAFilename: CA Certificate filename. No defaults.

For example:

First, create the secret with the certificate files:

kubectl create secret generic certificates-tls-secret --from-file=./cert.pem --from-file=./cert.key --from-file=./ca.pem

Then, use the following parameters:

tls.enabled="true"
tls.certificatesSecret="certificates-tls-secret"
tls.certFilename="cert.pem"
tls.certKeyFilename="cert.key"
tls.certCAFilename="ca.pem"
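
As with any other parameters, these can be passed to helm install with --set flags; a sketch:

$ helm install my-release bitnami/redis \
  --set tls.enabled=true \
  --set tls.certificatesSecret=certificates-tls-secret \
  --set tls.certFilename=cert.pem \
  --set tls.certKeyFilename=cert.key \
  --set tls.certCAFilename=ca.pem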

Metrics

The chart can optionally start a metrics exporter for Prometheus. The metrics endpoint (port 9121) is exposed in the service. Metrics can be scraped from within the cluster using something similar to the example Prometheus scrape configuration. If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.

If you have enabled TLS by specifying tls.enabled=true, you also need to pass TLS options to the metrics exporter. You can do that via metrics.extraArgs. You can find the metrics exporter CLI flags for TLS here.

You can either specify metrics.extraArgs.skip-tls-verification=true to skip TLS verification, or provide the following values under metrics.extraArgs for TLS client authentication:

  • tls-client-key-file
  • tls-client-cert-file
  • tls-ca-cert-file
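
For example, a values sketch for TLS client authentication (the file paths are illustrative and must point to wherever the certificates are mounted inside the exporter container):

metrics:
  extraArgs:
    tls-client-key-file: /certs/cert.key
    tls-client-cert-file: /certs/cert.pem
    tls-ca-cert-file: /certs/ca.pem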

Host Kernel Settings

Redis™ may require some changes in the kernel of the host machine to work as expected, in particular increasing the somaxconn value and disabling transparent huge pages. To do so, you can set up a privileged initContainer with the sysctlImage config values, for example:

sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      install_packages procps
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled

Alternatively, for Kubernetes 1.12+ you can set securityContext.sysctls which will configure sysctls for master and slave pods. Example:

securityContext:
  sysctls:
  - name: net.core.somaxconn
    value: "10000"

Note that this will not disable transparent huge pages.

Persistence

By default, the chart mounts a Persistent Volume at the /data path. The volume is created using dynamic volume provisioning. If a Persistent Volume Claim already exists, specify it during installation.

Existing PersistentVolumeClaim

  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart
$ helm install my-release --set persistence.existingClaim=PVC_NAME bitnami/redis
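
A minimal sketch of step 2 (the claim name matches PVC_NAME in the command above; size, access mode and storage class are illustrative, and step 1 can be skipped when a dynamic provisioner backs the storage class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: PVC_NAME
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi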

Backup and restore

Backup

To perform a backup you will need to connect to one of the nodes and execute:

$ kubectl exec -it my-redis-master-0 bash

$ redis-cli
127.0.0.1:6379> auth your_current_redis_password
OK
127.0.0.1:6379> save
OK

Then you will need to get the created dump file from the redis node:

$ kubectl cp my-redis-master-0:/data/dump.rdb dump.rdb -c redis

Restore

To restore in a new cluster, you will need to change a parameter in the redis.conf file and then upload the dump.rdb to the volume.

Follow these steps:

  • First, set the appendonly parameter to no in your values.yaml (if it is already no, you can skip this step):
configmap: |-
  # Disable AOF https://redis.io/topics/persistence#append-only-file
  appendonly no
  # Disable RDB persistence snapshots.
  save ""
  • Start the new cluster to create the PVCs.

For example:

helm install new-redis -f values.yaml . --set cluster.enabled=true --set cluster.slaveCount=3
  • Now that the PVCs have been created, stop the release and copy the dump.rdb onto the persisted data by using a helper pod.
$ helm delete new-redis

$ kubectl run --generator=run-pod/v1 -i --rm --tty volpod --overrides='
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "redisvolpod"
    },
    "spec": {
        "containers": [{
            "command": [
                "tail",
                "-f",
                "/dev/null"
            ],
            "image": "bitnami/minideb",
            "name": "mycontainer",
            "volumeMounts": [{
                "mountPath": "/mnt",
                "name": "redisdata"
            }]
        }],
        "restartPolicy": "Never",
        "volumes": [{
            "name": "redisdata",
            "persistentVolumeClaim": {
                "claimName": "redis-data-new-redis-master-0"
            }
        }]
    }
}' --image="bitnami/minideb"

$ kubectl cp dump.rdb redisvolpod:/mnt/dump.rdb
$ kubectl delete pod volpod
  • Start the cluster again:
helm install new-redis -f values.yaml . --set cluster.enabled=true --set cluster.slaveCount=3

NetworkPolicy

To enable network policy for Redis™, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for all pods in the namespace:

kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"

With NetworkPolicy enabled, only pods with the generated client label will be able to connect to Redis™. This label will be displayed in the output after a successful install.

With networkPolicy.ingressNSMatchLabels, pods from other namespaces can connect to redis. Set networkPolicy.ingressNSPodMatchLabels to match pod labels in the matched namespace. For example, for a namespace labeled redis=external and pods in that namespace labeled redis-client=true, the fields should be set:

networkPolicy:
  enabled: true
  ingressNSMatchLabels:
    redis: external
  ingressNSPodMatchLabels:
    redis-client: "true"
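
A sketch of applying the matching labels (the namespace and pod names are illustrative):

$ kubectl label namespace client-ns redis=external
$ kubectl label pod my-client --namespace client-ns redis-client=true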

Troubleshooting

Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.

Upgrading an existing Release to a new major version

A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an incompatible breaking change needing manual actions.

To 11.0.0

When using sentinel, a new statefulset called -node was introduced. This will break upgrading from a previous version where the statefulsets are called master and slave. Hence the PVC will not match the new naming and won't be reused. If you want to keep your data, you will need to perform a backup and then restore the data in this new version.

To 10.0.0

For releases with usePassword: true, the value sentinel.usePassword controls whether the password authentication also applies to the sentinel port. This defaults to true for a secure configuration, however it is possible to disable it to account for the following cases:

  • Using a version of redis-sentinel prior to 5.0.1 where the authentication feature was introduced.
  • Where redis clients need to be updated to support sentinel authentication.

If using a master/slave topology, or with usePassword: false, no action is required.

To 8.0.18

For releases with metrics.enabled: true the default tag for the exporter image is now v1.x.x. This introduces many changes including metrics names. You'll want to use this dashboard now. Please see the redis_exporter github page for more details.

To 7.0.0

This version causes a change in the Redis™ Master StatefulSet definition, so the command helm upgrade would not work out of the box. As an alternative, one of the following could be done:

  • Recommended: Create a clone of the Redis™ Master PVC (for example, using projects like this one). Then launch a fresh release reusing this cloned PVC.

    helm install my-release bitnami/redis --set persistence.existingClaim=<NEW PVC>
    
  • Alternative (not recommended, do at your own risk): helm delete --purge does not remove the PVC assigned to the Redis™ Master StatefulSet. As a consequence, the following commands can be used to upgrade the release:

    helm delete --purge <RELEASE>
    helm install <RELEASE> bitnami/redis
    

Previous versions of the chart were not using persistence in the slaves, so this upgrade would add it to them. Another important change is that no values are inherited from master to slaves. For example, in 6.0.0 slaves.readinessProbe.periodSeconds, if empty, would be set to master.readinessProbe.periodSeconds. This approach lacked transparency and was difficult to maintain. From now on, all the slave parameters must be configured just as is done for the masters.

Some values have changed as well:

  • master.port and slave.port have been changed to redisPort (same value for both master and slaves)
  • master.securityContext and slave.securityContext have been changed to securityContext (same values for both master and slaves); see the sketch below
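
As a sketch of the rename (the values shown are illustrative), a fragment that used the old keys:

master:
  port: 6379
  securityContext:
    enabled: true
slave:
  port: 6379

becomes:

redisPort: 6379
securityContext:
  enabled: true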

By default, the upgrade will not change the cluster topology. In case you want to use Redis™ Sentinel, you must explicitly set sentinel.enabled to true.

To 6.0.0

Previous versions of the chart were using an init-container to change the permissions of the volumes. This was done in case the securityContext directive in the template was not enough for that (for example, with cephFS). In this new version of the chart, this container is disabled by default (which should not affect most deployments). If your installation still requires that init container, execute helm upgrade with --set volumePermissions.enabled=true, as in the sketch below.
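
For example (the release name is illustrative):

$ helm upgrade my-release bitnami/redis --set volumePermissions.enabled=true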

To 5.0.0

The default image in this release may be switched out for any image containing the redis-server and redis-cli binaries. If redis-server is not the default image ENTRYPOINT, master.command must be specified.

Breaking changes

  • master.args and slave.args are removed. Use master.command or slave.command instead in order to override the image entrypoint, or master.extraFlags to pass additional flags to redis-server.
  • disableCommands is now interpreted as an array of strings instead of a string of comma separated values.
  • master.persistence.path now defaults to /data.

To 4.0.0

This version removes the chart label from the spec.selector.matchLabels which is immutable since StatefulSet apps/v1beta2. It has been inadvertently added, causing any subsequent upgrade to fail. See https://github.com/helm/charts/issues/7726.

It also fixes https://github.com/helm/charts/issues/7726 where a deployment extensions/v1beta1 cannot be upgraded if spec.selector is not explicitly set.

Finally, it fixes https://github.com/helm/charts/issues/7803 by removing mutable labels in spec.VolumeClaimTemplate.metadata.labels so that it is upgradable.

In order to upgrade, delete the Redis™ StatefulSet before upgrading:

kubectl delete statefulsets.apps --cascade=false my-release-redis-master

And edit the Redis™ slave (and metrics if enabled) deployment:

kubectl patch deployments my-release-redis-slave --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/chart"}]'
kubectl patch deployments my-release-redis-metrics --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/chart"}]'

Upgrading

To 12.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the required changes applied to the Helm Chart so it can incorporate the different features added in Helm v3 and stay consistent with the Helm project itself regarding the Helm v2 EOL.

What changes were introduced in this major version?

  • Previous versions of this Helm Chart used apiVersion: v1 (installable by both Helm 2 and 3); this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts

Considerations when upgrading to this version

  • If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
  • If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3

To 11.0.0

When deployed with sentinel enabled, only a group of nodes is deployed, and the master/slave role is handled within the group. To avoid breaking compatibility, the settings for these nodes are given through the slave.xxxx parameters in values.yaml.

To 9.0.0

The metrics exporter has been changed from a separate deployment to a sidecar container, due to the latest changes in the Redis™ exporter code. Check the official page for more information. The metrics container image was changed from oliver006/redis_exporter to bitnami/redis-exporter (Bitnami's maintained package of oliver006/redis_exporter).

To 7.0.0

In order to improve the performance in case of slave failure, we added persistence to the read-only slaves. That means that we moved from Deployment to StatefulSets. This should not affect upgrades from previous versions of the chart, as the deployments did not contain any persistence at all.

This version also allows enabling Redis™ Sentinel containers inside of the Redis™ pods (feature disabled by default). In case the master crashes, a new Redis™ node will be elected as master. In order to query the current master (no redis master service is exposed), you need to query the Sentinel cluster first. Find more information in this section.