k8s/README.md

# How to deploy read/write core pairs on Kubernetes

The current technique installs a separate rw-core deployment on each Kubernetes node, where each deployment consists of a pair (replicas: 2) of co-located rw-cores. Co-location is enforced with a Kubernetes nodeSelector constraint applied at the pod spec level.

For node selection to work, a label must be applied to each node. Kubernetes provides a set of built-in node labels, one of which is kubernetes.io/hostname. This label can be used to constrain the deployment of a core pair to the node with a specific hostname. Alternatively, one can take greater control and create custom node labels.
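For example, a pod spec relying on the built-in hostname label might contain a fragment like the following. This is only a sketch of the built-in-label approach; the actual manifests in this directory use the custom nodename label created below:

```yaml
# Hypothetical fragment: pin pods to the node whose hostname is k8s1
# using the built-in kubernetes.io/hostname label.
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s1
```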

The following discussion assumes installation of the voltha-k8s-playground (https://github.com/ciena/voltha-k8s-playground) which configures three Kubernetes nodes named k8s1, k8s2, and k8s3.

Create a "nodename" label for each Kubernetes node:

```bash
kubectl label nodes k8s1 nodename=k8s1
kubectl label nodes k8s2 nodename=k8s2
kubectl label nodes k8s3 nodename=k8s3
```

Verify that the labels have been applied:

```
kubectl get nodes --show-labels
NAME      STATUS    ROLES         AGE       VERSION   LABELS
k8s1      Ready     master,node   4h        v1.9.5    ...,kubernetes.io/hostname=k8s1,nodename=k8s1
k8s2      Ready     node          4h        v1.9.5    ...,kubernetes.io/hostname=k8s2,nodename=k8s2
k8s3      Ready     node          4h        v1.9.5    ...,kubernetes.io/hostname=k8s3,nodename=k8s3
```

Ensure that a nodeSelector section appears in the deployment's pod spec (such a section should already exist in each manifest):

```yaml
      ...
      nodeSelector:
        nodename: k8s1
```
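Putting it together, the relevant parts of a core pair manifest look roughly like the following. This is an illustrative sketch assembled from the description above; the metadata, labels, and image name are placeholders, not the contents of the actual manifests:

```yaml
# Illustrative sketch of a rw-core pair deployment.
# Names and image below are placeholders, not taken from the real manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rw-core-pair1
spec:
  replicas: 2                # a co-located pair of rw-cores
  selector:
    matchLabels:
      app: rw-core-pair1
  template:
    metadata:
      labels:
        app: rw-core-pair1
    spec:
      nodeSelector:
        nodename: k8s1       # pin both replicas to node k8s1
      containers:
        - name: rw-core
          image: voltha/rw-core   # placeholder image name
```

With replicas: 2 and a nodeSelector that matches exactly one node, the scheduler has no choice but to place both rw-cores on that node, which is what enforces co-location.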

Once the labels have been applied, deploy the three core pairs:

```bash
kubectl apply -f k8s/rw-core-pair1.yml
kubectl apply -f k8s/rw-core-pair2.yml
kubectl apply -f k8s/rw-core-pair3.yml
```
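To confirm that each pair landed on its intended node, the pods' node assignments can be checked with wide output (if the cores run in a dedicated namespace, add the appropriate -n flag; the exact output will vary with your cluster):

```bash
kubectl get pods -o wide
```

The NODE column should show both pods of each pair scheduled on the node named in that pair's nodeSelector.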