VOL-569: Create kubernetes deployment configuration for each voltha service
This update:
- renames all voltha images referenced in kubernetes deployment files
to the 'voltha-<component>' format
- adds the kubernetes deployment files for grafana, dashd, and shovel
- adds deployment files for an Ingress resource and an nginx-based ingress
controller to allow access to the Consul and Grafana UIs from outside
the cluster
Manifest file ingress/05-namespace.yml sets up a namespace 'ingress-nginx'
for all ingress-related resources. This file will be deleted once we move
all voltha components, including ingress, to a 'voltha' namespace.
Deployment instructions for the ingress resources are provided in README.md.
Change-Id: I0459e838318c43e21f40e83b314f77fc9e0456f8
diff --git a/k8s/README.md b/k8s/README.md
new file mode 100644
index 0000000..01d83e9
--- /dev/null
+++ b/k8s/README.md
@@ -0,0 +1,56 @@
+# How to set up Ingress into Services deployed on a Kubernetes Cluster
+
+1. Create an ingress controller and then an Ingress resource:
+```
+cd incubator/voltha/k8s
+kubectl apply -f ingress/
+```
+2. Add hostnames k8s-consul and k8s-grafana to the DNS (or edit /etc/hosts). Set the IP address of each of these hosts to that of the kubernetes master node.
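+For example, with a hypothetical master node IP of 10.0.2.15, the /etc/hosts entries would look like:
+```
+10.0.2.15   k8s-consul
+10.0.2.15   k8s-grafana
+```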
+
+3. In your favorite browser, enter the following URLs:
+* http://k8s-consul:30080 for Consul UI access
+* http://k8s-grafana:30080 for Grafana UI access
+
+The current solution uses the hostname carried in the HTTP header to map the ingress to the appropriate service; that is why the DNS configuration above is required. An alternative solution would do away with the DNS requirement by using URL paths to perform the service mapping, but that approach needs further investigation.
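+
+As a rough, untested sketch, a path-based Ingress for the same two services might look like the following, where the rewrite-target annotation strips the path prefix before the request is forwarded to the backend:
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: voltha-ingress
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+    nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+  rules:
+  - http:
+      paths:
+      - path: /consul
+        backend:
+          serviceName: consul
+          servicePort: 8500
+      - path: /grafana
+        backend:
+          serviceName: grafana
+          servicePort: 8883
+```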
+
+The ingress port number is dynamically assigned by Kubernetes from the default NodePort range 30000-32767, which can be changed via the kube-apiserver option --service-node-port-range. The ingress service spec anchors the HTTP port to 30080 and the HTTPS port to 30443.
+
+# How to Deploy an Etcd Cluster on Kubernetes
+
+There may be several ways to deploy an etcd cluster. The following is an example of deploying a cluster using an etcd operator; it was tested on kubernetes 1.8.5. Information about the etcd operator and how to deploy it seems to change frequently; check out the following links:
+* https://coreos.com/blog/introducing-the-etcd-operator.html
+* https://github.com/coreos/etcd-operator/blob/master/README.md
+
+The procedure uses the default namespace and the default ServiceAccount. For voltha we'd likely want to use a voltha-specific namespace and ServiceAccount.
+
+Another issue to explore is role scope: do we create a role that is global to the cluster (a ClusterRole), or a more constrained, namespaced Role?
+
+Set up basic RBAC rules for the etcd operator:
+
+1. Create a ClusterRole called etcd-operator.
+```
+cd incubator/voltha/k8s/operator/etcd
+kubectl create -f cluster_role.yml
+kubectl get clusterrole
+```
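+For reference, here is a minimal sketch of what cluster_role.yml might contain, based on the RBAC rules published in the etcd-operator documentation (the checked-in file is authoritative):
+```
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+  name: etcd-operator
+rules:
+- apiGroups: ["etcd.database.coreos.com"]
+  resources: ["etcdclusters"]
+  verbs: ["*"]
+- apiGroups: ["apiextensions.k8s.io"]
+  resources: ["customresourcedefinitions"]
+  verbs: ["*"]
+- apiGroups: [""]
+  resources: ["pods", "services", "endpoints", "persistentvolumeclaims", "events"]
+  verbs: ["*"]
+- apiGroups: ["apps"]
+  resources: ["deployments"]
+  verbs: ["*"]
+```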
+2. Create a ClusterRoleBinding that binds the default service account in the default namespace to the new role.
+```
+kubectl create -f cluster_role_binding.yml
+kubectl get clusterrolebinding
+```
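+Again as a sketch, cluster_role_binding.yml binds the default service account to that role, roughly:
+```
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: etcd-operator
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: etcd-operator
+subjects:
+- kind: ServiceAccount
+  name: default
+  namespace: default
+```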
+Deploy the etcd operator.
+```
+kubectl create -f operator.yml
+```
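+The operator itself is an ordinary Deployment. A sketch of operator.yml, modeled on the example in the etcd-operator README (the image tag is illustrative):
+```
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: etcd-operator
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        name: etcd-operator
+    spec:
+      containers:
+      - name: etcd-operator
+        image: quay.io/coreos/etcd-operator:v0.7.0
+        command:
+        - etcd-operator
+        env:
+        - name: MY_POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: MY_POD_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+```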
+The etcd operator will automatically create a CustomResourceDefinition (CRD).
+```
+$ kubectl get customresourcedefinitions
+NAME                                    AGE
+etcdclusters.etcd.database.coreos.com   4m
+```
+Deploy the etcd cluster.
+```
+kubectl create -f etcd_cluster.yml
+```
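+A minimal etcd_cluster.yml, adapted from the etcd-operator README (cluster name, size and etcd version are illustrative), simply declares the desired cluster and lets the operator do the rest:
+```
+apiVersion: "etcd.database.coreos.com/v1beta2"
+kind: EtcdCluster
+metadata:
+  name: etcd
+spec:
+  size: 3
+  version: "3.2.13"
+```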
diff --git a/k8s/consul.yml b/k8s/consul.yml
index 86166f0..a750a97 100644
--- a/k8s/consul.yml
+++ b/k8s/consul.yml
@@ -5,6 +5,7 @@
labels:
name: consul
spec:
+ type: ClusterIP
clusterIP: None
ports:
- name: http
@@ -85,12 +86,9 @@
- "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
- "-client=0.0.0.0"
- "-config-dir=/consul/config"
- - "-datacenter=dc1"
- "-data-dir=/consul/data"
- - "-domain=cluster.local"
- "-server"
- "-ui"
- - "-disable-host-node-id"
lifecycle:
preStop:
exec:
diff --git a/k8s/envoy_for_consul.yml b/k8s/envoy_for_consul.yml
index b93ee2e..1d9f1e0 100644
--- a/k8s/envoy_for_consul.yml
+++ b/k8s/envoy_for_consul.yml
@@ -35,7 +35,7 @@
spec:
containers:
- name: voltha
- image: "voltha/envoy:latest"
+ image: voltha-envoy
env:
- name: POD_IP
valueFrom:
diff --git a/k8s/envoy_for_etcd.yml b/k8s/envoy_for_etcd.yml
index 247f6f6..2b7537c 100644
--- a/k8s/envoy_for_etcd.yml
+++ b/k8s/envoy_for_etcd.yml
@@ -35,7 +35,7 @@
spec:
containers:
- name: voltha
- image: "voltha/envoy:latest"
+ image: voltha-envoy
env:
- name: POD_IP
valueFrom:
diff --git a/k8s/fluentd.yml b/k8s/fluentd.yml
index 5b535e1..1a7ec0f 100644
--- a/k8s/fluentd.yml
+++ b/k8s/fluentd.yml
@@ -46,7 +46,7 @@
topologyKey: kubernetes.io/hostname
containers:
- name: fluentdactv
- image: cord/fluentd
+ image: voltha-fluentd
imagePullPolicy: Never
volumeMounts:
- name: fluentd-log
@@ -106,7 +106,7 @@
topologyKey: kubernetes.io/hostname
containers:
- name: fluentdstby
- image: cord/fluentd
+ image: voltha-fluentd
imagePullPolicy: Never
volumeMounts:
- name: fluentd-log
@@ -162,7 +162,7 @@
topologyKey: kubernetes.io/hostname
containers:
- name: fluentd
- image: cord/fluentd
+ image: voltha-fluentd
imagePullPolicy: Never
ports:
- containerPort: 24224
diff --git a/k8s/grafana.yml b/k8s/grafana.yml
new file mode 100644
index 0000000..c6fa94d
--- /dev/null
+++ b/k8s/grafana.yml
@@ -0,0 +1,56 @@
+#
+# The grafana service
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: grafana
+spec:
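+  # Headless service (no cluster virtual IP): DNS resolves the service name
+  # directly to the grafana pod's IP.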
+ clusterIP: None
+ selector:
+ app: grafana
+ ports:
+ - name: ui-port
+ protocol: TCP
+ port: 8883
+ targetPort: 80
+ - name: port-2003
+ protocol: TCP
+ port: 2003
+ targetPort: 2003
+ - name: port-2004
+ protocol: TCP
+ port: 2004
+ targetPort: 2004
+ - name: port-8126
+ protocol: TCP
+ port: 8126
+ targetPort: 8126
+ - name: port-8125
+ protocol: TCP
+ port: 8125
+ targetPort: 8125
+---
+apiVersion: apps/v1beta1
+kind: Deployment
+metadata:
+ name: grafana
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: grafana
+ spec:
+ containers:
+ - name: grafana
+ image: kamon/grafana_graphite:3.0
+ ports:
+ - containerPort: 80
+ - containerPort: 2003
+ - containerPort: 2004
+ - containerPort: 8126
+ - containerPort: 8125
+ env:
+          # Grafana reads configuration overrides from GF_<Section>_<Key> env vars
+          - name: GF_SERVER_ROOT_URL
+            value: "http://localhost:80/grafana/"
diff --git a/k8s/ingress/05-namespace.yml b/k8s/ingress/05-namespace.yml
new file mode 100644
index 0000000..6878f0b
--- /dev/null
+++ b/k8s/ingress/05-namespace.yml
@@ -0,0 +1,4 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: ingress-nginx
diff --git a/k8s/ingress/10-default-backend.yml b/k8s/ingress/10-default-backend.yml
new file mode 100644
index 0000000..64f6f58
--- /dev/null
+++ b/k8s/ingress/10-default-backend.yml
@@ -0,0 +1,52 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: default-http-backend
+ labels:
+ app: default-http-backend
+ namespace: ingress-nginx
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: default-http-backend
+ spec:
+ terminationGracePeriodSeconds: 60
+ containers:
+ - name: default-http-backend
+      # Any image is permissible as long as:
+ # 1. It serves a 404 page at /
+ # 2. It serves 200 on a /healthz endpoint
+ image: gcr.io/google_containers/defaultbackend:1.4
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 8080
+ scheme: HTTP
+ initialDelaySeconds: 30
+ timeoutSeconds: 5
+ ports:
+ - containerPort: 8080
+ resources:
+ limits:
+ cpu: 10m
+ memory: 20Mi
+ requests:
+ cpu: 10m
+ memory: 20Mi
+---
+
+apiVersion: v1
+kind: Service
+metadata:
+ name: default-http-backend
+ namespace: ingress-nginx
+ labels:
+ app: default-http-backend
+spec:
+ ports:
+ - port: 80
+ targetPort: 8080
+ selector:
+ app: default-http-backend
diff --git a/k8s/ingress/20-configmap.yml b/k8s/ingress/20-configmap.yml
new file mode 100644
index 0000000..08e9101
--- /dev/null
+++ b/k8s/ingress/20-configmap.yml
@@ -0,0 +1,7 @@
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: nginx-configuration
+ namespace: ingress-nginx
+ labels:
+ app: ingress-nginx
diff --git a/k8s/ingress/30-tcp-services-configmap.yml b/k8s/ingress/30-tcp-services-configmap.yml
new file mode 100644
index 0000000..a963085
--- /dev/null
+++ b/k8s/ingress/30-tcp-services-configmap.yml
@@ -0,0 +1,5 @@
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: tcp-services
+ namespace: ingress-nginx
diff --git a/k8s/ingress/40-udp-services-configmap.yml b/k8s/ingress/40-udp-services-configmap.yml
new file mode 100644
index 0000000..1870931
--- /dev/null
+++ b/k8s/ingress/40-udp-services-configmap.yml
@@ -0,0 +1,5 @@
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: udp-services
+ namespace: ingress-nginx
diff --git a/k8s/ingress/50-rbac.yml b/k8s/ingress/50-rbac.yml
new file mode 100644
index 0000000..3018532
--- /dev/null
+++ b/k8s/ingress/50-rbac.yml
@@ -0,0 +1,133 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: nginx-ingress-serviceaccount
+ namespace: ingress-nginx
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+ name: nginx-ingress-clusterrole
+rules:
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - endpoints
+ - nodes
+ - pods
+ - secrets
+ verbs:
+ - list
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - get
+ - apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - "extensions"
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - apiGroups:
+ - "extensions"
+ resources:
+ - ingresses/status
+ verbs:
+ - update
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: Role
+metadata:
+ name: nginx-ingress-role
+ namespace: ingress-nginx
+rules:
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - pods
+ - secrets
+ - namespaces
+ verbs:
+ - get
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ resourceNames:
+ # Defaults to "<election-id>-<ingress-class>"
+ # Here: "<ingress-controller-leader>-<nginx>"
+ # This has to be adapted if you change either parameter
+ # when launching the nginx-ingress-controller.
+ - "ingress-controller-leader-nginx"
+ verbs:
+ - get
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ verbs:
+ - create
+ - apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - get
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: RoleBinding
+metadata:
+ name: nginx-ingress-role-nisa-binding
+ namespace: ingress-nginx
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: nginx-ingress-role
+subjects:
+ - kind: ServiceAccount
+ name: nginx-ingress-serviceaccount
+ namespace: ingress-nginx
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+ name: nginx-ingress-clusterrole-nisa-binding
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: nginx-ingress-clusterrole
+subjects:
+ - kind: ServiceAccount
+ name: nginx-ingress-serviceaccount
+ namespace: ingress-nginx
diff --git a/k8s/ingress/60-cluster-ingress-nginx.yml b/k8s/ingress/60-cluster-ingress-nginx.yml
new file mode 100644
index 0000000..a70a7fa
--- /dev/null
+++ b/k8s/ingress/60-cluster-ingress-nginx.yml
@@ -0,0 +1,72 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: nginx-ingress-controller
+ namespace: ingress-nginx
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: ingress-nginx
+ template:
+ metadata:
+ labels:
+ app: ingress-nginx
+ annotations:
+ prometheus.io/port: '10254'
+ prometheus.io/scrape: 'true'
+ spec:
+ serviceAccountName: nginx-ingress-serviceaccount
+ initContainers:
+ - command:
+ - sh
+ - -c
+ - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
+ image: alpine:3.6
+ imagePullPolicy: IfNotPresent
+ name: sysctl
+ securityContext:
+ privileged: true
+ containers:
+ - name: nginx-ingress-controller
+ image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
+ args:
+ - /nginx-ingress-controller
+ - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
+ - --annotations-prefix=nginx.ingress.kubernetes.io
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ ports:
+ - name: http
+ containerPort: 80
+ - name: https
+ containerPort: 443
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /healthz
+ port: 10254
+ scheme: HTTP
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /healthz
+ port: 10254
+ scheme: HTTP
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
diff --git a/k8s/ingress/70-service-ingress-nginx.yml b/k8s/ingress/70-service-ingress-nginx.yml
new file mode 100644
index 0000000..9a1cfa9
--- /dev/null
+++ b/k8s/ingress/70-service-ingress-nginx.yml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: ingress-nginx
+ namespace: ingress-nginx
+spec:
+ type: NodePort
+ selector:
+ app: ingress-nginx
+ ports:
+ - name: http
+ port: 80
+ nodePort: 30080
+ targetPort: http
+ - name: https
+ port: 443
+ nodePort: 30443
+ targetPort: https
diff --git a/k8s/ingress/80-ingress.yml b/k8s/ingress/80-ingress.yml
new file mode 100644
index 0000000..c665801
--- /dev/null
+++ b/k8s/ingress/80-ingress.yml
@@ -0,0 +1,23 @@
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: voltha-ingress
+ annotations:
+ kubernetes.io/ingress.class: "nginx"
+    nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+ rules:
+ - host: k8s-consul
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: consul
+ servicePort: 8500
+ - host: k8s-grafana
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: grafana
+ servicePort: 8883
diff --git a/k8s/netconf.yml b/k8s/netconf.yml
index 5d999ba..125e3e1 100644
--- a/k8s/netconf.yml
+++ b/k8s/netconf.yml
@@ -34,7 +34,7 @@
topologyKey: kubernetes.io/hostname
containers:
- name: netconf
- image: "cord/netconf:latest"
+ image: voltha-netconf
imagePullPolicy: Never
ports:
- containerPort: 830
diff --git a/k8s/ofagent.yml b/k8s/ofagent.yml
index c282fce..70e78a8 100644
--- a/k8s/ofagent.yml
+++ b/k8s/ofagent.yml
@@ -22,7 +22,7 @@
topologyKey: kubernetes.io/hostname
containers:
- name: ofagent
- image: cord/ofagent
+ image: voltha-ofagent
imagePullPolicy: Never
env:
- name: NAMESPACE
diff --git a/k8s/operator/etcd/README.md b/k8s/operator/etcd/README.md
deleted file mode 100644
index 75cad82..0000000
--- a/k8s/operator/etcd/README.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# How to Deploy an Etcd Cluster on Kubernetes
-
-There may be several ways to deploy an etcd cluster. The following is an example of deploying a cluster using an etcd operator; it was tested on kubernetes 1.8.5. Information about the etcd operator and how to deploy it seems to change frequently; check out the following links:
-* https://coreos.com/blog/introducing-the-etcd-operator.html
-* https://github.com/coreos/etcd-operator/blob/master/README.md
-
-The procedure uses the default namespace and the default ServiceAccount. For voltha we'd likely want to use a voltha-specific namespace and ServiceAccount.
-
-Another issue to explore is role scope. Do we create a role global to the cluster, i.e. ClusterRole, or do we create a more constrained Role.
-
-Set up basic RBAC rules for the etcd operator:
-
-1. Create a ClusterRole called etcd-operator.
-```
-kubectl create -f cluster_role.yml
-kubectl get clusterrole
-```
-2. Create a ClusterRoleBinding that binds the default service account in the default namespace to the new role.
-```
-kubectl create -f cluster_role_binding.yml
-kubectl get clusterrolebinding
-```
-Deploy the etcd operator.
-```
-kubectl create -f operator.yml
-```
-The etcd operator will automatically create a CustomResourceDefinition (CRD).
-```
-$ kubectl get customresourcedefinitions
-NAME                                    AGE
-etcdclusters.etcd.database.coreos.com   4m
-```
-Deploy the etcd cluster.
-```
-kubectl create -f etcd_cluster.yml
-```
\ No newline at end of file
diff --git a/k8s/stats.yml b/k8s/stats.yml
new file mode 100644
index 0000000..13c4655
--- /dev/null
+++ b/k8s/stats.yml
@@ -0,0 +1,59 @@
+#
+# The dashd deployment
+#
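+# dashd consumes VOLTHA KPI events from the voltha.kpis Kafka topic and
+# manages dashboards through the Grafana HTTP API (see the args below).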
+apiVersion: apps/v1beta1
+kind: Deployment
+metadata:
+ name: dashd
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: dashd
+ spec:
+ containers:
+ - name: dashd
+ image: voltha-dashd
+ imagePullPolicy: Never
+ env:
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ args:
+ - "/dashd/dashd/main.py"
+ - "--kafka=kafka.$(NAMESPACE).svc.cluster.local"
+ - "--consul=consul:8500"
+ - "--grafana_url=http://admin:admin@grafana.$(NAMESPACE).svc.cluster.local:80/api"
+ - "--topic=voltha.kpis"
+---
+#
+# The shovel deployment
+#
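+# shovel consumes the same voltha.kpis topic and feeds the samples to the
+# Graphite listener bundled in the grafana image.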
+apiVersion: apps/v1beta1
+kind: Deployment
+metadata:
+ name: shovel
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: shovel
+ spec:
+ containers:
+ - name: shovel
+ image: voltha-shovel
+ imagePullPolicy: Never
+ env:
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ args:
+ - "/shovel/shovel/main.py"
+ - "--kafka=kafka.$(NAMESPACE).svc.cluster.local"
+ - "--consul=consul:8500"
+ - "--host=grafana.$(NAMESPACE).svc.cluster.local"
+ - "--topic=voltha.kpis"
diff --git a/k8s/vcli.yml b/k8s/vcli.yml
index 1ef3bbd..9debaf4 100644
--- a/k8s/vcli.yml
+++ b/k8s/vcli.yml
@@ -25,7 +25,7 @@
spec:
containers:
- name: vcli
- image: "cord/vcli:latest"
+ image: voltha-cli
env:
- name: POD_IP
valueFrom:
diff --git a/k8s/vcore_for_consul.yml b/k8s/vcore_for_consul.yml
index 40b3631..3784faf 100644
--- a/k8s/vcore_for_consul.yml
+++ b/k8s/vcore_for_consul.yml
@@ -32,7 +32,7 @@
spec:
containers:
- name: voltha
- image: "cord/voltha:latest"
+ image: voltha-voltha
imagePullPolicy: Never
ports:
- containerPort: 8880
diff --git a/k8s/vcore_for_etcd.yml b/k8s/vcore_for_etcd.yml
index ea207fc..4ae89b3 100644
--- a/k8s/vcore_for_etcd.yml
+++ b/k8s/vcore_for_etcd.yml
@@ -32,7 +32,7 @@
spec:
containers:
- name: voltha
- image: "cord/voltha:latest"
+ image: voltha-voltha
env:
- name: NAMESPACE
valueFrom: