VOL-570 : Changed voltha install to use etcd instead of consul
- Updated to kubespray 2.5.0
- Updated to load dependent packages
- Restart k8s nodes after install
VOL-574 : Added instructions on how to install k8s cluster
Change-Id: Ie31004f32d1524be3b0c4e80499af7d7b3a6b7e4
diff --git a/Makefile b/Makefile
index 7b6c8f7..f6161d4 100644
--- a/Makefile
+++ b/Makefile
@@ -130,15 +130,18 @@
# naming conventions for the VOLTHA build
FETCH_K8S_IMAGE_LIST = \
alpine:3.6 \
+ busybox:latest \
+ nginx:1.13 \
consul:0.9.2 \
fluent/fluentd:v0.12.42 \
gcr.io/google_containers/defaultbackend:1.4 \
- gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1 \
+ gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 \
k8s.gcr.io/fluentd-gcp:1.30 \
kamon/grafana_graphite:3.0 \
marcelmaatkamp/freeradius:latest \
- quay.io/coreos/hyperkube:v1.9.2_coreos.0 \
+ gcr.io/google-containers/hyperkube:v1.9.5 \
quay.io/coreos/etcd-operator:v0.7.2 \
+ quay.io/coreos/etcd:v3.2.9 \
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2 \
wurstmeister/kafka:1.0.0 \
zookeeper:3.4.11
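
For orientation, a minimal sketch (not part of this change) of how an image list such as `FETCH_K8S_IMAGE_LIST` is typically pre-pulled and cached so the install can proceed without registry access. The helper script, the output directory, and the shortened image list are illustrative assumptions:

```
#!/bin/bash
# Hypothetical helper: pre-pull each image in the list and save it as a tarball
# so it can later be loaded on hosts with no access to the upstream registries.
set -e
IMAGES="alpine:3.6 busybox:latest nginx:1.13 quay.io/coreos/etcd:v3.2.9"
mkdir -p image_cache
for img in $IMAGES; do
    docker pull "$img"
    # Turn the image reference into a safe file name by replacing '/' and ':'.
    docker save -o "image_cache/$(echo "$img" | tr '/:' '__').tar" "$img"
done
```
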
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
old mode 100755
new mode 100644
index 4d040ed..4474a32
--- a/install/BuildingTheInstaller.md
+++ b/install/BuildingTheInstaller.md
@@ -14,9 +14,9 @@
```
This will ensure that the user you've defined during the installation can run the virsh shell as a standard user rather than as the root user. This is necessary to ensure the installer software operates as designed. Please ensure that ubuntu **server** is installed and ***NOT*** ubuntu desktop.
![Ubuntu Installer Graphic](file:///C:Users/sslobodr/Documents/Works In Progress/2017/voltha/UbuntuInstallLaptop.png)
-**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded from haing followed the document ``Building a vOLT-HA Virtual Machine Using Vagrant on QEMU/KVM`` then skip to [Building the Installer](#Building-the-installer).
+**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded after following the document `Building a vOLT-HA Virtual Machine Using Vagrant on QEMU/KVM`, then skip to [Building the Installer](#Building-the-installer).*
-Start with a clean installation of Ubuntu16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond th scope of this document. Ensure that package selection is as outlined above. Once the installation is complete, login to the box and type ``virsh list``. If this doesnt work then you'll need to troubleshoot the installation. If it works, then proceed to the next section. Please note use exactly `virsh list` ***NOT*** `sudo virsh list`. If you must use the `sudo`command then the installation was not performed properly and should be repeated. If you're familiar with the KVM environment there are steps to solve this and other issues but this is also beyond the scope of this document. So if unfamiluar with the KVM environment a re-installation exactly as outlined above is required.
+Start with a clean installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. Ensure that package selection is as outlined above. Once the installation is complete, log into the box and type `virsh list`. If this doesn't work then you'll need to troubleshoot the installation. If it works, then proceed to the next section. Please note, use exactly `virsh list` ***NOT*** `sudo virsh list`. If you must use the `sudo` command then the installation was not performed properly and should be repeated. If you're familiar with the KVM environment there are steps to solve this and other issues, but this is also beyond the scope of this document. So if you're unfamiliar with the KVM environment, a re-installation exactly as outlined above is required.
###Create the base ubuntu/xenial box
Though there are some flavors of ubuntu boxes available but they usually have additional features installed. It is essential for the installer to start from a base install of ubuntu with absolutely no other software installed. To ensure the base image for the installer is a clean ubuntu server install and nothing but a clean ubuntu server install it is best to just create the image from the ubuntu installation iso image.
@@ -30,7 +30,7 @@
voltha> virt-manager
```
Once the virt manager opens, open the console of the Ubuntu16.04 VM and follow the installation process.
-When promprompted use the hostname ``vinstall``. Also when prompted you should create one user ``vinstall vinstall`` and use the offered up userid of ``vinstall``. When prompted for the password of the vinstall user, use ``vinstall``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install. The last 3 lines of your package selection screen should look likethis. Everything above `standard system utilities` should **not** be selected.
+When prompted use the hostname `vinstall`. Also when prompted you should create one user `vinstall vinstall` and use the offered up userid of `vinstall`. When prompted for the password of the vinstall user, use `vinstall`. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install. The last 3 lines of your package selection screen should look like this. Everything above `standard system utilities` should **not** be selected.
```
[*] standard system utilities
[ ] Virtual Machine host
@@ -54,7 +54,7 @@
vinstall@vinstall$ sudo telinit 0
```
###Download the voltha tree
-The voltha tree contains the Vagrant files required to build a multitude of VMs required to both run, test, and also to deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required. If you haven't done so perviously, do the following.
+The voltha tree contains the Vagrant files required to build the multitude of VMs needed to run, test, and deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific `Vagrantfile(s)` required. If you haven't done so previously, do the following.
Create a .gitconfig file using your favorite editor and add the following:
```
@@ -88,7 +88,6 @@
Edit the vagrant configuration in `settings.vagrant.yaml` and ensure that the following variables are set and use the value above for `<yourid>`:
```
----
# The name to use for the server
server_name: "voltha<yourid>"
# Use virtualbox for development
@@ -135,19 +134,29 @@
There are 2 different ways to build the installer in production and in test mode.
### Building the installer in test mode
Test mode is useful for testers and developers. The installer build script will also launch 3 vagrant VMs that will be install targets and configure the installer to use them without having to supply passwords for each. This speeds up the subsequent install/test cycle.
+The installer can be built to deploy a Swarm (default) or Kubernetes cluster.
-To build the installer in test mode go to the installer directory
-``voltha> cd ~/cord/incubator/voltha/install``
+To build the installer in __test mode__ and deploy a __Swarm cluster__, go to the installer directory
+`voltha> cd ~/cord/incubator/voltha/install`
then type
-``voltha> ./CreateInstaller.sh test``.
+`voltha> ./CreateInstaller.sh test`.
+
+or
+
+To build the installer in __test mode__ and deploy a __Kubernetes cluster__, go to the installer
+directory
+`voltha> cd ~/cord/incubator/voltha/install`
+then type
+`voltha> ./CreateInstaller.sh test k8s`.
+
You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.
This will take a while so doing something else in the mean-time is recommended.
Once the installation completes, determine the ip-address of one of the cluster VMs.
-``virsh domifaddr install_ha-serv<yourId>-1``
-You can use ``install_ha-serv<yourId>-2`` or ``install_ha-serv<yourId>-3`` in place of ``install_ha-serv<yourId>-1`` above. `<yourId> can be determined by issuing the command:
+`virsh domifaddr install_ha-serv<yourId>-1`
+You can use `install_ha-serv<yourId>-2` or `install_ha-serv<yourId>-3` in place of `install_ha-serv<yourId>-1` above. `<yourId>` can be determined by issuing the command:
```
voltha> id -u
```
@@ -170,20 +179,28 @@
### Building the installer in production mode
Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the private key to access the vM, and a bootstrap script that sets up the VM, fires it up, and logs into it.
-The archive file and a script called ``deployInstaller.sh`` are both placed in a directory named ``volthaInstaller``. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named ``installer.part<XX>`` where XX is a number starting at 00 and going as high as necessary based on the archive size.
+The archive file and a script called `deployInstaller.sh` are both placed in a directory named `volthaInstaller`. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named `installer.part<XX>` where XX is a number starting at 00 and going as high as necessary based on the archive size.
-To build the installer in production mode type:
-``./CreateInstaller.sh``
+The production mode installer can be built to deploy a Swarm (default) or Kubernetes cluster.
+
+To build the installer in __production mode__ and deploy a __Swarm cluster__ type:
+`./CreateInstaller.sh`
+
+or
+
+To build the installer in __production mode__ and deploy a __Kubernetes cluster__ type:
+`./CreateInstaller.sh k8s`
+
You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.
-This will take a while and when it completes a directory name ``volthaInstaller`` will have been created. Copy all the files in this directory to a USB Flash drive or other portable media and carry to the installation site.
+This will take a while and when it completes a directory named `volthaInstaller` will have been created. Copy all the files in this directory to a USB flash drive or other portable media and carry them to the installation site.
## Installing Voltha
The targets for the installation can be either bare metal servers or VMs running ubuntu server 16.04 LTS. The he userid used for installation (see below) must have sudo rights. This is automatic for the user created during ubuntu installation. If you've created another user to use for installation, please ensure they have sudo rights.
-To install voltha access to a bare metal server running Ubuntu Server 16.04LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./deployInstaller.sh`` which should produce the output shown after the *Note*:
+To install voltha, access to a bare metal server running Ubuntu Server 16.04 LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type `./deployInstaller.sh`, which should produce the output shown after the *Note*:
***Note:*** If you are a tester and are installing to 3 vagrant VMs on the same server as the installer is running and haven't used test mode, please add the network name that your 3 VMs are using to the the `deployInstaller.sh` command. In other words your command should be `./deployInstaller.sh <network-name>`. The network name for a vagrant VM is typically `vagrant-libvirt` under QEMU/KVM. If in doubt type `virsh net-list` and verify this. If a network is not provided then the `default` network is used and the target machines should be reachable directly from the installer.
```
@@ -206,8 +223,6 @@
Waiting for the VM's IP address
Waiting for the VM's IP address
Waiting for the VM's IP address
- .
- :
Waiting for the VM's IP address
Warning: Permanently added '192.168.122.24' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)
@@ -224,9 +239,16 @@
vinstall@vinstall:~$
```
-This might take a little while but once the prompt is presented there are 2 values that need to be configured after which the installer can be launched. (***Note:*** This will change over time as the HA solution evolves. As this happens this document will be updated)
+This might take a little while, but once the prompt is presented there are a few values that
+need to be configured, after which the installer can be launched. (***Note:*** These values will
+change over time as the HA solution evolves. As this happens this document will be updated.)
-Use your favorite editor to edit the file ``install.cfg`` which should contain the following lines:
+### Install on a Swarm cluster
+If you chose to build an installer to deploy a Swarm cluster, then read on. Otherwise, move on to
+the *__Install on a Kubernetes cluster__* section.
+
+Use your favorite editor to edit the file `install.cfg` which should contain the following lines:
+
```
# Configure the hosts that will make up the cluster
# hosts="192.168.121.195 192.168.121.2 192.168.121.215"
@@ -244,3 +266,43 @@
Once `install.cfg` file has been updated and reachability has been confirmed, start the installation with the command `./installer.sh`.
Once launched, the installer will prompt for the password 3 times for each of the hosts the installation is being performed on. Once these have been provided, the installer will proceed without prompting for anything else.
+
+
+### Install on a Kubernetes cluster
+If you chose to build an installer to deploy a Kubernetes cluster, then read on.
+
+Use your favorite editor to edit the file `install.cfg` which should contain the following lines:
+
+```
+# Configure the hosts that will make up the cluster
+# hosts="192.168.121.195 192.168.121.2 192.168.121.215"
+#
+# Configure the user name to initially log into those hosts as.
+# iUser="vagrant"
+#
+# Specify the cluster framework type (swarm or kubernetes)
+# cluster_framework="kubernetes"
+#
+# Address range for kubernetes services
+# cluster_service_subnet="192.168.0.0\/18"
+#
+# Address range for kubernetes pods
+# cluster_pod_subnet="192.168.128.0\/18"
+```
+
+Uncomment the `hosts` line and replace the list of ip addresses on the line with the list of ip addresses for your deployment. These can be either VMs or bare metal servers; it makes no difference to the installer.
+
+Uncomment the `iUser` line and change the userid that will be used to log into the target hosts (listed above) and save the file. The installer will create a new user named voltha on each of those hosts and use that account to complete the installation.
+
+Uncomment the `cluster_framework` line to inform the installer that kubernetes was selected.
+
+Uncomment the `cluster_service_subnet` line and adjust the subnet to your needs. This subnet will be used by the running services.
+
+Uncomment the `cluster_pod_subnet` line and adjust the subnet to your needs. This subnet will be used by the running pods.
+
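+For illustration only, an edited `install.cfg` for a three-host kubernetes deployment might look like the following. The addresses and user id are placeholders, not values from this guide:
+
+```
+hosts="10.10.10.1 10.10.10.2 10.10.10.3"
+iUser="ubuntu"
+cluster_framework="kubernetes"
+cluster_service_subnet="192.168.0.0\/18"
+cluster_pod_subnet="192.168.128.0\/18"
+```
+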
+Make sure that all the hosts that are being installed to have Ubuntu Server 16.04 LTS installed with OpenSSH. Also make sure that they're all reachable by attempting an ssh login to each with the user id provided on the `iUser` line.
+
+Once the `install.cfg` file has been updated and reachability has been confirmed, start the installation with the command `./installer.sh`.
+
+Once launched, the installer will prompt for the password 3 times for each of the hosts the installation is being performed on. Once these have been provided, the installer will proceed without prompting for anything else.
diff --git a/install/CreateInstaller.sh b/install/CreateInstaller.sh
index 870dfa8..ce13915 100755
--- a/install/CreateInstaller.sh
+++ b/install/CreateInstaller.sh
@@ -336,8 +336,7 @@
if [ "$useKubernetes" == "yes" ]; then
echo -e "${lBlue}Cloning ${lCyan}Kubespray${lBlue} repository${NC}"
- ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr "git clone --branch v2.4.0 https://github.com/kubernetes-incubator/kubespray.git /home/vinstall/kubespray"
- #ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr "git clone https://github.com/kubernetes-incubator/kubespray.git /home/vinstall/kubespray"
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr "git clone --branch v2.5.0 https://github.com/kubernetes-incubator/kubespray.git /home/vinstall/kubespray"
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr "sudo chown -R vinstall.vinstall /home/vinstall/kubespray"
fi
diff --git a/install/ansible/roles/voltha-k8s/tasks/deploy.yml b/install/ansible/roles/voltha-k8s/tasks/deploy.yml
index 91cc1ea..a9c35a6 100644
--- a/install/ansible/roles/voltha-k8s/tasks/deploy.yml
+++ b/install/ansible/roles/voltha-k8s/tasks/deploy.yml
@@ -55,21 +55,33 @@
- fluentdstby
run_once: true
-# Consul
-- name: "VOLT-HA Deploy | Start consul"
- command: kubectl apply -f {{ target_voltha_home }}/k8s/consul.yml
+# Etcd
+- name: "VOLT-HA Deploy | Define etcd cluster role"
+ command: kubectl apply -f {{ target_voltha_home }}/k8s/operator/etcd/cluster_role.yml
run_once: true
-- name: "VOLT-HA Deploy | Wait for consul to be ready"
- command: kubectl rollout status statefulset/consul -w -n {{ voltha_namespace }}
+- name: "VOLT-HA Deploy | Define etcd cluster role binding"
+ command: kubectl apply -f {{ target_voltha_home }}/k8s/operator/etcd/cluster_role_binding.yml
run_once: true
-# Voltha Core (for consul)
-- name: "VOLT-HA Deploy | Start VOLT-HA core (for consul)"
- command: kubectl apply -f {{ target_voltha_home }}/k8s/vcore_for_consul.yml
+- name: "VOLT-HA Deploy | Start etcd operator"
+ command: kubectl apply -f {{ target_voltha_home }}/k8s/operator/etcd/operator.yml
run_once: true
-- name: "VOLT-HA Deploy | Wait for VOLT-HA core (for consul) to be ready"
+- name: "VOLT-HA Deploy | Wait for etcd operator to be ready"
+ command: kubectl rollout status deployment/etcd-operator -w -n {{ voltha_namespace }}
+ run_once: true
+
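+# The etcd cluster manifest is handled by the etcd operator, so the operator
+# deployment must be ready before etcd_cluster.yml is applied.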
+- name: "VOLT-HA Deploy | Start etcd cluster"
+ command: kubectl apply -f {{ target_voltha_home }}/k8s/operator/etcd/etcd_cluster.yml
+ run_once: true
+
+# Voltha Core (for etcd)
+- name: "VOLT-HA Deploy | Start VOLT-HA core (for etcd)"
+ command: kubectl apply -f {{ target_voltha_home }}/k8s/vcore_for_etcd.yml
+ run_once: true
+
+- name: "VOLT-HA Deploy | Wait for VOLT-HA core (for etcd) to be ready"
command: kubectl rollout status deployment/vcore -w -n {{ voltha_namespace }}
run_once: true
@@ -82,12 +94,12 @@
command: kubectl rollout status deployment/ofagent -w -n {{ voltha_namespace }}
run_once: true
-# Envoy (for consul)
-- name: "VOLT-HA Deploy | Start Envoy (for consul)"
- command: kubectl apply -f {{ target_voltha_home }}/k8s/envoy_for_consul.yml
+# Envoy (for etcd)
+- name: "VOLT-HA Deploy | Start Envoy (for etcd)"
+ command: kubectl apply -f {{ target_voltha_home }}/k8s/envoy_for_etcd.yml
run_once: true
-- name: "VOLT-HA Deploy | Wait for Envoy (for consul) to be ready"
+- name: "VOLT-HA Deploy | Wait for Envoy (for etcd) to be ready"
command: kubectl rollout status deployment/voltha -w -n {{ voltha_namespace }}
run_once: true
diff --git a/install/ansible/roles/voltha-k8s/tasks/teardown.yml b/install/ansible/roles/voltha-k8s/tasks/teardown.yml
index 10fb856..1f99f6a 100644
--- a/install/ansible/roles/voltha-k8s/tasks/teardown.yml
+++ b/install/ansible/roles/voltha-k8s/tasks/teardown.yml
@@ -18,9 +18,9 @@
command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/vcli.yml
run_once: true
-# Envoy (for consul)
-- name: "VOLT-HA Teardown | Stop Envoy (for consul)"
- command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/envoy_for_consul.yml
+# Envoy (for etcd)
+- name: "VOLT-HA Teardown | Stop Envoy (for etcd)"
+ command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/envoy_for_etcd.yml
run_once: true
# OFagent
@@ -28,14 +28,29 @@
command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/ofagent.yml
run_once: true
-# Voltha Core (for consul)
-- name: "VOLT-HA Teardown | Stop VOLT-HA core (for consul)"
- command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/vcore_for_consul.yml
+# Voltha Core (for etcd)
+- name: "VOLT-HA Teardown | Stop VOLT-HA core (for etcd)"
+ command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/vcore_for_etcd.yml
run_once: true
-# Consul
-- name: "VOLT-HA Teardown | Stop consul"
- command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/consul.yml
+# Etcd cluster
+- name: "VOLT-HA Teardown | Stop etcd cluster"
+ command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/operator/etcd/etcd_cluster.yml
+ run_once: true
+
+# Etcd operator
+- name: "VOLT-HA Teardown | Stop etcd operator"
+ command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/operator/etcd/operator.yml
+ run_once: true
+
+# Etcd cluster role binding
+- name: "VOLT-HA Teardown | Stop etcd cluster role binding"
+ command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/operator/etcd/cluster_role_binding.yml
+ run_once: true
+
+# Etcd cluster role
+- name: "VOLT-HA Teardown | Stop etcd cluster role"
+ command: kubectl delete --ignore-not-found=true -f {{ target_voltha_home }}/k8s/operator/etcd/cluster_role.yml
run_once: true
# Fluentd
diff --git a/install/containers.cfg.k8s b/install/containers.cfg.k8s
index 71a39c0..6815e81 100644
--- a/install/containers.cfg.k8s
+++ b/install/containers.cfg.k8s
@@ -1,4 +1,7 @@
voltha_containers:
+ - alpine:3.6
+ - busybox:latest
+ - nginx:1.13
- consul:0.9.2
- fluent/fluentd:v0.12.42
- gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2
@@ -6,17 +9,18 @@
- gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
- gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8
- gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8
- - gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
+ - gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
- gcr.io/google_containers/pause-amd64:3.0
- k8s.gcr.io/fluentd-gcp:1.30
- kamon/grafana_graphite:3.0
- lyft/envoy:29361deae91575a1d46c7a21e913f19e75622ebe
- - quay.io/calico/cni:v1.11.0
- - quay.io/calico/ctl:v1.6.1
- - quay.io/calico/node:v2.6.2
- - quay.io/calico/routereflector:v0.4.0
+ - quay.io/calico/cni:v1.11.4
+ - quay.io/calico/ctl:v1.6.3
+ - quay.io/calico/node:v2.6.8
- quay.io/coreos/etcd:v3.2.4
- - quay.io/coreos/hyperkube:v1.9.2_coreos.0
+ - quay.io/coreos/etcd-operator:v0.7.2
+ - quay.io/coreos/etcd:v3.2.9
+ - gcr.io/google-containers/hyperkube:v1.9.5
- quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
- voltha-cli:latest
- voltha-dashd:latest
diff --git a/install/installer.sh b/install/installer.sh
index 4ace5a0..ff156aa 100755
--- a/install/installer.sh
+++ b/install/installer.sh
@@ -141,7 +141,7 @@
echo -e "${green}Deploying kubernetes${NC}"
# Remove previously created inventory if it exists
- cp -rfp kubespray/inventory kubespray/inventory/voltha
+ cp -rfp kubespray/inventory/sample kubespray/inventory/voltha
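+    # (kubespray v2.5.0 ships its example inventory under inventory/sample)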
# Adjust kubespray configuration
@@ -169,18 +169,29 @@
sed -i -e "s/or is_atomic)/& and skip_downloads == \"false\" /" \
kubespray/roles/kubernetes/preinstall/tasks/main.yml
+ # Configure failover parameters
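+    # A shorter node monitor grace period and pod eviction timeout make the
+    # controller notice a failed node and reschedule its pods more quickly.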
+ sed -i -e "s/kube_controller_node_monitor_grace_period: .*/kube_controller_node_monitor_grace_period: 20s/" \
+ kubespray/roles/kubernetes/master/defaults/main.yml
+ sed -i -e "s/kube_controller_pod_eviction_timeout: .*/kube_controller_pod_eviction_timeout: 30s/" \
+ kubespray/roles/kubernetes/master/defaults/main.yml
+
# Construct node inventory
CONFIG_FILE=kubespray/inventory/voltha/hosts.ini python3 \
kubespray/contrib/inventory_builder/inventory.py $hosts
+ # The inventory builder configures 2 masters.
+    # Due to unstable behaviour, force the use of a single master
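+    # The pipeline below (1) joins the first host onto the [kube-master] line,
+    # (2) drops everything else between [kube-master] and [kube-node], and
+    # (3) splits the surviving header and host back onto separate lines.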
+ cat kubespray/inventory/voltha/hosts.ini \
+ | sed -e ':begin;$!N;s/\(\[kube-master\]\)\n/\1/;tbegin;P;D' \
+ | sed -e '/\[kube-master\].*/,/\[kube-node\]/{//!d}' \
+ | sed -e 's/\(\[kube-master\]\)\(.*\)/\1\n\2\n/' \
+ > kubespray/inventory/voltha/hosts.ini.tmp
+
+ mv kubespray/inventory/voltha/hosts.ini.tmp kubespray/inventory/voltha/hosts.ini
+
ordered_nodes=`CONFIG_FILE=kubespray/inventory/voltha/hosts.ini python3 \
kubespray/contrib/inventory_builder/inventory.py print_ips`
- # The inventory defines
- sed -i -e '/\[kube-master\]/a\
- node3
- ' kubespray/inventory/voltha/hosts.ini
-
echo "[k8s-master]" > ansible/hosts/k8s-master
mkdir -p kubespray/inventory/voltha/host_vars
@@ -208,6 +219,58 @@
--become-method=sudo --become-user root -u voltha \
-i kubespray/inventory/voltha/hosts.ini kubespray/cluster.yml
+ # Now all 3 servers need to be rebooted because of software installs.
+ # Reboot them and wait patiently until they all come back.
+    # Note: this destroys the registry tunnel, which is no longer needed.
+ hList=""
+ for i in $hosts
+ do
+ echo -e "${lBlue}Rebooting cluster hosts${NC}"
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .keys/$i voltha@$i sudo telinit 6
+ hList="$i $hList"
+ done
+
+    # Give the hosts time to shut down so that pings stop working; otherwise the
+    # script falls straight through the next loop and the rest of the install fails.
+ echo -e "${lBlue}Waiting for shutdown${NC}"
+ sleep 5
+
+
+ while [ ! -z "$hList" ];
+ do
+ # Attempt to ping the VMs on the list one by one.
+ echo -e "${lBlue}Waiting for hosts to reboot ${yellow}$hList${NC}"
+ for i in $hList
+ do
+ ping -q -c 1 $i > /dev/null 2>&1
+ ret=$?
+ if [ $ret -eq 0 ]; then
+ ipExpr=`echo $i | sed -e "s/\./[.]/g"`
+ hList=`echo $hList | sed -e "s/$ipExpr//" | sed -e "s/^ //" | sed -e "s/ $//"`
+ fi
+ done
+
+ done
+
+ # Wait for kubernetes to settle after reboot
+ k8sIsUp="no"
+ while [ "$k8sIsUp" == "no" ];
+ do
+        # Probe the kubernetes API port (6443); once any host answers, kubernetes is up.
+ echo -e "${lBlue}Waiting for kubernetes to settle${NC}"
+ for i in $hosts
+ do
+ nc -vz $i 6443 > /dev/null 2>&1
+ ret=$?
+ if [ $ret -eq 0 ]; then
+ k8sIsUp="yes"
+ break
+ fi
+ sleep 1
+ done
+ done
+ echo -e "${lBlue}Kubernetes is up and running${NC}"
+
# Deploy Voltha
ansible-playbook -v ansible/voltha-k8s.yml -i ansible/hosts/k8s-master -e 'deploy_voltha=true'
diff --git a/install/preloadKubernetes.sh b/install/preloadKubernetes.sh
index aab97ba..248fee8 100755
--- a/install/preloadKubernetes.sh
+++ b/install/preloadKubernetes.sh
@@ -9,7 +9,7 @@
# Retrieve stable kubespray repo
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i \
../.vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$ipAddr \
- "git clone --branch v2.4.0 https://github.com/kubernetes-incubator/kubespray.git"
+ "git clone --branch v2.5.0 https://github.com/kubernetes-incubator/kubespray.git"
# Setup a new ansible manifest to only download files
cat <<HERE > download.yml
@@ -29,7 +29,7 @@
# Run the manifest
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i \
../.vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$ipAddr \
- "mkdir -p releases && cd kubespray && ANSIBLE_CONFIG=ansible.cfg ansible-playbook -v -u root -i inventory/local-tests.cfg download.yml"
+ "mkdir -p releases && cd kubespray && ANSIBLE_CONFIG=ansible.cfg ansible-playbook -v -u root -i inventory/local/hosts.ini download.yml"
rtrn=$?
@@ -37,5 +37,3 @@
exit $rtrn
-
-