Added the deployment and initialization of the base services required
for vOLT-HA: consul, zookeeper, and kafka. All services are deployed
as 3-member clusters. voltha is also started as a single instance to
demonstrate that it interacts with the consul cluster and creates the
keys in the KV store as expected.
Updated the documentation to make it crystal clear how the bare metal
server needs to be set up.
This update continues to address VOL-6.

Change-Id: I909f2e70b117589ba7e119a9840a2c810a7991cb
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
index 844c9b4..07916eb 100755
--- a/install/BuildingTheInstaller.md
+++ b/install/BuildingTheInstaller.md
@@ -1,174 +1,183 @@
-# Running the installer

-***

-**++Table of contents++**

-

-[TOC]

-***

-## Set up the Dependencies

-### Bare Metal Setup

-**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded from haing followed the document ``Building a vOLT-HA Virtual Machine  Using Vagrant on QEMU/KVM`` then skip to [Running the Installer](#Building-the-installer).

-

-Start with an installation of Ubuntu16.04LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. When installing the image ensure that both "OpenSSH server" and "Virtualization Machine Host" are chosen in addition to the default "standard system utilities". Once the installation is complete, login to the box and type ``virsh list``. If this doesnt work then you'll need to troubleshoot the installation. If it works, then proceed to the next section.

-

-###Create the base ubuntu/xenial box

-  Though there are some flavors of ubuntu boxes available but they usually have additional features installed or missing so it's best to just create the image from the ubuntu installation iso image.

-  

-  ```

-  

-  voltha> wget http://releases.ubuntu.com/xenial/ubuntu-16.04.2-server-amd64.iso

-  voltha> echo "virt-install -n Ubuntu1604LTS -r 1024 --vcpus=2 --disk size=50 -c ubuntu-16.04.2-server-amd64.iso --accelerate --network network=default,model=virtio --connect=qemu:///system --vnc --noautoconsole -v" > Ubuntu16.04Vm

-  voltha> . Ubuntu16.04Vm

-  voltha> virt-manager

-```

-Once the virt manager opens, open the console of the Ubuntu16.04 VM and follow the installation process.

-When promprompted use the hostname ``vinstall``. Also when prompted you should create one user ``vinstall vinstall`` and use the offered up userid of ``vinstall``. When prompted for the password of the vagrant user, use ``vinstall``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install.

-Once the installation is complete, run the VM and log in as vagrant password vagrant and install the default vagrant key (this can be done one of two ways, through virt-manager and the console or by uing ssh from the hypervisor host, the virt-manager method is shown below):

-```

-vinstall@voltha$ mkdir -p /home/vinstall/.ssh

-vagrant@voltha$ chmod 0700 /home/vinstall/.ssh

-vagrant@voltha$ chown -R vagrant.vagrant /home/vagrant/.ssh

-```

-Also create a .ssh directory for the root user:

-```

-vagrant@voltha$ sudo mkdir /root/.ssh

-```

-Add a vinstall file to /etc/sudoers.d/vinstall with the following:

-```

-vagrant@voltha$ echo "vinstall ALL=(ALL) NOPASSWD:ALL" > tmp.sudo

-vagrant@voltha$ sudo chown root.root tmp.sudo

-vagrant@voltha$ sudo mv tmp.sudo /etc/sudoers.d/vinstall

-```

-Shut down the VM.

-

-```

-vinstall@voltha$ sudo telinit 0

-```

-###Download the voltha tree

-The voltha tree contains the Vagrant files required to build a multitude of VMs required to both run, test, and also to deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required.

-```

-voltha> sudo apt-get install repo

-voltha> mkdir cord

-voltha>  sudo ln -s /cord `pwd`/cord

-voltha>  cd cord

-voltha>  repo init -u https://gerrit.opencord.org/manifest -g voltha

-voltha>  repo sync

-```

-

-### Run vagrant to Create a Voltha VM

-***Note:*** If you haven't done so, please follow the steps provided in the document `BulindingVolthaOnVagrantUsingKVM.md` to create the base voltha VM box for vagrant.

-

-First create the voltah VM using vagrant.

-```

-voltha> vagrant up

-```

-Finally, if required, log into the vm using vagrant.

-```

-voltha> vagrant ssh

-```

-## Building the Installer

-There are 2 different ways to build the installer in production and in test mode.

-### Building the installer in test mode

-Test mode is useful for testers and developers. The installer build script will also launch 3 vagrant VMs that will be install targets and configure the installer to use them without having to supply passwords for each. This speeds up the subsequent install/test cycle.

-

-To build the installer in test mode go to the installer directory

-``cd /cord/incubator/voltha/install``

-then type

-``./CreateInstaller.sh test``.

-

-You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.

-

-This will take a while so doing something else in the mean-time is recommended.

-

-### Running the installer in test mode

-Once the creation has completed determine the ip address of the VM with the following virsh command:

-``virsh domifaddr vInstaller``

-using the ip address provided log into the installer using

-``ssh -i key.pem vinstall@<ip-address-from-above>``

-

-Finally, start the installer.

-``./installer.sh``

-In test mode it'll just launch with no prompts and install voltha on the 3 VMs created at the same time that the installer was created (ha-serv1, ha-serv2, and ha-serv3). This step takes quite a while since 3 different voltha installs are taking place, one for each of the 3 VMs in the cluster.

-

-Once the installation completes, determine the ip-address of one of the cluster VMs.

-``virsh domifaddr ha-serv1``

-You can use ``ha-serv2`` or ``ha-serv3`` in place of ``ha-serv1`` above. Log into the VM

-``ssh voltha@<ip-address-from-above>``

-The password is `voltha`.

-Once logged into the voltha instance follow the usual procedure to start voltha and validate that it's operating correctly.

-

-### Building the installer in production mode

-Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the private key to access the vM, and a bootstrap script that sets up the VM, fires it up, and logs into it.

-

-The archive file and a script called ``installVoltha.sh`` are both placed in a directory named ``volthaInstaller``. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named ``installer.part<XX>`` where XX is a number starting at 00 and going as high as necessary based on the archive size.

-

-To build the installer in production mode type:

-``./CreateInstaller.sh``

-

-You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.

-

-This will take a while and when it completes a directory name ``volthaInstaller`` will have been created. Copy all the files in this directory to a USB Flash drive or other portable media and carry to the installation site.

-

-## Installing Voltha

-

-To install voltha access to a bare metal server running Ubuntu Server 16.04LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./installVoltha.sh`` which should produce the output shown after the *Note*:

-

-***Note:*** If you are a tester and are installing to 3 vagrant VMs on the same server as the installer is running and haven't used test mode, please add the network name that your 3 VMs are using to the the `installVoltha.sh` command. In other words your command should be `./installVoltha.sh <network-name>`. The network name for a vagrant VM is typically `vagrant-libvirt` under QEMU/KVM. If in doubt type `virsh net-list` and verify this. If a network is not provided then the `default` network is used and the target machines should be reachable directly from the installer.

-```

-Checking for the installer archive installer.tar.bz2

-Checking for the installer archive parts installer.part*

-Creating the installer archive installer.tar.bz2

-Extracting the content of the installer archive installer.tar.bz2

-Starting the installer{NC}

-Defining the  vInstaller virtual machine

-Creating the storage for the vInstaller virtual machine

-Pool installer created

-

-Vol vInstaller.qcow2 created from input vol vInstaller.qcow2

-

-Pool installer destroyed

-

-Domain vInstaller defined from tmp.xml

-

-Starting the vInstaller virtual machine

-Waiting for the VM's IP address

-Waiting for the VM's IP address

-Waiting for the VM's IP address

-             .

-             :

-Waiting for the VM's IP address

-Warning: Permanently added '192.168.122.24' (ECDSA) to the list of known hosts.

-Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)

-

- * Documentation:  https://help.ubuntu.com

- * Management:     https://landscape.canonical.com

- * Support:        https://ubuntu.com/advantage

-

-7 packages can be updated.

-7 updates are security updates.

-

-

-Last login: Tue Jun  6 16:55:48 2017 from 192.168.121.1

-vinstall@vinstall:~$

-```

-

-This might take a little while but once the prompt is presented there are 2 values that need to be configured after which the installer can be launched. (***Note:*** This will change over time as the HA solution evolves. As this happens this document will be updated)

-

-Use your favorite editor to edit the file ``install.cfg`` which should contain the following lines:

-```

-# Configure the hosts that will make up the cluster

-# hosts="192.168.121.195 192.168.121.2 192.168.121.215"

-#

-# Configure the user name to initilly log into those hosts as.

-# iUser="vagrant"

-```

-

-Uncomment the `hosts` line and replace the list of ip addresses on the line with the list of ip addresses for your deployment. These can be either VMs or bare metal servers, it makes no difference to the installer.

-

-Next uncomment the iUser line and change the userid that will be used to log into the target hosts (listed above) and save the file. The installer will create a new user named voltha on each of those hosts and use that account to complete the installation.

-

-Make sure that all the hosts that are being installed to have Ubuntu server 16.04LTS installed with OpenSSH. Also make sure that they're all reachable by attempting an ssh login to each with the user id provided on the iUser line.

-

-Once `install.cfg` file has been updated and reachability has been confirmed, start the installation with the command `./installer.sh`.

-

-

-Once launched, the installer will prompt for the password 3 times for each of the hosts the installation is being performed on. Once these have been provided, the installer will proceed without prompting for anything else. 
\ No newline at end of file
+# Running the installer
+***
+**++Table of contents++**
+
+[TOC]
+***
+## Set up the Dependencies
+### Bare Metal Setup
+The bare metal machine MUST have ubuntu server 16.04 LTS installed with the following packages (and only the following packages) selected during installation:
+```
+[*] standard system utilities
+[*] Virtual Machine host
+[*] OpenSSH server
+```
+This will ensure that the user you've defined during the installation can run the virsh shell as a standard user rather than as the root user. This is necessary to ensure the installer software operates as designed. Please ensure that ubuntu **server** is installed and ***NOT*** ubuntu desktop.
+![Ubuntu Installer Graphic](file:///C:Users/sslobodr/Documents/Works In Progress/2017/voltha/UbuntuInstallLaptop.png)
+**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded after having followed the document ``Building a vOLT-HA Virtual Machine Using Vagrant on QEMU/KVM``, then skip to [Building the Installer](#Building-the-installer).*
+
+Start with a clean installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization, selecting the packages outlined above. How to determine this is beyond the scope of this document. Once the installation is complete, log into the box and type ``virsh list``. If this doesn't work then you'll need to troubleshoot the installation. If it works, then proceed to the next section. Please note, use exactly `virsh list` ***NOT*** `sudo virsh list`. If you must use the `sudo` command then the installation was not performed properly and should be repeated. If you're familiar with the KVM environment there are steps to solve this and other issues, but this is also beyond the scope of this document.
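+A successful check looks something like the following (exact formatting varies with the libvirt version); on a freshly installed host the list of running VMs will normally be empty:
+```
+voltha> virsh list
+ Id    Name                           State
+----------------------------------------------------
+
+```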
+
+### Create the base ubuntu/xenial box
+Though there are some flavors of ubuntu boxes available, they usually have additional features installed. It is essential for the installer to start from a base install of ubuntu with absolutely no other software installed. To ensure that the base image for the installer is a clean ubuntu server install and nothing else, it is best to create the image from the ubuntu installation iso image.
+The primary reason for this requirement is that the installer must determine all the packages that were installed. The only way to guarantee that this list is correct is to start from a well-known image.
+```
+voltha> wget http://releases.ubuntu.com/xenial/ubuntu-16.04.2-server-amd64.iso
+voltha> echo "virt-install -n Ubuntu1604LTS -r 1024 --vcpus=2 --disk size=50 -c ubuntu-16.04.2-server-amd64.iso --accelerate --network network=default,model=virtio --connect=qemu:///system --vnc --noautoconsole -v" > Ubuntu16.04Vm
+voltha> . Ubuntu16.04Vm
+voltha> virt-manager
+```
+Once the virt manager opens, open the console of the Ubuntu 16.04 VM and follow the installation process.
+When prompted use the hostname ``vinstall``. Also when prompted, create one user ``vinstall`` and use the offered up userid of ``vinstall``. When prompted for the password of the ``vinstall`` user, use ``vinstall``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install.
+Once the installation is complete, run the VM and log in as ``vinstall`` with password ``vinstall`` and install the default vagrant key (this can be done in one of two ways: through virt-manager and the console, or by using ssh from the hypervisor host; the virt-manager method is shown below):
+```
+vinstall@voltha$ mkdir -p /home/vinstall/.ssh
+vinstall@voltha$ chmod 0700 /home/vinstall/.ssh
+vinstall@voltha$ chown -R vinstall.vinstall /home/vinstall/.ssh
+```
+Also create a .ssh directory for the root user:
+```
+vinstall@voltha$ sudo mkdir /root/.ssh
+```
+Create the file ``/etc/sudoers.d/vinstall`` with the following content:
+```
+vinstall@voltha$ echo "vinstall ALL=(ALL) NOPASSWD:ALL" > tmp.sudo
+vinstall@voltha$ sudo chown root.root tmp.sudo
+vinstall@voltha$ sudo mv tmp.sudo /etc/sudoers.d/vinstall
+```
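+To confirm that passwordless sudo is now in effect for the ``vinstall`` user, a quick check such as the following can be run (the echoed message is only illustrative):
+```
+vinstall@voltha$ sudo -n true && echo "passwordless sudo OK"
+```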
+Shut down the VM.
+
+```
+vinstall@voltha$ sudo telinit 0
+```
+### Download the voltha tree
+The voltha tree contains the Vagrant files required to build the various VMs needed to run, test, and deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required.
+```
+voltha> sudo apt-get install repo
+voltha> mkdir cord
+voltha>  sudo ln -s `pwd`/cord /cord
+voltha>  cd cord
+voltha>  repo init -u https://gerrit.opencord.org/manifest -g voltha
+voltha>  repo sync
+```
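+Once the sync completes, the installer directory used in the sections below should exist. As a quick check (the directory contains more files than the few shown here):
+```
+voltha> ls /cord/incubator/voltha/install
+BuildingTheInstaller.md  CreateInstaller.sh  installVoltha.sh  installer.sh  ...
+```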
+
+### Run vagrant to Create a Voltha VM
+***Note:*** If you haven't done so, please follow the steps provided in the document `BulindingVolthaOnVagrantUsingKVM.md` to create the base voltha VM box for vagrant.
+
+First create the voltha VM using vagrant.
+```
+voltha> vagrant up
+```
+Finally, if required, log into the VM using vagrant.
+```
+voltha> vagrant ssh
+```
+## Building the Installer
+There are 2 different ways to build the installer: in production mode and in test mode.
+### Building the installer in test mode
+Test mode is useful for testers and developers. The installer build script will also launch 3 vagrant VMs to serve as install targets and will configure the installer to use them without having to supply passwords for each. This speeds up the subsequent install/test cycle.
+
+To build the installer in test mode go to the installer directory
+``cd /cord/incubator/voltha/install``
+then type
+``./CreateInstaller.sh test``.
+
+You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.
+
+This will take a while so doing something else in the meantime is recommended.
+
+### Running the installer in test mode
+Once the creation has completed, determine the IP address of the VM with the following virsh command:
+``virsh domifaddr vInstaller``
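+The output will look something like the following (the interface name, MAC address, and address will differ on your system); the address in the ``Source`` column is the one to use:
+```
+voltha> virsh domifaddr vInstaller
+ Name       MAC address          Protocol     Source
+------------------------------------------------------------------------------
+ vnet0      52:54:00:2c:3f:a1    ipv4         192.168.122.24/24
+```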
+Using the IP address provided, log into the installer using
+``ssh -i key.pem vinstall@<ip-address-from-above>``
+
+Finally, start the installer.
+``./installer.sh``
+In test mode it'll just launch with no prompts and install voltha on the 3 VMs created at the same time that the installer was created (ha-serv1, ha-serv2, and ha-serv3). This step takes quite a while since 3 different voltha installs are taking place, one for each of the 3 VMs in the cluster.
+
+Once the installation completes, determine the IP address of one of the cluster VMs.
+``virsh domifaddr ha-serv1``
+You can use ``ha-serv2`` or ``ha-serv3`` in place of ``ha-serv1`` above. Log into the VM:
+``ssh voltha@<ip-address-from-above>``
+The password is `voltha`.
+Once logged into the voltha instance follow the usual procedure to start voltha and validate that it's operating correctly.
+
+### Building the installer in production mode
+Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the private key to access the VM, and a bootstrap script that sets up the VM, fires it up, and logs into it.
+
+The archive file and a script called ``installVoltha.sh`` are both placed in a directory named ``volthaInstaller``. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named ``installer.part<XX>`` where XX is a number starting at 00 and going as high as necessary based on the archive size.
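+For example, for an archive larger than 2G the ``volthaInstaller`` directory might contain something like the following (the number of parts depends on the size of the image):
+```
+volthaInstaller/
+    installVoltha.sh
+    installer.part00
+    installer.part01
+    installer.part02
+```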
+
+To build the installer in production mode type:
+``./CreateInstaller.sh``
+
+You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.
+
+This will take a while and when it completes a directory named ``volthaInstaller`` will have been created. Copy all the files in this directory to a USB flash drive or other portable media and carry it to the installation site.
+
+## Installing Voltha
+
+Installing voltha requires access to a bare metal server running Ubuntu Server 16.04 LTS with QEMU/KVM virtualization and OpenSSH installed. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./installVoltha.sh`` which should produce the output shown after the *Note*:
+
+***Note:*** If you are a tester and are installing to 3 vagrant VMs on the same server as the installer is running and haven't used test mode, please add the network name that your 3 VMs are using to the `installVoltha.sh` command. In other words your command should be `./installVoltha.sh <network-name>`. The network name for a vagrant VM is typically `vagrant-libvirt` under QEMU/KVM. If in doubt type `virsh net-list` and verify this. If a network is not provided then the `default` network is used and the target machines should be reachable directly from the installer.
+```
+Checking for the installer archive installer.tar.bz2
+Checking for the installer archive parts installer.part*
+Creating the installer archive installer.tar.bz2
+Extracting the content of the installer archive installer.tar.bz2
+Starting the installer
+Defining the  vInstaller virtual machine
+Creating the storage for the vInstaller virtual machine
+Pool installer created
+
+Vol vInstaller.qcow2 created from input vol vInstaller.qcow2
+
+Pool installer destroyed
+
+Domain vInstaller defined from tmp.xml
+
+Starting the vInstaller virtual machine
+Waiting for the VM's IP address
+Waiting for the VM's IP address
+Waiting for the VM's IP address
+             .
+             :
+Waiting for the VM's IP address
+Warning: Permanently added '192.168.122.24' (ECDSA) to the list of known hosts.
+Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)
+
+ * Documentation:  https://help.ubuntu.com
+ * Management:     https://landscape.canonical.com
+ * Support:        https://ubuntu.com/advantage
+
+7 packages can be updated.
+7 updates are security updates.
+
+
+Last login: Tue Jun  6 16:55:48 2017 from 192.168.121.1
+vinstall@vinstall:~$
+```
+
+This might take a little while but once the prompt is presented there are 2 values that need to be configured, after which the installer can be launched. (***Note:*** This will change over time as the HA solution evolves. As this happens, this document will be updated.)
+
+Use your favorite editor to edit the file ``install.cfg`` which should contain the following lines:
+```
+# Configure the hosts that will make up the cluster
+# hosts="192.168.121.195 192.168.121.2 192.168.121.215"
+#
+# Configure the user name to initially log into those hosts as.
+# iUser="vagrant"
+```
+
+Uncomment the `hosts` line and replace the list of IP addresses on the line with the list of IP addresses for your deployment. These can be either VMs or bare metal servers; it makes no difference to the installer.
+
+Next uncomment the `iUser` line, change the userid that will be used to log into the target hosts (listed above), and save the file; an example of an edited file is shown below. The installer will create a new user named voltha on each of those hosts and use that account to complete the installation.
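+As a purely illustrative example, using the sample addresses above and an initial user of ``vagrant`` (substitute your own addresses and userid), the edited ``install.cfg`` would look like this:
+```
+# Configure the hosts that will make up the cluster
+hosts="192.168.121.195 192.168.121.2 192.168.121.215"
+#
+# Configure the user name to initially log into those hosts as.
+iUser="vagrant"
+```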
+
+Make sure that all the hosts being installed to have Ubuntu Server 16.04 LTS installed with OpenSSH. Also make sure that they're all reachable by attempting an ssh login to each with the userid provided on the `iUser` line.
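+Assuming the sample addresses and the ``vagrant`` userid shown above (again, substitute your own), the reachability check from the installer would look like:
+```
+vinstall@vinstall:~$ ssh vagrant@192.168.121.195
+vinstall@vinstall:~$ ssh vagrant@192.168.121.2
+vinstall@vinstall:~$ ssh vagrant@192.168.121.215
+```
+Each login should prompt for that user's password and land in a shell on the target host.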
+
+Once `install.cfg` file has been updated and reachability has been confirmed, start the installation with the command `./installer.sh`.
+
+
+Once launched, the installer will prompt for the password 3 times for each of the hosts the installation is being performed on. Once these have been provided, the installer will proceed without prompting for anything else. 
diff --git a/install/ansible/group_vars/all b/install/ansible/group_vars/all
index e13d45f..28cd368 100644
--- a/install/ansible/group_vars/all
+++ b/install/ansible/group_vars/all
@@ -35,3 +35,4 @@
   - kamon/grafana_graphite
   - gliderlabs/registrator
   - centurylink/ca-certs
+  - zookeeper
diff --git a/install/ansible/roles/voltha/files/consul_config/base_config.json b/install/ansible/roles/voltha/files/consul_config/base_config.json
new file mode 100644
index 0000000..217fc09
--- /dev/null
+++ b/install/ansible/roles/voltha/files/consul_config/base_config.json
@@ -0,0 +1,9 @@
+{
+	"server": true,
+	"ui": true, 
+	"bootstrap_expect": 3,
+	"client_addr": "0.0.0.0",
+	"disable_update_check": true,
+	"retry_join": ["10.10.10.3", "10.10.10.4", "10.10.10.5"]
+}
+
diff --git a/install/ansible/roles/voltha/tasks/voltha.yml b/install/ansible/roles/voltha/tasks/voltha.yml
index a52b7d9..e55a018 100644
--- a/install/ansible/roles/voltha/tasks/voltha.yml
+++ b/install/ansible/roles/voltha/tasks/voltha.yml
@@ -60,6 +60,13 @@
   when: target == "cluster"

   tags: [voltha]

 

+- name: Configuration files are on the cluster host

+  copy:

+    src: "files/consul_config"

+    dest: "{{ target_voltha_dir }}"

+  when: target == "cluster"

+  tags: [voltha]

+

 - name: Docker containers for Voltha are pulled

   command: docker pull {{ docker_registry }}/{{ item }}

   with_items: "{{ voltha_containers }}"

@@ -70,11 +77,12 @@
   with_items: "{{ voltha_containers }}"

   when: target == "cluster"

   tags: [voltha]

-- name: Old docker image tags are removed

-  command: docker rmi {{ docker_registry }}/{{ item }}

-  with_items: "{{ voltha_containers }}"

-  when: target == "cluster"

-  tags: [voltha]

+#- name: Old docker image tags are removed

+#  command: docker rmi {{ docker_registry }}/{{ item }}

+#  with_items: "{{ voltha_containers }}"

+#  when: target == "cluster"

+#  tags: [voltha]

+

 

 # Update the insecure registry to reflect the current installer.

 # The installer name can change depending on whether test mode

@@ -115,3 +123,48 @@
   with_items: "{{ voltha_containers }}"

   when: target == "installer"

   tags: [voltha]

+

+- name: consul overlay network exists

+  command: docker network create --driver overlay --subnet 10.10.10.0/29 consul_net

+  when: target == "startup"

+  tags: [voltha]

+

+- name: kafka overlay network exists

+  command: docker network create --driver overlay --subnet 10.10.11.0/24 kafka_net

+  when: target == "startup"

+  tags: [voltha]

+

+- name: voltha overlay network exists

+  command: docker network create --driver overlay --subnet 10.10.12.0/24 voltha_net

+  when: target == "startup"

+  tags: [voltha]

+

+- name: consul cluster is running

+  command: docker service create --name consul --network consul_net --network voltha_net -e 'CONSUL_BIND_INTERFACE=eth0' --mode global --publish "8300:8300" --publish "8400:8400" --publish "8500:8500" --publish "8600:8600/udp" --mount type=bind,source=/cord/incubator/voltha/consul_config,destination=/consul/config consul agent -config-dir /consul/config

+  when: target == "startup"

+  tags: [voltha]

+

+- name: zookeeper node zk1 is running

+  command: docker service create --name zk1 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=1' -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888" zookeeper

+  when: target == "startup"

+  tags: [voltha]

+

+- name: zookeeper node zk2 is running

+  command: docker service create --name zk2 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=2' -e "server.1=zk1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zk3:2888:3888" zookeeper

+  when: target == "startup"

+  tags: [voltha]

+

+- name: zookeeper node zk3 is running

+  command: docker service create --name zk3 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=3' -e "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=0.0.0.0:2888:3888" zookeeper

+  when: target == "startup"

+  tags: [voltha]

+

+- name: kafka is running

+  command: docker service create --name kafka --network voltha_net  -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181" -e "KAFKA_HEAP_OPTS=-Xmx256M -Xms128M" --mode global --publish "9092:9092" wurstmeister/kafka

+  when: target == "startup"

+  tags: [voltha]

+

+- name: voltha is running on a single host for testing

+  command: docker service create --name voltha_core --network voltha_net cord/voltha voltha/voltha/main.py -v --consul=consul:8500 --kafka=kafka

+  when: target == "startup"

+  tags: [voltha]

diff --git a/install/ansible/swarm-master-backup.yml b/install/ansible/swarm-master-backup.yml
deleted file mode 100644
index 1e8eb3b..0000000
--- a/install/ansible/swarm-master-backup.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-- hosts: swarm-master-backup
-  remote_user: voltha
-  serial: 1
-  vars:
-    target: swarm-master-backup
-  roles:
-    - swarm
diff --git a/install/ansible/swarm-master.yml b/install/ansible/swarm-master.yml
deleted file mode 100644
index 2c956d2..0000000
--- a/install/ansible/swarm-master.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-- hosts: swarm-master
-  remote_user: voltha
-  serial: 1
-  vars:
-    target: swarm-master
-  roles:
-    - swarm
diff --git a/install/ansible/swarm.yml b/install/ansible/swarm.yml
new file mode 100644
index 0000000..5ed7c8f
--- /dev/null
+++ b/install/ansible/swarm.yml
@@ -0,0 +1,14 @@
+- hosts: swarm-master
+  remote_user: voltha
+  serial: 1
+  vars:
+    target: swarm-master
+  roles:
+    - swarm
+- hosts: swarm-master-backup
+  remote_user: voltha
+  serial: 1
+  vars:
+    target: swarm-master-backup
+  roles:
+    - swarm
diff --git a/install/ansible/voltha.yml b/install/ansible/voltha.yml
index b9a2c24..8216c6b 100644
--- a/install/ansible/voltha.yml
+++ b/install/ansible/voltha.yml
@@ -9,4 +9,10 @@
     - docker
     - docker-compose
     - voltha
-#    - java
+- hosts: swarm-master
+  remote_user: voltha
+  serial: 1
+  vars:
+    target: startup
+  roles:
+    - voltha
diff --git a/install/installVoltha.sh b/install/installVoltha.sh
index a5632cd..49cab5a 100755
--- a/install/installVoltha.sh
+++ b/install/installVoltha.sh
@@ -41,6 +41,6 @@
 # Extract the installer files and bootstrap the installer
 echo -e "${lBlue}Extracting the content of the installer archive ${lCyan}$installerArchive${NC}"
 tar xjf $installerArchive
-echo -e "${lBlue}Starting the installer{NC}"
+echo -e "${lBlue}Starting the installer${NC}"
 chmod u+x BootstrapInstaller.sh
 ./BootstrapInstaller.sh "$@"
diff --git a/install/installer.sh b/install/installer.sh
index 9c5d708..702417a 100755
--- a/install/installer.sh
+++ b/install/installer.sh
@@ -100,6 +100,10 @@
 done
 # Add the dependent software list to the cluster variables
 echo -e "${lBlue}Setting up dependent software${NC}"
+# Delete any grub updates since the boot disk is almost
+# guaranteed not to be the same device as the installer.
+mkdir grub_updates
+sudo mv deb_files/*grub* grub_updates
 echo "deb_files:" >> ansible/group_vars/all
 for i in deb_files/*.deb
 do
@@ -167,6 +171,7 @@
                 echo  $i >> ansible/hosts/swarm-master-backup
         fi
 done
-sudo ansible-playbook ansible/swarm-master.yml -i ansible/hosts/swarm-master
-sudo ansible-playbook ansible/swarm-master-backup.yml -i ansible/hosts/swarm-master-backup
+sudo ansible-playbook ansible/swarm.yml -i ansible/hosts/swarm-master
+sudo ansible-playbook ansible/swarm.yml -i ansible/hosts/swarm-master-backup
+sudo ansible-playbook ansible/voltha.yml -i ansible/hosts/swarm-master