Updated the installer such that in test mode all built containers are
transferred to the install targets. This allows new containers to be
tested before they are added to the production list.
Changed the order of some install steps in test mode so that required
information is prompted for earlier, allowing the operator to walk away
from the process sooner.
Updated the docker service instantiation to use the newly submitted
compose files rather than the command line.
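
The consul and kafka services, for example, are now brought up with
docker stack deploys from the copied compose files instead of long
"docker service create" command lines. Abbreviated illustration only;
the exact commands are in the ansible role changes below, and
<target_voltha_dir> stands in for the corresponding ansible variable:

    # old: one hand-built command line per service
    docker service create --name consul --network consul_net --network voltha_net ... consul agent -config-dir /consul/config
    # new: deploy each service stack from its compose file
    docker stack deploy -c <target_voltha_dir>/compose/docker-compose-consul-cluster.yml consul
    docker stack deploy -c <target_voltha_dir>/compose/docker-compose-kafka-cluster.yml kafka
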
Once the installer is built in test mode it is started automatically
rather than requiring intervention, and it exits once it has completed
the deployment of the cluster. This will be useful when integrating into
the automated build process later.
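
The automatic start is done by seeding a one-shot ~/.bash_login on the
installer VM just before the final ssh login (see CreateInstaller.sh
below); it runs the installer, removes itself, and then logs out:

    ~/installer.sh
    rm ~/.bash_login
    logout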

Change-Id: Id978cae69b53c605abefeb9b55f2671f9b9cfd20
diff --git a/install/BuildVoltha.sh b/install/BuildVoltha.sh
index b37a998..bcfb955 100755
--- a/install/BuildVoltha.sh
+++ b/install/BuildVoltha.sh
@@ -47,20 +47,19 @@
 
 
 # Run all the build commands
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$ipAddr "cd /cord/incubator/voltha && . env.sh && make fetch && make production" | tee voltha_build.tmp
-
-rtrn=$#
-
-if [ $rtrn -ne 0 ]; then
-	rm -f voltha_build.tmp
-	exit 1
+if [ $# -eq 1 -a "$1" == "test" ]; then
+	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i \
+		.vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$ipAddr \
+		"cd /cord/incubator/voltha && . env.sh && make fetch && make build"
+	rtrn=$?
+else
+	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i \
+		.vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$ipAddr \
+		"cd /cord/incubator/voltha && . env.sh && make fetch && make production"
+	rtrn=$?
 fi
 
-egrep 'Makefile:[0-9]+: recipe for target .* failed' voltha_build.tmp
+echo "Build return code: $rtrn"
 
-rtrn=$#
-rm -f voltha_build.tmp
-if [ $rtrn -eq 0 ]; then
-	# An error occured, notify the caller
-	exit 1
-fi
+exit $rtrn
+
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
index f8779d3..4d040ed 100755
--- a/install/BuildingTheInstaller.md
+++ b/install/BuildingTheInstaller.md
@@ -55,7 +55,7 @@
 ```
 ###Download the voltha tree
 The voltha tree contains the Vagrant files required to build a multitude of VMs required to both run, test, and also to deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required. If you haven't done so perviously, do the following.
-```
+
 Create a .gitconfig file using your favorite editor and add the following:
 ```
 # This is Git's per-user configuration file.
@@ -69,7 +69,6 @@
 [push]
         default = simple
 
-```
 
 voltha> sudo apt-get install repo
 voltha> mkdir cord
@@ -125,6 +124,7 @@
 
 Also please change the value of the `cord_home` variable in the `install/ansible/group_vars/all` to refer to the location of your cord directory. This is usually in your home directory but it can be anywhere so the installer can't guess at it.
 
+
 Also destroy any running voltha VM by first ensuring your config file `settings.vagrant.yaml` is set as specified above then peforming the following:
 
 ```
@@ -145,22 +145,27 @@
 
 This will take a while so doing something else in the mean-time is recommended.
 
-### Running the installer in test mode
-Once the creation has completed determine the ip address of the VM with the following virsh command:
-``virsh domifaddr vInstaller``
-using the ip address provided log into the installer using
-``ssh -i key.pem vinstall@<ip-address-from-above>``
-
-Finally, start the installer.
-``./installer.sh``
-In test mode it'll just launch with no prompts and install voltha on the 3 VMs created at the same time that the installer was created (ha-serv1, ha-serv2, and ha-serv3). This step takes quite a while since 3 different voltha installs are taking place, one for each of the 3 VMs in the cluster.
-
 Once the installation completes, determine the ip-address of one of the cluster VMs.
-``virsh domifaddr ha-serv1``
-You can use ``ha-serv2`` or ``ha-serv3`` in place of ``ha-serv1`` above. Log into the VM
-``ssh voltha@<ip-address-from-above>``
+``virsh domifaddr install_ha-serv<yourId>-1``
+You can use ``install_ha-serv<yourId>-2`` or ``install_ha-serv<yourId>-3`` in place of ``install_ha-serv<yourId>-1`` above. ``<yourId>`` can be determined by issuing the command:
+```
+voltha> id -u
+```
+Log into the VM
+```
+voltha> ssh voltha@<ip-address-from-above>
+```
 The password is `voltha`.
-Once logged into the voltha instance follow the usual procedure to start voltha and validate that it's operating correctly.
+Once logged into the voltha instance you can validate that the instance is running correctly.
+
+The install process adds information to the build tree which needs to be cleaned up between runs. To clean up after you're done issue the following:
+```
+voltha> cd ~/cord/incubator/voltha/install
+voltha> ./cleanup
+```
+
+This step will not destroy the VMs; it only removes files created during the install process to facilitate debugging. As the installer stabilizes, this may be done automatically at the end of an installation run.
+
 
 ### Building the installer in production mode
 Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the private key to access the vM, and a bootstrap script that sets up the VM, fires it up, and logs into it.
@@ -176,6 +181,8 @@
 
 ## Installing Voltha
 
+The targets for the installation can be either bare metal servers or VMs running Ubuntu Server 16.04 LTS. The userid used for installation (see below) must have sudo rights. This is automatic for the user created during the Ubuntu installation. If you've created another user to use for installation, please ensure they have sudo rights.
+
 To install voltha access to a bare metal server running Ubuntu Server 16.04LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./deployInstaller.sh`` which should produce the output shown after the *Note*:
 
 ***Note:*** If you are a tester and are installing to 3 vagrant VMs on the same server as the installer is running and haven't used test mode, please add the network name that your 3 VMs are using to the the `deployInstaller.sh` command. In other words your command should be `./deployInstaller.sh <network-name>`. The network name for a vagrant VM is typically `vagrant-libvirt` under QEMU/KVM. If in doubt type `virsh net-list` and verify this. If a network is not provided then the `default` network is used and the target machines should be reachable directly from the installer.
@@ -236,5 +243,4 @@
 
 Once `install.cfg` file has been updated and reachability has been confirmed, start the installation with the command `./installer.sh`.
 
-
 Once launched, the installer will prompt for the password 3 times for each of the hosts the installation is being performed on. Once these have been provided, the installer will proceed without prompting for anything else. 
diff --git a/install/CreateInstaller.sh b/install/CreateInstaller.sh
index 7f06c09..cabc1e8 100755
--- a/install/CreateInstaller.sh
+++ b/install/CreateInstaller.sh
@@ -47,24 +47,22 @@
 fi
 unset vInst
 
-# Verify if this is intended to be a test environment, if so start 3 VMs
-# to emulate the production installation cluster.
+# Verify if this is intended to be a test environment, if so
+# configure the 3 VMs which will be started later to emulate
+# the production installation cluster.
 if [ $# -eq 1 -a "$1" == "test" ]; then
-	echo -e "${lBlue}Testing, create the ${lCyan}ha-serv${lBlue} VMs${NC}"
+	echo -e "${lBlue}Testing, configure the ${lCyan}ha-serv${lBlue} VMs${NC}"
 	# Update the vagrant settings file
 	sed -i -e '/server_name/s/.*/server_name: "ha-serv'${uId}'-"/' settings.vagrant.yaml
 	sed -i -e '/docker_push_registry/s/.*/docker_push_registry: "vinstall'${uId}':5000"/' ansible/group_vars/all
 	sed -i -e "/vinstall/s/vinstall/vinstall${uId}/" ../ansible/roles/docker/templates/daemon.json
 
 	# Set the insecure registry configuration based on the installer hostname
-	echo -e "${lBlue}Set up the inescure registry hostname ${lCyan}vinstall${uId}${NC}"
+	echo -e "${lBlue}Set up the insecure registry config for hostname ${lCyan}vinstall${uId}${NC}"
 	echo '{' > ansible/roles/voltha/templates/daemon.json
 	echo '"insecure-registries" : ["vinstall'${uId}':5000"]' >> ansible/roles/voltha/templates/daemon.json
 	echo '}' >> ansible/roles/voltha/templates/daemon.json
 
-	vagrant destroy ha-serv${uId}-{1,2,3}
-	vagrant up ha-serv${uId}-{1,2,3}
-	./devSetHostList.sh
 	# Change the installer name
 	iVmName="vInstaller${uId}"
 else
@@ -73,7 +71,7 @@
         # which serve as documentation.
 	sed -i -e '/^#/!d' install.cfg
 	# Set the insecure registry configuration based on the installer hostname
-	echo -e "${lBlue}Set up the inescure registry hostname ${lCyan}vinstall${uId}${NC}"
+	echo -e "${lBlue}Set up the insecure registry config for hostname ${lCyan}vinstall${uId}${NC}"
 	sed -i -e '/docker_push_registry/s/.*/docker_push_registry: "vinstall:5000"/' ansible/group_vars/all
 	echo '{' > ansible/roles/voltha/templates/daemon.json
 	echo '"insecure-registries" : ["vinstall:5000"]' >> ansible/roles/voltha/templates/daemon.json
@@ -171,12 +169,18 @@
 ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vinstall@$ipAddr 
 
 # If we're in test mode, change the hostname of the installer vm
+# also start the 3 vagrant target VMs
 if [ $# -eq 1 -a "$1" == "test" ]; then
 	echo -e "${lBlue}Test mode, change the installer host name to ${yellow}vinstall${uId}${NC}"
 	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr \
 		sudo hostnamectl set-hostname vinstall${uId}
 	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr \
 		sudo service networking restart
+
+	echo -e "${lBlue}Testing, start the ${lCyan}ha-serv${lBlue} VMs${NC}"
+	vagrant destroy ha-serv${uId}-{1,2,3}
+	vagrant up ha-serv${uId}-{1,2,3}
+	./devSetHostList.sh
 fi
 
 # Ensure that the voltha VM is running so that images can be secured
@@ -186,11 +190,11 @@
 if [ -z "$vVM" ]; then
 	if [ $# -eq 1 -a "$1" == "test" ]; then
 		./BuildVoltha.sh $1
-		rtrn=$#
+		rtrn=$?
 	else
 		# Default to installer mode 
 		./BuildVoltha.sh install
-		rtrn=$#
+		rtrn=$?
 	fi
 	if [ $rtrn -ne 0 ]; then
 		echo -e "${red}Voltha build failed!! ${yellow}Please review the log and correct${lBlue} is running${NC}"
@@ -199,18 +203,24 @@
 fi
 
 # Extract all the image names and tags from the running voltha VM
-# No Don't do this, it's too error prone if the voltha VM is not 
-# built correctly, going with a static list for now.
-#echo -e "${lBlue}Extracting the docker image list from the voltha VM${NC}"
-#volIpAddr=`virsh domifaddr $vVmName${uId} | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
-#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ../.vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$volIpAddr "docker image ls" > images.tmp
-#cat images.tmp | grep -v 5000 | tail -n +2 | awk '{printf("  - %s:%s\n", $1, $2)}' > image-list.cfg
-#rm -f images.tmp
-#sed -i -e '/voltha_containers:/,$d' ansible/group_vars/all
-#echo "voltha_containers:" >> ansible/group_vars/all
-echo -e "${lBlue}Set up the docker image list from ${yellow}containers.cfg${NC}"
-sed -i -e '/voltha_containers:/,$d' ansible/group_vars/all
-cat containers.cfg >> ansible/group_vars/all
+# when running in test mode. This will provide the entire suite
+# of available containers to the VM cluster.
+
+if [ $# -eq 1 -a "$1" == "test" ]; then
+	echo -e "${lBlue}Extracting the docker image list from the voltha VM${NC}"
+	volIpAddr=`virsh domifaddr $vVmName${uId} | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ../.vagrant/machines/voltha${uId}/libvirt/private_key vagrant@$volIpAddr "docker image ls" > images.tmp
+	cat images.tmp | grep -v 5000 | tail -n +2 | awk '{printf("  - %s:%s\n", $1, $2)}' > image-list.cfg
+	rm -f images.tmp
+	sed -i -e '/voltha_containers:/,$d' ansible/group_vars/all
+	echo "voltha_containers:" >> ansible/group_vars/all
+	cat image-list.cfg >> ansible/group_vars/all
+	rm -f image-list.cfg
+else
+	echo -e "${lBlue}Set up the docker image list from ${yellow}containers.cfg${NC}"
+	sed -i -e '/voltha_containers:/,$d' ansible/group_vars/all
+	cat containers.cfg >> ansible/group_vars/all
+fi
 
 # Install python which is required for ansible
 echo -e "${lBlue}Installing python${NC}"
@@ -261,6 +271,36 @@
 
 if [ $# -eq 1 -a "$1" == "test" ]; then
 	echo -e "${lBlue}Testing, the install image ${red}WILL NOT${lBlue} be built${NC}"
+
+
+	# Reboot the installer
+	echo -e "${lBlue}Rebooting the installer${NC}"
+	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo telinit 6
+	# Wait for the host to shut down
+	sleep 5
+
+	ctr=0
+	ipAddr=""
+	while [ -z "$ipAddr" ];
+	do
+		echo -e "${lBlue}Waiting for the VM's IP address${NC}"
+		ipAddr=`virsh domifaddr $iVmName | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+		sleep 3
+		if [ $ctr -eq $ipTimeout ]; then
+			echo -e "${red}Tired of waiting, please adjust the ipTimeout if the VM is slow to start${NC}"
+			exit
+		fi
+		ctr=`expr $ctr + 1`
+	done
+
+	echo -e "${lBlue}Running the installer${NC}"
+	echo "~/installer.sh" > tmp_bash_login
+	echo "rm ~/.bash_login" >> tmp_bash_login
+	echo "logout" >> tmp_bash_login
+	scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem tmp_bash_login vinstall@$ipAddr:.bash_login
+	rm -f tmp_bash_login
+	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr
+
 else
 	echo -e "${lBlue}Building, the install image (this can take a while)${NC}"
 	# Create a temporary directory for all the installer files
diff --git a/install/ansible/roles/cluster-host/tasks/main.yml b/install/ansible/roles/cluster-host/tasks/main.yml
index 76d4840..6280b6f 100644
--- a/install/ansible/roles/cluster-host/tasks/main.yml
+++ b/install/ansible/roles/cluster-host/tasks/main.yml
@@ -69,7 +69,7 @@
     dest: "/var/lib/apt"
   tags: [cluster_host]
 
-- name: Dependent software is installed (this takes about 10 Min, DONT'T PANIC, go for coffee instead)
+- name: Dependent software is installed (this can take about 10 minutes, DON'T PANIC, go for coffee instead)
   command: dpkg -R -i "{{ target_voltha_home }}/deb_files"
 #  ignore_errors: true
   when: target == "cluster"
diff --git a/install/ansible/roles/voltha/tasks/main.yml b/install/ansible/roles/voltha/tasks/main.yml
index 001c837..0f14844 100644
--- a/install/ansible/roles/voltha/tasks/main.yml
+++ b/install/ansible/roles/voltha/tasks/main.yml
@@ -31,18 +31,6 @@
   when: target == "cluster"
   tags: [voltha]
 
-#- name: Required directories are copied
-#  copy:
-#    src: "/home/vinstall/{{ item }}"
-#    dest: "{{ target_voltha_dir }}"
-#    owner: voltha
-#    group: voltha
-#  with_items:
-#    - compose
-#    - nginx_config
-#  when: target == "cluster"
-#  tags: [voltha]
-
 - name: Installer files and directories are copied
   synchronize:
     src: "/home/vinstall/{{ item }}"
@@ -71,16 +59,6 @@
   when: target == "cluster"
   tags: [voltha]
 
-#- name: Nginx module symlink is present
-#  file:
-#    dest: "{{ target_voltha_dir }}/nginx_config/modules"
-#    src: ../../usr/lib/nginx/modules
-#    state: link
-#    follow: no
-#    force: yes
-#  when: target == "cluster"
-#  tags: [voltha]
-
 - name: Nginx statup script is executable
   file:
     path: "{{ target_voltha_dir }}/nginx_config/start_service.sh"
@@ -136,10 +114,10 @@
   when: target == "installer"
   tags: [voltha]
 
-#- name: TEMPORARY RULE TO INSTALL ZOOKEEPER
-#  command: docker pull zookeeper
-#  when: target == "installer"
-#  tags: [voltha]
+- name: TEMPORARY RULE TO INSTALL ZOOKEEPER
+  command: docker pull zookeeper
+  when: target == "installer"
+  tags: [voltha]
 
 - name: Docker images are re-tagged to registry for push
   command: docker tag {{ item }} {{ docker_push_registry }}/{{ item }}
@@ -157,47 +135,17 @@
   when: target == "installer"
   tags: [voltha]
 
-- name: consul overlay network exists
-  command: docker network create --driver overlay --subnet 10.10.10.0/29 consul_net
-  when: target == "startup"
-  tags: [voltha]
-
-- name: kafka overlay network exists
-  command: docker network create --driver overlay --subnet 10.10.11.0/24 kafka_net
-  when: target == "startup"
-  tags: [voltha]
-
 - name: voltha overlay network exists
-  command: docker network create --driver overlay --subnet 10.10.12.0/24 voltha_net
+  command: docker network create --opt encrypted=true --driver overlay --subnet 10.10.12.0/24 voltha_net
   when: target == "startup"
   tags: [voltha]
 
 - name: consul cluster is running
-  command: docker service create --name consul --network consul_net --network voltha_net -e 'CONSUL_BIND_INTERFACE=eth0' --mode global --publish "8300:8300" --publish "8400:8400" --publish "8500:8500" --publish "8600:8600/udp" --mount type=bind,source=/cord/incubator/voltha/consul_config,destination=/consul/config consul agent -config-dir /consul/config
-  when: target == "startup"
-  tags: [voltha]
-
-- name: zookeeper node zk1 is running
-  command: docker service create --name zk1 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=1' -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888" zookeeper
-  when: target == "startup"
-  tags: [voltha]
-
-- name: zookeeper node zk2 is running
-  command: docker service create --name zk2 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=2' -e "server.1=zk1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zk3:2888:3888" zookeeper
-  when: target == "startup"
-  tags: [voltha]
-
-- name: zookeeper node zk3 is running
-  command: docker service create --name zk3 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=3' -e "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=0.0.0.0:2888:3888" zookeeper
+  command: docker stack deploy -c {{ target_voltha_dir }}/compose/docker-compose-consul-cluster.yml consul
   when: target == "startup"
   tags: [voltha]
 
 - name: kafka is running
-  command: docker service create --name kafka --network voltha_net  -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181" -e "KAFKA_HEAP_OPTS=-Xmx256M -Xms128M" --mode global --publish "9092:9092" wurstmeister/kafka
-  when: target == "startup"
-  tags: [voltha]
-
-- name: voltha is running on a single host for testing
-  command: docker service create --name voltha_core --network voltha_net cord/voltha voltha/voltha/main.py -v --consul=consul:8500 --kafka=kafka
+  command: docker stack deploy -c {{ target_voltha_dir }}/compose/docker-compose-kafka-cluster.yml kafka
   when: target == "startup"
   tags: [voltha]
diff --git a/install/ansible/roles/voltha/tasks/voltha.yml b/install/ansible/roles/voltha/tasks/voltha.yml
deleted file mode 100644
index d41f931..0000000
--- a/install/ansible/roles/voltha/tasks/voltha.yml
+++ /dev/null
@@ -1,203 +0,0 @@
-# Note: When the target == "cluster" the installer
-# is running to install voltha in the cluster hosts.
-# Whe the target == "installer" the installer is being
-# created.
-- name: The environment is properly set on login
-  template:
-    src: bashrc.j2
-    dest: "{{ target_voltha_home }}/.bashrc"
-    owner: voltha
-    group: voltha
-    mode: "u=rw,g=r,o=r"
-  when: target == "cluster"
-  tags: [voltha]
-  
-- name: The .bashrc file is executed on ssh login
-  template:
-    src: bash_profile.j2
-    dest: "{{ target_voltha_home }}/.bash_profile"
-    owner: voltha
-    group: voltha
-    mode: "u=rw,g=r,o=r"
-  when: target == "cluster"
-  tags: [voltha]
-  
-- name: Required directory exists
-  file:
-    path: "{{ target_voltha_dir }}"
-    state: directory
-    owner: voltha
-    group: voltha
-  when: target == "cluster"
-  tags: [voltha]
-
-#- name: Required directories are copied
-#  copy:
-#    src: "/home/vinstall/{{ item }}"
-#    dest: "{{ target_voltha_dir }}"
-#    owner: voltha
-#    group: voltha
-#  with_items:
-#    - compose
-#    - nginx_config
-#  when: target == "cluster"
-#  tags: [voltha]
-
-- name: Installer files and directories are copied
-  synchronize:
-    src: "/home/vinstall/{{ item }}"
-    dest: "{{ target_voltha_dir }}"
-    archive: no
-    owner: no
-    perms: no
-    recursive: yes
-    links: yes
-  with_items:
-    - compose
-    - nginx_config
-  when: target == "cluster"
-  tags: [voltha]
-
-- name: Installer directories are owned by voltha
-  file:
-    path: /home/vinstall/{{ item }}
-    owner: voltha
-    group: voltha
-    recurse: yes
-    follow: no
-  with_items:
-    - compose
-    - nginx_config
-  when: target == "cluster"
-  tags: [voltha]
-
-#- name: Nginx module symlink is present
-#  file:
-#    dest: "{{ target_voltha_dir }}/nginx_config/modules"
-#    src: ../../usr/lib/nginx/modules
-#    state: link
-#    follow: no
-#    force: yes
-#  when: target == "cluster"
-#  tags: [voltha]
-
-- name: Nginx statup script is executable
-  file:
-    path: "{{ target_voltha_dir }}/nginx_config/start_service.sh"
-    mode: 0755
-  when: target == "cluster"
-  tags: [voltha]
-
-- name: Configuration files are on the cluster host
-  copy:
-    src: "files/consul_config"
-    dest: "{{ target_voltha_dir }}"
-  when: target == "cluster"
-  tags: [voltha]
-
-- name: Docker containers for Voltha are pulled
-  command: docker pull {{ docker_registry }}/{{ item }}
-  with_items: "{{ voltha_containers }}"
-  when: target == "cluster"
-  tags: [voltha]
-- name: Docker images are re-tagged to expected names
-  command: docker tag {{ docker_registry }}/{{ item }} {{ item }}
-  with_items: "{{ voltha_containers }}"
-  when: target == "cluster"
-  tags: [voltha]
-#- name: Old docker image tags are removed
-#  command: docker rmi {{ docker_registry }}/{{ item }}
-#  with_items: "{{ voltha_containers }}"
-#  when: target == "cluster"
-#  tags: [voltha]
-
-
-# Update the insecure registry to reflect the current installer.
-# The installer name can change depending on whether test mode
-# is being used or not.
-- name: Enable insecure install registry
-  template:
-    src: "{{ docker_daemon_json }}"
-    dest: "{{ docker_daemon_json_dest }}"
-  register: copy_result
-  when: target == "installer"
-  tags: [voltha]
-
-- name: Debain Daemon is reloaded
-  command: systemctl daemon-reload
-  when: copy_result|changed and is_systemd is defined and target == "installer"
-  tags: [voltha]
-
-- name: Debian Docker service is restarted
-  service:
-    name: docker
-    state: restarted
-  when: copy_result|changed or user_result|changed
-  when: target == "installer"
-  tags: [voltha]
-
-- name: TEMPORARY RULE TO INSTALL ZOOKEEPER
-  command: docker pull zookeeper
-  when: target == "installer"
-  tags: [voltha]
-
-- name: Docker images are re-tagged to registry for push
-  command: docker tag {{ item }} {{ docker_push_registry }}/{{ item }}
-  with_items: "{{ voltha_containers }}"
-  when: target == "installer"
-  tags: [voltha]
-- name: Docker containers for Voltha are pushed
-  command: docker push {{ docker_push_registry }}/{{ item }}
-  with_items: "{{ voltha_containers }}"
-  when: target == "installer"
-  tags: [voltha]
-- name: Temporary registry push tags are removed
-  command: docker rmi {{ docker_push_registry }}/{{ item }}
-  with_items: "{{ voltha_containers }}"
-  when: target == "installer"
-  tags: [voltha]
-
-- name: consul overlay network exists
-  command: docker network create --opt encrypted=true --driver overlay --subnet 10.10.10.0/29 consul_net
-  when: target == "startup"
-  tags: [voltha]
-
-- name: kafka overlay network exists
-  command: docker network create --opt encrypted=true --driver overlay --subnet 10.10.11.0/24 kafka_net
-  when: target == "startup"
-  tags: [voltha]
-
-- name: voltha overlay network exists
-  command: docker network create --opt encrypted=true --driver overlay --subnet 10.10.12.0/24 voltha_net
-  when: target == "startup"
-  tags: [voltha]
-
-- name: consul cluster is running
-  command: docker service create --name consul --network consul_net --network voltha_net -e 'CONSUL_BIND_INTERFACE=eth0' --mode global --publish "8300:8300" --publish "8400:8400" --publish "8500:8500" --publish "8600:8600/udp" --mount type=bind,source=/cord/incubator/voltha/consul_config,destination=/consul/config consul agent -config-dir /consul/config
-  when: target == "startup"
-  tags: [voltha]
-
-- name: zookeeper node zk1 is running
-  command: docker service create --name zk1 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=1' -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888" zookeeper
-  when: target == "startup"
-  tags: [voltha]
-
-- name: zookeeper node zk2 is running
-  command: docker service create --name zk2 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=2' -e "server.1=zk1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zk3:2888:3888" zookeeper
-  when: target == "startup"
-  tags: [voltha]
-
-- name: zookeeper node zk3 is running
-  command: docker service create --name zk3 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=3' -e "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=0.0.0.0:2888:3888" zookeeper
-  when: target == "startup"
-  tags: [voltha]
-
-- name: kafka is running
-  command: docker service create --name kafka --network voltha_net  -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181" -e "KAFKA_HEAP_OPTS=-Xmx256M -Xms128M" --mode global --publish "9092:9092" wurstmeister/kafka
-  when: target == "startup"
-  tags: [voltha]
-
-- name: voltha is running on a single host for testing
-  command: docker service create --name voltha_core --network voltha_net cord/voltha voltha/voltha/main.py -v --consul=consul:8500 --kafka=kafka
-  when: target == "startup"
-  tags: [voltha]
diff --git a/install/cleanup.sh b/install/cleanup.sh
index c3e545f..f1e4a68 100755
--- a/install/cleanup.sh
+++ b/install/cleanup.sh
@@ -1,13 +1,18 @@
 #!/bin/bash
 
-rm ansible/host_vars/*
-rm ansible/roles/voltha/templates/daemon.json
+rm -f ansible/host_vars/*
+rm -f ansible/roles/voltha/templates/daemon.json
 rm -fr volthaInstaller-2/
 rm -fr volthaInstaller/
-rm ansible/volthainstall.retry
-rm key.pem
+rm -f ansible/volthainstall.retry
+rm -fr .test
+rm -f key.pem
 sed -i -e '/voltha_containers:/,$d' ansible/group_vars/all
+git checkout ../ansible/roles/docker/templates/daemon.json
 git checkout ansible/hosts/voltha
 git checkout ansible/hosts/installer
 git checkout ../settings.vagrant.yaml
-
+git checkout settings.vagrant.yaml
+git checkout ansible/group_vars/all
+git checkout ansible/roles/docker/templates/docker.cfg
+git checkout install.cfg