Multiple updates. Converted many of the YAML files from "dos" format to
"unix" format. Finalized the creation of the installer file set that
can be copied to a USB flash drive or other removable media. Updated
the config file with comments to make it more user friendly. Deleted
Ansible files that were no longer needed. This update continues to
address the requirements laid out in Jira VOL-6.

Change-Id: I7434d2ec01768121e8d2ec50bb633c515281b37a
diff --git a/BuildingVolthaUsingVagrantOnKVM.md b/BuildingVolthaUsingVagrantOnKVM.md
index befd29d..6009024 100755
--- a/BuildingVolthaUsingVagrantOnKVM.md
+++ b/BuildingVolthaUsingVagrantOnKVM.md
@@ -1,4 +1,10 @@
 # Building a vOLT-HA Virtual Machine  Using Vagrant on QEMU/KVM

+***

+**++Table of Contents++**

+

+[TOC]

+***

+##Bare Metal Setup

 Start with an installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. When installing the image ensure that both "OpenSSH server" and "Virtualization Machine Host" are chosen in addition to the default "standard system utilities". Once the installation is complete, log into the box and type ``virsh list``. If this doesn't work then you'll need to troubleshoot the installation. If it works, proceed to the next section.

 

 ##Create the base ubuntu/xenial box

@@ -84,6 +90,7 @@
 ## Run vagrant to Create a Voltha VM

 First create the voltha VM using vagrant.

 ```

+voltha> cd cord/incubator/voltha

 voltha> vagrant up

 ```

 Finally, log into the vm using vagrant.

diff --git a/install/BootstrapInstaller.sh b/install/BootstrapInstaller.sh
new file mode 100644
index 0000000..c140ab7
--- /dev/null
+++ b/install/BootstrapInstaller.sh
@@ -0,0 +1,56 @@
+#!/bin/bash
+
+baseImage="Ubuntu1604LTS"
+iVmName="vInstaller"
+iVmNetwork="default"
+shutdownTimeout=5
+ipTimeout=20
+
+lBlue='\033[1;34m'
+green='\033[0;32m'
+orange='\033[0;33m'
+NC='\033[0m'
+red='\033[0;31m'
+yellow='\033[1;33m'
+dGrey='\033[1;30m'
+lGrey='\033[1;37m'
+lCyan='\033[1;36m'
+
+wd=`pwd`
+# Update the XML file with the VM information
+echo -e "${lBlue}Defining the  ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+cat vmTemplate.xml | sed -e "s/{{ VMName }}/$iVmName/g" | sed -e "s/{{ VMNetwork }}/$iVmNetwork/g" > tmp.xml
+
+# Copy the vm image to the default storage pool
+echo -e "${lBlue}Creating the storage for the ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+# Copy the vm image to the installer directory
+virsh pool-create-as installer --type dir --target `pwd`
+virsh vol-create-from default ${iVmName}_volume.xml $iVmName.qcow2 --inputpool installer
+virsh pool-destroy installer
+
+# Create the VM using the updated xml file and the uploaded image
+virsh define tmp.xml
+
+rm tmp.xml
+
+# Start the VM; if it's already running just ignore the error
+echo -e "${lBlue}Starting the ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+virsh start $iVmName > /dev/null 2>&1
+
+# Get the VM's IP address
+ctr=0
+ipAddr=""
+while [ -z "$ipAddr" ];
+do
+	echo -e "${lBlue}Waiting for the VM's IP address${NC}"
+	ipAddr=`virsh domifaddr $iVmName | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+	sleep 3
+	if [ $ctr -eq $ipTimeout ]; then
+		echo -e "${red}Tired of waiting, please adjust the ipTimeout if the VM is slow to start${NC}"
+		exit
+	fi
+	ctr=`expr $ctr + 1`
+done
+
+# Log into the vm
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
index 6bc3ade..f2ef518 100755
--- a/install/BuildingTheInstaller.md
+++ b/install/BuildingTheInstaller.md
@@ -128,11 +128,73 @@
 Once logged into the voltha instance follow the usual procedure to start voltha and validate that it's operating correctly.

 

 ### Building the installer in production mode

-Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the debian vagrant file, the private key to access the vM, and a bootstrap script that sets up the VM, fires it up, and logs into it.

+Production mode should be used if the installer being created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM XML metadata file for the VM, the private key to access the VM, and a bootstrap script that sets up the VM, fires it up, and logs into it.

+

+The archive file and a script called ``installVoltha.sh`` are both placed in a directory named ``volthaInstaller``. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named ``installer.part<XX>`` where XX is a number starting at 00 and going as high as necessary based on the archive size.
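+
+The supplied ``installVoltha.sh`` script reassembles any such parts automatically; for reference only, a minimal sketch of the equivalent manual step:
+```
+# Concatenate the parts back into a single archive
+cat installer.part* > installer.tar.bz2
+```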

 

 To build the installer in production mode type:

 ``./CreateInstaller.sh``

 

-This will take a while and when it completes a file named ``VolthaInstallerV1.0.tar.bz2`` will have been created. Put this file on a usb flash drive that's been formatted using the ext4 filesystem and it's ready to be carried to the installation site.

+This will take a while and when it completes a directory named ``volthaInstaller`` will have been created. Copy all the files in this directory to a USB flash drive or other portable media and carry them to the installation site.
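+
+A minimal sketch of copying the installer files onto a USB flash drive, assuming the drive shows up as ``/dev/sdb1`` (adjust the device name and mount point for your system):
+```
+# Mount the USB drive and copy the installer files onto it
+sudo mkdir -p /mnt/usb
+sudo mount /dev/sdb1 /mnt/usb
+cp volthaInstaller/* /mnt/usb/
+sudo umount /mnt/usb
+```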

 

-***More to come on this as things evolve.***
\ No newline at end of file
+## Installing Voltha

+

+To install voltha, access to a bare metal server running Ubuntu Server 16.04 LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements, insert the portable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./installVoltha.sh``, which should produce the following output:

+```

+Checking for the installer archive installer.tar.bz2

+Checking for the installer archive parts installer.part*

+Creating the installer archive installer.tar.bz2

+Extracting the content of the installer archive installer.tar.bz2

+Starting the installer

+Defining the  vInstaller virtual machine

+Creating the storage for the vInstaller virtual machine

+Pool installer created

+

+Vol vInstaller.qcow2 created from input vol vInstaller.qcow2

+

+Pool installer destroyed

+

+Domain vInstaller defined from tmp.xml

+

+Starting the vInstaller virtual machine

+Waiting for the VM's IP address

+Waiting for the VM's IP address

+Waiting for the VM's IP address

+             .

+             :

+Waiting for the VM's IP address

+Warning: Permanently added '192.168.122.24' (ECDSA) to the list of known hosts.

+Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)

+

+ * Documentation:  https://help.ubuntu.com

+ * Management:     https://landscape.canonical.com

+ * Support:        https://ubuntu.com/advantage

+

+7 packages can be updated.

+7 updates are security updates.

+

+

+Last login: Tue Jun  6 16:55:48 2017 from 192.168.121.1

+vinstall@vinstall:~$

+```

+

+This might take a little while, but once the prompt is presented there are 2 values that need to be configured, after which the installer can be launched. (***Note:*** This will change over time as the HA solution evolves. As this happens, this document will be updated.)

+

+Use your favorite editor to edit the file ``install.cfg``, which should contain the following lines:

+```

+# Configure the hosts that will make up the cluster

+# hosts="192.168.121.195 192.168.121.2 192.168.121.215"

+#

+# Configure the user name to initially log into those hosts as.

+# iUser="vagrant"

+```

+

+Uncomment the `hosts` line and replace the list of IP addresses on the line with the IP addresses for your deployment. These can be either VMs or bare metal servers; it makes no difference to the installer.

+

+Next, uncomment the `iUser` line, change the user id that will be used to log into the target hosts (listed above), and save the file. The installer will create a new user named voltha on each of those hosts and use that account to complete the installation.
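+
+For example, an edited ``install.cfg`` might look like the following (the addresses and user name are hypothetical; substitute the values for your deployment):
+```
+# Configure the hosts that will make up the cluster
+hosts="10.10.1.10 10.10.1.11 10.10.1.12"
+#
+# Configure the user name to initially log into those hosts as.
+iUser="ubuntu"
+```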

+

+Make sure that all of the target hosts have Ubuntu Server 16.04 LTS with OpenSSH installed. Also make sure that they're all reachable by attempting an ssh login to each of them with the user id provided on the `iUser` line.
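+
+One way to confirm reachability is to attempt a non-interactive ssh login to each host; a minimal sketch, reusing the hypothetical addresses and user from the example above:
+```
+# Verify that each target host accepts an ssh login for the install user
+for h in 10.10.1.10 10.10.1.11 10.10.1.12; do
+    ssh ubuntu@"$h" exit && echo "$h reachable" || echo "$h NOT reachable"
+done
+```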

+

+Once the `install.cfg` file has been updated and reachability has been confirmed, start the installation with the command `./installer.sh`.

+

+Once launched, the installer will prompt for the password 3 times for each of the hosts the installation is being performed on. After these have been provided, the installer will proceed without prompting for anything else.
\ No newline at end of file
diff --git a/install/ConfigVagrantTesting.sh b/install/ConfigVagrantTesting.sh
deleted file mode 100755
index 3a9b06d..0000000
--- a/install/ConfigVagrantTesting.sh
+++ /dev/null
@@ -1,148 +0,0 @@
-#!/bin/bash
-
-baseImage="Ubuntu1604LTS"
-iVmName="Ubuntu1604LTS-1"
-iVmNetwork="vagrant-libvirt"
-shutdownTimeout=5
-ipTimeout=10
-
-lBlue='\033[1;34m'
-green='\033[0;32m'
-orange='\033[0;33m'
-NC='\033[0m'
-red='\033[0;31m'
-yellow='\033[1;33m'
-dGrey='\033[1;30m'
-lGrey='\033[1;37m'
-lCyan='\033[1;36m'
-
-# Shut down the domain in case it's running.
-#echo -e "${lBlue}Shut down the ${lCyan}$iVmName${lBlue} VM if running${NC}"
-#ctr=0
-#vStat=`virsh list | grep $iVmName`
-#while [ ! -z "$vStat" ];
-#do
-#	virsh shutdown $iVmName
-#	echo "Waiting for $iVmName to shut down"
-#	sleep 2
-#	vStat=`virsh list | grep $iVmName`
-#	ctr=`expr $ctr + 1`
-#	if [ $ctr -eq $shutdownTimeout ]; then
-#		echo -e "${red}Tired of waiting, forcing the VM off${NC}"
-#		virsh destroy $iVmName
-#		vStat=`virsh list | grep $iVmName`
-#	fi
-#done
-
-
-# Delete the VM and ignore any errors should they occur
-#echo -e "${lBlue}Undefining the ${lCyan}$iVmName${lBlue} domain${NC}"
-#virsh undefine $iVmName
-
-# Remove the associated volume
-#echo -e "${lBlue}Removing the ${lCyan}$iVmName.qcow2${lBlue} volume${NC}"
-#virsh vol-delete "${iVmName}.qcow2" default
-
-# Clone the base vanilla ubuntu install
-#echo -e "${lBlue}Cloning the ${lCyan}$baseImage.qcow2${lBlue} to ${lCyan}$iVmName.qcow2${NC}"
-#virsh vol-clone "${baseImage}.qcow2" "${iVmName}.qcow2" default
-
-# Create the xml file and define the VM for virsh
-#echo -e "${lBlue}Defining the  ${lCyan}$iVmName${lBlue} virtual machine${NC}"
-#cat vmTemplate.xml | sed -e "s/{{VMName}}/$iVmName/g" | sed -e "s/{{VMNetwork}}/$iVmNetwork/g" > tmp.xml
-
-#virsh define tmp.xml
-
-#rm tmp.xml
-
-# Start the VMm, if it's already running just ignore the error
-#echo -e "${lBlue}Starting the ${lCyan}$iVmName${lBlue} virtual machine${NC}"
-#virsh start $iVmName > /dev/null 2>&1
-
-
-# Configure ansible's key for communicating with the VMs... Testing only, this will
-# be taken care of by the installer in the future.
-for i in install_ha-serv1 install_ha-serv2 install_ha-serv3
-do
-	ipAddr=`virsh domifaddr $i | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
-	m=`echo $i | sed -e 's/install_//'`
-	echo "ansible_ssh_private_key_file: .vagrant/machines/$m/libvirt/private_key" > ansible/host_vars/$ipAddr
-done
-
-exit
-
-echo -e "${lBlue}Generating the key-pair for communication with the VM${NC}"
-ssh-keygen -f ./key -t rsa -N ''
-
-mv key key.pem
-
-# Clone BashLogin.sh and add the public key to it for later use.
-echo -e "${lBlue}Creating the pre-configuration script${NC}"
-cp BashLogin.sh bash_login.sh
-echo "cat <<HERE > .ssh/authorized_keys" >> bash_login.sh
-cat key.pub >> bash_login.sh
-echo "HERE" >> bash_login.sh
-echo "chmod 400 .ssh/authorized_keys" >> bash_login.sh
-echo "rm .bash_login" >> bash_login.sh
-echo "logout" >> bash_login.sh
-rm key.pub
-
-
-
-# Get the VM's IP address
-ctr=0
-ipAddr=""
-while [ -z "$ipAddr" ];
-do
-	echo -e "${lBlue}Waiting for the VM's IP address${NC}"
-	ipAddr=`virsh domifaddr $iVmName | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
-	sleep 2
-	if [ $ctr -eq $ipTimeout ]; then
-		echo -e "${red}Tired of waiting, please adjust the ipTimeout if the VM is slow to start${NC}"
-		exit
-	fi
-	ctr=`expr $ctr + 1`
-done
-
-echo -e "${lBlue}The IP address is: ${lCyan}$ipAddr${NC}"
-
-# Copy the pre-config file to the VM
-echo -e "${lBlue}Transfering pre-configuration script to the VM${NC}"
-scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no bash_login.sh vinstall@$ipAddr:.bash_login
-
-rm bash_login.sh
-
-# Run the pre-config file on the VM
-echo -e "${lBlue}Running the pre-configuration script on the VM${NC}"
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vinstall@$ipAddr 
-
-# Make sure the VM is up-to-date
-echo -e "${lBlue}Ensure that the VM is up-to-date${NC}"
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update 
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get -y upgrade 
-
-# Create the docker.cfg file in the ansible tree using the VMs IP address
-echo 'DOCKER_OPTS="$DOCKER_OPTS --insecure-registry '$ipAddr':5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=http://'$ipAddr':5001"' > ansible/roles/docker/templates/docker.cfg
-
-# Install ansible on the vm, it'll be used both here and for the install
-echo -e "${lBlue}Installing ansible on the VM${NC}"
-#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get install software-properties-common
-#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-add-repository ppa:ansible/ansible
-#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get -y install ansible
-
-# Copy the ansible files to the VM
-echo -e "${lBlue}Transferring the ansible directory to the VM${NC}"
-scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem -r ansible vinstall@$ipAddr:ansible
-
-# Get the GPG key for docker otherwise ansible calls break
-echo -e "${lBlue}Get the GPG key for docker to allow ansible playbooks to run successfully${NC}"
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr "sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D"
-
-# Bootstrap ansible
-echo -e "${lBlue}Bootstrap ansible${NC}"
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr ansible/scripts/bootstrap_ansible.sh
-
-# Run the ansible script to initialize the installer environment
-echo -e "${lBlue}Run the nsible playbook for the installer${NC}"
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo PYTHONUNBUFFERED=1 ansible-playbook /home/vinstall/ansible/volthainstall.yml -c local
diff --git a/install/CreateInstaller.sh b/install/CreateInstaller.sh
index bd253b6..2ae4b44 100755
--- a/install/CreateInstaller.sh
+++ b/install/CreateInstaller.sh
@@ -1,8 +1,11 @@
 #!/bin/bash
 
 baseImage="Ubuntu1604LTS"
-iVmName="Ubuntu1604LTS-1"
+iVmName="vInstaller"
 iVmNetwork="vagrant-libvirt"
+installerArchive="installer.tar.bz2"
+installerDirectory="volthaInstaller"
+installerPart="installer.part"
 shutdownTimeout=5
 ipTimeout=10
 
@@ -34,10 +37,10 @@
 aInst=`which ansible`
 
 if [ -z "$aInst" ]; then
-	sudo apt-get install software-properties-common
+	sudo apt-get install -y software-properties-common
 	sudo apt-add-repository ppa:ansible/ansible
 	sudo apt-get update
-	sudo apt-get install ansible
+	sudo apt-get install -y ansible
 fi
 unset vInst
 
@@ -58,15 +61,18 @@
 	./devSetHostList.sh
 else
 	rm -fr .test
+	# Clean out the install config file keeping only the commented lines
+        # which serve as documentation.
+	sed -i -e '/^#/!d' install.cfg
 fi
 
 # Shut down the domain in case it's running.
 echo -e "${lBlue}Shut down the ${lCyan}$iVmName${lBlue} VM if running${NC}"
 ctr=0
 vStat=`virsh list | grep $iVmName`
+virsh shutdown $iVmName
 while [ ! -z "$vStat" ];
 do
-	virsh shutdown $iVmName
 	echo "Waiting for $iVmName to shut down"
 	sleep 2
 	vStat=`virsh list | grep $iVmName`
@@ -93,7 +99,7 @@
 
 # Create the xml file and define the VM for virsh
 echo -e "${lBlue}Defining the  ${lCyan}$iVmName${lBlue} virtual machine${NC}"
-cat vmTemplate.xml | sed -e "s/{{VMName}}/$iVmName/g" | sed -e "s/{{VMNetwork}}/$iVmNetwork/g" > tmp.xml
+cat vmTemplate.xml | sed -e "s/{{ VMName }}/$iVmName/g" | sed -e "s/{{ VMNetwork }}/$iVmNetwork/g" > tmp.xml
 
 virsh define tmp.xml
 
@@ -186,4 +192,89 @@
 # Launch the ansible playbook
 echo -e "${lBlue}Launching the ansible playbook${NC}"
 ansible-playbook ansible/volthainstall.yml -i ansible/hosts/installer
+if [ $? -ne 0 ]; then
+	echo -e "${red}PLAYBOOK FAILED, Exiting${NC}"
+	exit
+fi
 ansible-playbook ansible/volthainstall.yml -i ansible/hosts/voltha
+if [ $? -ne 0 ]; then
+	echo -e "${red}PLAYBOOK FAILED, Exiting${NC}"
+	exit
+fi
+
+if [ $# -eq 1 -a "$1" == "test" ]; then
+	echo -e "${lBlue}Testing, the install image ${red}WILL NOT${lBlue} be built${NC}"
+else
+	echo -e "${lBlue}Building the install image (this can take a while)${NC}"
+	# Create a temporary directory for all the installer files
+        mkdir tmp_installer
+        cp vmTemplate.xml tmp_installer
+	# Shut down the installer vm
+	ctr=0
+	vStat=`virsh list | grep $iVmName`
+	virsh shutdown $iVmName
+	while [ ! -z "$vStat" ];
+	do
+		echo "Waiting for $iVmName to shut down"
+		sleep 2
+		vStat=`virsh list | grep $iVmName`
+		ctr=`expr $ctr + 1`
+		if [ $ctr -eq $shutdownTimeout ]; then
+			echo -e "${red}Tired of waiting, forcing the VM off${NC}"
+			virsh destroy $iVmName
+			vStat=`virsh list | grep $iVmName`
+		fi
+	done
+        # Copy the install bootstrap script to the installer directory
+        cp BootstrapInstaller.sh tmp_installer
+        # Copy the private key to access the VM
+        cp key.pem tmp_installer
+        pushd tmp_installer > /dev/null 2>&1
+        # Copy the vm image to the installer directory
+	virsh vol-dumpxml $iVmName.qcow2 default  | sed -e 's/<key.*key>//' | sed -e '/^[ ]*$/d' > ${iVmName}_volume.xml
+	virsh pool-create-as installer --type dir --target `pwd`
+	virsh vol-create-from installer ${iVmName}_volume.xml $iVmName.qcow2 --inputpool default
+	virsh pool-destroy installer
+	# The image is copied in as root. It needs to have ownership changed
+	# this will result in a password prompt.
+	sudo chown `whoami`.`whoami` $iVmName.qcow2
+	# Now create the installer tar file
+        tar cjf ../$installerArchive .
+        popd > /dev/null 2>&1
+	# Clean up
+	rm -fr tmp_installer
+	# Final location for the installer
+	rm -fr $installerDirectory
+	mkdir $installerDirectory
+	cp installVoltha.sh $installerDirectory
+	# Check the image size and determine if it needs to be split.
+        # To be safe, split the image into chunks smaller than 2G so that
+        # it will fit on a FAT32 volume.
+	fSize=`ls -l $installerArchive | awk '{print $5}'`
+	if [ $fSize -gt 2000000000 ]; then
+		echo -e "${lBlue}Installer file too large, breaking into parts${NC}"
+		# The file is too large, breaking it up into parts
+		sPos=0
+		fnn="00"
+		while dd if=$installerArchive of=${installerDirectory}/${installerPart}$fnn \
+			bs=1900MB count=1 skip=$sPos > /dev/null 2>&1
+		do
+			sPos=`expr $sPos + 1`
+			if [ ! -s ${installerDirectory}/${installerPart}$fnn ]; then
+				rm -f ${installerDirectory}/${installerPart}$fnn
+				break
+			fi
+			if [ $sPos -lt 10 ]; then
+				fnn="0$sPos"
+			else
+				fnn="$sPos"
+			fi
+		done
+	else
+		cp $installerArchive $installerDirectory
+	fi
+	# Clean up
+	rm $installerArchive
+	echo -e "${lBlue}The install image is built and can be found in ${yellow}$installerDirectory${NC}"
+	echo -e "${lBlue}Copy all the files in ${yellow}$installerDirectory${lBlue} to the transport media${NC}"
+fi
diff --git a/install/TODO b/install/TODO
index 67a0653..f020e8e 100644
--- a/install/TODO
+++ b/install/TODO
@@ -1,3 +1,7 @@
+- Update the ansible scripts to install docker in swarm mode
+  - 3 Master nodes.
+**** DONE **** DONE **** DONE **** DONE **** DONE **** DONE **** DONE **** DONE **** DONE **** DONE ****
+
 - Create an installer tar file when not run in test mode
   - This file should include:
     - The qcow2 image for the installer
@@ -13,3 +17,4 @@
   - Move the pull and push roles into the voltha role
     - Use the target selector to trigger the appropriate ones
     - OR create voltha-deploy and voltha-create roles (TBD)
+
diff --git a/install/ansible/group_vars/all b/install/ansible/group_vars/all
index f00163a..dfa7529 100644
--- a/install/ansible/group_vars/all
+++ b/install/ansible/group_vars/all
@@ -1,15 +1,11 @@
-ip: "{{ facter_ipaddress_eth1 }}"
-consul_extra: ""
-proxy_url: http://{{ facter_ipaddress_eth1 }}
-proxy_url2: http://{{ facter_ipaddress_eth1 }}
-registry_url: 10.100.198.220:5000/
-jenkins_ip: 10.100.198.220
 debian_version: xenial
 docker_cfg: docker.cfg
 docker_cfg_dest: /etc/default/docker
 docker_registry: "localhost:5000"
 docker_push_registry: "vinstall:5000"
 cord_home: /home/volthainstall/cord
+target_voltha_dir: /cord/incubator/voltha
+target_voltha_home: /home/voltha
 voltha_containers:
   - voltha/nginx
   - voltha/grafana
diff --git a/install/ansible/roles/apt-repository/tasks/debian.yml b/install/ansible/roles/apt-repository/tasks/debian.yml
deleted file mode 100644
index b77a9f6..0000000
--- a/install/ansible/roles/apt-repository/tasks/debian.yml
+++ /dev/null
@@ -1,45 +0,0 @@
-- name: The apt-repository is copied

-  copy:

-    src: "{{ cord_home }}/incubator/voltha/install/apt-mirror"

-    dest: /home/vinstall

-    owner: vinstall

-    group: vinstall

-  tags: [apt]

-- name: Nginx is installed

-  apt:

-    name: nginx

-    state: latest

-  tags: [apt]

-

-- name: Nginx config is copied

-  copy:

-    src: "{{ cord_home }}/incubator/voltha/install/nginx-default"

-    dest: /etc/nginx/sites-enabled/default

-  register: copy_result

-  tags: [apt]

-

-- name: nginx is restarted

-  command: service nginx restart

-  when: copy_result|changed

-  tags: [apt]

-

-#- name: NFS is installed TESTING ONLY REMOVE FOR PRODUCTION

-#  apt:

-#    name: nfs-common

-#    state: latest

-#  tags: [apt]

-#

-#- name: Apt repo is mounted TESTING ONLY REMOVE FOR PRODUCTION

-#  mount:

-#    name: /home/vinstall/apt-mirror

-#    src: "{{ mount_host }}:{{ cord_home }}/incubator/voltha/install/apt-mirror"

-#    fstype: nfs

-#    state: mounted

-#  tags: [apt]

-

-- name: Links to the repos are created

-  file:

-    src: /home/vinstall/apt-mirror/mirror/archive.ubuntu.com/ubuntu

-    dest: /var/www/ubuntu

-    state: link

-  tags: [apt]

diff --git a/install/ansible/roles/apt-repository/tasks/main.yml b/install/ansible/roles/apt-repository/tasks/main.yml
deleted file mode 100644
index 1495847..0000000
--- a/install/ansible/roles/apt-repository/tasks/main.yml
+++ /dev/null
@@ -1,5 +0,0 @@
-- include: debian.yml
-  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
-
-- include: centos.yml
-  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
diff --git a/install/ansible/roles/cluster-host/tasks/cluster-host.yml b/install/ansible/roles/cluster-host/tasks/cluster-host.yml
new file mode 100644
index 0000000..20dcd15
--- /dev/null
+++ b/install/ansible/roles/cluster-host/tasks/cluster-host.yml
@@ -0,0 +1,49 @@
+# Note: When the target == "cluster" the installer
+# is running to install voltha in the cluster hosts.
+# When the target == "installer" the installer is being
+# created.
+- name: Required configuration directories are copied
+  copy:
+    src: "/home/vinstall/{{ item }}"
+    dest: "{{ target_voltha_home }}"
+    owner: voltha
+    group: voltha
+  with_items:
+    - docker-py
+    - netifaces
+    - deb_files
+  when: target == "cluster"
+  tags: [voltha]
+
+- name: Dependent software is installed
+  command: dpkg -i "{{ target_voltha_home }}/deb_files/{{ item }}"
+  with_items: "{{ deb_files }}"
+  ignore_errors: true
+  when: target == "cluster"
+  tags: [voltha]
+
+- name: Dependent software is initialized
+  command: apt-get -f install
+  when: target == "cluster"
+  tags: [voltha]
+
+- name: Python packages are installed
+  command: pip install {{ item }} --no-index --find-links "file://{{ target_voltha_home }}/{{ item }}"
+  with_items:
+    - docker-py
+    - netifaces
+  when: target == "cluster"
+  tags: [voltha]
+
+- name: Configuration directories are deleted
+  file:
+    path: "{{ target_voltha_home }}/{{ item }}"
+    state: absent
+  with_items:
+    - docker-py
+    - netifaces
+    - deb_files
+  when: target == "cluster"
+  tags: [voltha]
+
diff --git a/install/ansible/roles/cluster-host/tasks/main.yml b/install/ansible/roles/cluster-host/tasks/main.yml
new file mode 100644
index 0000000..41acd90
--- /dev/null
+++ b/install/ansible/roles/cluster-host/tasks/main.yml
@@ -0,0 +1,2 @@
+- include: cluster-host.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
diff --git a/install/ansible/roles/docker-registry/tasks/debian.yml b/install/ansible/roles/docker-registry/tasks/debian.yml
index 72903ab..788401f 100644
--- a/install/ansible/roles/docker-registry/tasks/debian.yml
+++ b/install/ansible/roles/docker-registry/tasks/debian.yml
@@ -1,5 +1,5 @@
-- name: The insecure docker registry is started

-  command: docker run -d -p 5000:5000 --name registry registry:2

-  register: result

-  ignore_errors: true

-  tags: [docker]

+- name: The insecure docker registry is started
+  command: docker run --restart=always -d -p 5000:5000 --name registry registry:2
+  register: result
+  ignore_errors: true
+  tags: [docker]
diff --git a/install/ansible/roles/docker/defaults/main.yml b/install/ansible/roles/docker/defaults/main.yml
index 338d16e..1c138f8 100644
--- a/install/ansible/roles/docker/defaults/main.yml
+++ b/install/ansible/roles/docker/defaults/main.yml
@@ -3,4 +3,4 @@
 centos_files: [
   { src: "docker.centos.repo", dest: "/etc/yum.repos.d/docker.repo" },
   { src: "docker.centos.service", dest: "/lib/systemd/system/docker.service" },
-]
\ No newline at end of file
+]
diff --git a/install/ansible/roles/docker/tasks/debian.yml b/install/ansible/roles/docker/tasks/debian.yml
index 081fda9..8eed0ff 100644
--- a/install/ansible/roles/docker/tasks/debian.yml
+++ b/install/ansible/roles/docker/tasks/debian.yml
@@ -1,91 +1,64 @@
-- name: Debian add Docker repository and update apt cache

-  apt_repository:

-    repo: deb https://apt.dockerproject.org/repo ubuntu-{{ debian_version }} main

-    update_cache: yes

-    state: present

-  when: target == "installer"

-  tags: [docker]

-

-- name: Debian Docker is present

-  apt:

-    name: docker-engine

-    state: latest

-    force: yes

-  when: target == "installer"

-  tags: [docker]

-

-#- name: Docker deb install file is present

-#  get_url:

-#    url: https://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_17.05.0~ce-0~ubuntu-xenial_amd64.deb

-#    dest: /home/vinstall

-#    owner: vinstall

-#    group: vinstall

-#  when: target == "installer"

-#  tags: [docker]

-

-#- name: Docker dependencies satisfied

-#  apt:

-#    name: libltdl7

-#    state: latest

-#    force: yes

-#  when: target == "cluster"

-#  tags: [docker]

-

-#- name: Docker install deb file is copied

-#  copy:

-#    src: /home/vinstall/docker-engine_17.05.0~ce-0~ubuntu-xenial_amd64.deb

-#    dest: /home/voltha

-#  when: target == "cluster"

-#  tags: [docker]

-

-#- name: Docker engine is installed

-#  apt:

-#    deb: /home/vinstall/docker-engine_17.05.0~ce-0~ubuntu-xenial_amd64.deb

-#  when: target == "cluster"

-#  tags: [docker]

-

-- name: Debian python-pip is present

-  apt: name=python-pip state=present

-  tags: [docker]

-

-- name: Debian docker-py is present

-  pip:

-    name: docker-py

-    version: 1.6.0

-    state: present

-  when: target == "installer"

-  tags: [docker]

-

-- name: netifaces pip package is present

-  pip:

-    name: netifaces

-    version: 0.10.4

-    state: present

-  when: target == "installer"

-  tags: [docker]

-

-- name: Debian files are present

-  template:

-    src: "{{ docker_cfg }}"

-    dest: "{{ docker_cfg_dest }}"

-  register: copy_result

-  tags: [docker]

-

-- name: Debian Daemon is reloaded

-  command: systemctl daemon-reload

-  when: copy_result|changed and is_systemd is defined

-  tags: [docker]

-

-- name: vagrant user is added to the docker group

-  user:

-    name: "{{ ansible_env['SUDO_USER'] }}"

-    group: docker

-  register: user_result

-  tags: [docker]

-

-- name: Debian Docker service is restarted

-  service:

-    name: docker

-    state: restarted

-  when: copy_result|changed or user_result|changed

-  tags: [docker]

+- name: Debian add Docker repository and update apt cache
+  apt_repository:
+    repo: deb https://apt.dockerproject.org/repo ubuntu-{{ debian_version }} main
+    update_cache: yes
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: Debian Docker is present
+  apt:
+    name: docker-engine
+    state: latest
+    force: yes
+  when: target == "installer"
+  tags: [docker]
+
+- name: Debian python-pip is present
+  apt:
+    name: python-pip
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: Debian docker-py is present
+  pip:
+    name: docker-py
+    version: 1.6.0
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: netifaces pip package is present
+  pip:
+    name: netifaces
+    version: 0.10.4
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: Docker config files are present
+  template:
+    src: "{{ docker_cfg }}"
+    dest: "{{ docker_cfg_dest }}"
+  register: copy_result
+  tags: [docker]
+
+- name: Debian Daemon is reloaded
+  command: systemctl daemon-reload
+  when: copy_result|changed and is_systemd is defined
+  tags: [docker]
+
+- name: vagrant user is added to the docker group
+  user:
+    name: "{{ ansible_env['SUDO_USER'] }}"
+    group: docker
+  register: user_result
+  tags: [docker]
+
+- name: Debian Docker service is restarted
+  service:
+    name: docker
+    state: restarted
+  when: copy_result|changed or user_result|changed
+  tags: [docker]
diff --git a/install/ansible/roles/installer/tasks/installer.yml b/install/ansible/roles/installer/tasks/installer.yml
index 5d29235..330d512 100644
--- a/install/ansible/roles/installer/tasks/installer.yml
+++ b/install/ansible/roles/installer/tasks/installer.yml
@@ -22,9 +22,10 @@
     - nginx_config

   tags: [installer]

 - name: Determine if test mode is active

+  become: false

   local_action: stat path="{{ cord_home }}/incubator/voltha/install/.test"

   register: file

-  ignore_errors: True

+  ignore_errors: true

 - name: Test mode file is copied

   copy:

     src: "{{ cord_home }}/incubator/voltha/install/.test"

@@ -41,6 +42,11 @@
 - name: Python netifaces 0.10.4 package source is available

   command: pip download -d /home/vinstall/netifaces "netifaces==0.10.4"

   tags: [installer]

+- name: Deb file directory doesn't exist

+  file:

+    path: /home/vinstall/deb_files

+    state: absent

+  tags: [installer]

 - name: Deb files are saved.

   command: cp -r /var/cache/apt/archives /home/vinstall

   tags: [installer]

diff --git a/install/ansible/roles/voltha/tasks/voltha.yml b/install/ansible/roles/voltha/tasks/voltha.yml
index c8cc78d..d6884e5 100644
--- a/install/ansible/roles/voltha/tasks/voltha.yml
+++ b/install/ansible/roles/voltha/tasks/voltha.yml
@@ -1,57 +1,93 @@
+# Note: When the target == "cluster" the installer

+# is running to install voltha in the cluster hosts.

+# When the target == "installer" the installer is being

+# created.

+- name: The environment is properly set on login

+  template:

+    src: bashrc.j2

+    dest: "{{ target_voltha_home }}/.bashrc"

+    owner: voltha

+    group: voltha

+    mode: "u=rw,g=r,o=r"

+  when: target == "cluster"

+  tags: [voltha]

+  

+- name: The .bashrc file is executed on ssh login

+  template:

+    src: bash_profile.j2

+    dest: "{{ target_voltha_home }}/.bash_profile"

+    owner: voltha

+    group: voltha

+    mode: "u=rw,g=r,o=r"

+  when: target == "cluster"

+  tags: [voltha]

+  

 - name: Required directory exists

   file:

-    path: /cord/incubator/voltha

+    path: "{{ target_voltha_dir }}"

     state: directory

     owner: voltha

     group: voltha

+  when: target == "cluster"

   tags: [voltha]

 

 - name: Required directories are copied

   copy:

-    src: /home/vinstall/{{ item }}

-    dest: /cord/incubator/voltha

+    src: "/home/vinstall/{{ item }}"

+    dest: "{{ target_voltha_dir }}"

     owner: voltha

     group: voltha

   with_items:

     - compose

     - nginx_config

-    - docker-py

-    - netifaces

-    - deb_files

+  when: target == "cluster"

   tags: [voltha]

 

 - name: Nginx module symlink is present

   file:

-    dest: /cord/incubator/voltha/nginx_config/modules

+    dest: "{{ target_voltha_dir }}/nginx_config/modules"

     src: ../../usr/lib/nginx/modules

     state: link

     follow: no

     force: yes

+  when: target == "cluster"

   tags: [voltha]

 

 - name: Nginx startup script is executable

   file:

-    path: /cord/incubator/voltha/nginx_config/start_service.sh

+    path: "{{ target_voltha_dir }}/nginx_config/start_service.sh"

     mode: 0755

-  tags: [voltha]

-

-- name: Dependent software is installed

-  command: dpkg -i /cord/incubator/voltha/deb_files/{{ item }}

-  with_items: "{{ deb_files }}"

-  when: target == "cluster"

-  ignore_errors: true

-  tags: [voltha]

-

-- name: Dependent software is initialized

-  command: apt-get -f install

   when: target == "cluster"

   tags: [voltha]

 

-- name: Python packages are installe

-  command: pip install {{ item }} --no-index --find-links file:///cord/incubator/voltha/{{ item }}

-  with_items:

-    - docker-py

-    - netifaces

+- name: Docker containers for Voltha are pulled

+  command: docker pull {{ docker_registry }}/{{ item }}

+  with_items: "{{ voltha_containers }}"

+  when: target == "cluster"

+  tags: [voltha]

+- name: Docker images are re-tagged to expected names

+  command: docker tag {{ docker_registry }}/{{ item }} {{ item }}

+  with_items: "{{ voltha_containers }}"

+  when: target == "cluster"

+  tags: [voltha]

+- name: Old docker image tags are removed

+  command: docker rmi {{ docker_registry }}/{{ item }}

+  with_items: "{{ voltha_containers }}"

   when: target == "cluster"

   tags: [voltha]

 

+- name: Docker images are re-tagged to registry for push

+  command: docker tag {{ item }} {{ docker_push_registry }}/{{ item }}

+  with_items: "{{ voltha_containers }}"

+  when: target == "installer"

+  tags: [voltha]

+- name: Docker containers for Voltha are pushed

+  command: docker push {{ docker_push_registry }}/{{ item }}

+  with_items: "{{ voltha_containers }}"

+  when: target == "installer"

+  tags: [voltha]

+- name: Temporary registry push tags are removed

+  command: docker rmi {{ docker_push_registry }}/{{ item }}

+  with_items: "{{ voltha_containers }}"

+  when: target == "installer"

+  tags: [voltha]

diff --git a/install/ansible/roles/voltha/templates/bash_profile.j2 b/install/ansible/roles/voltha/templates/bash_profile.j2
new file mode 100644
index 0000000..45cb87e
--- /dev/null
+++ b/install/ansible/roles/voltha/templates/bash_profile.j2
@@ -0,0 +1,3 @@
+if [ -f ~/.bashrc ]; then
+  . ~/.bashrc
+fi
diff --git a/install/ansible/roles/voltha/templates/bashrc.j2 b/install/ansible/roles/voltha/templates/bashrc.j2
new file mode 100644
index 0000000..73bcc88
--- /dev/null
+++ b/install/ansible/roles/voltha/templates/bashrc.j2
@@ -0,0 +1,2 @@
+DOCKER_HOST_IP={{ ansible_default_ipv4.address }}
+export DOCKER_HOST_IP
diff --git a/install/ansible/voltha.yml b/install/ansible/voltha.yml
index 2dbfaf1..f3a1cf4 100644
--- a/install/ansible/voltha.yml
+++ b/install/ansible/voltha.yml
@@ -5,8 +5,8 @@
     target: cluster
   roles:
     - common
-    - voltha
+    - cluster-host
     - docker
     - docker-compose
-    - pull-images
+    - voltha
     - java
diff --git a/install/ansible/volthainstall.yml b/install/ansible/volthainstall.yml
index 3e8c05a..b0629dd 100644
--- a/install/ansible/volthainstall.yml
+++ b/install/ansible/volthainstall.yml
@@ -9,9 +9,10 @@
     - docker-compose
     - installer
     - docker-registry
-#    - apt-repository
 - hosts: voltha
   remote_user: vagrant
   serial: 1
+  vars:
+    target: installer
   roles:
-    - push-images
+    - voltha
diff --git a/install/devSetHostList.sh b/install/devSetHostList.sh
index ea228be..924667b 100755
--- a/install/devSetHostList.sh
+++ b/install/devSetHostList.sh
@@ -8,7 +8,7 @@
 # usage devCopyTiInstaller.sh <ip-address>
 
 
-rm -f install.cfg
+sed -i -e '/^#/!d' install.cfg
 rm -fr .test
 mkdir .test
 hosts=""
@@ -20,3 +20,4 @@
 	cat .vagrant/machines/$hName/libvirt/private_key > .test/$ipAddr
 done
 echo "hosts=\"$hosts\"" >> install.cfg
+echo 'iUser="vagrant"' >> install.cfg
diff --git a/install/install.cfg b/install/install.cfg
index 4e76718..80bbc67 100644
--- a/install/install.cfg
+++ b/install/install.cfg
@@ -1,2 +1,5 @@
-# List of hosts that will make up the voltha cluster
-# hosts="192.168.121.140 192.168.121.13 192.168.121.238"
+# Configure the hosts that will make up the cluster
+# hosts="192.168.121.195 192.168.121.2 192.168.121.215"
+#
+# Configure the user name to initially log into those hosts as.
+# iUser="vagrant"
diff --git a/install/installVoltha.sh b/install/installVoltha.sh
new file mode 100755
index 0000000..f5ae9ff
--- /dev/null
+++ b/install/installVoltha.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+baseImage="Ubuntu1604LTS"
+iVmName="vInstaller"
+iVmNetwork="default"
+shutdownTimeout=5
+ipTimeout=10
+installerArchive="installer.tar.bz2"
+installerPart="installer.part"
+
+lBlue='\033[1;34m'
+green='\033[0;32m'
+orange='\033[0;33m'
+NC='\033[0m'
+red='\033[0;31m'
+yellow='\033[1;33m'
+dGrey='\033[1;30m'
+lGrey='\033[1;37m'
+lCyan='\033[1;36m'
+
+wd=`pwd`
+
+# Check if the tar file is available.
+echo -e "${lBlue}Checking for the installer archive ${lCyan}$installerArchive${NC}"
+
+if [ ! -f $installerArchive ]; then
+	# The installer file isn't there, check for parts to re-assemble
+	echo -e "${lBlue}Checking for the installer archive parts ${lCyan}$installerPart*${NC}"
+	fList=`ls ${installerPart}*`
+	if [ -z "$fList" ]; then
+		echo -e "${red} Could not find installer archive or installer archive parts, ABORTING.${NC}"
+		exit
+	else
+		# All is well, concatenate the files together to create the installer archive
+		echo -e "${lBlue}Creating the installer archive ${lCyan}$installerArchive${NC}"
+		cat $fList > $installerArchive
+		rm -fr $fList
+	fi
+fi
+
+# Extract the installer files and bootstrap the installer
+echo -e "${lBlue}Extracting the content of the installer archive ${lCyan}$installerArchive${NC}"
+tar xjf $installerArchive
+echo -e "${lBlue}Starting the installer${NC}"
+chmod u+x BootstrapInstaller.sh
+./BootstrapInstaller.sh
diff --git a/install/installer.sh b/install/installer.sh
index 9ac3e78..61d1927 100755
--- a/install/installer.sh
+++ b/install/installer.sh
@@ -2,7 +2,6 @@
 
 baseImage="Ubuntu1604LTS"
 iVmName="Ubuntu1604LTS-1"
-iVmNetwork="vagrant-libvirt"
 shutdownTimeout=5
 ipTimeout=10
 
@@ -74,9 +73,9 @@
 	echo -e "${lBlue}Transfering pre-configuration script to ${yellow}$i${NC}"
 	if [ -d ".test" ]; then
 		echo -e "${red}Test mode set!!${lBlue} Using pre-populated ssh key for ${yellow}$i${NC}"
-		scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .test/$i bash_login.sh vagrant@$i:.bash_login
+		scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .test/$i bash_login.sh $iUser@$i:.bash_login
 	else
-		scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no bash_login.sh vagrant@$i:.bash_login
+		scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no bash_login.sh $iUser@$i:.bash_login
 	fi
 	rm bash_login.sh
 
@@ -84,9 +83,9 @@
 	echo -e "${lBlue}Running the pre-configuration script on ${yellow}$i${NC}"
 	if [ -d ".test" ]; then
 		echo -e "${red}Test mode set!!${lBlue} Using pre-populated ssh key for ${yellow}$i${NC}"
-		ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .test/$i vagrant@$i
+		ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .test/$i $iUser@$i
 	else
-		ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vagrant@$i
+		ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $iUser@$i
 	fi
 
 	# Configure ansible and ssh for silent operation
diff --git a/install/vmTemplate.xml b/install/vmTemplate.xml
index 02fa558..faf7288 100644
--- a/install/vmTemplate.xml
+++ b/install/vmTemplate.xml
@@ -1,5 +1,5 @@
 <domain type='kvm'>
-  <name>{{VMName}}</name>
+  <name>{{ VMName }}</name>
   <memory unit='KiB'>1048576</memory>
   <currentMemory unit='KiB'>1048576</currentMemory>
   <vcpu placement='static'>2</vcpu>
@@ -30,7 +30,7 @@
     <emulator>/usr/bin/kvm-spice</emulator>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2'/>
-      <source file='/var/lib/libvirt/images/{{VMName}}.qcow2'/>
+      <source file='/var/lib/libvirt/images/{{ VMName }}.qcow2'/>
       <target dev='hda' bus='ide'/>
       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
     </disk>
@@ -64,7 +64,7 @@
     </controller>
     <interface type='network'>
       <mac address='52:54:00:ed:19:74'/>
-      <source network='{{VMNetwork}}'/>
+      <source network='{{ VMNetwork }}'/>
       <model type='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
     </interface>