Initial commit of the voltha installer. There are several things that
need to be cleaned up, but the installer is fully functional in test
mode: it creates 3 non-clustered VMs with identical voltha installs
until HA is ready. Once HA is ready, the scripts will be modified to
deploy the full HA cluster.
This update partially addresses Epic VOL-6.

Made changes requested by the reviewers.

Change-Id: I083239e1f349136d2ec1e51e09391da341177076
diff --git a/install/BashLogin.sh b/install/BashLogin.sh
new file mode 100644
index 0000000..9f85351
--- /dev/null
+++ b/install/BashLogin.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+
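+# Give the vinstall user passwordless sudo access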
+echo "vinstall ALL=(ALL) NOPASSWD:ALL" > tmp
+sudo chown root.root tmp
+sudo mv tmp /etc/sudoers.d/vinstall
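+# Prepare the vinstall user's ssh directory and generate a keypair for later use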
+mkdir -p .ssh
+chmod 0700 .ssh
+ssh-keygen -f /home/vinstall/.ssh/id_rsa -t rsa -N ''
diff --git a/install/BuildVoltha.sh b/install/BuildVoltha.sh
new file mode 100755
index 0000000..67a6183
--- /dev/null
+++ b/install/BuildVoltha.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+vmName="voltha_voltha"
+
+# Voltha directory
+cd ..
+
+# Destroy the VM if it's running
+vagrant destroy voltha
+
+# Bring up the VM.
+vagrant up voltha
+
+# Get the VM's ip address
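+# (tail skips the two header lines of the virsh output, awk takes the address column, sed strips the CIDR suffix)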
+ipAddr=`virsh domifaddr $vmName | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+
+# Run all the build commands
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .vagrant/machines/voltha/libvirt/private_key vagrant@$ipAddr "cd /cord/incubator/voltha && . env.sh && make fetch && make"
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
new file mode 100755
index 0000000..6bc3ade
--- /dev/null
+++ b/install/BuildingTheInstaller.md
@@ -0,0 +1,138 @@
+# Running the installer
+***
+**++Table of contents++**
+
+[TOC]
+***
+## Bare Metal Setup
+**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded by having followed the document ``Building a vOLT-HA Virtual Machine Using Vagrant on QEMU/KVM``, then skip to [Building the Installer](#Building-the-installer).*
+
+Start with an installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. When installing the image, ensure that both "OpenSSH server" and "Virtual Machine host" are chosen in addition to the default "standard system utilities". Once the installation is complete, log into the box and type ``virsh list``. If this doesn't work, you'll need to troubleshoot the installation. If it works, proceed to the next section.
+
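+For reference, ``virsh list`` on a freshly installed host should produce an empty domain table; the output below is illustrative:
+```
+voltha> virsh list
+ Id    Name                           State
+----------------------------------------------------
+
+```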

+## Create the base ubuntu/xenial box
+  Though there are some flavors of ubuntu boxes available, they usually have additional features installed or packages missing, so it's best to just create the image from the ubuntu installation iso image.
+
+  ```
+
+  voltha> wget http://releases.ubuntu.com/xenial/ubuntu-16.04.2-server-amd64.iso
+  voltha> echo "virt-install -n Ubuntu16.04 -r 1024 --vcpus=2 --disk size=50 -c ubuntu-16.04.2-server-amd64.iso --accelerate --network network=default,model=virtio --connect=qemu:///system --vnc --noautoconsole -v" > Ubuntu16.04Vm
+  voltha> . Ubuntu16.04Vm
+  voltha> virt-manager
+```
+Once the virt manager opens, open the console of the Ubuntu16.04 VM and follow the installation process.
+When prompted, use the hostname ``voltha``. Also when prompted, you should create one user ``Vagrant Vagrant`` and use the offered up userid of ``vagrant``. When prompted for the password of the vagrant user, use ``vagrant``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install.
+Once the installation is complete, run the VM and log in as ``vagrant`` with password ``vagrant``, then install the default vagrant key (this can be done in one of two ways: through virt-manager and the console, or by using ssh from the hypervisor host; the virt-manager method is shown below):
+```
+vagrant@voltha$ mkdir -p /home/vagrant/.ssh
+vagrant@voltha$ chmod 0700 /home/vagrant/.ssh
+vagrant@voltha$ wget --no-check-certificate \
+    https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub \
+    -O /home/vagrant/.ssh/authorized_keys
+vagrant@voltha$ chmod 0600 /home/vagrant/.ssh/authorized_keys
+vagrant@voltha$ chown -R vagrant /home/vagrant/.ssh
+```
+Also create a .ssh directory for the root user:
+```
+vagrant@voltha$ sudo mkdir /root/.ssh
+```
+Add a vagrant file to /etc/sudoers.d/vagrant with the following:
+```
+vagrant@voltha$ echo "vagrant ALL=(ALL) NOPASSWD:ALL" > tmp.sudo
+vagrant@voltha$ sudo mv tmp.sudo /etc/sudoers.d/vagrant
+```
+
+## Install and configure vagrant
+Vagrant comes packaged with Ubuntu 16.04, but that version doesn't work with kvm. Downloading and installing the version from hashicorp solves the problem.
+```
+voltha> wget https://releases.hashicorp.com/vagrant/1.9.5/vagrant_1.9.5_x86_64.deb
+voltha> sudo dpkg -i vagrant_1.9.5_x86_64.deb
+voltha> vagrant plugin install vagrant-cachier
+voltha> sudo apt-get install libvirt-dev
+voltha> vagrant plugin install vagrant-libvirt

+```
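+You can sanity-check that both plugins were installed; the version numbers below are illustrative:
+```
+voltha> vagrant plugin list
+vagrant-cachier (1.2.1)
+vagrant-libvirt (0.0.40)
+```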

+## Create the default vagrant box
+
+When doing this, be careful that you're not in a directory where a Vagrantfile already exists or you'll trash it. It is recommended that a temporary directory is created to perform these actions and then removed once the new box has been added to vagrant.
+```
+voltha> cp /var/lib/libvirt/images/Ubuntu16.04.qcow2 box.img
+voltha> echo '{
+"provider"     : "libvirt",
+"format"       : "qcow2",
+"virtual_size" : 50
+}' > metadata.json
+voltha> cat <<HERE > Vagrantfile
+Vagrant.configure("2") do |config|
+     config.vm.provider :libvirt do |libvirt|
+     libvirt.driver = "kvm"
+     libvirt.host = 'localhost'
+     libvirt.uri = 'qemu:///system'
+     end
+config.vm.define "new" do |custombox|
+     custombox.vm.box = "custombox"
+     custombox.vm.provider :libvirt do |test|
+     test.memory = 1024
+     test.cpus = 1
+     end
+     end
+end
+HERE
+voltha> tar czvf ubuntu1604.box ./metadata.json ./Vagrantfile ./box.img
+voltha> vagrant box add ubuntu1604 ubuntu1604.box
+```
+## Download the voltha tree
+The voltha tree contains the Vagrant files required to build the multitude of VMs needed to run, test, and deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required.
+```
+voltha> sudo apt-get install repo
+voltha> mkdir cord
+voltha> sudo ln -s `pwd`/cord /cord
+voltha> cd cord
+voltha> repo init -u https://gerrit.opencord.org/manifest -g voltha
+voltha> repo sync
+```
+
+## Run vagrant to Create a Voltha VM
+First create the voltha VM using vagrant.
+```
+voltha> vagrant up
+```
+Finally, if required, log into the VM using vagrant.
+```
+voltha> vagrant ssh
+```
+## Building the Installer
+There are 2 different ways to build the installer: in production mode or in test mode.
+### Building the installer in test mode
+Test mode is useful for testers and developers. The installer build script will also launch 3 vagrant VMs that serve as install targets and will configure the installer to use them without having to supply passwords for each. This speeds up the subsequent install/test cycle.
+
+To build the installer in test mode, go to the installer directory
+``cd /cord/incubator/voltha/install``
+then type
+``./CreateInstaller.sh test``.
+
+This will take a while, so doing something else in the meantime is recommended.
+
+### Running the installer in test mode
+Once the creation has completed, determine the IP address of the VM with the following virsh command:
+``virsh domifaddr Ubuntu1604LTS-1``
+Using the IP address provided, log into the installer with
+``ssh -i key.pem vinstall@<ip-address-from-above>``
+
+Finally, start the installer.
+``./installer.sh``
+In test mode it will launch with no prompts and install voltha on the 3 VMs created at the same time as the installer (ha-serv1, ha-serv2, and ha-serv3). This step takes quite a while since 3 different voltha installs are taking place, one for each of the 3 VMs in the cluster.
+
+Once the installation completes, determine the IP address of one of the cluster VMs.
+``virsh domifaddr install_ha-serv1``
+You can use ``install_ha-serv2`` or ``install_ha-serv3`` in place of ``install_ha-serv1`` above. Log into the VM with
+``ssh voltha@<ip-address-from-above>``
+Once logged into the voltha instance, follow the usual procedure to start voltha and validate that it's operating correctly.
+
+### Building the installer in production mode
+Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the vagrant Debian package file, the private key to access the VM, and a bootstrap script that sets up the VM, fires it up, and logs into it.
+
+To build the installer in production mode type:
+``./CreateInstaller.sh``
+
+This will take a while and when it completes a file named ``VolthaInstallerV1.0.tar.bz2`` will have been created. Put this file on a USB flash drive that's been formatted using the ext4 filesystem and it's ready to be carried to the installation site.
+
+***More to come on this as things evolve.***
\ No newline at end of file
diff --git a/install/ConfigVagrantTesting.sh b/install/ConfigVagrantTesting.sh
new file mode 100755
index 0000000..3a9b06d
--- /dev/null
+++ b/install/ConfigVagrantTesting.sh
@@ -0,0 +1,148 @@
+#!/bin/bash
+
+baseImage="Ubuntu1604LTS"
+iVmName="Ubuntu1604LTS-1"
+iVmNetwork="vagrant-libvirt"
+shutdownTimeout=5
+ipTimeout=10
+
+lBlue='\033[1;34m'
+green='\033[0;32m'
+orange='\033[0;33m'
+NC='\033[0m'
+red='\033[0;31m'
+yellow='\033[1;33m'
+dGrey='\033[1;30m'
+lGrey='\033[1;37m'
+lCyan='\033[1;36m'
+
+# Shut down the domain in case it's running.
+#echo -e "${lBlue}Shut down the ${lCyan}$iVmName${lBlue} VM if running${NC}"
+#ctr=0
+#vStat=`virsh list | grep $iVmName`
+#while [ ! -z "$vStat" ];
+#do
+#	virsh shutdown $iVmName
+#	echo "Waiting for $iVmName to shut down"
+#	sleep 2
+#	vStat=`virsh list | grep $iVmName`
+#	ctr=`expr $ctr + 1`
+#	if [ $ctr -eq $shutdownTimeout ]; then
+#		echo -e "${red}Tired of waiting, forcing the VM off${NC}"
+#		virsh destroy $iVmName
+#		vStat=`virsh list | grep $iVmName`
+#	fi
+#done
+
+
+# Delete the VM and ignore any errors should they occur
+#echo -e "${lBlue}Undefining the ${lCyan}$iVmName${lBlue} domain${NC}"
+#virsh undefine $iVmName
+
+# Remove the associated volume
+#echo -e "${lBlue}Removing the ${lCyan}$iVmName.qcow2${lBlue} volume${NC}"
+#virsh vol-delete "${iVmName}.qcow2" default
+
+# Clone the base vanilla ubuntu install
+#echo -e "${lBlue}Cloning the ${lCyan}$baseImage.qcow2${lBlue} to ${lCyan}$iVmName.qcow2${NC}"
+#virsh vol-clone "${baseImage}.qcow2" "${iVmName}.qcow2" default
+
+# Create the xml file and define the VM for virsh
+#echo -e "${lBlue}Defining the  ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+#cat vmTemplate.xml | sed -e "s/{{VMName}}/$iVmName/g" | sed -e "s/{{VMNetwork}}/$iVmNetwork/g" > tmp.xml
+
+#virsh define tmp.xml
+
+#rm tmp.xml
+
+# Start the VM; if it's already running just ignore the error
+#echo -e "${lBlue}Starting the ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+#virsh start $iVmName > /dev/null 2>&1
+
+
+# Configure ansible's key for communicating with the VMs... Testing only, this will
+# be taken care of by the installer in the future.
+for i in install_ha-serv1 install_ha-serv2 install_ha-serv3
+do
+	ipAddr=`virsh domifaddr $i | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+	m=`echo $i | sed -e 's/install_//'`
+	echo "ansible_ssh_private_key_file: .vagrant/machines/$m/libvirt/private_key" > ansible/host_vars/$ipAddr
+done
+
+exit
+
+echo -e "${lBlue}Generating the key-pair for communication with the VM${NC}"
+ssh-keygen -f ./key -t rsa -N ''
+
+mv key key.pem
+
+# Clone BashLogin.sh and add the public key to it for later use.
+echo -e "${lBlue}Creating the pre-configuration script${NC}"
+cp BashLogin.sh bash_login.sh
+echo "cat <<HERE > .ssh/authorized_keys" >> bash_login.sh
+cat key.pub >> bash_login.sh
+echo "HERE" >> bash_login.sh
+echo "chmod 400 .ssh/authorized_keys" >> bash_login.sh
+echo "rm .bash_login" >> bash_login.sh
+echo "logout" >> bash_login.sh
+rm key.pub
+
+
+
+# Get the VM's IP address
+ctr=0
+ipAddr=""
+while [ -z "$ipAddr" ];
+do
+	echo -e "${lBlue}Waiting for the VM's IP address${NC}"
+	ipAddr=`virsh domifaddr $iVmName | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+	sleep 2
+	if [ $ctr -eq $ipTimeout ]; then
+		echo -e "${red}Tired of waiting, please adjust the ipTimeout if the VM is slow to start${NC}"
+		exit
+	fi
+	ctr=`expr $ctr + 1`
+done
+
+echo -e "${lBlue}The IP address is: ${lCyan}$ipAddr${NC}"
+
+# Copy the pre-config file to the VM
+echo -e "${lBlue}Transferring pre-configuration script to the VM${NC}"
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no bash_login.sh vinstall@$ipAddr:.bash_login
+
+rm bash_login.sh
+
+# Run the pre-config file on the VM
+echo -e "${lBlue}Running the pre-configuration script on the VM${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vinstall@$ipAddr 
+
+# Make sure the VM is up-to-date
+echo -e "${lBlue}Ensure that the VM is up-to-date${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update 
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get -y upgrade 
+
+# Create the docker.cfg file in the ansible tree using the VMs IP address
+echo 'DOCKER_OPTS="$DOCKER_OPTS --insecure-registry '$ipAddr':5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=http://'$ipAddr':5001"' > ansible/roles/docker/templates/docker.cfg
+
+# Install ansible on the vm, it'll be used both here and for the install
+echo -e "${lBlue}Installing ansible on the VM${NC}"
+#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get install software-properties-common
+#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-add-repository ppa:ansible/ansible
+#ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get -y install ansible
+
+# Copy the ansible files to the VM
+echo -e "${lBlue}Transferring the ansible directory to the VM${NC}"
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem -r ansible vinstall@$ipAddr:ansible
+
+# Get the GPG key for docker otherwise ansible calls break
+echo -e "${lBlue}Get the GPG key for docker to allow ansible playbooks to run successfully${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr "sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D"
+
+# Bootstrap ansible
+echo -e "${lBlue}Bootstrap ansible${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr ansible/scripts/bootstrap_ansible.sh
+
+# Run the ansible script to initialize the installer environment
+echo -e "${lBlue}Run the ansible playbook for the installer${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo PYTHONUNBUFFERED=1 ansible-playbook /home/vinstall/ansible/volthainstall.yml -c local
diff --git a/install/CreateInstaller.sh b/install/CreateInstaller.sh
new file mode 100755
index 0000000..bd253b6
--- /dev/null
+++ b/install/CreateInstaller.sh
@@ -0,0 +1,189 @@
+#!/bin/bash
+
+baseImage="Ubuntu1604LTS"
+iVmName="Ubuntu1604LTS-1"
+iVmNetwork="vagrant-libvirt"
+shutdownTimeout=5
+ipTimeout=10
+
+lBlue='\033[1;34m'
+green='\033[0;32m'
+orange='\033[0;33m'
+NC='\033[0m'
+red='\033[0;31m'
+yellow='\033[1;33m'
+dGrey='\033[1;30m'
+lGrey='\033[1;37m'
+lCyan='\033[1;36m'
+
+wd=`pwd`
+
+# Validate that vagrant is installed.
+echo -e "${lBlue}Ensure that ${lCyan}vagrant${lBlue} is installed${NC}"
+vInst=`which vagrant`
+
+if [ -z "$vInst" ]; then
+	wget https://releases.hashicorp.com/vagrant/1.9.5/vagrant_1.9.5_x86_64.deb
+	sudo dpkg -i vagrant_1.9.5_x86_64.deb
+	rm vagrant_1.9.5_x86_64.deb
+fi
+unset vInst
+
+# Validate that ansible is installed
+echo -e "${lBlue}Ensure that ${lCyan}ansible${lBlue} is installed${NC}"
+aInst=`which ansible`
+
+if [ -z "$aInst" ]; then
+	sudo apt-get -y install software-properties-common
+	sudo apt-add-repository ppa:ansible/ansible
+	sudo apt-get update
+	sudo apt-get -y install ansible
+fi
+unset aInst
+
+# Ensure that the voltha VM is running so that images can be secured
+echo -e "${lBlue}Ensure that the ${lCyan}voltha VM${lBlue} is running${NC}"
+vVM=`virsh list | grep voltha_voltha`
+
+if [ -z "$vVM" ]; then
+	./BuildVoltha.sh
+fi
+
+# Verify if this is intended to be a test environment, if so start 3 VMs
+# to emulate the production installation cluster.
+if [ $# -eq 1 -a "$1" == "test" ]; then
+	echo -e "${lBlue}Testing, create the ${lCyan}ha-serv${lBlue} VMs${NC}"
+	vagrant destroy ha-serv{1,2,3}
+	vagrant up ha-serv{1,2,3}
+	./devSetHostList.sh
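+	# devSetHostList.sh records the test VM IP addresses in install.cfg and saves their vagrant ssh keys under .test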
+else
+	rm -fr .test
+fi
+
+# Shut down the domain in case it's running.
+echo -e "${lBlue}Shut down the ${lCyan}$iVmName${lBlue} VM if running${NC}"
+ctr=0
+vStat=`virsh list | grep $iVmName`
+while [ ! -z "$vStat" ];
+do
+	virsh shutdown $iVmName
+	echo "Waiting for $iVmName to shut down"
+	sleep 2
+	vStat=`virsh list | grep $iVmName`
+	ctr=`expr $ctr + 1`
+	if [ $ctr -eq $shutdownTimeout ]; then
+		echo -e "${red}Tired of waiting, forcing the VM off${NC}"
+		virsh destroy $iVmName
+		vStat=`virsh list | grep $iVmName`
+	fi
+done
+
+
+# Delete the VM and ignore any errors should they occur
+echo -e "${lBlue}Undefining the ${lCyan}$iVmName${lBlue} domain${NC}"
+virsh undefine $iVmName
+
+# Remove the associated volume
+echo -e "${lBlue}Removing the ${lCyan}$iVmName.qcow2${lBlue} volume${NC}"
+virsh vol-delete "${iVmName}.qcow2" default
+
+# Clone the base vanilla ubuntu install
+echo -e "${lBlue}Cloning the ${lCyan}$baseImage.qcow2${lBlue} to ${lCyan}$iVmName.qcow2${NC}"
+virsh vol-clone "${baseImage}.qcow2" "${iVmName}.qcow2" default
+
+# Create the xml file and define the VM for virsh
+echo -e "${lBlue}Defining the  ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+cat vmTemplate.xml | sed -e "s/{{VMName}}/$iVmName/g" | sed -e "s/{{VMNetwork}}/$iVmNetwork/g" > tmp.xml
+
+virsh define tmp.xml
+
+rm tmp.xml
+
+# Start the VM; if it's already running just ignore the error
+echo -e "${lBlue}Starting the ${lCyan}$iVmName${lBlue} virtual machine${NC}"
+virsh start $iVmName > /dev/null 2>&1
+
+# Generate a keypair for communicating with the VM
+echo -e "${lBlue}Generating the key-pair for communication with the VM${NC}"
+ssh-keygen -f ./key -t rsa -N ''
+
+mv key key.pem
+
+# Clone BashLogin.sh and add the public key to it for later use.
+echo -e "${lBlue}Creating the pre-configuration script${NC}"
+cp BashLogin.sh bash_login.sh
+echo "cat <<HERE > .ssh/authorized_keys" >> bash_login.sh
+cat key.pub >> bash_login.sh
+echo "HERE" >> bash_login.sh
+echo "chmod 400 .ssh/authorized_keys" >> bash_login.sh
+echo "rm .bash_login" >> bash_login.sh
+echo "logout" >> bash_login.sh
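+# The generated .bash_login runs once at the vinstall user's next login: it installs the public key, removes itself, and logs out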
+rm key.pub
+
+
+
+# Get the VM's IP address
+ctr=0
+ipAddr=""
+while [ -z "$ipAddr" ];
+do
+	echo -e "${lBlue}Waiting for the VM's IP address${NC}"
+	ipAddr=`virsh domifaddr $iVmName | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+	sleep 3
+	if [ $ctr -eq $ipTimeout ]; then
+		echo -e "${red}Tired of waiting, please adjust the ipTimeout if the VM is slow to start${NC}"
+		exit
+	fi
+	ctr=`expr $ctr + 1`
+done
+
+echo -e "${lBlue}The IP address is: ${lCyan}$ipAddr${NC}"
+
+# Copy the pre-config file to the VM
+echo -e "${lBlue}Transferring pre-configuration script to the VM${NC}"
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no bash_login.sh vinstall@$ipAddr:.bash_login
+
+rm bash_login.sh
+
+# Run the pre-config file on the VM
+echo -e "${lBlue}Running the pre-configuration script on the VM${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vinstall@$ipAddr 
+
+# Install python which is required for ansible
+echo -e "${lBlue}Installing python${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update 
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get -y install python 
+
+# Make sure the VM is up-to-date
+echo -e "${lBlue}Ensure that the VM is up-to-date${NC}"
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update 
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get -y upgrade 
+
+
+
+# Copy the apt repository to the VM because it's way too slow using ansible
+#echo -e "${red}NOT COPYING${lBlue} the apt-repository to the VM, ${red}TESTING ONLY REMOVE FOR PRODUCTION${NC}"
+#echo -e "${lBlue}Copy the apt-repository to the VM${NC}"
+#scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem -r apt-mirror vinstall@$ipAddr:apt-mirror
+
+# Create the docker.cfg file in the ansible tree using the VMs IP address
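+# (port 5000 is the insecure registry, 5001 the registry mirror; dockerd also listens on tcp port 2375)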
+echo 'DOCKER_OPTS="$DOCKER_OPTS --insecure-registry '$ipAddr':5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=http://'$ipAddr':5001"' > ansible/roles/docker/templates/docker.cfg
+
+# Add the voltha vm's information to the ansible tree
+echo -e "${lBlue}Add the voltha vm and key to the ansible accessible hosts${NC}"
+vIpAddr=`virsh domifaddr voltha_voltha | tail -n +3 | awk '{ print $4 }' | sed -e 's~/.*~~'`
+echo "[voltha]" > ansible/hosts/voltha
+echo $vIpAddr >> ansible/hosts/voltha
+echo "ansible_ssh_private_key_file: $wd/../.vagrant/machines/voltha/libvirt/private_key" > ansible/host_vars/$vIpAddr
+
+
+# Prepare to launch the ansible playbook to configure the installer VM
+echo -e "${lBlue}Prepare to launch the ansible playbook to configure the VM${NC}"
+echo "[installer]" > ansible/hosts/installer
+echo "$ipAddr" >> ansible/hosts/installer
+echo "ansible_ssh_private_key_file: $wd/key.pem" > ansible/host_vars/$ipAddr
+
+# Launch the ansible playbook
+echo -e "${lBlue}Launching the ansible playbook${NC}"
+ansible-playbook ansible/volthainstall.yml -i ansible/hosts/installer
+ansible-playbook ansible/volthainstall.yml -i ansible/hosts/voltha
diff --git a/install/PullContainers.sh b/install/PullContainers.sh
new file mode 100755
index 0000000..47b9dce
--- /dev/null
+++ b/install/PullContainers.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+# This script will pull all the required docker containers
+# from the insecure registry.
+
+registry="vinstall:5000"
+
+for i in `cat image-list.cfg`
+do
+docker pull $registry/$i
+docker tag $registry/$i $i
+docker rmi $registry/$i
+done
diff --git a/install/PushContainers.sh b/install/PushContainers.sh
new file mode 100755
index 0000000..26d80c3
--- /dev/null
+++ b/install/PushContainers.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+# This script will push all the container images to a registry
+# named vinstall:5000
+
+registry="vinstall:5000"
+
+for i in `cat image-list.cfg`
+do
+docker tag $i $registry/$i
+docker push $registry/$i
+docker rmi $registry/$i
+done
+
+
diff --git a/install/TODO b/install/TODO
new file mode 100644
index 0000000..67a0653
--- /dev/null
+++ b/install/TODO
@@ -0,0 +1,15 @@
+- Create an installer tar file when not run in test mode
+  - This file should include:
+    - The qcow2 image for the installer
+    - The KVM xml metadata for the installer
+    - The private key to access the VM
+    - The bootstrap script to launch the installer
+  - In the future, it could include the following (which would make it 1 or 2G larger):
+    - The .deb file to install vagrant
+    - .vagrant.d directory with all the configs and boxes
+- Clean up the ansible scripts
+  - Create a cluster-host role
+    - Install all dependent software using dpkg and pip
+  - Move the pull and push roles into the voltha role
+    - Use the target selector to trigger the appropriate ones
+    - OR create voltha-deploy and voltha-create roles (TBD)
diff --git a/install/Vagrantfile b/install/Vagrantfile
new file mode 100644
index 0000000..a07ec95
--- /dev/null
+++ b/install/Vagrantfile
@@ -0,0 +1,62 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# This Vagrantfile is used for testing the installer. It creates 3 servers
+# with a vanilla ubuntu server image on each of them.
+Vagrant.configure(2) do |config|
+  config.vm.synced_folder ".", "/vagrant", disabled: true
+  (1..3).each do |i|
+    config.vm.define "ha-serv#{i}" do |d|
+      d.ssh.forward_agent = true
+      d.vm.box = "ubuntu1604"
+      d.vm.hostname = "ha-serv#{i}"
+      d.vm.provider "libvirt" do |v|
+        v.memory = 6144
+      end
+    end
+  end
+
+  if Vagrant.has_plugin?("vagrant-cachier")
+    config.cache.scope = :box
+  end
+
+end
+
+#Vagrant.configure(2) do |config|
+#
+#  config.vm.synced_folder ".", "/vagrant", disabled: true
+#  if /cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM
+#    puts("Configuring for windows")
+#    config.vm.synced_folder "../../..", "/cord", mount_options: ["dmode=700,fmode=600"]
+#    Box = "ubuntu/xenial64"
+#    Provider = "virtualbox"
+#  elsif RUBY_PLATFORM =~ /linux/
+#    puts("Configuring for linux")
+#    config.vm.synced_folder "../../..", "/cord", type: "nfs"
+#    Box = "ubuntu1604"
+#    Provider = "libvirt"
+#  else
+#    puts("Configuring for other")
+#    config.vm.synced_folder "../../..", "/cord"
+#    Box = "ubuntu/xenial64"
+#    Provider = "virtualbox"
+#  end
+#
+#  config.vm.define "voltha" do |d|
+#    d.ssh.forward_agent = true
+#    d.vm.box = Box
+#    d.vm.hostname = "voltha"
+#    d.vm.network "private_network", ip: "10.100.198.220"
+#    #d.vm.provision :shell, path: "ansible/scripts/bootstrap_ansible.sh"
+#    #d.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /cord/incubator/voltha/ansible/voltha.yml -c local"
+#    #d.vm.provision :shell, inline: "cd /cord/incubator/voltha && source env.sh && make install-protoc && chmod 777 /tmp/fluentd"
+#    d.vm.provider Provider do |v|
+#      v.memory = 6144
+#    end
+#  end
+#
+#  if Vagrant.has_plugin?("vagrant-cachier")
+#    config.cache.scope = :box
+#  end
+#
+#end
diff --git a/install/ansible/ansible.cfg b/install/ansible/ansible.cfg
new file mode 100644
index 0000000..bd331b2
--- /dev/null
+++ b/install/ansible/ansible.cfg
@@ -0,0 +1,9 @@
+[defaults]
+callback_plugins=/etc/ansible/callback_plugins/
+host_key_checking=False
+deprecation_warnings=False
+
+[privilege_escalation]
+become=True
+become_method=sudo
+become_user=root
diff --git a/install/ansible/group_vars/all b/install/ansible/group_vars/all
new file mode 100644
index 0000000..f00163a
--- /dev/null
+++ b/install/ansible/group_vars/all
@@ -0,0 +1,37 @@
+ip: "{{ facter_ipaddress_eth1 }}"
+consul_extra: ""
+proxy_url: http://{{ facter_ipaddress_eth1 }}
+proxy_url2: http://{{ facter_ipaddress_eth1 }}
+registry_url: 10.100.198.220:5000/
+jenkins_ip: 10.100.198.220
+debian_version: xenial
+docker_cfg: docker.cfg
+docker_cfg_dest: /etc/default/docker
+docker_registry: "localhost:5000"
+docker_push_registry: "vinstall:5000"
+cord_home: /home/volthainstall/cord
+voltha_containers:
+  - voltha/nginx
+  - voltha/grafana
+  - voltha/portainer
+  - cord/vcli
+  - cord/dashd
+  - cord/config-push
+  - cord/tester
+  - cord/onos
+  - cord/shovel
+  - cord/netconf
+  - cord/podder
+  - cord/ofagent
+  - cord/chameleon
+  - cord/voltha
+  - cord/voltha-base
+  - nginx
+  - consul
+  - fluent/fluentd
+  - portainer/portainer
+  - wurstmeister/kafka
+  - wurstmeister/zookeeper
+  - kamon/grafana_graphite
+  - gliderlabs/registrator
+  - centurylink/ca-certs
diff --git a/install/ansible/hosts/cluster b/install/ansible/hosts/cluster
new file mode 100644
index 0000000..30acd99
--- /dev/null
+++ b/install/ansible/hosts/cluster
@@ -0,0 +1 @@
+[cluster]
diff --git a/install/ansible/hosts/installer b/install/ansible/hosts/installer
new file mode 100644
index 0000000..2514607
--- /dev/null
+++ b/install/ansible/hosts/installer
@@ -0,0 +1 @@
+[installer]
diff --git a/install/ansible/hosts/voltha b/install/ansible/hosts/voltha
new file mode 100644
index 0000000..2ce2a14
--- /dev/null
+++ b/install/ansible/hosts/voltha
@@ -0,0 +1 @@
+[voltha]
diff --git a/install/ansible/java/tasks/main.yml b/install/ansible/java/tasks/main.yml
new file mode 100644
index 0000000..cbeb786
--- /dev/null
+++ b/install/ansible/java/tasks/main.yml
@@ -0,0 +1,5 @@
+- name: Package is present
+  apt:
+    name=openjdk-8-jdk
+    state=present
+  tags: [java]
diff --git a/install/ansible/roles/apt-repository/tasks/debian.yml b/install/ansible/roles/apt-repository/tasks/debian.yml
new file mode 100644
index 0000000..b77a9f6
--- /dev/null
+++ b/install/ansible/roles/apt-repository/tasks/debian.yml
@@ -0,0 +1,45 @@
+- name: The apt-repository is copied
+  copy:
+    src: "{{ cord_home }}/incubator/voltha/install/apt-mirror"
+    dest: /home/vinstall
+    owner: vinstall
+    group: vinstall
+  tags: [apt]
+- name: Nginx is installed
+  apt:
+    name: nginx
+    state: latest
+  tags: [apt]
+
+- name: Nginx config is copied
+  copy:
+    src: "{{ cord_home }}/incubator/voltha/install/nginx-default"
+    dest: /etc/nginx/sites-enabled/default
+  register: copy_result
+  tags: [apt]
+
+- name: nginx is restarted
+  command: service nginx restart
+  when: copy_result|changed
+  tags: [apt]
+
+#- name: NFS is installed TESTING ONLY REMOVE FOR PRODUCTION
+#  apt:
+#    name: nfs-common
+#    state: latest
+#  tags: [apt]
+#
+#- name: Apt repo is mounted TESTING ONLY REMOVE FOR PRODUCTION
+#  mount:
+#    name: /home/vinstall/apt-mirror
+#    src: "{{ mount_host }}:{{ cord_home }}/incubator/voltha/install/apt-mirror"
+#    fstype: nfs
+#    state: mounted
+#  tags: [apt]
+
+- name: Links to the repos are created
+  file:
+    src: /home/vinstall/apt-mirror/mirror/archive.ubuntu.com/ubuntu
+    dest: /var/www/ubuntu
+    state: link
+  tags: [apt]
diff --git a/install/ansible/roles/apt-repository/tasks/main.yml b/install/ansible/roles/apt-repository/tasks/main.yml
new file mode 100644
index 0000000..1495847
--- /dev/null
+++ b/install/ansible/roles/apt-repository/tasks/main.yml
@@ -0,0 +1,5 @@
+- include: debian.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
+
+- include: centos.yml
+  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
diff --git a/install/ansible/roles/common/defaults/main.yml b/install/ansible/roles/common/defaults/main.yml
new file mode 100644
index 0000000..7be66d2
--- /dev/null
+++ b/install/ansible/roles/common/defaults/main.yml
@@ -0,0 +1,24 @@
+hosts: [
+  { host_ip: "10.100.198.220", host_name: "voltha"},
+]
+
+use_latest_for:
+  - debian-keyring
+  - debian-archive-keyring
+  - python-dev
+  - kafkacat
+  - libssl-dev
+  - libffi-dev
+  - libpcap-dev
+  - libxml2-dev
+  - libxslt1-dev
+  - python-virtualenv
+  - jq
+  - python-nose
+  - python-flake8
+  - python-scapy
+#  - python-libpcap
+
+obsolete_services:
+  - puppet
+  - chef-client
diff --git a/install/ansible/roles/common/files/ssh_config b/install/ansible/roles/common/files/ssh_config
new file mode 100644
index 0000000..990a43d
--- /dev/null
+++ b/install/ansible/roles/common/files/ssh_config
@@ -0,0 +1,3 @@
+Host *
+   StrictHostKeyChecking no
+   UserKnownHostsFile=/dev/null
diff --git a/install/ansible/roles/common/tasks/main.yml b/install/ansible/roles/common/tasks/main.yml
new file mode 100644
index 0000000..8b1c054
--- /dev/null
+++ b/install/ansible/roles/common/tasks/main.yml
@@ -0,0 +1,48 @@
+- name: JQ is present
+  apt:
+    name: jq
+    force: yes
+  tags: [common]
+
+- name: Host is present
+  lineinfile:
+    dest: /etc/hosts
+    regexp: "^{{ item.host_ip }}"
+    line: "{{ item.host_ip }} {{ item.host_name }}"
+  with_items: "{{ hosts }}"
+  tags: [common]
+
+- name: Latest apt packages
+  apt:
+    name: "{{ item }}"
+  with_items: "{{ use_latest_for }}"
+  when: target != "cluster"
+  tags: [common]
+
+- name: Services are not running
+  service:
+    name: "{{ item }}"
+    state: stopped
+  ignore_errors: yes
+  with_items: "{{ obsolete_services }}"
+  tags: [common]
+
+- name: Ensure there is a .ssh directory for /root
+  file:
+    path: "{{ ansible_env['HOME'] }}/.ssh"
+    state: directory
+    owner: root
+    group: root
+
+- name: Ensure known_hosts file is absent
+  file:
+    path: "{{ ansible_env['HOME'] }}/.ssh/known_hosts"
+    state: absent
+
+- name: Disable Known Host Checking
+  copy:
+    src: files/ssh_config
+    dest: "{{ ansible_env['HOME'] }}/.ssh/config"
+    owner: root
+    group: root
+    mode: 0600
diff --git a/install/ansible/roles/docker-compose/tasks/main.yml b/install/ansible/roles/docker-compose/tasks/main.yml
new file mode 100644
index 0000000..4bf56e9
--- /dev/null
+++ b/install/ansible/roles/docker-compose/tasks/main.yml
@@ -0,0 +1,12 @@
+- name: Executable is downloaded
+  get_url:
+    url: https://github.com/docker/compose/releases/download/1.9.0/docker-compose-Linux-x86_64
+    dest: /home/vinstall
+    mode: 0644
+  when: target == "installer"
+- name: Executable is present
+  copy:
+    src: /home/vinstall/docker-compose-Linux-x86_64
+    dest: /usr/local/bin/docker-compose
+    mode: 0755
+  when: target == "cluster"
diff --git a/install/ansible/roles/docker-registry/tasks/debian.yml b/install/ansible/roles/docker-registry/tasks/debian.yml
new file mode 100644
index 0000000..72903ab
--- /dev/null
+++ b/install/ansible/roles/docker-registry/tasks/debian.yml
@@ -0,0 +1,5 @@
+- name: The insecure docker registry is started
+  command: docker run -d -p 5000:5000 --name registry registry:2
+  register: result
+  ignore_errors: true
+  tags: [docker]
diff --git a/install/ansible/roles/docker-registry/tasks/main.yml b/install/ansible/roles/docker-registry/tasks/main.yml
new file mode 100644
index 0000000..1495847
--- /dev/null
+++ b/install/ansible/roles/docker-registry/tasks/main.yml
@@ -0,0 +1,5 @@
+- include: debian.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
+
+- include: centos.yml
+  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
diff --git a/install/ansible/roles/docker/defaults/main.yml b/install/ansible/roles/docker/defaults/main.yml
new file mode 100644
index 0000000..338d16e
--- /dev/null
+++ b/install/ansible/roles/docker/defaults/main.yml
@@ -0,0 +1,6 @@
+docker_extra: ""
+
+centos_files: [
+  { src: "docker.centos.repo", dest: "/etc/yum.repos.d/docker.repo" },
+  { src: "docker.centos.service", dest: "/lib/systemd/system/docker.service" },
+]
\ No newline at end of file
diff --git a/install/ansible/roles/docker/files/docker.centos.repo b/install/ansible/roles/docker/files/docker.centos.repo
new file mode 100644
index 0000000..b472187
--- /dev/null
+++ b/install/ansible/roles/docker/files/docker.centos.repo
@@ -0,0 +1,6 @@
+[dockerrepo]
+name=Docker Repository
+baseurl=https://yum.dockerproject.org/repo/main/centos/7
+enabled=1
+gpgcheck=1
+gpgkey=https://yum.dockerproject.org/gpg
\ No newline at end of file
diff --git a/install/ansible/roles/docker/files/docker.centos.service b/install/ansible/roles/docker/files/docker.centos.service
new file mode 100644
index 0000000..3bbef84
--- /dev/null
+++ b/install/ansible/roles/docker/files/docker.centos.service
@@ -0,0 +1,17 @@
+[Unit]
+Description=Docker Application Container Engine
+Documentation=https://docs.docker.com
+After=network.target docker.socket
+Requires=docker.socket
+
+[Service]
+EnvironmentFile=-/etc/sysconfig/docker
+Type=notify
+ExecStart=/usr/bin/docker daemon --insecure-registry 10.100.198.200:5000 -H fd://
+MountFlags=slave
+LimitNOFILE=1048576
+LimitNPROC=1048576
+LimitCORE=infinity
+
+[Install]
+WantedBy=multi-user.target
diff --git a/install/ansible/roles/docker/tasks/centos.yml b/install/ansible/roles/docker/tasks/centos.yml
new file mode 100644
index 0000000..a8910d4
--- /dev/null
+++ b/install/ansible/roles/docker/tasks/centos.yml
@@ -0,0 +1,23 @@
+- name: CentOS files are copied
+  copy:
+    src: "{{ item.src }}"
+    dest: "{{ item.dest }}"
+  with_items: centos_files
+  tags: [docker]
+
+- name: CentOS package is installed
+  yum:
+    name: docker-engine
+    state: present
+  tags: [docker]
+
+- name: CentOS Daemon is reloaded
+  command: systemctl daemon-reload
+  tags: [docker]
+
+- name: CentOS service is running
+  service:
+    name: docker
+    state: running
+  tags: [docker]
+
diff --git a/install/ansible/roles/docker/tasks/debian.yml b/install/ansible/roles/docker/tasks/debian.yml
new file mode 100644
index 0000000..081fda9
--- /dev/null
+++ b/install/ansible/roles/docker/tasks/debian.yml
@@ -0,0 +1,91 @@
+- name: Debian add Docker repository and update apt cache
+  apt_repository:
+    repo: deb https://apt.dockerproject.org/repo ubuntu-{{ debian_version }} main
+    update_cache: yes
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: Debian Docker is present
+  apt:
+    name: docker-engine
+    state: latest
+    force: yes
+  when: target == "installer"
+  tags: [docker]
+
+#- name: Docker deb install file is present
+#  get_url:
+#    url: https://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_17.05.0~ce-0~ubuntu-xenial_amd64.deb
+#    dest: /home/vinstall
+#    owner: vinstall
+#    group: vinstall
+#  when: target == "installer"
+#  tags: [docker]
+
+#- name: Docker dependencies satisfied
+#  apt:
+#    name: libltdl7
+#    state: latest
+#    force: yes
+#  when: target == "cluster"
+#  tags: [docker]
+
+#- name: Docker install deb file is copied
+#  copy:
+#    src: /home/vinstall/docker-engine_17.05.0~ce-0~ubuntu-xenial_amd64.deb
+#    dest: /home/voltha
+#  when: target == "cluster"
+#  tags: [docker]
+
+#- name: Docker engine is installed
+#  apt:
+#    deb: /home/vinstall/docker-engine_17.05.0~ce-0~ubuntu-xenial_amd64.deb
+#  when: target == "cluster"
+#  tags: [docker]
+
+- name: Debian python-pip is present
+  apt: name=python-pip state=present
+  tags: [docker]
+
+- name: Debian docker-py is present
+  pip:
+    name: docker-py
+    version: 1.6.0
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: netifaces pip package is present
+  pip:
+    name: netifaces
+    version: 0.10.4
+    state: present
+  when: target == "installer"
+  tags: [docker]
+
+- name: Debian files are present
+  template:
+    src: "{{ docker_cfg }}"
+    dest: "{{ docker_cfg_dest }}"
+  register: copy_result
+  tags: [docker]
+
+- name: Debian Daemon is reloaded
+  command: systemctl daemon-reload
+  when: copy_result|changed and is_systemd is defined
+  tags: [docker]
+
+- name: vagrant user is added to the docker group
+  user:
+    name: "{{ ansible_env['SUDO_USER'] }}"
+    group: docker
+  register: user_result
+  tags: [docker]
+
+- name: Debian Docker service is restarted
+  service:
+    name: docker
+    state: restarted
+  when: copy_result|changed or user_result|changed
+  tags: [docker]
diff --git a/install/ansible/roles/docker/tasks/main.yml b/install/ansible/roles/docker/tasks/main.yml
new file mode 100644
index 0000000..1495847
--- /dev/null
+++ b/install/ansible/roles/docker/tasks/main.yml
@@ -0,0 +1,5 @@
+- include: debian.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
+
+- include: centos.yml
+  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
diff --git a/install/ansible/roles/docker/templates/docker-swarm-master.service b/install/ansible/roles/docker/templates/docker-swarm-master.service
new file mode 100644
index 0000000..b284d4b
--- /dev/null
+++ b/install/ansible/roles/docker/templates/docker-swarm-master.service
@@ -0,0 +1,21 @@
+[Unit]
+Description=Docker Application Container Engine
+Documentation=https://docs.docker.com
+After=network.target docker.socket
+Requires=docker.socket
+
+[Service]
+Type=notify
+ExecStart=/usr/bin/docker daemon -H fd:// \
+          --insecure-registry 10.100.198.220:5000 \
+          --registry-mirror=http://10.100.198.220:5001 \
+          --cluster-store=consul://{{ ip }}:8500/swarm \
+          --cluster-advertise={{ ip }}:2375 {{ docker_extra }}
+MountFlags=master
+LimitNOFILE=1048576
+LimitNPROC=1048576
+LimitCORE=infinity
+
+[Install]
+WantedBy=multi-user.target
+
diff --git a/install/ansible/roles/docker/templates/docker-swarm-node.service b/install/ansible/roles/docker/templates/docker-swarm-node.service
new file mode 100644
index 0000000..55bcc50
--- /dev/null
+++ b/install/ansible/roles/docker/templates/docker-swarm-node.service
@@ -0,0 +1,23 @@
+[Unit]
+Description=Docker Application Container Engine
+Documentation=https://docs.docker.com
+After=network.target docker.socket
+Requires=docker.socket
+
+[Service]
+Type=notify
+ExecStart=/usr/bin/docker daemon -H fd:// \
+          -H tcp://0.0.0.0:2375 \
+          -H unix:///var/run/docker.sock \
+          --insecure-registry 10.100.198.220:5000 \
+          --registry-mirror=http://10.100.198.220:5001 \
+          --cluster-store=consul://{{ ip }}:8500/swarm \
+          --cluster-advertise={{ ip }}:2375 {{ docker_extra }}
+MountFlags=slave
+LimitNOFILE=1048576
+LimitNPROC=1048576
+LimitCORE=infinity
+
+[Install]
+WantedBy=multi-user.target
+
diff --git a/install/ansible/roles/docker/templates/docker.cfg b/install/ansible/roles/docker/templates/docker.cfg
new file mode 100644
index 0000000..d59db12
--- /dev/null
+++ b/install/ansible/roles/docker/templates/docker.cfg
@@ -0,0 +1 @@
+DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 192.168.121.91:5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=http://192.168.121.91:5001"
diff --git a/install/ansible/roles/installer/tasks/installer.yml b/install/ansible/roles/installer/tasks/installer.yml
new file mode 100644
index 0000000..5d29235
--- /dev/null
+++ b/install/ansible/roles/installer/tasks/installer.yml
@@ -0,0 +1,49 @@
+- name: Ansible repository is available
+  apt_repository:
+    repo: 'ppa:ansible/ansible'
+  tags: [installer]
+- name: Debian ansible is present
+  apt:
+    name: ansible
+    state: latest
+    force: yes
+  tags: [installer]
+- name: Installer files and directories are copied
+  copy:
+    src: "{{ cord_home }}/incubator/voltha/{{ item }}"
+    dest: /home/vinstall
+    owner: vinstall
+    group: vinstall
+  with_items:
+    - install/installer.sh
+    - install/install.cfg
+    - install/ansible
+    - compose
+    - nginx_config
+  tags: [installer]
+- name: Determine if test mode is active
+  local_action: stat path="{{ cord_home }}/incubator/voltha/install/.test"
+  register: file
+  ignore_errors: True
+- name: Test mode file is copied
+  copy:
+    src: "{{ cord_home }}/incubator/voltha/install/.test"
+    dest: /home/vinstall
+  when: file.stat.exists
+- name: The installer is made executable
+  file:
+    path: /home/vinstall/installer.sh
+    mode: 0744
+  tags: [installer]
+- name: Python docker-py 1.6.0 package source is available
+  command: pip download -d /home/vinstall/docker-py "docker-py==1.6.0"
+  tags: [installer]
+- name: Python netifaces 0.10.4 package source is available
+  command: pip download -d /home/vinstall/netifaces "netifaces==0.10.4"
+  tags: [installer]
+- name: Deb files are saved
+  command: cp -r /var/cache/apt/archives /home/vinstall
+  tags: [installer]
+- name: Deb file directory is renamed
+  command: mv /home/vinstall/archives /home/vinstall/deb_files
+  tags: [installer]
diff --git a/install/ansible/roles/installer/tasks/main.yml b/install/ansible/roles/installer/tasks/main.yml
new file mode 100644
index 0000000..005734b
--- /dev/null
+++ b/install/ansible/roles/installer/tasks/main.yml
@@ -0,0 +1,2 @@
+- include: installer.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
diff --git a/install/ansible/roles/pull-images/tasks/main.yml b/install/ansible/roles/pull-images/tasks/main.yml
new file mode 100644
index 0000000..dde3d78
--- /dev/null
+++ b/install/ansible/roles/pull-images/tasks/main.yml
@@ -0,0 +1,2 @@
+- include: pull.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
diff --git a/install/ansible/roles/pull-images/tasks/pull.yml b/install/ansible/roles/pull-images/tasks/pull.yml
new file mode 100644
index 0000000..9b2044d
--- /dev/null
+++ b/install/ansible/roles/pull-images/tasks/pull.yml
@@ -0,0 +1,12 @@
+- name: Docker containers for Voltha are pulled
+  command: docker pull {{ docker_registry }}/{{ item }}
+  with_items: "{{ voltha_containers }}"
+  tags: [pull]
+- name: Docker images are re-tagged to expected names
+  command: docker tag {{ docker_registry }}/{{ item }} {{ item }}
+  with_items: "{{ voltha_containers }}"
+  tags: [pull]
+- name: Old docker image tags are removed
+  command: docker rmi {{ docker_registry }}/{{ item }}
+  with_items: "{{ voltha_containers }}"
+  tags: [pull]
diff --git a/install/ansible/roles/push-images/tasks/main.yml b/install/ansible/roles/push-images/tasks/main.yml
new file mode 100644
index 0000000..8c8d827
--- /dev/null
+++ b/install/ansible/roles/push-images/tasks/main.yml
@@ -0,0 +1,2 @@
+- include: push.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
diff --git a/install/ansible/roles/push-images/tasks/push.yml b/install/ansible/roles/push-images/tasks/push.yml
new file mode 100644
index 0000000..88dbc52
--- /dev/null
+++ b/install/ansible/roles/push-images/tasks/push.yml
@@ -0,0 +1,12 @@
+- name: Docker images are re-tagged to registry for push
+  command: docker tag {{ item }} {{ docker_push_registry }}/{{ item }}
+  with_items: "{{ voltha_containers }}"
+  tags: [push]
+- name: Docker containers for Voltha are pushed
+  command: docker push {{ docker_push_registry }}/{{ item }}
+  with_items: "{{ voltha_containers }}"
+  tags: [push]
+- name: Temporary registry push tags are removed
+  command: docker rmi {{ docker_push_registry }}/{{ item }}
+  with_items: "{{ voltha_containers }}"
+  tags: [push]
diff --git a/install/ansible/roles/voltha/tasks/main.yml b/install/ansible/roles/voltha/tasks/main.yml
new file mode 100644
index 0000000..597ecd1
--- /dev/null
+++ b/install/ansible/roles/voltha/tasks/main.yml
@@ -0,0 +1,2 @@
+- include: voltha.yml
+  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
diff --git a/install/ansible/roles/voltha/tasks/voltha.yml b/install/ansible/roles/voltha/tasks/voltha.yml
new file mode 100644
index 0000000..c8cc78d
--- /dev/null
+++ b/install/ansible/roles/voltha/tasks/voltha.yml
@@ -0,0 +1,57 @@
+- name: Required directory exists
+  file:
+    path: /cord/incubator/voltha
+    state: directory
+    owner: voltha
+    group: voltha
+  tags: [voltha]
+
+- name: Required directories are copied
+  copy:
+    src: /home/vinstall/{{ item }}
+    dest: /cord/incubator/voltha
+    owner: voltha
+    group: voltha
+  with_items:
+    - compose
+    - nginx_config
+    - docker-py
+    - netifaces
+    - deb_files
+  tags: [voltha]
+
+- name: Nginx module symlink is present
+  file:
+    dest: /cord/incubator/voltha/nginx_config/modules
+    src: ../../usr/lib/nginx/modules
+    state: link
+    follow: no
+    force: yes
+  tags: [voltha]
+
+- name: Nginx startup script is executable
+  file:
+    path: /cord/incubator/voltha/nginx_config/start_service.sh
+    mode: 0755
+  tags: [voltha]
+
+- name: Dependent software is installed
+  command: dpkg -i /cord/incubator/voltha/deb_files/{{ item }}
+  with_items: "{{ deb_files }}"
+  when: target == "cluster"
+  ignore_errors: true
+  tags: [voltha]
+
+- name: Dependent software is initialized
+  command: apt-get -f install
+  when: target == "cluster"
+  tags: [voltha]
+
+- name: Python packages are installed
+  command: pip install {{ item }} --no-index --find-links file:///cord/incubator/voltha/{{ item }}
+  with_items:
+    - docker-py
+    - netifaces
+  when: target == "cluster"
+  tags: [voltha]
+
diff --git a/install/ansible/scripts/bootstrap_ansible.sh b/install/ansible/scripts/bootstrap_ansible.sh
new file mode 100755
index 0000000..6b1fa39
--- /dev/null
+++ b/install/ansible/scripts/bootstrap_ansible.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+set -e
+
+echo "Installing Ansible..."
+rm /var/lib/dpkg/lock
+apt-get install -y software-properties-common
+apt-add-repository ppa:ansible/ansible
+apt-get update
+apt-get install -y ansible apt-transport-https
+cp /vinstall/ansible/ansible.cfg /etc/ansible/ansible.cfg
diff --git a/install/ansible/voltha.yml b/install/ansible/voltha.yml
new file mode 100644
index 0000000..2dbfaf1
--- /dev/null
+++ b/install/ansible/voltha.yml
@@ -0,0 +1,12 @@
+- hosts: cluster
+  remote_user: voltha
+  serial: 1
+  vars:
+    target: cluster
+  roles:
+    - common
+    - voltha
+    - docker
+    - docker-compose
+    - pull-images
+    - java
diff --git a/install/ansible/volthainstall.yml b/install/ansible/volthainstall.yml
new file mode 100644
index 0000000..3e8c05a
--- /dev/null
+++ b/install/ansible/volthainstall.yml
@@ -0,0 +1,17 @@
+- hosts: installer
+  remote_user: vinstall
+  serial: 1
+  vars:
+    target: installer
+  roles:
+    - common
+    - docker
+    - docker-compose
+    - installer
+    - docker-registry
+#    - apt-repository
+- hosts: voltha
+  remote_user: vagrant
+  serial: 1
+  roles:
+    - push-images
diff --git a/install/devCopyToInstaller.sh b/install/devCopyToInstaller.sh
new file mode 100755
index 0000000..5791bd1
--- /dev/null
+++ b/install/devCopyToInstaller.sh
@@ -0,0 +1,27 @@
+#!/bin/bash
+
+# This script is for development use. It copies all of the
+# required files and directories to the installer VM to
+# allow changes to be made without having to rebuild the
+# VM and its registry, which is time consuming.
+
+# usage: devCopyToInstaller.sh
+
+
+rm -f install.cfg
+hosts=""
+for i in `virsh list | awk '{print $2}' | grep ha-serv`
+do
+hosts="$hosts "`virsh domifaddr $i | tail -n +3 | head -n 1 | awk '{print $4}' | sed -e 's~/.*~~'`
+done
+echo "hosts=\"$hosts\"" >> install.cfg
+
+
+ipAddr=`virsh domifaddr "Ubuntu1604LTS-1" | tail -n +3 | head -n 1 | awk '{print $4}' | sed -e 's~/.*~~'`
+
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr rm -fr *
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem installer.sh vinstall@$ipAddr:installer.sh
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem install.cfg vinstall@$ipAddr:install.cfg
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem -r ansible vinstall@$ipAddr:ansible
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem -r ~/cord/incubator/voltha/compose vinstall@$ipAddr:compose
+scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem -r ~/cord/incubator/voltha/nginx_config vinstall@$ipAddr:nginx_config
diff --git a/install/devSetHostList.sh b/install/devSetHostList.sh
new file mode 100755
index 0000000..ea228be
--- /dev/null
+++ b/install/devSetHostList.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+# This script is for development use. It records the IP
+# addresses of the test cluster VMs in install.cfg and saves
+# their vagrant ssh keys under .test so that the installer
+# can access them without prompting for passwords.
+
+# usage: devSetHostList.sh
+
+
+rm -f install.cfg
+rm -fr .test
+mkdir .test
+hosts=""
+for i in `virsh list | awk '{print $2}' | grep ha-serv`
+do
+	ipAddr=`virsh domifaddr $i | tail -n +3 | head -n 1 | awk '{print $4}' | sed -e 's~/.*~~'`
+	hosts="$hosts $ipAddr"
+	hName=`echo $i | sed -e 's/install_//'`
+	cat .vagrant/machines/$hName/libvirt/private_key > .test/$ipAddr
+done
+echo "hosts=\"$hosts\"" >> install.cfg
diff --git a/install/image-list.cfg b/install/image-list.cfg
new file mode 100644
index 0000000..54570c6
--- /dev/null
+++ b/install/image-list.cfg
@@ -0,0 +1,27 @@
+voltha/nginx
+voltha/grafana
+voltha/portainer
+cord/vcli
+cord/dashd
+cord/config-push
+cord/tester
+cord/onos
+cord/shovel
+cord/netconf
+cord/podder
+cord/ofagent
+cord/chameleon
+cord/voltha
+cord/voltha-base
+nginx
+consul
+fluent/fluentd
+alpine
+portainer/portainer
+wurstmeister/kafka
+ubuntu
+onosproject/onos
+wurstmeister/zookeeper
+kamon/grafana_graphite
+gliderlabs/registrator
+centurylink/ca-certs
diff --git a/install/install.cfg b/install/install.cfg
new file mode 100644
index 0000000..4e76718
--- /dev/null
+++ b/install/install.cfg
@@ -0,0 +1,2 @@
+# List of hosts that will make up the voltha cluster
+# hosts="192.168.121.140 192.168.121.13 192.168.121.238"
diff --git a/install/installer.sh b/install/installer.sh
new file mode 100755
index 0000000..9ac3e78
--- /dev/null
+++ b/install/installer.sh
@@ -0,0 +1,114 @@
+#!/bin/bash
+
+baseImage="Ubuntu1604LTS"
+iVmName="Ubuntu1604LTS-1"
+iVmNetwork="vagrant-libvirt"
+shutdownTimeout=5
+ipTimeout=10
+
+lBlue='\033[1;34m'
+green='\033[0;32m'
+orange='\033[0;33m'
+NC='\033[0m'
+red='\033[0;31m'
+yellow='\033[1;33m'
+dGrey='\033[1;30m'
+lGrey='\033[1;37m'
+lCyan='\033[1;36m'
+wd=`pwd`
+
+
+# Clean up any prior executions
+rm -fr .keys
+rm -f ansible/hosts/cluster
+rm -f ansible/host_vars/*
+
+# Source the configuration information
+. install.cfg
+
+# Create the key directory
+mkdir .keys
+
+# Create the host list
+echo "[cluster]" > ansible/hosts/cluster
+
+# Silence SSH and avoid prompts
+rm -f ~/.ssh/config
+echo "Host *" > ~/.ssh/config
+echo "	StrictHostKeyChecking no" >> ~/.ssh/config
+echo "	UserKnownHostsFile /dev/null" >> ~/.ssh/config
+
+sudo cp ~/.ssh/config /root/.ssh/config
+
+
+for i in $hosts
+do
+	# Generate the key for the host
+	echo -e "${lBlue}Generating the key-pair for communication with host ${yellow}$i${NC}"
+	ssh-keygen -f ./$i -t rsa -N ''
+	mv $i .keys
+
+	# Generate the pre-configuration script
+	echo -e "${lBlue}Creating the pre-configuration script${NC}"
+	cat <<HERE > bash_login.sh
+#!/bin/bash
+	echo "voltha ALL=(ALL) NOPASSWD:ALL" > tmp
+	sudo chown root.root tmp
+	sudo mv tmp /etc/sudoers.d/voltha
+	sudo mkdir /home/voltha
+	mkdir voltha_ssh
+	ssh-keygen -f ~/voltha_ssh/id_rsa -t rsa -N ''
+	sudo mv voltha_ssh /home/voltha/.ssh
+HERE
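+	# Append the host-specific public key, the voltha user setup, and the cleanup commands to the generated script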
+	echo "sudo cat <<HERE > /home/voltha/.ssh/authorized_keys" >> bash_login.sh
+	cat $i.pub >> bash_login.sh
+	echo "HERE" >> bash_login.sh
+	echo "chmod 400 /home/voltha/.ssh/authorized_keys" >> bash_login.sh
+	echo "sudo useradd -b /home -d /home/voltha voltha -s /bin/bash" >> bash_login.sh
+	echo "sudo chown -R voltha.voltha /home/voltha" >> bash_login.sh
+	echo "echo 'voltha:voltha' | sudo chpasswd" >> bash_login.sh
+	echo "rm .bash_login" >> bash_login.sh
+	echo "logout" >> bash_login.sh
+	rm $i.pub
+	# Copy the pre-config file to the VM
	echo -e "${lBlue}Transferring pre-configuration script to ${yellow}$i${NC}"
+	if [ -d ".test" ]; then
+		echo -e "${red}Test mode set!!${lBlue} Using pre-populated ssh key for ${yellow}$i${NC}"
+		scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .test/$i bash_login.sh vagrant@$i:.bash_login
+	else
+		scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no bash_login.sh vagrant@$i:.bash_login
+	fi
+	rm bash_login.sh
+
+	# Run the pre-config file on the VM
+	echo -e "${lBlue}Running the pre-configuration script on ${yellow}$i${NC}"
+	if [ -d ".test" ]; then
+		echo -e "${red}Test mode set!!${lBlue} Using pre-populated ssh key for ${yellow}$i${NC}"
+		ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .test/$i vagrant@$i
+	else
+		ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vagrant@$i
+	fi
+
+	# Configure ansible and ssh for silent operation
+	echo -e "${lBlue}Configuring ansible${NC}"
+	echo $i >> ansible/hosts/cluster
+	echo "ansible_ssh_private_key_file: $wd/.keys/$i" > ansible/host_vars/$i
+
+	# Create the tunnel to the registry to allow pulls from localhost
+	echo -e "${lBlue}Creating a secure shell tunnel to the registry for ${yellow}$i${NC}"
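+	# (-R 5000:localhost:5000 makes the installer's local registry reachable as localhost:5000 on the target)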
+	ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .keys/$i -f voltha@$i -R 5000:localhost:5000 -N
+	
+done
+# Add the dependent software list to the cluster variables
+echo -e "${lBlue}Setting up dependent software${NC}"
+echo "deb_files:" >> ansible/group_vars/all
+for i in deb_files/*.deb
+do
+echo "  - `basename $i`" >> ansible/group_vars/all
+done
+
+# Running ansible
+echo -e "${lBlue}Running ansible${NC}"
+cp ansible/ansible.cfg .ansible.cfg
+sudo ansible-playbook ansible/voltha.yml -i ansible/hosts/cluster
+
diff --git a/install/unconfig.sh b/install/unconfig.sh
new file mode 100644
index 0000000..58e4145
--- /dev/null
+++ b/install/unconfig.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+# This is a transient development script
+# it should be deleted before the final
+# upload.
+
+rm -f .ssh/*
+sudo rm /etc/sudoers.d/vinstall
+sudo apt-get -y remove ansible
+sudo apt-get -y autoremove
+rm -fr ansible
+
diff --git a/install/vmTemplate.xml b/install/vmTemplate.xml
new file mode 100644
index 0000000..02fa558
--- /dev/null
+++ b/install/vmTemplate.xml
@@ -0,0 +1,86 @@
+<domain type='kvm'>
+  <name>{{VMName}}</name>
+  <memory unit='KiB'>1048576</memory>
+  <currentMemory unit='KiB'>1048576</currentMemory>
+  <vcpu placement='static'>2</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+  </features>
+  <cpu mode='custom' match='exact'>
+    <model fallback='allow'>Haswell-noTSX</model>
+  </cpu>
+  <clock offset='utc'>
+    <timer name='rtc' tickpolicy='catchup'/>
+    <timer name='pit' tickpolicy='delay'/>
+    <timer name='hpet' present='no'/>
+  </clock>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <pm>
+    <suspend-to-mem enabled='no'/>
+    <suspend-to-disk enabled='no'/>
+  </pm>
+  <devices>
+    <emulator>/usr/bin/kvm-spice</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2'/>
+      <source file='/var/lib/libvirt/images/{{VMName}}.qcow2'/>
+      <target dev='hda' bus='ide'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+    </disk>
+    <disk type='file' device='cdrom'>
+      <driver name='qemu' type='raw'/>
+      <target dev='hdb' bus='ide'/>
+      <readonly/>
+      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
+    </disk>
+    <controller type='usb' index='0' model='ich9-ehci1'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
+    </controller>
+    <controller type='usb' index='0' model='ich9-uhci1'>
+      <master startport='0'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='usb' index='0' model='ich9-uhci2'>
+      <master startport='2'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
+    </controller>
+    <controller type='usb' index='0' model='ich9-uhci3'>
+      <master startport='4'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
+    </controller>
+    <controller type='pci' index='0' model='pci-root'/>
+    <controller type='ide' index='0'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
+    </controller>
+    <controller type='virtio-serial' index='0'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
+    </controller>
+    <interface type='network'>
+      <mac address='52:54:00:ed:19:74'/>
+      <source network='{{VMNetwork}}'/>
+      <model type='virtio'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+    </interface>
+    <console type='pty'>
+      <target type='virtio' port='0'/>
+    </console>
+    <input type='mouse' bus='ps2'/>
+    <input type='keyboard' bus='ps2'/>
+    <graphics type='vnc' port='-1' autoport='yes'/>
+    <video>
+      <model type='cirrus' vram='16384' heads='1'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
+    </video>
+    <memballoon model='virtio'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
+    </memballoon>
+  </devices>
+</domain>
+