VOL-234. This update fixes issues seen with unresolved symbolic links
in the voltha tree when building and using the installer. There are
also updates to the documentation based on feedback received from
multiple parties.
Change-Id: I21c7920cd52c42c7d5f4b48e064eafd04dd52203
diff --git a/BuildingVolthaUsingVagrantOnKVM.md b/BuildingVolthaUsingVagrantOnKVM.md
index b982a62..2e4a0aa 100755
--- a/BuildingVolthaUsingVagrantOnKVM.md
+++ b/BuildingVolthaUsingVagrantOnKVM.md
@@ -4,8 +4,17 @@
[TOC]
***
-##Bare Metal Setup
-Start with an installation of Ubuntu16.04LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. When installing the image ensure that both "OpenSSH server" and "Virtualization Machine Host" are chosen in addition to the default "standard system utilities". Once the installation is complete, login to the box and type ``virsh list``. If this doesnt work then you'll need to troubleshoot the installation. If it works, then proceed to the next section.
+### Bare Metal Setup
+The bare metal machine MUST have Ubuntu Server 16.04 LTS installed with the following packages (and only the following packages) selected during installation:
+```
+[*] standard system utilities
+[*] Virtual Machine host
+[*] OpenSSH server
+```
+This will ensure that the user you've defined during the installation can run the virsh shell as a standard user rather than as the root user. This is necessary to ensure the installer software operates as designed. Please ensure that Ubuntu **server** is installed and ***NOT*** Ubuntu desktop.
+![Ubuntu Installer Graphic](file:///C:Users/sslobodr/Documents/Works In Progress/2017/voltha/UbuntuInstallLaptop.png)
+
+Start with a clean installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. Ensure that package selection is as outlined above. Once the installation is complete, log in to the box and type ``virsh list``. If this doesn't work then you'll need to troubleshoot the installation. If it works, then proceed to the next section. Please note: use exactly `virsh list`, ***NOT*** `sudo virsh list`. If you must use the `sudo` command then the installation was not performed properly and should be repeated. If you're familiar with the KVM environment there are steps to solve this and other issues, but this is also beyond the scope of this document. So if you are unfamiliar with the KVM environment, a re-installation exactly as outlined above is required.
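+If the installation was done correctly, running ``virsh list`` as your regular user (no ``sudo``) prints an empty table of VMs, similar to the following:
+```
+ Id    Name                           State
+----------------------------------------------
+
+```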
##Create the base ubuntu/xenial box
Though there are some flavors of ubuntu boxes available, they usually have additional features installed or are missing some, so it's best to just create the image from the ubuntu installation iso image.
@@ -18,7 +27,12 @@
voltha> virt-manager
```
Once the virt manager opens, open the console of the Ubuntu16.04 VM and follow the installation process.
-When promprompted use the hostname ``voltha``. Also when prompted you should create one user ``Vagrant Vagrant`` and use the offered up userid of ``vagrant``. When prompted for the password of the vagrant user, use ``vagrant``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install.
+When prompted use the hostname ``voltha``. Also when prompted you should create one user ``Vagrant Vagrant`` and use the offered up userid of ``vagrant``. When prompted for the password of the vagrant user, use ``vagrant``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install. The last 3 lines of your package selection screen should look like this. Everything above `standard system utilities` should **not** be selected.
+```
+[*] standard system utilities
+[ ] Virtual Machine host
+[*] OpenSSH server
+```
Once the installation is complete, run the VM and log in as vagrant, password vagrant, and install the default vagrant key (this can be done one of two ways, through virt-manager and the console or by using ssh from the hypervisor host; the virt-manager method is shown below):
```
vagrant@voltha$ mkdir -p /home/vagrant/.ssh
@@ -142,7 +156,7 @@
## Run vagrant to Create a Voltha VM
First create the voltha VM using vagrant.
```
-voltha> cd cord/incubator/voltha
+voltha> cd ~/cord/incubator/voltha
voltha> vagrant up
```
Finally, log into the vm using vagrant.
diff --git a/Vagrantfile b/Vagrantfile
index 517ee29..8ea095d 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -23,9 +23,9 @@
puts("Using the QEMU/KVM configuration");
Box = "ubuntu1604"
Provider = "libvirt"
- if settings['testMode'] == "true"
+ if settings['testMode'] == "true" or settings['installMode'] == "true"
config.vm.synced_folder ".", "/vagrant", disabled: true
- config.vm.synced_folder "../..", "/cord", type: "rsync", rsync__exclude: [".git", "venv-linux"]
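+    # Sync with rsync and pass --links explicitly so symbolic links in the tree
+    # are carried across as links instead of being left unresolved (VOL-234).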
+ config.vm.synced_folder "../..", "/cord", type: "rsync", rsync__exclude: [".git", "venv-linux", "install/volthaInstaller", "install/volthaInstaller-2"], rsync__args: ["--verbose", "--archive", "--delete", "-z", "--links"]
else
config.vm.synced_folder "../..", "/cord", type: "nfs"
end
diff --git a/install/BuildVoltha.sh b/install/BuildVoltha.sh
index 8c3af94..7a08577 100755
--- a/install/BuildVoltha.sh
+++ b/install/BuildVoltha.sh
@@ -6,11 +6,34 @@
# Voltha directory
cd ..
+# Blow away the settings file, we're going to set all the settings below
+rm -f settings.vagrant.yaml
+
# Rename voltha for multi-user support
-sed -i -e '/server_name/s/.*/server_name: "voltha'${uId}'"/' settings.vagrant.yaml
-# Build voltha in test mode
+echo "---" > settings.vagrant.yaml
+echo "# The name to use for the server" >> settings.vagrant.yaml
+echo 'server_name: "voltha'${uId}'"' >> settings.vagrant.yaml
+# Make sure that we're using KVM and not virtualbox
+echo '# Use KVM as the VM provider' >> settings.vagrant.yaml
+echo 'vProvider: "KVM"' >> settings.vagrant.yaml
+echo '# Use virtualbox as the VM provider' >> settings.vagrant.yaml
+echo '#vProvider: "virtualbox"' >> settings.vagrant.yaml
+# Build voltha in the specified mode if any
if [ $# -eq 1 -a "$1" == "test" ]; then
- sed -i -e '/test_mode/s/.*/test_mode: "true"/' settings.vagrant.yaml
+ echo '# This determines if test mode is active' >> settings.vagrant.yaml
+ echo 'testMode: "true"' >> settings.vagrant.yaml
+ echo '# This determines if installer mode is active' >> settings.vagrant.yaml
+ echo 'installMode: "false"' >> settings.vagrant.yaml
+elif [ $# -eq 1 -a "$1" == "install" ]; then
+ echo '# This determines if installer mode is active' >> settings.vagrant.yaml
+ echo 'installMode: "true"' >> settings.vagrant.yaml
+ echo '# This determines if test mode is active' >> settings.vagrant.yaml
+ echo 'testMode: "false"' >> settings.vagrant.yaml
+else
+ echo '# This determines if installer mode is active' >> settings.vagrant.yaml
+ echo 'installMode: "false"' >> settings.vagrant.yaml
+ echo '# This determines if test mode is active' >> settings.vagrant.yaml
+ echo 'testMode: "false"' >> settings.vagrant.yaml
fi
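+# Typical invocations: './BuildVoltha.sh test' or './BuildVoltha.sh install';
+# CreateInstaller.sh picks one of these depending on its own mode. With no
+# argument both testMode and installMode are written as "false".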
# Destroy the VM if it's running
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
index 07916eb..91198ce 100755
--- a/install/BuildingTheInstaller.md
+++ b/install/BuildingTheInstaller.md
@@ -16,7 +16,7 @@
![Ubuntu Installer Graphic](file:///C:Users/sslobodr/Documents/Works In Progress/2017/voltha/UbuntuInstallLaptop.png)
**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded from having followed the document ``Building a vOLT-HA Virtual Machine Using Vagrant on QEMU/KVM`` then skip to [Building the Installer](#Building-the-installer).*
-Start with a clean installation of Ubuntu16.04 LTS on a bare metal server that is capable of virtualization selecting the packages outlined above. How to determine this is beyond the scope of this document. Once the installation is complete, login to the box and type ``virsh list``. If this doesnt work then you'll need to troubleshoot the installation. If it works, then proceed to the next section. Please note use exactly `virsh list` ***NOT*** `sudo virsh list`. If you must use the `sudo`command then the installation was not performed properly and should be repeated. If you're familiar with the KVM environment there are steps to solve this and other issues but this is also beyond the scope of this document.
+Start with a clean installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. Ensure that package selection is as outlined above. Once the installation is complete, log in to the box and type ``virsh list``. If this doesn't work then you'll need to troubleshoot the installation. If it works, then proceed to the next section. Please note: use exactly `virsh list`, ***NOT*** `sudo virsh list`. If you must use the `sudo` command then the installation was not performed properly and should be repeated. If you're familiar with the KVM environment there are steps to solve this and other issues, but this is also beyond the scope of this document. So if you are unfamiliar with the KVM environment, a re-installation exactly as outlined above is required.
###Create the base ubuntu/xenial box
Though there are some flavors of ubuntu boxes available, they usually have additional features installed. It is essential for the installer to start from a base install of ubuntu with absolutely no other software installed. To ensure the base image for the installer is a clean ubuntu server install and nothing but a clean ubuntu server install, it is best to just create the image from the ubuntu installation iso image.
@@ -30,31 +30,47 @@
voltha> virt-manager
```
Once the virt manager opens, open the console of the Ubuntu16.04 VM and follow the installation process.
-When promprompted use the hostname ``vinstall``. Also when prompted you should create one user ``vinstall vinstall`` and use the offered up userid of ``vinstall``. When prompted for the password of the vagrant user, use ``vinstall``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install.
-Once the installation is complete, run the VM and log in as vagrant password vagrant and install the default vagrant key (this can be done one of two ways, through virt-manager and the console or by uing ssh from the hypervisor host, the virt-manager method is shown below):
+When prompted use the hostname ``vinstall``. Also when prompted you should create one user ``vinstall vinstall`` and use the offered up userid of ``vinstall``. When prompted for the password of the vinstall user, use ``vinstall``. When asked if a weak password should be used, select yes. Don't encrypt the home directory. Select the OpenSSH server when prompted for packages to install. The last 3 lines of your package selection screen should look like this. Everything above `standard system utilities` should **not** be selected.
```
-vinstall@voltha$ mkdir -p /home/vinstall/.ssh
-vagrant@voltha$ chmod 0700 /home/vinstall/.ssh
-vagrant@voltha$ chown -R vagrant.vagrant /home/vagrant/.ssh
+[*] standard system utilities
+[ ] Virtual Machine host
+[*] OpenSSH server
```
-Also create a .ssh directory for the root user:
+
+Once the installation is complete, run the VM and log in as vinstall password vinstall.
+Create a .ssh directory for the root user:
```
-vagrant@voltha$ sudo mkdir /root/.ssh
+vinstall@vinstall$ sudo mkdir /root/.ssh
```
Add a vinstall file to /etc/sudoers.d/vinstall with the following:
```
-vagrant@voltha$ echo "vinstall ALL=(ALL) NOPASSWD:ALL" > tmp.sudo
-vagrant@voltha$ sudo chown root.root tmp.sudo
-vagrant@voltha$ sudo mv tmp.sudo /etc/sudoers.d/vinstall
+vinstall@vinstall$ echo "vinstall ALL=(ALL) NOPASSWD:ALL" > tmp.sudo
+vinstall@vinstall$ sudo chown root.root tmp.sudo
+vinstall@vinstall$ sudo mv tmp.sudo /etc/sudoers.d/vinstall
```
Shut down the VM.
```
-vinstall@voltha$ sudo telinit 0
+vinstall@vinstall$ sudo telinit 0
```
###Download the voltha tree
-The voltha tree contains the Vagrant files required to build a multitude of VMs required to both run, test, and also to deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required.
+The voltha tree contains the Vagrant files required to build the multitude of VMs needed to run, test, and deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required. If you haven't done so previously, do the following.
+Create a .gitconfig file using your favorite editor and add the following:
+```
+# This is Git's per-user configuration file.
+[user]
+        name = Your Name
+        email = your.email@your.organization.com
+[color]
+        ui = auto
+[review "https://gerrit.opencord.org/"]
+        username=yourusername
+[push]
+        default = simple
+```
+
```
voltha> sudo apt-get install repo
voltha> mkdir cord
voltha> sudo ln -s /cord `pwd`/cord
@@ -66,6 +82,24 @@
### Run vagrant to Create a Voltha VM
***Note:*** If you haven't done so, please follow the steps provided in the document `BuildingVolthaUsingVagrantOnKVM.md` to create the base voltha VM box for vagrant.
+Determine your numeric user id using the following command:
+```
+voltha> id -u
+```
+
+Edit the vagrant configuration in `settings.vagrant.yaml` and ensure that the following variables are set, using the value obtained above in place of `<yourid>`:
+```
+---
+# The name to use for the server
+server_name: "voltha<yourid>"
+# Use virtualbox for development
+# vProvider: "virtualbox"
+# This determines if test mode is active
+testMode: "true"
+# Use KVM for production
+vProvider: "KVM"
+```
+
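+For example, if `id -u` returned `1000` (a typical value for the first user created on a system; yours may differ), the server name entry would read `server_name: "voltha1000"`.
+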
First create the voltha VM using vagrant.
```
voltha> vagrant up
@@ -74,15 +108,24 @@
```
voltha> vagrant ssh
```
+If you were able to start the voltha VM using Vagrant, you can proceed to the next step. If you weren't, please troubleshoot the issue before proceeding any further.
+
## Building the Installer
+Before building the installer, destroy any running voltha VM by first ensuring your config file `settings.vagrant.yaml` is set as specified above and then performing the following:
+
+```
+voltha> cd ~/cord/incubator/voltha
+voltha> vagrant destroy
+```
+
There are 2 different ways to build the installer: in production mode and in test mode.
### Building the installer in test mode
Test mode is useful for testers and developers. The installer build script will also launch 3 vagrant VMs that will be install targets and configure the installer to use them without having to supply passwords for each. This speeds up the subsequent install/test cycle.
To build the installer in test mode go to the installer directory
-``cd /cord/incubator/voltha/install``
+``voltha> cd ~/cord/incubator/voltha/install``
then type
-``./CreateInstaller.sh test``.
+``voltha> ./CreateInstaller.sh test``.
You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run un-attended for the remainder of the installation.
@@ -108,7 +151,7 @@
### Building the installer in production mode
Production mode should be used if the installer created is going to be used in a production environment. In this case, an archive file is created that contains the VM image, the KVM xml metadata file for the VM, the private key to access the VM, and a bootstrap script that sets up the VM, fires it up, and logs into it.
-The archive file and a script called ``installVoltha.sh`` are both placed in a directory named ``volthaInstaller``. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named ``installer.part<XX>`` where XX is a number starting at 00 and going as high as necessary based on the archive size.
+The archive file and a script called ``deployInstaller.sh`` are both placed in a directory named ``volthaInstaller``. If the resulting archive file is greater than 2G, it's broken into 1.8G parts named ``installer.part<XX>`` where XX is a number starting at 00 and going as high as necessary based on the archive size.
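+Should you need to reassemble a split archive manually for inspection (``deployInstaller.sh`` checks for the parts itself), the pieces can be concatenated back together in order and extracted with standard tools, for example:
+```
+cat installer.part* > installer.tar.bz2
+tar xjf installer.tar.bz2
+```
+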
To build the installer in production mode type:
``./CreateInstaller.sh``
@@ -119,9 +162,9 @@
## Installing Voltha
-To install voltha access to a bare metal server running Ubuntu Server 16.04LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./installVoltha.sh`` which should produce the output shown after the *Note*:
+To install voltha, access to a bare metal server running Ubuntu Server 16.04 LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./deployInstaller.sh`` which should produce the output shown after the *Note*:
-***Note:*** If you are a tester and are installing to 3 vagrant VMs on the same server as the installer is running and haven't used test mode, please add the network name that your 3 VMs are using to the the `installVoltha.sh` command. In other words your command should be `./installVoltha.sh <network-name>`. The network name for a vagrant VM is typically `vagrant-libvirt` under QEMU/KVM. If in doubt type `virsh net-list` and verify this. If a network is not provided then the `default` network is used and the target machines should be reachable directly from the installer.
+***Note:*** If you are a tester and are installing to 3 vagrant VMs on the same server as the installer is running and haven't used test mode, please add the network name that your 3 VMs are using to the `deployInstaller.sh` command. In other words your command should be `./deployInstaller.sh <network-name>`. The network name for a vagrant VM is typically `vagrant-libvirt` under QEMU/KVM. If in doubt type `virsh net-list` and verify this. If a network is not provided then the `default` network is used and the target machines should be reachable directly from the installer.
```
Checking for the installer archive installer.tar.bz2
Checking for the installer archive parts installer.part*
diff --git a/install/CreateInstaller.sh b/install/CreateInstaller.sh
index 15a5cd0..8816db6 100755
--- a/install/CreateInstaller.sh
+++ b/install/CreateInstaller.sh
@@ -46,14 +46,6 @@
fi
unset vInst
-# Ensure that the voltha VM is running so that images can be secured
-echo -e "${lBlue}Ensure that the ${lCyan}voltha VM${lBlue} is running${NC}"
-vVM=`virsh list | grep voltha_voltha${uId}`
-
-if [ -z "$vVM" ]; then
- ./BuildVoltha.sh $1
-fi
-
# Verify if this is intended to be a test environment, if so start 3 VMs
# to emulate the production installation cluster.
if [ $# -eq 1 -a "$1" == "test" ]; then
@@ -185,6 +177,19 @@
sudo service networking restart
fi
+# Ensure that the voltha VM is running so that images can be secured
+echo -e "${lBlue}Ensure that the ${lCyan}voltha VM${lBlue} is running${NC}"
+vVM=`virsh list | grep voltha_voltha${uId}`
+
+if [ -z "$vVM" ]; then
+ if [ $# -eq 1 -a "$1" == "test" ]; then
+ ./BuildVoltha.sh $1
+ else
+ # Default to installer mode
+ ./BuildVoltha.sh install
+ fi
+fi
+
# Install python which is required for ansible
echo -e "${lBlue}Installing python${NC}"
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i key.pem vinstall@$ipAddr sudo apt-get update
@@ -276,7 +281,7 @@
# Final location for the installer
rm -fr $installerDirectory
mkdir $installerDirectory
- cp installVoltha.sh $installerDirectory
+ cp deployInstaller.sh $installerDirectory
# Check the image size and determine if it needs to be split.
# To be safe, split the image into chunks smaller than 2G so that
# it will fit on a FAT32 volume.
diff --git a/install/ansible/roles/cluster-host/tasks/cluster-host.yml b/install/ansible/roles/cluster-host/tasks/cluster-host.yml
index b9b2146..d1648f5 100644
--- a/install/ansible/roles/cluster-host/tasks/cluster-host.yml
+++ b/install/ansible/roles/cluster-host/tasks/cluster-host.yml
@@ -35,18 +35,33 @@
mode: 0600
tags: [cluster_host]
+#- name: Required configuration directories are copied
+# copy:
+# src: "/home/vinstall/{{ item }}"
+# dest: "{{ target_voltha_home }}"
+# owner: voltha
+# group: voltha
+# with_items:
+# - docker-py
+# - netifaces
+# - deb_files
+# when: target == "cluster"
+# tags: [cluster_host]
+
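+# synchronize (rsync) is used below instead of copy so that symbolic links in the
+# copied directories are transferred as links rather than left unresolved (VOL-234).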
- name: Required configuration directories are copied
- copy:
+ synchronize:
src: "/home/vinstall/{{ item }}"
dest: "{{ target_voltha_home }}"
- owner: voltha
- group: voltha
+ archive: no
+ owner: no
+ perms: no
+ recursive: yes
+ links: yes
with_items:
- docker-py
- netifaces
- deb_files
- when: target == "cluster"
- tags: [cluster_host]
+  tags: [cluster_host]
- name: apt lists are up-to-date
copy:
diff --git a/install/ansible/roles/installer/tasks/installer.yml b/install/ansible/roles/installer/tasks/installer.yml
index 6be27ae..a958cb0 100644
--- a/install/ansible/roles/installer/tasks/installer.yml
+++ b/install/ansible/roles/installer/tasks/installer.yml
@@ -8,12 +8,28 @@
state: latest
force: yes
tags: [installer]
+#- name: Installer files and directories are copied
+# copy:
+# src: "{{ cord_home }}/incubator/voltha/{{ item }}"
+# dest: /home/vinstall
+# owner: vinstall
+# group: vinstall
+# follow: no
+# with_items:
+# - install/installer.sh
+# - install/install.cfg
+# - install/ansible
+# - compose
+# - nginx_config
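+# synchronize is used below instead of copy so that symlinks in the voltha tree
+# survive the transfer (VOL-234); ownership is restored by the tasks that follow.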
- name: Installer files and directories are copied
- copy:
+ synchronize:
src: "{{ cord_home }}/incubator/voltha/{{ item }}"
dest: /home/vinstall
- owner: vinstall
- group: vinstall
+ archive: no
+ owner: no
+ perms: no
+ recursive: yes
+ links: yes
with_items:
- install/installer.sh
- install/install.cfg
@@ -21,6 +37,29 @@
- compose
- nginx_config
tags: [installer]
+- name: Installer directories are owned by vinstall
+ file:
+ path: /home/vinstall/{{ item }}
+ owner: vinstall
+ group: vinstall
+ recurse: yes
+ follow: no
+ with_items:
+ - ansible
+ - compose
+ - nginx_config
+ tags: [installer]
+- name: Installer files are owned by vinstall
+ file:
+ path: /home/vinstall/{{ item }}
+ owner: vinstall
+ group: vinstall
+ follow: no
+ with_items:
+ - installer.sh
+ - install.cfg
+ tags: [installer]
+
- name: Determine if test mode is active
become: false
local_action: stat path="{{ cord_home }}/incubator/voltha/install/.test"
diff --git a/install/ansible/roles/voltha/tasks/voltha.yml b/install/ansible/roles/voltha/tasks/voltha.yml
index e55a018..aa42aa1 100644
--- a/install/ansible/roles/voltha/tasks/voltha.yml
+++ b/install/ansible/roles/voltha/tasks/voltha.yml
@@ -1,170 +1,203 @@
-# Note: When the target == "cluster" the installer
-# is running to install voltha in the cluster hosts.
-# Whe the target == "installer" the installer is being
-# created.
-- name: The environment is properly set on login
- template:
- src: bashrc.j2
- dest: "{{ target_voltha_home }}/.bashrc"
- owner: voltha
- group: voltha
- mode: "u=rw,g=r,o=r"
- when: target == "cluster"
- tags: [voltha]
-
-- name: The .bashrc file is executed on ssh login
- template:
- src: bash_profile.j2
- dest: "{{ target_voltha_home }}/.bash_profile"
- owner: voltha
- group: voltha
- mode: "u=rw,g=r,o=r"
- when: target == "cluster"
- tags: [voltha]
-
-- name: Required directory exists
- file:
- path: "{{ target_voltha_dir }}"
- state: directory
- owner: voltha
- group: voltha
- when: target == "cluster"
- tags: [voltha]
-
-- name: Required directories are copied
- copy:
- src: "/home/vinstall/{{ item }}"
- dest: "{{ target_voltha_dir }}"
- owner: voltha
- group: voltha
- with_items:
- - compose
- - nginx_config
- when: target == "cluster"
- tags: [voltha]
-
-- name: Nginx module symlink is present
- file:
- dest: "{{ target_voltha_dir }}/nginx_config/modules"
- src: ../../usr/lib/nginx/modules
- state: link
- follow: no
- force: yes
- when: target == "cluster"
- tags: [voltha]
-
-- name: Nginx statup script is executable
- file:
- path: "{{ target_voltha_dir }}/nginx_config/start_service.sh"
- mode: 0755
- when: target == "cluster"
- tags: [voltha]
-
-- name: Configuration files are on the cluster host
- copy:
- src: "files/consul_config"
- dest: "{{ target_voltha_dir }}"
- when: target == "cluster"
- tags: [voltha]
-
-- name: Docker containers for Voltha are pulled
- command: docker pull {{ docker_registry }}/{{ item }}
- with_items: "{{ voltha_containers }}"
- when: target == "cluster"
- tags: [voltha]
-- name: Docker images are re-tagged to expected names
- command: docker tag {{ docker_registry }}/{{ item }} {{ item }}
- with_items: "{{ voltha_containers }}"
- when: target == "cluster"
- tags: [voltha]
-#- name: Old docker image tags are removed
-# command: docker rmi {{ docker_registry }}/{{ item }}
-# with_items: "{{ voltha_containers }}"
-# when: target == "cluster"
-# tags: [voltha]
-
-
-# Update the insecure registry to reflect the current installer.
-# The installer name can change depending on whether test mode
-# is being used or not.
-- name: Enable insecure install registry
- template:
- src: "{{ docker_daemon_json }}"
- dest: "{{ docker_daemon_json_dest }}"
- register: copy_result
- when: target == "installer"
- tags: [voltha]
-
-- name: Debain Daemon is reloaded
- command: systemctl daemon-reload
- when: copy_result|changed and is_systemd is defined and target == "installer"
- tags: [voltha]
-
-- name: Debian Docker service is restarted
- service:
- name: docker
- state: restarted
- when: copy_result|changed or user_result|changed
- when: target == "installer"
- tags: [voltha]
-
-- name: Docker images are re-tagged to registry for push
- command: docker tag {{ item }} {{ docker_push_registry }}/{{ item }}
- with_items: "{{ voltha_containers }}"
- when: target == "installer"
- tags: [voltha]
-- name: Docker containers for Voltha are pushed
- command: docker push {{ docker_push_registry }}/{{ item }}
- with_items: "{{ voltha_containers }}"
- when: target == "installer"
- tags: [voltha]
-- name: Temporary registry push tags are removed
- command: docker rmi {{ docker_push_registry }}/{{ item }}
- with_items: "{{ voltha_containers }}"
- when: target == "installer"
- tags: [voltha]
-
-- name: consul overlay network exists
- command: docker network create --driver overlay --subnet 10.10.10.0/29 consul_net
- when: target == "startup"
- tags: [voltha]
-
-- name: kafka overlay network exists
- command: docker network create --driver overlay --subnet 10.10.11.0/24 kafka_net
- when: target == "startup"
- tags: [voltha]
-
-- name: voltha overlay network exists
- command: docker network create --driver overlay --subnet 10.10.12.0/24 voltha_net
- when: target == "startup"
- tags: [voltha]
-
-- name: consul cluster is running
- command: docker service create --name consul --network consul_net --network voltha_net -e 'CONSUL_BIND_INTERFACE=eth0' --mode global --publish "8300:8300" --publish "8400:8400" --publish "8500:8500" --publish "8600:8600/udp" --mount type=bind,source=/cord/incubator/voltha/consul_config,destination=/consul/config consul agent -config-dir /consul/config
- when: target == "startup"
- tags: [voltha]
-
-- name: zookeeper node zk1 is running
- command: docker service create --name zk1 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=1' -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888" zookeeper
- when: target == "startup"
- tags: [voltha]
-
-- name: zookeeper node zk2 is running
- command: docker service create --name zk2 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=2' -e "server.1=zk1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zk3:2888:3888" zookeeper
- when: target == "startup"
- tags: [voltha]
-
-- name: zookeeper node zk3 is running
- command: docker service create --name zk3 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=3' -e "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=0.0.0.0:2888:3888" zookeeper
- when: target == "startup"
- tags: [voltha]
-
-- name: kafka is running
- command: docker service create --name kafka --network voltha_net -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181" -e "KAFKA_HEAP_OPTS=-Xmx256M -Xms128M" --mode global --publish "9092:9092" wurstmeister/kafka
- when: target == "startup"
- tags: [voltha]
-
-- name: voltha is running on a single host for testing
- command: docker service create --name voltha_core --network voltha_net cord/voltha voltha/voltha/main.py -v --consul=consul:8500 --kafka=kafka
- when: target == "startup"
- tags: [voltha]
+# Note: When the target == "cluster" the installer
+# is running to install voltha in the cluster hosts.
+# When the target == "installer" the installer is being
+# created.
+- name: The environment is properly set on login
+ template:
+ src: bashrc.j2
+ dest: "{{ target_voltha_home }}/.bashrc"
+ owner: voltha
+ group: voltha
+ mode: "u=rw,g=r,o=r"
+ when: target == "cluster"
+ tags: [voltha]
+
+- name: The .bashrc file is executed on ssh login
+ template:
+ src: bash_profile.j2
+ dest: "{{ target_voltha_home }}/.bash_profile"
+ owner: voltha
+ group: voltha
+ mode: "u=rw,g=r,o=r"
+ when: target == "cluster"
+ tags: [voltha]
+
+- name: Required directory exists
+ file:
+ path: "{{ target_voltha_dir }}"
+ state: directory
+ owner: voltha
+ group: voltha
+ when: target == "cluster"
+ tags: [voltha]
+
+#- name: Required directories are copied
+# copy:
+# src: "/home/vinstall/{{ item }}"
+# dest: "{{ target_voltha_dir }}"
+# owner: voltha
+# group: voltha
+# with_items:
+# - compose
+# - nginx_config
+# when: target == "cluster"
+# tags: [voltha]
+
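+# synchronize is used below instead of copy so that symlinks are preserved; the
+# ownership of the copied directories is then fixed up by the task that follows.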
+- name: Installer files and directories are copied
+ synchronize:
+ src: "/home/vinstall/{{ item }}"
+ dest: "{{ target_voltha_dir }}"
+ archive: no
+ owner: no
+ perms: no
+ recursive: yes
+ links: yes
+ with_items:
+ - compose
+ - nginx_config
+ when: target == "cluster"
+ tags: [voltha]
+
+- name: Installer directories are owned by voltha
+ file:
+ path: /home/vinstall/{{ item }}
+ owner: voltha
+ group: voltha
+ recurse: yes
+ follow: no
+ with_items:
+ - compose
+ - nginx_config
+ when: target == "cluster"
+ tags: [voltha]
+
+#- name: Nginx module symlink is present
+# file:
+# dest: "{{ target_voltha_dir }}/nginx_config/modules"
+# src: ../../usr/lib/nginx/modules
+# state: link
+# follow: no
+# force: yes
+# when: target == "cluster"
+# tags: [voltha]
+
+- name: Nginx startup script is executable
+ file:
+ path: "{{ target_voltha_dir }}/nginx_config/start_service.sh"
+ mode: 0755
+ when: target == "cluster"
+ tags: [voltha]
+
+- name: Configuration files are on the cluster host
+ copy:
+ src: "files/consul_config"
+ dest: "{{ target_voltha_dir }}"
+ when: target == "cluster"
+ tags: [voltha]
+
+- name: Docker containers for Voltha are pulled
+ command: docker pull {{ docker_registry }}/{{ item }}
+ with_items: "{{ voltha_containers }}"
+ when: target == "cluster"
+ tags: [voltha]
+- name: Docker images are re-tagged to expected names
+ command: docker tag {{ docker_registry }}/{{ item }} {{ item }}
+ with_items: "{{ voltha_containers }}"
+ when: target == "cluster"
+ tags: [voltha]
+#- name: Old docker image tags are removed
+# command: docker rmi {{ docker_registry }}/{{ item }}
+# with_items: "{{ voltha_containers }}"
+# when: target == "cluster"
+# tags: [voltha]
+
+
+# Update the insecure registry to reflect the current installer.
+# The installer name can change depending on whether test mode
+# is being used or not.
+- name: Enable insecure install registry
+ template:
+ src: "{{ docker_daemon_json }}"
+ dest: "{{ docker_daemon_json_dest }}"
+ register: copy_result
+ when: target == "installer"
+ tags: [voltha]
+
+- name: Debian Daemon is reloaded
+ command: systemctl daemon-reload
+ when: copy_result|changed and is_systemd is defined and target == "installer"
+ tags: [voltha]
+
+- name: Debian Docker service is restarted
+ service:
+ name: docker
+ state: restarted
+ when: copy_result|changed or user_result|changed
+ when: target == "installer"
+ tags: [voltha]
+
+- name: TEMPORARY RULE TO INSTALL ZOOKEEPER
+ command: docker pull zookeeper
+ when: target == "installer"
+ tags: [voltha]
+
+- name: Docker images are re-tagged to registry for push
+ command: docker tag {{ item }} {{ docker_push_registry }}/{{ item }}
+ with_items: "{{ voltha_containers }}"
+ when: target == "installer"
+ tags: [voltha]
+- name: Docker containers for Voltha are pushed
+ command: docker push {{ docker_push_registry }}/{{ item }}
+ with_items: "{{ voltha_containers }}"
+ when: target == "installer"
+ tags: [voltha]
+- name: Temporary registry push tags are removed
+ command: docker rmi {{ docker_push_registry }}/{{ item }}
+ with_items: "{{ voltha_containers }}"
+ when: target == "installer"
+ tags: [voltha]
+
+- name: consul overlay network exists
+ command: docker network create --driver overlay --subnet 10.10.10.0/29 consul_net
+ when: target == "startup"
+ tags: [voltha]
+
+- name: kafka overlay network exists
+ command: docker network create --driver overlay --subnet 10.10.11.0/24 kafka_net
+ when: target == "startup"
+ tags: [voltha]
+
+- name: voltha overlay network exists
+ command: docker network create --driver overlay --subnet 10.10.12.0/24 voltha_net
+ when: target == "startup"
+ tags: [voltha]
+
+- name: consul cluster is running
+ command: docker service create --name consul --network consul_net --network voltha_net -e 'CONSUL_BIND_INTERFACE=eth0' --mode global --publish "8300:8300" --publish "8400:8400" --publish "8500:8500" --publish "8600:8600/udp" --mount type=bind,source=/cord/incubator/voltha/consul_config,destination=/consul/config consul agent -config-dir /consul/config
+ when: target == "startup"
+ tags: [voltha]
+
+- name: zookeeper node zk1 is running
+ command: docker service create --name zk1 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=1' -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888" zookeeper
+ when: target == "startup"
+ tags: [voltha]
+
+- name: zookeeper node zk2 is running
+ command: docker service create --name zk2 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=2' -e "server.1=zk1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zk3:2888:3888" zookeeper
+ when: target == "startup"
+ tags: [voltha]
+
+- name: zookeeper node zk3 is running
+ command: docker service create --name zk3 --network kafka_net --network voltha_net -e 'ZOO_MY_ID=3' -e "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=0.0.0.0:2888:3888" zookeeper
+ when: target == "startup"
+ tags: [voltha]
+
+- name: kafka is running
+ command: docker service create --name kafka --network voltha_net -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181" -e "KAFKA_HEAP_OPTS=-Xmx256M -Xms128M" --mode global --publish "9092:9092" wurstmeister/kafka
+ when: target == "startup"
+ tags: [voltha]
+
+- name: voltha is running on a single host for testing
+ command: docker service create --name voltha_core --network voltha_net cord/voltha voltha/voltha/main.py -v --consul=consul:8500 --kafka=kafka
+ when: target == "startup"
+ tags: [voltha]
diff --git a/install/installVoltha.sh b/install/deployInstaller.sh
similarity index 100%
rename from install/installVoltha.sh
rename to install/deployInstaller.sh
diff --git a/settings.vagrant.yaml b/settings.vagrant.yaml
index 0b5f4da..0bfc1d4 100644
--- a/settings.vagrant.yaml
+++ b/settings.vagrant.yaml
@@ -5,5 +5,7 @@
vProvider: "virtualbox"
# This determines if test mode is active
testMode: "false"
+# This determines if installer mode is active
+installMode: "false"
# Use KVM for production
#vProvider: "KVM"