Added the initialization required to start the docker swarm cluster.
Converted more DOS-format files to Unix format. Updated the document
on building voltha in a QEMU/KVM virtualization environment using
vagrant. Added a configuration file for the installer's Vagrantfile to
eventually allow multiple users on a single compute node, each running
an independent voltha docker swarm cluster in VMs. More work is
required to finalize the multi-user test mode; it will be submitted in
a subsequent update.
This update continues to address Jira VOL-6
Change-Id: I88bc41aa6484877cb76ad38f8bab894f141cebdb
diff --git a/install/BuildingTheInstaller.md b/install/BuildingTheInstaller.md
index f2ef518..27da4b7 100755
--- a/install/BuildingTheInstaller.md
+++ b/install/BuildingTheInstaller.md
@@ -4,17 +4,18 @@
[TOC]
***
-## Bare Metal Setup
+## Set up the Dependencies
+### Bare Metal Setup
**Note:** *If you've already prepared the bare metal machine and have the voltha tree downloaded by following the document ``Building a vOLT-HA Virtual Machine Using Vagrant on QEMU/KVM``, then skip to [Running the Installer](#Building-the-installer).*
Start with an installation of Ubuntu 16.04 LTS on a bare metal server that is capable of virtualization. How to determine this is beyond the scope of this document. When installing the image, ensure that both "OpenSSH server" and "Virtualization Machine Host" are chosen in addition to the default "standard system utilities". Once the installation is complete, log in to the box and type ``virsh list``. If this doesn't work then you'll need to troubleshoot the installation. If it works, proceed to the next section.
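One generic way to confirm that the CPU exposes virtualization extensions (an optional check, not part of the original procedure) is:
```
# A non-zero count indicates VT-x (vmx) or AMD-V (svm) support.
egrep -c '(vmx|svm)' /proc/cpuinfo
# On Ubuntu, the cpu-checker package gives a direct answer.
sudo apt-get install -y cpu-checker && kvm-ok
```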
-##Create the base ubuntu/xenial box
+###Create the base ubuntu/xenial box
There are some flavors of ubuntu boxes available, but they usually have additional features installed or are missing some, so it's best to just create the image from the ubuntu installation iso image.
```
- voltha> wget http://releases.ubuntu.com/xenial/ubuntu-16.04.2-server-i386.iso
+ voltha> wget http://releases.ubuntu.com/xenial/ubuntu-16.04.2-server-amd64.iso
voltha> echo "virt-install -n Ubuntu16.04 -r 1024 --vcpus=2 --disk size=50 -c ubuntu-16.04.2-server-amd64.iso --accelerate --network network=default,model=virtio --connect=qemu:///system --vnc --noautoconsole -v" > Ubuntu16.04Vm
voltha> . Ubuntu16.04Vm
voltha> virt-manager
@@ -41,7 +42,7 @@
vagrant@voltha$ sudo mv tmp.sudo /etc/sudoers.d/vagrant
```
-## Install and configure vagrant
+### Install and configure vagrant
Vagrant ships with Ubuntu 16.04, but that version doesn't work with kvm. Downloading and installing the version from HashiCorp solves the problem.
```
voltha> wget https://releases.hashicorp.com/vagrant/1.9.5/vagrant_1.9.3_x86_64.deb
@@ -50,7 +51,7 @@
voltha> sudo apt-get install libvirt-dev
voltha> vagrant plugin install vagrant-libvirt
```
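Optionally, confirm the expected tooling before proceeding (a quick sanity check, not part of the original steps):
```
voltha> vagrant --version       # should report the HashiCorp build installed above
voltha> vagrant plugin list     # should include vagrant-libvirt
```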
-## Create the default vagrant box
+### Create the default vagrant box
When doing this, be careful that you're not in a directory where a Vagrantfile already exists or you'll trash it. It is recommended that a temporary directory is created to perform these actions and then removed once the new box has been added to vagrant.
```
@@ -79,7 +80,7 @@
voltha> tar czvf ubuntu1604.box ./metadata.json ./Vagrantfile ./box.img
voltha> vagrant box add ubuntu1604.box
```
-##Download the voltha tree
+###Download the voltha tree
The voltha tree contains the Vagrant files required to build the various VMs needed to run, test, and deploy voltha. The easiest approach is to download the entire tree rather than trying to extract the specific ``Vagrantfile(s)`` required.
```
voltha> sudo apt-get install repo
@@ -90,7 +91,7 @@
voltha> repo sync
```
-## Run vagrant to Create a Voltha VM
+### Run vagrant to Create a Voltha VM
First create the voltha VM using vagrant.
```
voltha> vagrant up
@@ -109,11 +110,13 @@
then type
``./CreateInstaller.sh test``.
+You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run unattended for the remainder of the installation.
+
This will take a while, so doing something else in the meantime is recommended.
### Running the installer in test mode
-Once the creation has completed determine the ip address of the VM with the following virs command:
-``virsh domifaddr Ubuntu16.04LTS-1``
+Once the creation has completed determine the ip address of the VM with the following virsh command:
+``virsh domifaddr vInstaller``
Using the ip address provided, log into the installer using
``ssh -i key.pem vinstall@<ip-address-from-above>``
@@ -124,7 +127,8 @@
Once the installation completes, determine the ip-address of one of the cluster VMs.
``virsh domifaddr ha-serv1``
You can use ``ha-serv2`` or ``ha-serv3`` in place of ``ha-serv1`` above. Log into the VM
-``ssh voltah@<ip-address-from-above>``
+``ssh voltha@<ip-address-from-above>``
+The password is `voltha`.
Once logged into the voltha instance, follow the usual procedure to start voltha and validate that it's operating correctly.
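Since this change also initializes a docker swarm across the cluster hosts, a quick way to confirm that the swarm formed (a suggested check, assuming the voltha user has sudo rights on the cluster VMs):
```
voltha> sudo docker node ls              # all three cluster hosts should be listed as managers
voltha> sudo docker info | grep -i swarm # should report "Swarm: active"
```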
### Building the installer in production mode
@@ -135,11 +139,13 @@
To build the installer in production mode type:
``./CreateInstaller.sh``
+You will be prompted for a password 3 times early in the installation as the installer bootstraps itself. The password is `vinstall` in each case. After this, the installer can run unattended for the remainder of the installation.
+
This will take a while and when it completes a directory named ``volthaInstaller`` will have been created. Copy all the files in this directory to a USB flash drive or other portable media and carry them to the installation site.
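As an illustration only (the device node and mount point below are assumptions, not part of this document), the copy to removable media might look like:
```
voltha> sudo mkdir -p /mnt/usb
voltha> sudo mount /dev/sdb1 /mnt/usb
voltha> cp -r volthaInstaller/* /mnt/usb/
voltha> sudo umount /mnt/usb
```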
## Installing Voltha
-To install voltha access to a bare metal server running Ubuntu Server 16.04LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the portable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./installVoltha.sh`` which should produce the following output:
+To install voltha, a bare metal server running Ubuntu Server 16.04 LTS with QEMU/KVM virtualization and OpenSSH installed is required. If the server meets these basic requirements then insert the removable media, mount it, and copy all the files on the media to a directory on the server. Change into that directory and type ``./installVoltha.sh``, which should produce the following output:
```
Checking for the installer archive installer.tar.bz2
Checking for the installer archive parts installer.part*
diff --git a/install/CreateInstaller.sh b/install/CreateInstaller.sh
index 2ae4b44..e8da9a8 100755
--- a/install/CreateInstaller.sh
+++ b/install/CreateInstaller.sh
@@ -203,7 +203,7 @@
fi
if [ $# -eq 1 -a "$1" == "test" ]; then
- echo -e "${lBlue}Testing, the install image ${red}WILL NOT#{lBlue} be built${NC}"
+ echo -e "${lBlue}Testing, the install image ${red}WILL NOT${lBlue} be built${NC}"
else
echo -e "${lBlue}Building, the install image (this can take a while)${NC}"
# Create a temporary directory for all the installer files
diff --git a/install/Vagrantfile b/install/Vagrantfile
index a07ec95..dccee81 100644
--- a/install/Vagrantfile
+++ b/install/Vagrantfile
@@ -3,13 +3,18 @@
# This Vagrantfile is used for testing the installer. It creates 3 servers
# with a vanilla ubuntu server image on each.
+require 'yaml'
+
+# Load the settings which are tweaked by the installer to avoid naming conflicts
+settings = YAML.load_file 'settings.vagrant.yaml'
+
Vagrant.configure(2) do |config|
config.vm.synced_folder ".", "/vagrant", disabled: true
(1..3).each do |i|
- config.vm.define "ha-serv#{i}" do |d|
+ config.vm.define "#{settings['server_name']}#{i}" do |d|
d.ssh.forward_agent = true
- d.vm.box = "ubuntu1604"
- d.vm.hostname = "ha-serv#{i}"
+ d.vm.box = settings["box_source"]
+ d.vm.hostname = "#{settings['server_name']}#{i}"
d.vm.provider "libvirt" do |v|
v.memory = 6144
end
@@ -21,42 +26,3 @@
end
end
-
-#Vagrant.configure(2) do |config|
-#
-# config.vm.synced_folder ".", "/vagrant", disabled: true
-# if /cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM
-# puts("Configuring for windows")
-# config.vm.synced_folder "../../..", "/cord", mount_options: ["dmode=700,fmode=600"]
-# Box = "ubuntu/xenial64"
-# Provider = "virtualbox"
-# elsif RUBY_PLATFORM =~ /linux/
-# puts("Configuring for linux")
-# config.vm.synced_folder "../../..", "/cord", type: "nfs"
-# Box = "ubuntu1604"
-# Provider = "libvirt"
-# else
-# puts("Configuring for other")
-# config.vm.synced_folder "../../..", "/cord"
-# Box = "ubuntu/xenial64"
-# Provider = "virtualbox"
-# end
-#
-# config.vm.define "voltha" do |d|
-# d.ssh.forward_agent = true
-# d.vm.box = Box
-# d.vm.hostname = "voltha"
-# d.vm.network "private_network", ip: "10.100.198.220"
-# #d.vm.provision :shell, path: "ansible/scripts/bootstrap_ansible.sh"
-# #d.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /cord/incubator/voltha/ansible/voltha.yml -c local"
-# #d.vm.provision :shell, inline: "cd /cord/incubator/voltha && source env.sh && make install-protoc && chmod 777 /tmp/fluentd"
-# d.vm.provider Provider do |v|
-# v.memory = 6144
-# end
-# end
-#
-# if Vagrant.has_plugin?("vagrant-cachier")
-# config.cache.scope = :box
-# end
-#
-#end
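The two keys read above come from `install/settings.vagrant.yaml`, added at the end of this change. As a sketch of the eventual multi-user support the commit message describes, a per-user settings file could be generated before `vagrant up` (the naming scheme here is hypothetical, not part of this change):
```
# Hypothetical: give each user's cluster a unique prefix so several
# independent voltha docker swarm clusters can share one compute node.
cat > settings.vagrant.yaml <<EOF
box_source: "ubuntu1604"
server_name: "ha-serv-${USER}-"
EOF
vagrant up   # the Vagrantfile picks these values up via YAML.load_file
```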
diff --git a/install/ansible/group_vars/all b/install/ansible/group_vars/all
index dfa7529..311ec96 100644
--- a/install/ansible/group_vars/all
+++ b/install/ansible/group_vars/all
@@ -5,6 +5,8 @@
docker_push_registry: "vinstall:5000"
cord_home: /home/volthainstall/cord
target_voltha_dir: /cord/incubator/voltha
+docker_py_version: "1.7.0"
+netifaces_version: "0.10.4"
target_voltha_home: /home/voltha
voltha_containers:
- voltha/nginx
diff --git a/install/ansible/roles/cluster-host/files/ssh_config b/install/ansible/roles/cluster-host/files/ssh_config
new file mode 100644
index 0000000..990a43d
--- /dev/null
+++ b/install/ansible/roles/cluster-host/files/ssh_config
@@ -0,0 +1,3 @@
+Host *
+ StrictHostKeyChecking no
+ UserKnownHostsFile=/dev/null
diff --git a/install/ansible/roles/cluster-host/tasks/cluster-host.yml b/install/ansible/roles/cluster-host/tasks/cluster-host.yml
index 20dcd15..20330c4 100644
--- a/install/ansible/roles/cluster-host/tasks/cluster-host.yml
+++ b/install/ansible/roles/cluster-host/tasks/cluster-host.yml
@@ -2,6 +2,39 @@
# is running to install voltha in the cluster hosts.
# When the target == "installer" the installer is being
# created.
+- name: A .ssh directory for the voltha user exists
+ file:
+ #path: "{{ ansible_env['HOME'] }}/.ssh"
+ path: "/home/voltha/.ssh"
+ state: directory
+ owner: voltha
+ group: voltha
+ tags: [cluster_host]
+
+- name: known_hosts file is absent for the voltha user
+ file:
+ path: "/home/voltha/.ssh/known_hosts"
+ state: absent
+ tags: [cluster_host]
+
+- name: Known host checking is disabled
+ copy:
+ src: files/ssh_config
+ dest: "/home/voltha/.ssh/config"
+ owner: voltha
+ group: voltha
+ mode: 0600
+ tags: [cluster_host]
+
+- name: Cluster host keys are propagated to all hosts in the cluster
+ copy:
+ src: files/.keys
+ dest: "/home/voltha"
+ owner: voltha
+ group: voltha
+ mode: 0600
+ tags: [cluster_host]
+
- name: Required configuration directories are copied
copy:
src: "/home/vinstall/{{ item }}"
@@ -13,7 +46,7 @@
- netifaces
- deb_files
when: target == "cluster"
- tags: [voltha]
+ tags: [cluster_host]
- name: Dependent software is installed
command: dpkg -i "{{ target_voltha_home }}/deb_files/{{ item }}"
@@ -21,20 +54,20 @@
when: target == "cluster"
ignore_errors: true
when: target == "cluster"
- tags: [voltha]
+ tags: [cluster_host]
- name: Dependent software is initialized
command: apt-get -f install
when: target == "cluster"
- tags: [voltha]
+ tags: [cluster_host]
-- name: Python packages are installe
+- name: Python packages are installed
command: pip install {{ item }} --no-index --find-links "file://{{ target_voltha_home }}/{{ item }}"
with_items:
- docker-py
- netifaces
when: target == "cluster"
- tags: [voltha]
+ tags: [cluster_host]
- name: Configuration directories are deleted
file:
@@ -45,5 +78,5 @@
- netifaces
- deb_files
when: target == "cluster"
- tags: [voltha]
+ tags: [cluster_host]
diff --git a/install/ansible/roles/common/tasks/main.yml b/install/ansible/roles/common/tasks/main.yml
index 8b1c054..c3bb649 100644
--- a/install/ansible/roles/common/tasks/main.yml
+++ b/install/ansible/roles/common/tasks/main.yml
@@ -2,6 +2,7 @@
apt:
name: jq
force: yes
+ when: target != "cluster"
tags: [common]
- name: Host is present
diff --git a/install/ansible/roles/docker/tasks/debian.yml b/install/ansible/roles/docker/tasks/debian.yml
index 8eed0ff..d9f3f37 100644
--- a/install/ansible/roles/docker/tasks/debian.yml
+++ b/install/ansible/roles/docker/tasks/debian.yml
@@ -24,7 +24,7 @@
- name: Debian docker-py is present
pip:
name: docker-py
- version: 1.6.0
+ version: "{{ docker_py_version }}"
state: present
when: target == "installer"
tags: [docker]
@@ -32,7 +32,7 @@
- name: netifaces pip package is present
pip:
name: netifaces
- version: 0.10.4
+ version: "{{ netifaces_version }}"
state: present
when: target == "installer"
tags: [docker]
@@ -49,7 +49,7 @@
when: copy_result|changed and is_systemd is defined
tags: [docker]
-- name: vagrant user is added to the docker group
+- name: Sudo user is added to the docker group
user:
name: "{{ ansible_env['SUDO_USER'] }}"
group: docker
diff --git a/install/ansible/roles/installer/tasks/installer.yml b/install/ansible/roles/installer/tasks/installer.yml
index 330d512..6be27ae 100644
--- a/install/ansible/roles/installer/tasks/installer.yml
+++ b/install/ansible/roles/installer/tasks/installer.yml
@@ -1,55 +1,55 @@
-- name: Ansible repository is available
- apt_repository:
- repo: 'ppa:ansible/ansible'
- tags: [installer]
-- name: Debian ansible is present
- apt:
- name: ansible
- state: latest
- force: yes
- tags: [installer]
-- name: Installer files and directories are copied
- copy:
- src: "{{ cord_home }}/incubator/voltha/{{ item }}"
- dest: /home/vinstall
- owner: vinstall
- group: vinstall
- with_items:
- - install/installer.sh
- - install/install.cfg
- - install/ansible
- - compose
- - nginx_config
- tags: [installer]
-- name: Determine if test mode is active
- become: false
- local_action: stat path="{{ cord_home }}/incubator/voltha/install/.test"
- register: file
- ignore_errors: true
-- name: Test mode file is copied
- copy:
- src: "{{ cord_home }}/incubator/voltha/install/.test"
- dest: /home/vinstall
- when: file.stat.exists
-- name: The installer is made executable
- file:
- path: /home/vinstall/installer.sh
- mode: 0744
- tags: [installer]
-- name: Python docker-py 1.6.0 package source is available
- command: pip download -d /home/vinstall/docker-py "docker-py==1.6.0"
- tags: [installer]
-- name: Python netifaces 0.10.4 package source is available
- command: pip download -d /home/vinstall/netifaces "netifaces==0.10.4"
- tags: [installer]
-- name: Deb file directory doesn't exist
- file:
- path: /home/vinstall/deb_files
- state: absent
- tags: [installer]
-- name: Deb files are saved.
- command: cp -r /var/cache/apt/archives /home/vinstall
- tags: [installer]
-- name: Deb file directory is renamed
- command: mv /home/vinstall/archives /home/vinstall/deb_files
- tags: [installer]
+- name: Ansible repository is available
+ apt_repository:
+ repo: 'ppa:ansible/ansible'
+ tags: [installer]
+- name: Debian ansible is present
+ apt:
+ name: ansible
+ state: latest
+ force: yes
+ tags: [installer]
+- name: Installer files and directories are copied
+ copy:
+ src: "{{ cord_home }}/incubator/voltha/{{ item }}"
+ dest: /home/vinstall
+ owner: vinstall
+ group: vinstall
+ with_items:
+ - install/installer.sh
+ - install/install.cfg
+ - install/ansible
+ - compose
+ - nginx_config
+ tags: [installer]
+- name: Determine if test mode is active
+ become: false
+ local_action: stat path="{{ cord_home }}/incubator/voltha/install/.test"
+ register: file
+ ignore_errors: true
+- name: Test mode file is copied
+ copy:
+ src: "{{ cord_home }}/incubator/voltha/install/.test"
+ dest: /home/vinstall
+ when: file.stat.exists
+- name: The installer is made executable
+ file:
+ path: /home/vinstall/installer.sh
+ mode: 0744
+ tags: [installer]
+- name: Python docker-py {{ docker_py_version }} package source is available
+ command: pip download -d /home/vinstall/docker-py "docker-py=={{ docker_py_version }}"
+ tags: [installer]
+- name: Python netifaces {{ netifaces_version }} package source is available
+ command: pip download -d /home/vinstall/netifaces "netifaces=={{ netifaces_version }}"
+ tags: [installer]
+- name: Deb file directory doesn't exist
+ file:
+ path: /home/vinstall/deb_files
+ state: absent
+ tags: [installer]
+- name: Deb files are saved.
+ command: cp -r /var/cache/apt/archives /home/vinstall
+ tags: [installer]
+- name: Deb file directory is renamed
+ command: mv /home/vinstall/archives /home/vinstall/deb_files
+ tags: [installer]
diff --git a/install/ansible/roles/swarm/tasks/main.yml b/install/ansible/roles/swarm/tasks/main.yml
new file mode 100644
index 0000000..92e73c2
--- /dev/null
+++ b/install/ansible/roles/swarm/tasks/main.yml
@@ -0,0 +1,2 @@
+- include: swarm.yml
+ when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
diff --git a/install/ansible/roles/swarm/tasks/swarm.yml b/install/ansible/roles/swarm/tasks/swarm.yml
new file mode 100644
index 0000000..b0a7009
--- /dev/null
+++ b/install/ansible/roles/swarm/tasks/swarm.yml
@@ -0,0 +1,24 @@
+---
+- name: Ensure Swarm Master Initialization
+ command: "docker swarm init --advertise-addr {{ swarm_master_addr }}"
+ when: target == "swarm-master"
+ tags: [swarm]
+
+- name: Capture Swarm Cluster Manager Token
+  become: true
+  become_user: voltha
+  shell: ssh -i /home/voltha/.keys/{{ swarm_master_addr }} voltha@{{ swarm_master_addr }} sudo docker swarm join-token -q manager 2>/dev/null
+ register: manager_token
+ changed_when: false
+ when: target == "swarm-master-backup"
+ tags: [swarm]
+
+- name: Debug
+ debug:
+ msg: "TOKEN: {{ manager_token.stdout }}"
+ when: target == "swarm-master-backup"
+ tags: [swarm]
+
+- name: Join Swarm Cluster
+ command: "docker swarm join --token {{ manager_token.stdout }} {{ swarm_master_addr }}:2377"
+ when: target == "swarm-master-backup"
+ tags: [swarm]
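For reference, the manual equivalent of these swarm tasks is the standard docker CLI sequence below (the address is a placeholder; the playbook takes it from `swarm_master_addr`):
```
# On the designated swarm master:
sudo docker swarm init --advertise-addr 10.10.10.1        # placeholder address
# From a backup master, fetch a manager join token over ssh:
TOKEN=$(ssh -i /home/voltha/.keys/10.10.10.1 voltha@10.10.10.1 \
        sudo docker swarm join-token -q manager)
# Join the backup master to the swarm as a manager:
sudo docker swarm join --token "$TOKEN" 10.10.10.1:2377
```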
diff --git a/install/ansible/swarm-master-backup.yml b/install/ansible/swarm-master-backup.yml
new file mode 100644
index 0000000..1e8eb3b
--- /dev/null
+++ b/install/ansible/swarm-master-backup.yml
@@ -0,0 +1,7 @@
+- hosts: swarm-master-backup
+ remote_user: voltha
+ serial: 1
+ vars:
+ target: swarm-master-backup
+ roles:
+ - swarm
diff --git a/install/ansible/swarm-master.yml b/install/ansible/swarm-master.yml
new file mode 100644
index 0000000..2c956d2
--- /dev/null
+++ b/install/ansible/swarm-master.yml
@@ -0,0 +1,7 @@
+- hosts: swarm-master
+ remote_user: voltha
+ serial: 1
+ vars:
+ target: swarm-master
+ roles:
+ - swarm
diff --git a/install/installer.sh b/install/installer.sh
index 61d1927..465f5d9 100755
--- a/install/installer.sh
+++ b/install/installer.sh
@@ -1,10 +1,5 @@
#!/bin/bash
-baseImage="Ubuntu1604LTS"
-iVmName="Ubuntu1604LTS-1"
-shutdownTimeout=5
-ipTimeout=10
-
lBlue='\033[1;34m'
green='\033[0;32m'
orange='\033[0;33m'
@@ -39,7 +34,6 @@
sudo cp ~/.ssh/config /root/.ssh/config
-
for i in $hosts
do
# Generate the key for the host
@@ -106,8 +100,34 @@
echo " - `basename $i`" >> ansible/group_vars/all
done
+# Make sure the ssh keys propagate to all hosts allowing passwordless logins between them
+echo -e "${lBlue}Propagating ssh keys${NC}"
+cp -r .keys ansible/roles/cluster-host/files/.keys
+
# Running ansible
echo -e "${lBlue}Running ansible${NC}"
cp ansible/ansible.cfg .ansible.cfg
sudo ansible-playbook ansible/voltha.yml -i ansible/hosts/cluster
+# Now initialize the docker swarm cluster with managers.
+# The first server needs to be the primary swarm manager;
+# the other nodes are backup managers that join the swarm.
+# In the future, worker nodes will likely be added.
+
+echo "[swarm-master]" > ansible/hosts/swarm-master
+echo "[swarm-master-backup]" > ansible/hosts/swarm-master-backup
+
+ctr=1
+for i in $hosts
+do
+ if [ $ctr -eq 1 ]; then
+ echo $i >> ansible/hosts/swarm-master
+ echo "swarm_master_addr: \"$i\"" >> ansible/group_vars/all
+ ctr=0
+ else
+ echo $i >> ansible/hosts/swarm-master-backup
+ fi
+done
+sudo ansible-playbook ansible/swarm-master.yml -i ansible/hosts/swarm-master
+sudo ansible-playbook ansible/swarm-master-backup.yml -i ansible/hosts/swarm-master-backup
+
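With three cluster hosts, the loop above produces inventories along these lines (addresses are placeholders for illustration):
```
$ cat ansible/hosts/swarm-master
[swarm-master]
10.10.10.1
$ cat ansible/hosts/swarm-master-backup
[swarm-master-backup]
10.10.10.2
10.10.10.3
```
The same pass also records `swarm_master_addr` in `ansible/group_vars/all`, which the swarm role uses when initializing and joining the swarm.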
diff --git a/install/settings.vagrant.yaml b/install/settings.vagrant.yaml
new file mode 100644
index 0000000..1d0b380
--- /dev/null
+++ b/install/settings.vagrant.yaml
@@ -0,0 +1,2 @@
+box_source: "ubuntu1604"
+server_name: "ha-serv"