rebuild xos-base image in the xos-vm-install role playbook run,
which happens async to juju setup
(whitespace fix)
async testclient install
change single-node-pod.sh to use platform-install repo
reformat and minor fixes to README.md
pull xosproject/cord-app-build inside async xos-vm-install role
whitespace fixes v2
fix path for container build
don't start testclient container before databr has been plumbed
fix context
allow xos-vm-install to run longer as it's rebuilding base
daemonize lxc-start for testclient, avoiding a hang
Change-Id: Icb5da9b69e942aaa79c8256ca5775219f63643d1
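
The async handoff these changes rely on is Ansible's fire-and-forget pattern: the long-running nested playbook is launched with `async`/`poll: 0` and registered, and a later task polls its job id with `async_status`. A condensed sketch of the pairing as used here (the launch lives in roles/test-client-install, the wait in roles/docker-compose):

```
# Launch the nested playbook without blocking (fire and forget).
- name: Run the test-client ansible playbook
  command: ansible-playbook {{ ansible_user_dir }}/test-client-playbook.yml
  async: 3600   # allow up to an hour of runtime
  poll: 0       # return immediately; don't wait
  register: test_client_playbook

# Later, in another role, block until the background job finishes.
- name: Wait for test client to complete installation
  async_status: jid={{ test_client_playbook.ansible_job_id }}
  register: test_client_playbook_result
  until: test_client_playbook_result.finished
  delay: 10     # seconds between polls
  retries: 120  # give up after ~20 minutes
```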
diff --git a/README.md b/README.md
index 63260de..43d4dc7 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,14 @@
-# openstack-cluster-setup
-This repository contains [Ansible](http://docs.ansible.com) playbooks for installing and configuring an OpenStack Kilo cluster
-for use with XOS. This is how we build clusters for [OpenCloud](http://opencloud.us), and is the method of
-installing a [CORD](http://cord.onosproject.org) development POD as well.
+# platform-install
-All of the OpenStack controller services are installed in VMs on a
-single "head node" and connected by an isolated private network. [Juju](http://www.ubuntu.com/cloud/tools/juju) is used
-to install and configure the OpenStack services.
+This repository contains [Ansible](http://docs.ansible.com) playbooks for
+installing and configuring an OpenStack Kilo cluster for use with XOS. This is
+how we build clusters for [OpenCloud](http://opencloud.us), and is the method
+of installing a [CORD](http://cord.onosproject.org) development POD as well.
+
+All of the OpenStack controller services are installed in VMs on a single "head
+node" and connected by an isolated private network.
+[Juju](http://www.ubuntu.com/cloud/tools/juju) is used to install and configure
+the OpenStack services.
## Prerequisites (OpenCloud and CORD)
@@ -14,7 +17,7 @@
* Install a recent version of Ansible (Ansible 1.9.x on Mac OS X or Ubuntu should work).
* Be able to login to all of the cluster servers from the control machine using SSH.
* *Set up servers:* One server in the cluster will be the "head" node, running the OpenStack
- services. The rest will be "compute" nodes.
+ services. The rest will be "compute" nodes.
* Install Ubuntu 14.04 LTS on all servers.
* The user account used to login from the control machine must have *sudo* access.
* Each server should have a single active NIC (preferably eth0) with connectivity to the
@@ -22,11 +25,14 @@
## How to install a CORD POD
-The CORD POD install procedure uses the "head node" of the cluster as the control machine
-for the install. As mentioned above, install Ansible on the head node and check out this repository.
+The CORD POD install procedure uses the "head node" of the cluster as the
+control machine for the install. As mentioned above, install Ansible on the
+head node and check out this repository.
-The playbooks assume that a bridge called *mgmtbr* on the head node is connected to the management
-network. Note that also there must be a DHCP server on the management network that:
+The playbooks assume that a bridge called *mgmtbr* on the head node is
+connected to the management network. Note also that there must be a DHCP
+server on the management network that:
+
1. hands out IP addresses to VMs connected to *mgmtbr*
2. resolves VM names to IP addresses
3. is configured as a resolver on the head and compute nodes
@@ -35,35 +41,45 @@
take a look at [this example configuration](files/etc/dnsmasq.d/cord).
Then follow these steps:
-* Run the `bootstrap.sh` script to install Ansible and set up keys for login via `localhost`
-* Edit *cord-hosts* with the DNS names of your compute nodes, and update the *ansible_ssh_user*
-variable appropriately. Before proceeding, these commands needs to work on the head node:
+* Run the `bootstrap.sh` script to install Ansible and set up keys for login
+ via `localhost`
+* Edit *cord-hosts* with the DNS names of your compute nodes, and update the
+ *ansible_ssh_user* variable appropriately. Before proceeding, these commands
+ need to work on the head node:
+
```
$ ansible -i cord-hosts head -m ping
$ ansible -i cord-hosts compute -m ping
```
+
* Run the following command:
+
```
ansible-playbook -i cord-hosts cord-setup.yml
```
-* Be patient! Some tasks in the above playbook can take a while to complete. For example,
- the "Add virtual machines to Juju's control" task will take about 10 minutes (or more, if you have a
- slow Internet connection).
-* After the playbook finishes, wait for the OpenStack services to come up. You can check on their progress
- using `juju status --format=tabular`. It should take about 30 minutes to install and configure all the OpenStack services.
-* Once the services are up, you can use the `admin-openrc.sh` credentials in the home directory to
- interact with OpenStack. You can SSH to any VM using `ssh ubuntu@<vm-name>`
+* Be patient! Some tasks in the above playbook can take a while to complete.
+ For example, the "Add virtual machines to Juju's control" task will take
+ about 10 minutes (or more, if you have a slow Internet connection).
+* After the playbook finishes, wait for the OpenStack services to come up. You
+ can check on their progress using `juju status --format=tabular`. It should
+ take about 30 minutes to install and configure all the OpenStack services.
+* Once the services are up, you can use the `admin-openrc.sh` credentials in
+ the home directory to interact with OpenStack. You can SSH to any VM using
+ `ssh ubuntu@<vm-name>`
-This will bring up various OpenStack services, including Neutron with the VTN plugin. It will also create
-two VMs called *xos* and *onos-cord* and prep them. Configuring and running XOS and ONOS in these VMs is beyond
-the scope of this README.
+This will bring up various OpenStack services, including Neutron with the VTN
+plugin. It will also create two VMs called *xos* and *onos-cord* and prep
+them. Configuring and running XOS and ONOS in these VMs is beyond the scope of
+this README.
-*NOTE:* The install process only brings up a single nova-compute node. To bring up more nodes
-as compute nodes, perform these steps on the head node:
+*NOTE:* The install process only brings up a single nova-compute node. To
+bring up more nodes as compute nodes, perform these steps on the head node:
+
```
$ juju add-machine ssh:<user>@<compute-host>
$ juju add-unit nova-compute --to <juju-machine-id>
```
+
Refer to the [Juju documentation](https://jujucharms.com/docs/stable/config-manual)
for more information.
@@ -75,49 +91,65 @@
Setting up a single-node CORD environment is simple.
-* Start a CloudLab experiment using profile *OnePC-Ubuntu14.04.4* and login to the node
-* `wget https://raw.githubusercontent.com/open-cloud/openstack-cluster-setup/master/scripts/single-node-pod.sh`
+* Start a CloudLab experiment using profile *OnePC-Ubuntu14.04.4* and login to
+ the node
+* `wget
+ https://raw.githubusercontent.com/opencord/platform-install/master/scripts/single-node-pod.sh`
* `bash single-node-pod.sh [-t] [-e]`
- * With no options, the script installs the OpenStack services and a simulated fabric. It creates VMs for
- XOS and ONOS but does not start these services.
- * Adding the `-t` option will start XOS, bring up a vSG, install a test client, and run a simple E2E test.
- * Adding the `-e` option will add the [ExampleService](http://guide.xosproject.org/devguide/exampleservice/)
- to XOS (and test it if `-t` is also specified).
+ * With no options, the script installs the OpenStack services and a simulated
+ fabric. It creates VMs for XOS and ONOS but does not start these services.
+ * Adding the `-t` option will start XOS, bring up a vSG, install a test
+ client, and run a simple E2E test.
+ * Adding the `-e` option will add the
+ [ExampleService](http://guide.xosproject.org/devguide/exampleservice/) to
+ XOS (and test it if `-t` is also specified).
-As mentioned above, be patient! With a fast Internet connection, the entire process will take at least
-one hour to complete.
+As mentioned above, be patient! With a fast Internet connection, the entire
+process will take at least one hour to complete.
-The install will bring up various OpenStack services, including Neutron with the VTN plugin. It will also create
-two VMs called *xos* and *onos-cord* and prep them. It creates a single nova-compute
-node running inside a VM.
+The install will bring up various OpenStack services, including Neutron with
+the VTN plugin. It will also create two VMs called *xos* and *onos-cord* and
+prep them. It creates a single nova-compute node running inside a VM.
-It should be possible to use this method on any server running Ubuntu 14.04, as long as it has
-sufficient CPU cores and disk space. A server with at least 12 cores and 48GB RAM is recommended.
+It should be possible to use this method on any server running Ubuntu 14.04, as
+long as it has sufficient CPU cores and disk space. A server with at least 12
+cores and 48GB RAM is recommended.
## How to install an OpenCloud cluster
-Once the prerequisites are satisfied, here are the basic steps for installing a new OpenCloud cluster named 'foo':
+Once the prerequisites are satisfied, here are the basic steps for installing a
+new OpenCloud cluster named 'foo':
-* Create *foo-setup.yml* and *foo-compute.yml* files using *cloudlab-setup.yml* and *cloudlab-compute.yml* as templates. Create a *foo-hosts* file with the DNS names of your nodes based on *cloudlab-hosts*.
-* If you are **not** installing on CloudLab, edit *foo-hosts* and add *cloudlab=False*
-under *[all:vars]*.
-* If you are installing a cluster for inclusion in the **public OpenCloud**, change *mgmt_net_prefix* in *foo-setup.yml* to be unique across all OpenCloud clusters.
-* To set up Juju, use it to install the OpenStack services on the head node, and prep the compute nodes, run on the head node:
-```
-$ ansible-playbook -i foo-hosts foo-setup.yaml
-```
-* Log into the head node. For each compute node, put it under control of Juju, e.g.:
-```
-$ juju add-machine ssh:ubuntu@compute-node
-```
-* To install the *nova-compute* service on the compute nodes that were added to Juju, run on the control machine:
-```
-$ ansible-playbook -i foo-hosts foo-compute.yaml
-```
+* Create *foo-setup.yml* and *foo-compute.yml* files using *cloudlab-setup.yml*
+ and *cloudlab-compute.yml* as templates. Create a *foo-hosts* file with the
+ DNS names of your nodes based on *cloudlab-hosts*.
+* If you are **not** installing on CloudLab, edit *foo-hosts* and add
+ *cloudlab=False* under *[all:vars]*.
+* If you are installing a cluster for inclusion in the **public OpenCloud**,
+ change *mgmt_net_prefix* in *foo-setup.yml* to be unique across all OpenCloud
+ clusters.
+* Run the following on the head node to set up Juju, install the OpenStack
+  services, and prep the compute nodes:
+
+```
+$ ansible-playbook -i foo-hosts foo-setup.yaml
+```
+
+* Log into the head node. For each compute node, put it under control of Juju,
+  e.g.:
+
+```
+$ juju add-machine ssh:ubuntu@compute-node
+```
+
+* To install the *nova-compute* service on the compute nodes that were added to
+  Juju, run on the control machine:
+
+```
+$ ansible-playbook -i foo-hosts foo-compute.yaml
+```
+
### Caveats
-* The installation configures port forwarding so that the OpenStack services can be accessed from outside the private network. Some OpenCloud-specific firewalling is also introduced, which will likely require modification for other setups. See: [files/etc/libvirt/hooks/qemu](https://github.com/andybavier/opencloud-cluster-setup/blob/master/files/etc/libvirt/hooks/qemu).
-* By default the compute nodes are controlled and updated automatically using *ansible-pull* from [this repo](https://github.com/andybavier/opencloud-nova-compute-ansible). You may want to change this.
-* All of the service interfaces are configured to use SSL because that's what OpenCloud uses in production. To turn this off, look for the relevant Juju commands in *cloudlab-setup.yaml*.
+* The installation configures port forwarding so that the OpenStack services
+ can be accessed from outside the private network. Some OpenCloud-specific
+ firewalling is also introduced, which will likely require modification for
+ other setups. See:
+ [files/etc/libvirt/hooks/qemu](https://github.com/andybavier/opencloud-cluster-setup/blob/master/files/etc/libvirt/hooks/qemu).
+* By default the compute nodes are controlled and updated automatically using
+ *ansible-pull* from [this
+ repo](https://github.com/andybavier/opencloud-nova-compute-ansible). You may
+ want to change this.
+* All of the service interfaces are configured to use SSL because that's what
+ OpenCloud uses in production. To turn this off, look for the relevant Juju
+ commands in *cloudlab-setup.yaml*.
+
diff --git a/cord-single-playbook.yml b/cord-single-playbook.yml
index 81036ad..ee657e9 100644
--- a/cord-single-playbook.yml
+++ b/cord-single-playbook.yml
@@ -44,6 +44,7 @@
roles:
- xos-vm-install
- onos-vm-install
+ - { role: test-client-install, when: test_client_install }
- juju-setup
- docker-compose
- simulate-fabric
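
A note on the conditional include above: a `when:` attached at the role level is inherited by every task in that role, so the whole test-client-install role is skipped unless `test_client_install` is truthy. Spelled out in the equivalent long form:

```
roles:
  - role: test-client-install
    when: test_client_install   # condition is applied to each task in the role
```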
diff --git a/roles/create-vms/tasks/main.yml b/roles/create-vms/tasks/main.yml
index d12abe5..b20c82e 100644
--- a/roles/create-vms/tasks/main.yml
+++ b/roles/create-vms/tasks/main.yml
@@ -34,7 +34,7 @@
- name: Update apt cache
command: ansible services -m apt -b -u ubuntu -a "update_cache=yes"
-
+
- name: Update software in all the VMs
command: ansible services -m apt -b -u ubuntu -a "upgrade=dist"
diff --git a/roles/docker-compose/tasks/main.yml b/roles/docker-compose/tasks/main.yml
index efb310a..5f1da1d 100644
--- a/roles/docker-compose/tasks/main.yml
+++ b/roles/docker-compose/tasks/main.yml
@@ -30,3 +30,12 @@
- name: Copy admin-openrc.sh into XOS container
command: ansible xos-1 -u ubuntu -m copy \
-a "src=~/admin-openrc.sh dest={{ service_profile_repo_dest }}/{{ xos_configuration }}"
+
+- name: Wait for test client to complete installation
+ when: test_client_install
+ async_status: jid={{ test_client_playbook.ansible_job_id }}
+ register: test_client_playbook_result
+ until: test_client_playbook_result.finished
+ delay: 10
+ retries: 120
+
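With `delay: 10` and `retries: 120`, this wait task polls every 10 seconds for up to 120 attempts, a budget of roughly 120 × 10 s ≈ 20 minutes before the play fails, which should comfortably cover the testclient's lxc bootstrap.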
diff --git a/roles/test-client-install/files/test-client-playbook.yml b/roles/test-client-install/files/test-client-playbook.yml
new file mode 100644
index 0000000..ab17491
--- /dev/null
+++ b/roles/test-client-install/files/test-client-playbook.yml
@@ -0,0 +1,26 @@
+---
+- hosts: nova-compute-1
+ remote_user: ubuntu
+
+ tasks:
+ - name: Install software
+ apt:
+ name={{ item }}
+ update_cache=yes
+ cache_valid_time=3600
+ become: yes
+ with_items:
+ - lxc
+
+ # replaces: sudo sed -i 's/lxcbr0/databr/' /etc/lxc/default.conf
+ - name: Set lxc bridge interface to databr
+ become: yes
+ lineinfile:
+ dest: /etc/lxc/default.conf
+ regexp: "^lxc.network.link ="
+ line: "lxc.network.link = databr"
+
+ - name: Create testclient
+ become: yes
+ command: lxc-create -t ubuntu -n testclient
+
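A quick manual check that the nested playbook did its job (an illustrative session, assuming the lxc 1.x CLI on the compute node):

```
$ ssh ubuntu@nova-compute "sudo lxc-info -n testclient"
$ ssh ubuntu@nova-compute "grep 'lxc.network.link' /etc/lxc/default.conf"
```

`lxc-info` should report the container (STOPPED until single-node-pod.sh later starts it), and the grep should show `lxc.network.link = databr`.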
diff --git a/roles/test-client-install/tasks/main.yml b/roles/test-client-install/tasks/main.yml
new file mode 100644
index 0000000..d10512d
--- /dev/null
+++ b/roles/test-client-install/tasks/main.yml
@@ -0,0 +1,14 @@
+---
+# test-client-install/tasks/main.yml
+
+- name: Copy over test-client ansible playbook
+ copy:
+ src=test-client-playbook.yml
+ dest={{ ansible_user_dir }}/test-client-playbook.yml
+
+- name: Run the test-client ansible playbook
+ command: ansible-playbook {{ ansible_user_dir }}/test-client-playbook.yml
+ async: 3600
+ poll: 0
+ register: test_client_playbook
+
diff --git a/roles/xos-vm-install/defaults/main.yml b/roles/xos-vm-install/defaults/main.yml
index 64c4421..42723d9 100644
--- a/roles/xos-vm-install/defaults/main.yml
+++ b/roles/xos-vm-install/defaults/main.yml
@@ -6,7 +6,7 @@
xos_configuration: "devel"
-xos_container_rebuild: false
+xos_container_rebuild: True
service_profile_repo_url: "https://gerrit.opencord.org/p/service-profile.git"
service_profile_repo_dest: "~/service-profile"
diff --git a/roles/xos-vm-install/files/xos-setup-cord-pod-playbook.yml b/roles/xos-vm-install/files/xos-setup-cord-pod-playbook.yml
index 364882e..8a99769 100644
--- a/roles/xos-vm-install/files/xos-setup-cord-pod-playbook.yml
+++ b/roles/xos-vm-install/files/xos-setup-cord-pod-playbook.yml
@@ -49,24 +49,31 @@
src=~/.ssh/id_rsa
dest={{ service_profile_repo_dest }}/{{ xos_configuration }}/node_key
- - name: download software image
+ - name: Download Glance VM images
get_url:
url={{ item.url }}
checksum={{ item.checksum }}
dest={{ service_profile_repo_dest }}/{{ xos_configuration }}/images/{{ item.name }}.img
with_items: "{{ xos_images }}"
+ - name: Pull database and cord-app-build images
+ become: yes
+ command: docker pull {{ item }}
+ with_items:
+ - xosproject/xos-postgres
+ - xosproject/cord-app-build
+
- name: Pull docker images for XOS
when: not xos_container_rebuild
become: yes
command: docker pull {{ item }}
with_items:
- xosproject/xos-base
- - xosproject/xos-postgres
- name: Rebuild XOS containers
when: xos_container_rebuild
command: make {{ item }}
- chdir="{{ service_profile_repo_dest }}/containers/xos/"
+ chdir="{{ xos_repo_dest }}/containers/xos/"
with_items:
- base
+
diff --git a/roles/xos-vm-install/tasks/main.yml b/roles/xos-vm-install/tasks/main.yml
index 1aa66a9..a4fc803 100644
--- a/roles/xos-vm-install/tasks/main.yml
+++ b/roles/xos-vm-install/tasks/main.yml
@@ -15,7 +15,7 @@
- name: Run the XOS ansible playbook
command: ansible-playbook {{ ansible_user_dir }}/xos-setup-playbook.yml
- async: 2400
+ async: 4800
poll: 0
register: xos_setup_playbook
diff --git a/scripts/single-node-pod.sh b/scripts/single-node-pod.sh
index 9e505d4..201f768 100755
--- a/scripts/single-node-pod.sh
+++ b/scripts/single-node-pod.sh
@@ -16,7 +16,7 @@
echo "Cleaning up files"
rm -rf ~/.juju
rm -f ~/.ssh/known_hosts
- rm -rf ~/openstack-cluster-setup
+ rm -rf ~/platform-install
echo "Cleaning up libvirt/dnsmasq"
sudo rm -f /var/lib/libvirt/dnsmasq/xos-mgmtbr.leases
@@ -35,8 +35,8 @@
[ -e ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
- git clone https://github.com/open-cloud/openstack-cluster-setup.git
- cd ~/openstack-cluster-setup
+ git clone $SETUP_REPO_URL platform-install
+ cd ~/platform-install
git checkout $SETUP_BRANCH
sed -i "s/replaceme/`whoami`/" $INVENTORY
@@ -47,7 +47,7 @@
}
function setup_openstack() {
- cd ~/openstack-cluster-setup
+ cd ~/platform-install
extra_vars="xos_repo_url=$XOS_REPO_URL xos_repo_branch=$XOS_BRANCH"
@@ -76,14 +76,20 @@
}
function setup_test_client() {
- ssh ubuntu@nova-compute "sudo apt-get -y install lxc"
- # Change default bridge
- ssh ubuntu@nova-compute "sudo sed -i 's/lxcbr0/databr/' /etc/lxc/default.conf"
+ # prep moved to roles/test-client-install
- # Create test client
- ssh ubuntu@nova-compute "sudo lxc-create -t ubuntu -n testclient"
- ssh ubuntu@nova-compute "sudo lxc-start -n testclient"
+ # start the test client
+ echo "starting testclient"
+ ssh ubuntu@nova-compute "sudo lxc-start -n testclient -d"
+
+ i=0
+ until ssh ubuntu@nova-compute "sudo lxc-wait -n testclient -s RUNNING -t 60"
+ do
+ # each lxc-wait attempt times out after 60 s, so one pass is roughly a minute
+ i=$((i+1))
+ echo "Waited $i minutes for testclient to start"
+ done
+
+ echo "test client started, configuring testclient network"
# Configure network interface inside of test client with s-tag and c-tag
ssh ubuntu@nova-compute "sudo lxc-attach -n testclient -- ip link add link eth0 name eth0.222 type vlan id 222"
@@ -190,12 +196,13 @@
RUN_TEST=0
EXAMPLESERVICE=0
SETUP_BRANCH="master"
+SETUP_REPO_URL="https://github.com/opencord/platform-install"
INVENTORY="inventory/single-localhost"
XOS_BRANCH="master"
-XOS_REPO_URL="https://gerrit.opencord.org/xos"
+XOS_REPO_URL="https://github.com/opencord/xos"
DIAGNOSTICS=1
-while getopts "b:dehi:r:ts:" opt; do
+while getopts "b:dehi:p:r:ts:" opt; do
case ${opt} in
b ) XOS_BRANCH=$OPTARG
;;
@@ -205,18 +212,21 @@
;;
h ) echo "Usage:"
echo " $0 install OpenStack and prep XOS and ONOS VMs [default]"
- echo " $0 -b <branch> build XOS containers using the <branch> branch of XOS git repo"
+ echo " $0 -b <branch> checkout <branch> of the xos git repo"
echo " $0 -d don't run diagnostic collector"
echo " $0 -e add exampleservice to XOS"
echo " $0 -h display this help message"
echo " $0 -i <inv_file> specify an inventory file (default is inventory/single-localhost)"
- echo " $0 -r <url> use <url> to obtain the the XOS repo"
+ echo " $0 -p <git_url> use <git_url> to obtain the platform-install git repo"
+ echo " $0 -r <git_url> use <git_url> to obtain the xos git repo"
+ echo " $0 -s <branch> checkout <branch> of the platform-install git repo"
echo " $0 -t do install, bring up cord-pod configuration, run E2E test"
- echo " $0 -s <branch> use branch <branch> of the openstack-cluster-setup git repo"
exit 0
;;
i ) INVENTORY=$OPTARG
;;
+ p ) SETUP_REPO_URL=$OPTARG
+ ;;
r ) XOS_REPO_URL=$OPTARG
;;
t ) RUN_TEST=1
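
With the new options in place, a full test run that takes both repos from explicit locations might look like this (the URLs and branches shown are just the script's defaults, spelled out):

```
$ bash single-node-pod.sh -t \
    -p https://github.com/opencord/platform-install -s master \
    -r https://github.com/opencord/xos -b master
```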
diff --git a/vars/cord_defaults.yml b/vars/cord_defaults.yml
index da45ddf..842e9c4 100644
--- a/vars/cord_defaults.yml
+++ b/vars/cord_defaults.yml
@@ -19,6 +19,8 @@
xos_repo_dest: "~/xos"
+xos_container_rebuild: True
+
apt_cacher_name: apt-cache
apt_ssl_sites:
diff --git a/vars/cord_single_defaults.yml b/vars/cord_single_defaults.yml
index 9491b37..cb21344 100644
--- a/vars/cord_single_defaults.yml
+++ b/vars/cord_single_defaults.yml
@@ -17,6 +17,10 @@
xos_repo_branch: "master"
+xos_container_rebuild: True
+
+test_client_install: True
+
apt_cacher_name: apt-cache
apt_ssl_sites:
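
Because `xos_container_rebuild` and `test_client_install` are plain vars-file defaults, a run can still override them without editing these files; Ansible extra-vars take precedence over vars files, e.g. (illustrative invocation):

```
$ ansible-playbook -i inventory/single-localhost cord-single-playbook.yml \
    --extra-vars "test_client_install=False xos_container_rebuild=False"
```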