Merge "Add post-deploy tests as an Ansible role"
diff --git a/INSTALL_SINGLE_NODE.md b/INSTALL_SINGLE_NODE.md
index 17a9718..47d2477 100644
--- a/INSTALL_SINGLE_NODE.md
+++ b/INSTALL_SINGLE_NODE.md
@@ -38,37 +38,46 @@
 
 ## Bring up the developer environment
 On the build host, clone the
-[`platform-install`](https://gerrit.opencord.org/platform-install) repository
+[`cord`](https://gerrit.opencord.org/cord) repository
 anonymously and switch into its top directory:
 
 ```
-git clone https://gerrit.opencord.org/platform-install
-cd platform-install
+git clone https://gerrit.opencord.org/cord
+cd cord
 ```
 
 Bring up the development Vagrant box.  This will take a few minutes, depending on your
 connection speed:
 
 ```
-vagrant up
+vagrant up corddev
 ```
 
 Login to the Vagrant box:
 
 ```
-vagrant ssh
+vagrant ssh corddev
 ```
 
-Switch to the `platform-install` directory.
+Switch to the `/cord` directory.
 
 ```
-cd /platform-install
+cd /cord
 ```
 
+Fetch the sub-modules required by CORD:
+
+```
+./gradlew fetch
+```
+
+Note that the above steps are standard for installing a single-node or multi-node CORD POD.
+
 ## Prepare the configuration file
 
-Edit the configuration file `config/default.yml`.  Add the IP address of your target
-server as well as the username / password for accessing the server.  
+Edit the configuration file `/cord/components/platform-install/config/default.yml`.  Add the IP address of your target
+server as well as the username and password for accessing the server.  You can skip the password if you can SSH to the
+target server from inside the Vagrant VM as that user without one (e.g., by running `ssh-agent`).
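+
+A minimal sketch of the settings involved (the field names here are illustrative, not the
+literal contents of the file; follow the comments in `default.yml` itself):
+
+```
+# illustrative sketch only; key names may differ in the real default.yml
+ip: '10.0.1.1'        # IP address of the target server
+user: 'ubuntu'        # account used to SSH into the target server
+password: 'ubuntu'    # omit if key-based SSH (e.g., via ssh-agent) works
+```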
 
 If your target server is a CloudLab machine, uncomment the following two lines in the
 configuration file:
@@ -78,6 +87,12 @@
 #  - 'on_cloudlab=True'
 ```
 
+Edit `/cord/gradle.properties` to add the following line:
+
+```
+deployConfig=/cord/components/platform-install/config/default.yml
+```
+
 ## Deploy the single-node CORD POD on the target server
 
 Deploy the CORD software to the target server and configure it to form a running POD.
@@ -90,9 +105,6 @@
 > This command uses an Ansible playbook (cord-single-playbook.yml) to install
 > OpenStack services, ONOS, and XOS in VMs on the target server.  It also brings up
 > a compute node as a VM.
->
-> (You *could* also run the above Ansible playbook directly, but Gradle is the
-> top-level build tool of CORD and so we use it here for consistency.)
 
 Note that this step usually takes *at least an hour* to complete.  Be patient!
 
diff --git a/PLATFORM_INSTALL_INTERNALS.md b/PLATFORM_INSTALL_INTERNALS.md
new file mode 100644
index 0000000..f38b82e
--- /dev/null
+++ b/PLATFORM_INSTALL_INTERNALS.md
@@ -0,0 +1,83 @@
+# Platform-Install Internals
+
+## Prerequisites
+
+When platform-install starts, it is assumed that `gradlew fetch` has already been run on the `cord` repo, fetching the necessary sub-repositories for CORD. This includes fetching the platform-install repository.
+
+For the purposes of this document, paths are relative to the root of the platform-install repo unless specified otherwise. When starting from the uber-cord repo, platform-install is usually located at `/cord/components/platform-install`.
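+
+A minimal sketch of establishing that starting state, from inside the development VM (see `INSTALL_SINGLE_NODE.md`):
+
+```
+cd /cord
+./gradlew fetch   # fetches the sub-repositories, including platform-install
+```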
+
+## Configuration
+
+Platform-install uses a configuration file, `config/default.yml`, that contains several variables to be passed to the Ansible playbooks. Notable variables include the IP address of the target machine and the user account information for SSHing into it. There is also an extra variable, `on_cloudlab`, that triggers additional CloudLab-specific actions.
+
+CloudLab nodes boot with small disk partitions set up and most of the disk space unallocated. Setting `on_cloudlab` to true in `config/default.yml` runs actions that allocate this unallocated space.
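+
+The `on_cloudlab` value can also be overridden on the command line when running a playbook directly, per the comment in `vars/cord.yml` (inventory and other arguments are omitted from this sketch):
+
+```
+ansible-playbook cord-single-playbook.yml --extra-vars="on_cloudlab=True"
+```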
+
+## Gradle Scripts
+
+The main Gradle script is located in `build.gradle`.
+
+`build.gradle` includes two notable tasks, `deployPlatform` and `deploySingle`, which perform multi-node and single-node pod installs by executing the Ansible playbooks `cord-head-playbook.yml` and `cord-single-playbook.yml`, respectively.
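+
+A sketch of how these tasks are invoked, assuming `deployConfig` has been pointed at the configuration file via `/cord/gradle.properties` (see `INSTALL_SINGLE_NODE.md`):
+
+```
+cd /cord
+./gradlew deploySingle     # single-node pod; runs cord-single-playbook.yml
+./gradlew deployPlatform   # multi-node head node; runs cord-head-playbook.yml
+```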
+
+## Ansible Playbooks
+
+Platform-install makes extensive use of Ansible roles; the roles to run are selected by two playbooks: `cord-head-playbook.yml` and `cord-single-playbook.yml`.
+
+The key differences, illustrated in the sketch after this list, are:
+* The single-node playbook sets up a simulated fabric, whereas the multi-node install uses a real fabric.
+* The single-node playbook sets up a single compute node running in a VM, whereas the multi-node playbook uses MaaS to provision compute nodes.
+* The single-node playbook always installs a DNS server, whereas the multi-node playbook installs one only when MaaS is not used.
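+
+For example, the DNS play in `cord-head-playbook.yml` gates its roles on the `on_maas` variable (a sketch; the play name is illustrative):
+
+```
+- name: DNS Server Setup            # illustrative play name
+  hosts: head
+  become: yes
+  roles:
+    - { role: dns-nsd, when: not on_maas }
+    - { role: dns-unbound, when: not on_maas }
+    - apt-cacher-ng
+```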
+
+## Ansible Roles and Variables
+
+Ansible roles are located in the `roles` directory.
+
+Ansible variables are located in the `vars` directory. 
+
+### DNS-server and Apt Cache
+
+The first step in bringing up the platform is to set up a DNS server. This is done for the single-node install, and for the multi-node install when MaaS is not used. An apt cache is also set up to facilitate package installation in the many VMs that will be created as part of the platform. Roles executed include:
+
+* dns-nsd
+* dns-unbound
+* apt-cacher-ng
+
+### Pointing to the DNS server
+
+If a DNS server was set up in the previous step, the next step is to point the head node at that DNS server. Roles executed include:
+
+* dns-configure
+
+### Prep system
+
+The next step is to prepare the system. This includes tasks such as installing common packages (tmux, vim, and the like) and configuring editors. Roles executed include:
+
+* common-prep
+
+### Configuring the head node and setting up VMs
+
+Next, the head node is configured and VMs are created to host the OpenStack and XOS services. Roles executed include:
+
+* head-prep
+* config-virt
+* create-vms
+
+### Set up VMs, juju, simulate fabric
+
+Finally, we install the appropriate software in the VMs. This is a large, time-consuming step, since it includes launching the OpenStack services (using Juju), launching ONOS, and launching XOS (using the `service-profile` repository). Roles executed include:
+
+* xos-vm-install
+* onos-vm-install
+* test-client-install
+* juju-setup
+* docker-compose
+* simulate-fabric
+* onos-load-apps
+* xos-start
+
+Juju is used to perform the OpenStack portion of the install. CORD-specific Juju charm changes are documented in [Internals of the CORD Build Process](https://wiki.opencord.org/display/CORD/Internals+of+the+CORD+Build+Process).
+
+## Starting XOS
+
+The final Ansible role executed by platform-install starts XOS. It uses the XOS `service-profile` repository to bring up a stack of CORD services.
+
+For a discussion of how the XOS service-profile system works, please see [Dynamic On-boarding System and Service Profiles](https://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles). 
\ No newline at end of file
diff --git a/cord-compute-playbook.yml b/cord-compute-playbook.yml
index 0176226..6461e1f 100644
--- a/cord-compute-playbook.yml
+++ b/cord-compute-playbook.yml
@@ -12,7 +12,7 @@
   hosts: all
   become: yes
   roles:
-    - { role: dns-configure, when: not cord_provisioned }
+    - { role: dns-configure, when: not on_maas }
 
 - name: Prep systems
   hosts: compute
diff --git a/cord-head-playbook.yml b/cord-head-playbook.yml
index 0f7b930..7b9fac6 100644
--- a/cord-head-playbook.yml
+++ b/cord-head-playbook.yml
@@ -15,15 +15,15 @@
   hosts: head
   become: yes
   roles:
-    - { role: dns-nsd, when: not cord_provisioned }
-    - { role: dns-unbound, when: not cord_provisioned }
+    - { role: dns-nsd, when: not on_maas }
+    - { role: dns-unbound, when: not on_maas }
     - apt-cacher-ng
 
 - name: Configure all hosts to use DNS server
   hosts: all
   become: yes
   roles:
-    - { role: dns-configure, when: not cord_provisioned }
+    - { role: dns-configure, when: not on_maas }
 
 - name: Prep systems
   hosts: all
@@ -51,5 +51,5 @@
 - name: Set up Automated Compute Node Provisioning
   hosts: head
   roles:
-    - { role: automation-integration, when on_maas }
+    - { role: automation-integration, when: on_maas }
 
diff --git a/roles/compute-prep/tasks/main.yml b/roles/compute-prep/tasks/main.yml
index 0c57979..1ddee39 100644
--- a/roles/compute-prep/tasks/main.yml
+++ b/roles/compute-prep/tasks/main.yml
@@ -32,7 +32,7 @@
     mode=0755
   notify:
     - run rc.local
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: Create /var/lib/nova dir
   file:
diff --git a/roles/config-virt/tasks/main.yml b/roles/config-virt/tasks/main.yml
index f3dc91d..67a14a1 100644
--- a/roles/config-virt/tasks/main.yml
+++ b/roles/config-virt/tasks/main.yml
@@ -13,7 +13,7 @@
     command=facts
 
 - name: Tear down libvirt's default network
-  when: not cord_provisioned and ansible_libvirt_networks["default"] is defined
+  when: not on_maas and ansible_libvirt_networks["default"] is defined
   virt_net:
     command={{ item }}
     name=default
@@ -28,22 +28,22 @@
     command=define
     xml='{{ lookup("template", "virt_net.xml.j2") }}'
   with_items: '{{ virt_nets }}'
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: collect libvirt network facts after defining new network
   virt_net:
     command=facts
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: start libvirt networks
-  when: not cord_provisioned and ansible_libvirt_networks["xos-{{ item.name }}"].state != "active"
+  when: not on_maas and ansible_libvirt_networks["xos-{{ item.name }}"].state != "active"
   virt_net:
     name=xos-{{ item.name }}
     command=create
   with_items: '{{ virt_nets }}'
 
 - name: have libvirt networks autostart
-  when: not cord_provisioned and ansible_libvirt_networks["xos-{{ item.name }}"].autostart != "yes"
+  when: not on_maas and ansible_libvirt_networks["xos-{{ item.name }}"].autostart != "yes"
   virt_net:
     name=xos-{{ item.name }}
     autostart=yes
@@ -61,7 +61,7 @@
   notify:
     - reload libvirt-bin
     - run qemu hook
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: Wait for uvt-kvm image to be available
   async_status: jid={{ uvt_sync.ansible_job_id }}
diff --git a/roles/create-vms/tasks/main.yml b/roles/create-vms/tasks/main.yml
index b20c82e..48cbb3a 100644
--- a/roles/create-vms/tasks/main.yml
+++ b/roles/create-vms/tasks/main.yml
@@ -42,19 +42,19 @@
   template:
     src=eth0.cfg.j2
     dest={{ ansible_user_dir }}/eth0.cfg
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: Copy eth0 interface config file to all VMs
   command: ansible services -b -u ubuntu -m copy -a "src={{ ansible_user_dir }}/eth0.cfg dest=/etc/network/interfaces.d/eth0.cfg owner=root group=root mode=0644"
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: Restart eth0 interface on all VMs
   command: ansible services -b -u ubuntu -m shell -a "ifdown eth0 ; ifup eth0"
-  when: not cord_provisioned
+  when: not on_maas
 
 - name: Verify that we can log into every VM after restarting network interfaces
   command: ansible services -m ping -u ubuntu
-  when: not cord_provisioned
+  when: not on_maas
 
 # sshkey is registered in head-prep task
 - name: Enable root ssh login on VM's that require it
diff --git a/roles/onos-load-apps/tasks/main.yml b/roles/onos-load-apps/tasks/main.yml
index 3a1bc2c..e4f165a 100644
--- a/roles/onos-load-apps/tasks/main.yml
+++ b/roles/onos-load-apps/tasks/main.yml
@@ -9,7 +9,7 @@
 
 - name: Load the apps using Docker
   command: ansible xos-1 -u ubuntu -m shell \
-    -a "cd ~/xos/containers/cord-apps; make {{ item }}; docker run xosproject/cord-app-{{ item }}"
+    -a "cd ~/xos/containers/cord-apps; make {{ item }}; sudo docker run xosproject/cord-app-{{ item }}"
   with_items: "{{ cord_apps }}"
 
 - name: Enable debugging for cord apps
diff --git a/vars/cord.yml b/vars/cord.yml
index 036b59d..65b7f4a 100644
--- a/vars/cord.yml
+++ b/vars/cord.yml
@@ -47,6 +47,3 @@
 
 # turn this on, or override when running playbook with --extra-vars="on_cloudlab=True"
 on_cloudlab: False
-
-# turn this off, or override when running playbook with --extra-vars="on_maas=False"
-on_maas: True
diff --git a/vars/cord_defaults.yml b/vars/cord_defaults.yml
index 842e9c4..56a85e3 100644
--- a/vars/cord_defaults.yml
+++ b/vars/cord_defaults.yml
@@ -1,9 +1,8 @@
 ---
 # vars/cord_defaults.yml
 
-# indicate that the nodes have been provisioned by CORD MaaS
-# Change or override for a multi-node install on CloudLab
-cord_provisioned: True
+# turn this off, or override when running playbook with --extra-vars="on_maas=False"
+on_maas: True
 
 openstack_version: kilo
 
diff --git a/vars/cord_single_defaults.yml b/vars/cord_single_defaults.yml
index cb21344..e5890d4 100644
--- a/vars/cord_single_defaults.yml
+++ b/vars/cord_single_defaults.yml
@@ -3,7 +3,7 @@
 
 # For a single-node case, we don't expect the node to already have been
 # provisioned by CORD MaaS.  It's just Ubuntu 14.04.
-cord_provisioned: False
+on_maas: False
 
 openstack_version: kilo