Clarifying and updating docs

Change-Id: I4ce9be636983d29342380024d306c02f4b069606
diff --git a/INSTALL_SINGLE_NODE.md b/INSTALL_SINGLE_NODE.md
index 47d2477..b1489e0 100644
--- a/INSTALL_SINGLE_NODE.md
+++ b/INSTALL_SINGLE_NODE.md
@@ -1,119 +1,82 @@
 # Installing a CORD POD on a Single Physical Host
-[*This description is for bringing up a CORD POD on virtual machines on a single physical host. The purpose
-of this solution is to enable those interested in understanding how CORD works to examine and interact with a running CORD environment.*]
 
-This tutorial walks you through the steps to bring up a CORD "POD" on a single server using multiple virtual machines.
+*A full description of how to bring up a CORD POD on a single physical host, using the CORD developer
+environment, can be [found here](https://github.com/opencord/cord/blob/master/docs/quickstart.md).
+That's probably what you want.*
+
+This page describes a simple alternative method for setting up a single-node POD that does not
+require a separate build host running Vagrant.  It's mainly for developers looking to
+set up a custom POD and run tests on it.
 
 ## What you need (Prerequisites)
-You will need a build machine (can be your developer laptop) and a target server.
-
-Build host:
-* Mac OS X, Linux, or Windows with a 64-bit OS
-* [`git`](https://git-scm.com/) (2.5.4 or later)
-* [`Vagrant`](https://www.vagrantup.com/) (1.8.1 or later)
-* Access to the Internet
-* SSH access to the target server
-
-Target server:
+You need a target server meeting the requirements below:
 * Fresh install of Ubuntu 14.04 LTS with latest updates
 * Minimum 12 CPU cores, 48GB RAM, 1TB disk
 * Access to the Internet
-* Account used to SSH from build host has password-less *sudo* capability
+* A user account with password-less *sudo* capability (e.g., the *ubuntu* user)
 
-### Running on CloudLab (optional)
-If you do not have a target server available, you can borrow one on
-[CloudLab](https://www.cloudlab.us).  Sign up for an account using your organization's
-email address and choose "Join Existing Project"; for "Project Name" enter `cord-testdrive`.
+## Run scripts/single-node-pod.sh
 
-[*Note: CloudLab is supporting CORD as a courtesy.  It is expected that you will
-not use CloudLab resources for purposes other than evaluating CORD.  If, after a
-week or two, you wish to continue using CloudLab to experiment with or develop CORD,
-then you must apply for your own separate CloudLab project.*]
-
-Once your account is approved, start an experiment using the `OnePC-Ubuntu14.04.4` profile
-on either the Wisconsin or Clemson cluster.  This will provide you with a temporary target server
-meeting the above requirements.
-
-Refer to the [CloudLab documentation](https://docs.cloudlab.us) for more information.
-
-## Bring up the developer environment
-On the build host, clone the
-[`cord`](https://gerrit.opencord.org/cord) repository
-anonymously and switch into its top directory:
+The [single-node-pod.sh](scripts/single-node-pod.sh) script in the `scripts` directory
+of this repository can be used to build and test a single-node CORD POD.
+It should be run on the target server in a user account with password-less
+*sudo* capability.  The most basic way to run the script is as follows:
 
 ```
-git clone https://gerrit.opencord.org/cord
-cd cord
+$ wget https://raw.githubusercontent.com/opencord/platform-install/master/scripts/single-node-pod.sh
+$ bash single-node-pod.sh
 ```
 
-Bring up the development Vagrant box.  This will take a few minutes, depending on your
-connection speed:
+The script will load the necessary software onto the target server, download the `master` branch of
+this repository, and run an Ansible playbook to set up OpenStack, ONOS, and XOS.
+
+Note that this process will take at least an hour!  Some individual steps in the playbook can take
+30 minutes or more.  *Be patient!*
+
+### Script options
+
+Run `bash single-node-pod.sh -h` for a list of options:
 
 ```
-vagrant up corddev
+~$ bash single-node-pod.sh -h
+Usage:
+    single-node-pod.sh                install OpenStack and prep XOS and ONOS VMs [default]
+    single-node-pod.sh -b <branch>    checkout <branch> of the xos git repo
+    single-node-pod.sh -c             cleanup from previous test
+    single-node-pod.sh -d             don't run diagnostic collector
+    single-node-pod.sh -h             display this help message
+    single-node-pod.sh -i <inv_file>  specify an inventory file (default is inventory/single-localhost)
+    single-node-pod.sh -p <git_url>   use <git_url> to obtain the platform-install git repo
+    single-node-pod.sh -r <git_url>   use <git_url> to obtain the xos git repo
+    single-node-pod.sh -s <branch>    checkout <branch> of the platform-install git repo
+    single-node-pod.sh -t             do install, bring up cord-pod configuration, run E2E test
 ```
 
-Login to the Vagrant box:
+Some of the most useful options are described below.
+
+The `-s` option can be used to install different versions of the CORD POD.  For example, to install
+the latest CORD v1.0 release candidate:
 
 ```
-vagrant ssh corddev
+~$ bash single-node-pod.sh -s cord-1.0
 ```
 
-Switch to the `/cord` directory.
+The `-t` option runs a couple of tests on the POD after it has been built:
+  - `test-vsg`: Adds a CORD subscriber to XOS, brings up a vSG for the subscriber, creates a simulated
+     device in the subscriber's home (using an LXC container), and runs a `ping` from the device
+     through the vSG to the Internet.  This test demonstrates that the vSG is working.
+  - `test-exampleservice`: Assumes that `test-vsg` has already been run to set up a vSG.  Onboards
+     the `exampleservice` described in the
+     [Tutorial on Assembling and On-Boarding Services](https://wiki.opencord.org/display/CORD/Assembling+and+On-Boarding+Services%3A+A+Tutorial)
+     and creates an `exampleservice` tenant in XOS.  This causes the `exampleservice` synchronizer
+     to spin up a VM, install Apache in the VM, and configure Apache with a "hello world" welcome message.
+     This test demonstrates a customer-facing service being added to the POD.
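+
+Per the usage message ("do install, bring up cord-pod configuration, run E2E test"), passing `-t` by
+itself builds the POD and then runs both of these tests in one shot:
+
+```
+~$ bash single-node-pod.sh -t
+```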
 
+The `-c` option deletes all state left over from a previous install.  For example, the
+[nightly Jenkins E2E test](https://jenkins.opencord.org/job/cord-single-node-pod-e2e/) runs
+the script as follows:
 ```
-cd /cord
+~$ bash single-node-pod.sh -c -t
 ```
-
-Fetch the sub-modules required by CORD:
-
-```
-./gradlew fetch
-```
-
-Note that the above steps are standard for installing a single-node or multi-node CORD POD.
-
-## Prepare the configuration file
-
-Edit the configuration file `/cord/components/platform-install/config/default.yml`.  Add the IP address of your target
-server as well as the `username / password` for accessing the server.  You can skip adding the password if you can SSH
-to the target server from inside the Vagrant VM as `username` without one (e.g., by running `ssh-agent`).
-
-If your target server is a CloudLab machine, uncomment the following two lines in the
-configuration file:
-
-```
-#extraVars:
-#  - 'on_cloudlab=True'
-```
-
-Edit `/cord/gradle.properties` to add the following line:
-
-```
-deployConfig=/cord/components/platform-install/config/default.yml
-```
-
-## Deploy the single-node CORD POD on the target server
-
-Deploy the CORD software to the the target server and configure it to form a running POD.
-
-```
-./gradlew deploySingle
-```
-> *What this does:*
->
-> This command uses an Ansible playbook (cord-single-playbook.yml) to install
-> OpenStack services, ONOS, and XOS in VMs on the target server.  It also brings up
-> a compute node as a VM.
-
-Note that this step usually takes *at least an hour* to complete.  Be patient!
-
-Once the above step completes, you can log into XOS as follows:
-
-* URL: `http://<target-server>/`
-* Username: `padmin@vicci.org`
-* Password: `letmein`
-
-[*STILL TO DO*]:
-* Port forwarding for XOS login as described above
-* Add pointer to where to go next.  At this point the services are all in place, but the vSG has not been created yet.
+This invocation cleans up the previous build, brings up the POD, and runs the tests described above.
diff --git a/PLATFORM_INSTALL_INTERNALS.md b/PLATFORM_INSTALL_INTERNALS.md
index 1a21924..503ec09 100644
--- a/PLATFORM_INSTALL_INTERNALS.md
+++ b/PLATFORM_INSTALL_INTERNALS.md
@@ -1,8 +1,13 @@
 # Platform-Install Internals
 
+This repository contains the Ansible playbooks that deploy and configure OpenStack,
+ONOS, and XOS in a CORD POD, along with the Gradle "glue" that invokes these playbooks
+when building a [single-node POD](https://wiki.opencord.org/display/CORD/Build+CORD-in-a-Box)
+or a [multi-node POD](https://wiki.opencord.org/display/CORD/Build+a+CORD+POD).
+
 ## Prerequisites
 
-When platform-install starts, it is assumed that `gradelew fetch` has already been run on the cord repo, fetching the necessary subrepositories for CORD. This includes fetching the platform-install repository.
+When platform-install starts, it is assumed that `gradlew fetch` has already been run on the cord repo, fetching the necessary sub-repositories for CORD. This includes fetching the platform-install repository.
 
 For the purposes of this document, paths are relative to the root of the platform-install repo unless specified otherwise. When starting from the uber-cord repo, platform-install is usually located at `/cord/components/platform-install`.
 
@@ -10,28 +15,28 @@
 
 Platform-install uses a configuration file, `config/default.yml`, that contains several variables that will be passed to Ansible playbooks. Notable variables include the IP address of the target machine and user account information for SSHing into the target machine. There's also an extra variable, `on_cloudlab`, that will trigger additional CloudLab-specific actions.
 
-Cloudlab nodes boot with small disk partitions setup, and most of the disk space unallocated. Setting the variable `on-cloudlab` in `config/default.yml` to true will cause actions to be run that will allocate this unallocated space. 
+CloudLab nodes boot with small disk partitions set up and most of the disk space unallocated. Setting the variable `on_cloudlab` in `config/default.yml` to true runs actions that allocate this unallocated space.
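+
+For reference, the relevant lines in the default configuration look like this; uncommenting them
+enables the CloudLab-specific actions:
+
+```
+#extraVars:
+#  - 'on_cloudlab=True'
+```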
 
 ## Gradle Scripts
 
-The main gradle script is located in `build.gradle`. 
+The main gradle script is located in `build.gradle`.
 
-`build.gradle` includes two notable tasks, `deployPlatform` and `deploySingle`. These are for multi-node and single-node pod installs and end up executing the Ansible playbooks `cord-head-playbook.yml` and `cord-single-playbook.yml` respectively. 
+`build.gradle` includes two notable tasks, `deployPlatform` and `deploySingle`. These handle multi-node and single-node POD installs, and execute the Ansible playbooks `cord-head-playbook.yml` and `cord-single-playbook.yml`, respectively.
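+
+For example, a single-node install is kicked off from the uber-cord environment with:
+
+```
+./gradlew deploySingle
+```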
 
 ## Ansible Playbooks
 
-Platform-install makes extensive use of Ansible Roles, and the roles are selected via two playbooks: `cord-head-playbook.yml` and `cord-single-playbook.yml`. 
+Platform-install makes extensive use of Ansible Roles, and the roles are selected via two playbooks: `cord-head-playbook.yml` and `cord-single-playbook.yml`.
 
 The key differences are that:
 * The single-node playbook sets up a simulated fabric, whereas the multi-node install uses a real fabric.
 * The single-node playbook sets up a single compute node running in a VM, whereas the multi-node playbook uses MAAS to provision compute nodes.
-* The single-node playbook installs a DNS server. The multi-node playbook only installs a DNS Server when maas is not used. 
+* The single-node playbook installs a DNS server. The multi-node playbook only installs a DNS server when MAAS is not used.
 
 ## Ansible Roles and Variables
 
 Ansible roles are located in the `roles` directory.
 
-Ansible variables are located in the `vars` directory. 
+Ansible variables are located in the `vars` directory.
 
 ### DNS-server and Apt Cache
 
@@ -78,10 +83,10 @@
 
 ## Starting XOS
 
-The final ansible role executed by platform-install is to start XOS. This uses the XOS `service-profile` repository to bring up a stack of CORD services. 
+The final Ansible role run by platform-install starts XOS. It uses the XOS `service-profile` repository to bring up a stack of CORD services.
 
-For a discussion of how the XOS service-profile system works, please see [Dynamic On-boarding System and Service Profiles](https://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles). 
+For a discussion of how the XOS service-profile system works, please see [Dynamic On-boarding System and Service Profiles](https://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles).
 
 ## Helpful log files and diagnostic information
 
-The xos-build and xos-onboard steps run ansible playbooks to setup the xos virtual machine. The output of these playbooks is stored in the files `service-profile/cord-pod/xos-build.out` and `service-profile/cord-pod/xos-onboard.out` respectively. 
\ No newline at end of file
+The xos-build and xos-onboard steps run Ansible playbooks to set up the XOS virtual machine. The output of these playbooks is stored in `service-profile/cord-pod/xos-build.out` and `service-profile/cord-pod/xos-onboard.out`, respectively.