refactored documentation to eliminate sections that are covered elsewhere

Change-Id: I3cfb95f9ec6f70a4a50726310b912cd2751509e3
diff --git a/cord-pod/README.md b/cord-pod/README.md
index e73b680..6b51be4 100644
--- a/cord-pod/README.md
+++ b/cord-pod/README.md
@@ -3,137 +3,66 @@
 ## Introduction
 
 This directory holds files that are used to configure a development POD for
-CORD.  For more information on the CORD project, check out
-[the CORD website](http://cord.onosproject.org/).
+CORD.  For more information on the CORD project, including how to get started, check out
+[the CORD wiki](http://wiki.opencord.org/).
 
-XOS is composed of several core services:
+XOS is composed of several core services that are typically containerized. [Dynamic On-boarding System and Service Profiles](http://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles) describes these containers and how they fit together. 
 
-  * A database backend (postgres)
-  * A webserver front end (django)
-  * A synchronizer daemon that interacts with the openstack backend
-  * A synchronizer for each configured XOS service
+This document is primarily focused on how to bring up the cord-pod service profile. On an installed POD, this profile is usually found in `~/service-profile/cord-pod/` inside the `xos` virtual machine.
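+
+From the head node, this directory can usually be reached as follows (a sketch, assuming the `xos` VM created by the OpenStack installer):
+
+```
+$ ssh ubuntu@xos
+ubuntu@xos:~$ cd service-profile/cord-pod
+```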
 
-Each service runs in a separate Docker container.  The containers are built
-automatically by [Docker Hub](https://hub.docker.com/u/xosproject/) using
-the HEAD of the XOS repository.
+### Prerequisites
 
-## How to bring up CORD
+The following prerequisites should be met:
 
-Installing a CORD POD involves these steps:
- 1. Install OpenStack on a cluster
- 2. Set up the ONOS VTN app and configuring OVS on the nova-compute nodes to be
-    controlled by VTN
- 3. Set up external connectivity for VMs (if not using the CORD fabric)
- 4. Bring up XOS with the CORD services
+1. OpenStack should be installed, and OpenStack services (keystone, nova, neutron, glance, etc.) should be started; a quick sanity check is sketched after this list.
+2. ONOS should be installed, and at a minimum, ONOS-Cord should be running the VTN app.
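+
+A quick sanity check of the OpenStack side might look like this (a sketch, assuming the admin credentials in `admin-openrc.sh` placed in this directory by the cluster installation scripts):
+
+```
+ubuntu@xos:~/service-profile/cord-pod$ source admin-openrc.sh
+ubuntu@xos:~/service-profile/cord-pod$ nova service-list     # nova services should show state "up"
+ubuntu@xos:~/service-profile/cord-pod$ glance image-list     # glance should respond without errors
+```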
 
-### Install OpenStack
+### Makefile targets to launch this service profile
 
-To set up OpenStack, follow the instructions in the
-[README.md](https://github.com/open-cloud/openstack-cluster-setup/blob/master/README.md)
-file of the [open-cloud/openstack-cluster-setup](https://github.com/open-cloud/openstack-cluster-setup/)
-repository.  If you're just getting started with CORD, it's probably best to begin with the
-single-node CORD test environment to familiarize yourself with the overall setup.
+These are generally executed in sequence:
 
-**NOTE: In order to use the cord-pod configuration, you must set up OpenStack using the above recipe.**
+#### `make local_containers`
+Builds the `xosproject/xos`, `xosproject/xos-synchronizer`, and `xosproject/xos-onboarding-synchronizer` container images from source.
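+
+For example, to rebuild the images and confirm they are present locally (a sketch of what to look for):
+
+```
+ubuntu@xos:~/service-profile/cord-pod$ make local_containers
+ubuntu@xos:~/service-profile/cord-pod$ sudo docker images | grep xosproject
+```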
 
-### Set up ONOS VTN
+#### `make`
+Bootstraps XOS and onboards a stack of typical CORD services. While the services are onboarded at this point, they are not yet configured.
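+
+One quick check that the bootstrap succeeded is to confirm that the XOS containers are running (a sketch; exact container names may differ):
+
+```
+ubuntu@xos:~/service-profile/cord-pod$ sudo docker ps
+```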
 
-The OpenStack installer above creates a VM called *onos-cord* on the head node.
-To bring up ONOS in this VM, log into the head node and run:
-```
-$ ssh ubuntu@onos-cord
-ubuntu@onos-cord:~$ cd cord; sudo docker-compose up -d
-```
+#### `make vtn`
+Configures the VTN service. If you are using a custom platform that differs from a typical single-node-pod experiment, you may wish to run `make vtn-external.yaml`, edit the autogenerated `vtn-external.yaml`, and then run `make vtn`.
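+
+The custom-platform flow sketched above looks roughly like this (the `rest_hostname:` field points at the host where ONOS runs the VTN app):
+
+```
+ubuntu@xos:~/service-profile/cord-pod$ make vtn-external.yaml
+ubuntu@xos:~/service-profile/cord-pod$ vi vtn-external.yaml    # e.g. set rest_hostname: for your ONOS/VTN host
+ubuntu@xos:~/service-profile/cord-pod$ make vtn
+```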
 
-### Set up external connectivity for VMs
+#### `make fabric`
+Configures the fabric service.
 
-The CORD fabric is responsible for providing external (Internet) connectivity
-for VMs created on CORD.  If you are running on CloudLab (or another development
-environment) and want external connectivity without the fabric, download [this script](https://raw.githubusercontent.com/open-cloud/openstack-cluster-setup/master/scripts/compute-ext-net.sh)
- and run it on the Nova compute node(s) as root:
- ```
- $ sudo compute-ext-net.sh
- ```
+#### `make cord`
+Configures the CORD stack.
 
-The script creates a bridge (*databr*) on the node as well as a veth pair
-(*veth0/veth1*).  The *veth0* interface is added as a port on *databr* and
-VTN is configured to use *veth1* as its data plane interface.  Traffic coming
-from *databr* is NAT'ed to the external network via `iptables`.  The configuration
-assumes that *databr* takes the MAC address of *veth0* when it is added as a port
--- this seems to always be the case (though not sure why).
+#### `make cord-subscriber`
+Creates a sample subscriber in the CORD stack.
 
-Note that setting up the full fabric is beyond the scope of this README.
+#### `make exampleservice`
+Builds an example service that launches a web server. 
 
-### Build XOS
+### Utility Makefile targets
 
-To build the XOS container images from source, use the following:
+#### `make stop`
+Stops all running containers.
 
-Then run:
+#### `make rm`
+Stops all running containers and then permanently destroys them. As the database is destroyed, this will cause loss of data. 
 
-```
-ubuntu@xos:~/service-profile/cord-pod$ make local_containers
-```
+#### `make cleanup`
+Performs both `make stop` and `make rm`, and then goes to some extra effort to destroy associated networks, VMs, etc. This is handy when developing with single-node-pod, as it cleans up the XOS installation and allows the profile to be started fresh.
 
-### Bringing up XOS
+### Developer workflow
 
-The OpenStack installer above creates a VM called *xos* on the head node.
-To bring up XOS in this VM, first log into the head node and run:
-```
-$ ssh ubuntu@xos
-ubuntu@xos:~$ cd service-profile/cord-pod
-```
+A common developer workflow that involves completely restarting the profile is:
 
-Next, check that the following files exist in this directory
-(they will have been put there for you by the cluster installation scripts):
+1. Upload new code
+2. Execute `make cleanup; make; make vtn; make fabric; make cord; make cord-subscriber; make exampleservice` (expanded step by step below)
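+
+Run step by step from the profile directory, this is roughly:
+
+```
+ubuntu@xos:~/service-profile/cord-pod$ make cleanup
+ubuntu@xos:~/service-profile/cord-pod$ make
+ubuntu@xos:~/service-profile/cord-pod$ make vtn
+ubuntu@xos:~/service-profile/cord-pod$ make fabric
+ubuntu@xos:~/service-profile/cord-pod$ make cord
+ubuntu@xos:~/service-profile/cord-pod$ make cord-subscriber
+ubuntu@xos:~/service-profile/cord-pod$ make exampleservice
+```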
 
- * *admin-openrc.sh*: Admin credentials for your OpenStack cloud
- * *id_rsa[.pub]*: A keypair that will be used by the various services
- * *node_key*: A private key that allows root login to the compute nodes
+### Useful diagnostics
 
-XOS can then be brought up for CORD by running a few `make` commands.
-First, run:
-
-```
-ubuntu@xos:~/service-profile/cord-pod$ make
-```
-
-Before proceeding, you should verify that objects in XOS are
-being sync'ed with OpenStack. [Login to the XOS GUI](#logging-into-xos-on-cloudlab-or-any-remote-host) 
-and select *Users* at left.  Make sure there is a green check next to `padmin@vicci.org`.
-
-> If you are **not** building the single-node development POD, the next
-> step is to create and edit the VTN configuration.  Run `make vtn-external.yaml`
-> then edit the `vtn-external.yml` TOSCA file.  The `rest_hostname:`
-> field points to the host where ONOS should run the VTN app.  The
-> fields in the `service_vtn` and the objects of type `tosca.nodes.Tag`
-> correspond to the VTN fields listed
-> on [the CORD VTN page on the ONOS Wiki](https://wiki.onosproject.org/display/ONOS/CORD+VTN),
-> under the **ONOS Settings** heading; refer there for the fields'
-> meanings.  
-
-Then run:
-
-```
-ubuntu@xos:~/service-profile/cord-pod$ make vtn
-```
-The above step configures the ONOS VTN app by generating a configuration
-and pushing it to ONOS.  You are able to see and modify the configuration
-via the GUI as follows:
-
-* To see the generated configuration, go to *http://xos/admin/onos/onosapp/* 
-([caveat](#logging-into-xos-on-cloudlab-or-any-remote-host)), select
-*VTN_ONOS_app*, then the *Attributes* tab, and look for the
-`rest_onos/v1/network/configuration/` attribute.  
-
-* To change the VTN configuration, modify the fields of the VTN Service object
-and the Tag objects associated with Nodes.  Don't forget to select *Save*.
-
-* After modifying the above fields, delete the `rest_onos/v1/network/configuration/` attribute
-in the *ONOS_VTN_app* and select *Save*.  The attribute will be regenerated using the new information.
-
-* Alternatively, if you want to load your own VTN configuration manually, you can delete the
-`autogenerate` attribute from the *ONOS_VTN_app*, edit the configuration in the
-`rest_onos/v1/network/configuration/` attribute, and select *Save*.
+#### Checking that VTN is functional
 
 Before proceeding, check that the VTN app is controlling Open vSwitch on the compute nodes.  Log
 into ONOS and run the `cordvtn-nodes` command:
@@ -148,30 +77,7 @@
 [the CORD VTN page on the ONOS Wiki](https://wiki.onosproject.org/display/ONOS/CORD+VTN) for
 help fixing the problem.  This must be working to bring up VMs on the POD.
 
-> If you are **not** building the single-node development POD, modify `cord-vtn-vsg.yml` 
-> and change `addresses_vsg` so that it contains the IP address block,
-> gateway IP, and gateway MAC of the CORD fabric.  
-
-Then run:
-
-```
-ubuntu@xos:~/service-profile/cord-pod$ make fabric
-```
-
-Then run:
-
-```
-ubuntu@xos:~/service-profile/cord-pod$ make cord
-```
-
-Then run:
-
-```
-ubuntu@xos:~/service-profile/cord-pod$ make cord-subscriber
-```
-
-
-### Inspecting the vSG
+#### Inspecting the vSG
 
 The above series of `make` commands will spin up a vSG for a sample subscriber.  The
 vSG is implemented as a Docker container (using the
@@ -204,7 +110,7 @@
 2b0bfb3662c7        andybavier/docker-vcpe   "/sbin/my_init"     5 days ago          Up 5 days                               vcpe-222-111
 ```
 
-### Logging into XOS on CloudLab (or any remote host)
+#### Logging into XOS on CloudLab (or any remote host)
 
 The XOS service is accessible on the POD at `http://xos/`, but `xos` maps to a private IP address
 on the management network.  If you install CORD on CloudLab