Merge "Add -i node_key" into cord-1.0
diff --git a/cord-pod/NOTES.txt b/cord-pod/NOTES.txt
deleted file mode 100644
index d832f2b..0000000
--- a/cord-pod/NOTES.txt
+++ /dev/null
@@ -1,37 +0,0 @@
-Notes on setup
-
-Requirements:
-* admin-openrc.sh: Admin credentials for your OpenStack cloud
-* id_rsa[.pub]: Keypair for use by the various services
-* node_key: Private key that allows root login to the compute nodes
-
-Steps for bringing up the POD:
-
-OpenStack
-* Configure management net
-  - mgmtbr on head nodes
-  - dnsmasq on head1 using cord config file
-* Install OpenStack using the openstack-cluster-install repo
-
-VTN
-* onos-cord VM is created by openstack-cluster-install
-* Bring up ONOS
-  # cd cord; docker-compose up -d
-* On each compute node it's necessary to perform a few manual steps (FIX ME)
-  - Disable neutron-plugin-openvswitch-agent. As root:
-    # service neutron-plugin-openvswitch-agent stop
-    # echo manual > /etc/init/neutron-plugin-openvswitch-agent.override
-  - Clean up OVS: delete br-int and any other bridges
-  - Listen for connections from VTN:
-    # ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6641
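-    The OVS cleanup above can be done with ovs-vsctl, e.g. as root
-    (a sketch; bridge names can vary per node):
-    # ovs-vsctl del-br br-int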
-
-XOS
-* xos VM is created by openstack-cluster-install
-  - requirements listed above should already be satisfied by install
-* cd xos/xos/configurations/cord-pod
-* Bring up XOS cord-pod configuration
-  # make
-  # make vtn
-  # make cord
-* Login to XOS at http://xos
-  - padmin@vicci.org / letmein
diff --git a/cord-pod/README-Tutorial.md b/cord-pod/README-Tutorial.md
deleted file mode 100644
index 9f8c9e9..0000000
--- a/cord-pod/README-Tutorial.md
+++ /dev/null
@@ -1,182 +0,0 @@
-# Setting up the XOS Tutorial
-
-The XOS Tutorial demonstrates how to add a new subscriber-facing
-service to CORD.  
-
-## Prepare the development POD
-
-This tutorial runs on a single-node CORD POD development environment.
-For best results, prepare a clean Ubuntu 14.04
-LTS installation on a server with at least 48GB RAM and 12 CPU cores.
-Update the packages to the latest versions.
-
-To set up the POD, run
-[this script](https://github.com/open-cloud/openstack-cluster-setup/blob/master/scripts/single-node-pod.sh)
-with the `-e` option:
-
-```
-ubuntu@pod:~$ wget https://raw.githubusercontent.com/open-cloud/openstack-cluster-setup/master/scripts/single-node-pod.sh
-ubuntu@pod:~$ bash single-node-pod.sh -e
-```
-
-> NOTE: The above script can also automatically perform all the tutorial steps if run as `bash single-node-pod.sh -e -t`.
-
-Be patient... it will take **at least one hour** to fully set up the single-node POD.
-
-## Include ExampleService in XOS
-
-On the POD, SSH into the XOS VM: `$ ssh ubuntu@xos`.  You will see the XOS repository
-checked out under `~/xos/`.
-
-Change the XOS code as described in the
-[ExampleService Tutorial](http://guide.xosproject.org/devguide/exampleservice/)
-under the **Install the Service in Django** heading, and rebuild the XOS containers as
-follows:
-
-```
-ubuntu@xos:~$ cd xos/xos/configurations/cord-pod
-ubuntu@xos:~/xos/xos/configurations/cord-pod$ make local_containers
-```
-
-Modify the `docker-compose.yml` file in the `cord-pod` directory to include the synchronizer
-for ExampleService:
-
-```yaml
-xos_synchronizer_exampleservice:
-    image: xosproject/xos-synchronizer-openstack
-    command: bash -c "sleep 120; python /opt/xos/synchronizers/exampleservice/exampleservice-synchronizer.py -C /root/setup/files/exampleservice_config"
-    labels:
-        org.xosproject.kind: synchronizer
-        org.xosproject.target: exampleservice
-    links:
-        - xos_db
-    volumes:
-        - .:/root/setup:ro
-        - ../common/xos_common_config:/opt/xos/xos_configuration/xos_common_config:ro
-        - ./id_rsa:/opt/xos/synchronizers/exampleservice/exampleservice_private_key:ro
-```
-
-Also, add ExampleService's public key to the `volumes` section of the `xos` docker container:
-
-```yaml
-xos:
-    ...
-    volumes:
-        ...
-        - ./id_rsa.pub:/opt/xos/synchronizers/exampleservice/exampleservice_public_key:ro 
-```
-
-## Bring up XOS
-
-Run the `make` commands described in the [Bringing up XOS](https://github.com/open-cloud/xos/blob/master/xos/configurations/cord-pod/README.md#bringing-up-xos)
-section of the README.md file.
-
-## Configure ExampleService in XOS
-
-The TOSCA file `pod-exampleservice.yaml` contains the service declaration.
-Tell XOS to process it by running:
-
-```
-ubuntu@xos:~/xos/xos/configurations/cord-pod$ make exampleservice
-```
-
-This will add the ExampleService to XOS.  It will also create an ExampleTenant,
-which causes a VM to be created with Apache running inside.
-
-
-## Set up a Subscriber Device
-
-The single-node POD does not include a virtual OLT, but a device at the
-subscriber’s premises can be simulated by an LXC container running on the
-nova-compute node.
-
-In the nova-compute VM:
-
-```
-ubuntu@nova-compute:~$ sudo apt-get install lxc
-```
-
-Next edit `/etc/lxc/default.conf` and change the default bridge name to `databr`:
-
-```
-  lxc.network.link = databr
-```
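The same edit can be scripted with `sed`. This sketch works on a local copy so it can be tried anywhere; on the nova-compute node, run the `sed` line as root directly against `/etc/lxc/default.conf` (the stock file typically names the bridge `lxcbr0`):

```shell
# Work on a copy of the LXC default config; fall back to a stub if the file is absent
cp /etc/lxc/default.conf default.conf 2>/dev/null \
  || printf 'lxc.network.type = veth\nlxc.network.link = lxcbr0\nlxc.network.flags = up\n' > default.conf

# Point new containers at the databr bridge instead of the default one
sed -i 's/^\(lxc\.network\.link\) *=.*/\1 = databr/' default.conf

grep '^lxc.network.link' default.conf   # lxc.network.link = databr
```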
-
-Create the client container and attach to it:
-
-```
-ubuntu@nova-compute:~$ sudo lxc-create -t ubuntu -n testclient
-ubuntu@nova-compute:~$ sudo lxc-start -n testclient
-ubuntu@nova-compute:~$ sudo lxc-attach -n testclient
-```
-
-(The `lxc-start` command may print an error, but it appears to be harmless.)
-
-Finally, inside the container set up an interface so that outgoing traffic
-is tagged with the s-tag (222) and c-tag (111) configured for the
-sample subscriber:
-
-```
-root@testclient:~# ip link add link eth0 name eth0.222 type vlan id 222
-root@testclient:~# ip link add link eth0.222 name eth0.222.111 type vlan id 111
-root@testclient:~# ifconfig eth0.222 up
-root@testclient:~# ifconfig eth0.222.111 up
-root@testclient:~# dhclient eth0.222.111
-```
-
-If the vSG is up and everything is working correctly, the eth0.222.111
-interface should acquire an IP address via DHCP and have external connectivity.
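Before moving on, it is worth confirming the result from inside the container (a sketch; the addresses handed out depend on the vSG's DHCP pool):

```
root@testclient:~# ip addr show eth0.222.111
root@testclient:~# ping -c 3 8.8.8.8
```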
-
-## Access ExampleService from the Subscriber Device
-
-To test that the subscriber device can access the ExampleService, find the IP
-address of the ExampleService Instance in the XOS GUI, and then curl this
-address from inside the testclient container:
-
-```
-root@testclient:~# apt-get install curl
-root@testclient:~# curl 10.168.1.3
-ExampleService
- Service Message: "service_message"
- Tenant Message: "tenant_message"
-```
-
-Hooray!  This shows that the subscriber (1) has external connectivity, and
-(2) can access the new service via the vSG.
-
-## Troubleshooting
-
-Sometimes the ExampleService instance comes up with the wrong default route.  If the 
-ExampleService instance is active but the `curl` command does not work, SSH to the
-instance and check its default gateway.  Assuming the management address of the `mysite_exampleservice`
-VM is 172.27.0.2:
-
-```
-ubuntu@pod:~$ ssh-agent bash
-ubuntu@pod:~$ ssh-add
-ubuntu@pod:~$ ssh -A ubuntu@nova-compute
-ubuntu@nova-compute:~$ ssh ubuntu@172.27.0.2
-ubuntu@mysite-exampleservice-2:~$ route -n
-Kernel IP routing table
-Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
-0.0.0.0         172.27.0.1      0.0.0.0         UG    0      0        0 eth1
-10.168.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
-172.27.0.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
-```
-
-If the default gateway is not `10.168.1.1`, manually set it to this value.
-
-```
-ubuntu@mysite-exampleservice-2:~$ sudo bash
-root@mysite-exampleservice-2:~# route del default gw 172.27.0.1
-root@mysite-exampleservice-2:~# route add default gw 10.168.1.1
-root@mysite-exampleservice-2:~# route -n
-Kernel IP routing table
-Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
-0.0.0.0         10.168.1.1      0.0.0.0         UG    0      0        0 eth0
-10.168.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
-172.27.0.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
-```
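On images that lack the legacy `route` tool, the equivalent fix can be applied with `ip` (a sketch, using the same addresses as above):

```
root@mysite-exampleservice-2:~# ip route del default via 172.27.0.1
root@mysite-exampleservice-2:~# ip route add default via 10.168.1.1
root@mysite-exampleservice-2:~# ip route
```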
-
-Now the VM should have Internet connectivity and XOS will start downloading Apache. 
-A short while later the `curl` test should complete.
diff --git a/cord-pod/README.md b/cord-pod/README.md
index 6b51be4..120a9ce 100644
--- a/cord-pod/README.md
+++ b/cord-pod/README.md
@@ -6,7 +6,7 @@
 CORD.  For more information on the CORD project, including how to get started, check out
 [the CORD wiki](http://wiki.opencord.org/).
 
-XOS is composed of several core services that are typically containerized. [Dynamic On-boarding System and Service Profiles](http://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles) describes these containers and how they fit together. 
+XOS is composed of several core services that are typically containerized. [Dynamic On-boarding System and Service Profiles](http://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles) describes these containers and how they fit together.
 
This document is primarily focused on how to start the cord-pod service-profile, which on an installed POD is usually found at `~/service-profile/cord-pod/` inside the `xos` virtual machine.
 
@@ -17,6 +17,9 @@
 1. OpenStack should be installed, and OpenStack services (keystone, nova, neutron, glance, etc) should be started.
 2. ONOS should be installed, and at a minimum, ONOS-Cord should be running the VTN app.
 
+The usual way to meet these prerequisites is by following one of the methods of
+[building a CORD POD on the CORD Wiki](https://wiki.opencord.org/display/CORD/Build+a+CORD+POD).
+
 ### Makefile Targets to launch this service-profile
 
 These are generally executed in sequence:
@@ -40,7 +43,7 @@
 Creates a sample subscriber in the cord stack.
 
 #### `make exampleservice`
-Builds an example service that launches a web server. 
+Builds an example service that launches a web server.
 
 ### Utility Makefile targets
 
@@ -48,7 +51,7 @@
 Stops all running containers.
 
 #### `make rm`
-Stops all running containers and then permanently destroys them. As the database is destroyed, this will cause loss of data. 
+Stops all running containers and then permanently destroys them. As the database is destroyed, this will cause loss of data.
 
 #### `make cleanup`
Performs both `make stop` and `make rm`, and then goes to some extra effort to destroy associated networks, VMs, etc. This is handy when developing using single-node-pod, as it will clean up the XOS installation and allow the profile to be started fresh.
@@ -60,6 +63,14 @@
 1. Upload new code
 2. Execute `make cleanup; make; make vtn; make fabric; make cord; make cord-subscriber; make exampleservice`
 
+This workflow exercises many of the capabilities of a CORD POD.  It
+does the following:
+  - Tears down XOS as well as all OpenStack state that it created
+  - Onboards all CORD services, and configures the ONOS apps
+  - Creates a sample CORD subscriber (which spins up a vSG)
+  - Onboards `exampleservice` (described in the [Tutorial on Assembling and On-Boarding Services](https://wiki.opencord.org/display/CORD/Assembling+and+On-Boarding+Services%3A+A+Tutorial))
+  - Creates an `exampleservice` tenant (which creates a VM and loads and configures Apache in it)
+
 ### Useful diagnostics
 
 #### Checking that VTN is functional
@@ -74,7 +85,7 @@
 Total 1 nodes
 ```
 The important part is the `init=COMPLETE` at the end.  If you do not see this, refer to
-[the CORD VTN page on the ONOS Wiki](https://wiki.onosproject.org/display/ONOS/CORD+VTN) for
+[the CORD VTN Configuration Guide](https://wiki.opencord.org/display/CORD/VTN+Configuration+Guide) for
 help fixing the problem.  This must be working to bring up VMs on the POD.
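The node summary shown above is produced by the `cordvtn-nodes` command in the ONOS CLI; to re-run the check from a shell (a sketch, assuming the ONOS CLI is reachable on port 8101 of the `onos-cord` VM with the default `onos` user):

```
$ ssh -p 8101 onos@onos-cord cordvtn-nodes
```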
 
 #### Inspecting the vSG
@@ -112,17 +123,7 @@
 
 #### Logging into XOS on CloudLab (or any remote host)
 
-The XOS service is accessible on the POD at `http://xos/`, but `xos` maps to a private IP address
-on the management network.  If you install CORD on CloudLab 
-you will not be able to directly access the XOS GUI.
-In order to log into the XOS GUI in the browser on your local machine (desktop or laptop), 
-you can set up an SSH tunnel to your CloudLab node.  Assuming that 
-`<your-cloudlab-node>` is the DNS name of the CloudLab node hosting your experiment,
-run the following on your local machine to create the tunnel:
-
-```
-$ ssh -L 8888:xos:80 <your-cloudlab-node>
-```
-
-Then you should be able to access the XOS GUI by pointing your browser to
-`http://localhost:8888`.  Default username/password is `padmin@vicci.org/letmein`.
+The [CORD POD installation process](https://wiki.opencord.org/display/CORD/Build+a+CORD+POD)
+forwards port 80 on the head node to the `xos` VM.
+You should be able to access the XOS GUI by simply pointing your browser at the head
+node.  Default username/password is `padmin@vicci.org/letmein`.
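To verify the port forward before opening a browser, a quick check from your local machine (a sketch; `<head-node>` stands for your head node's DNS name):

```
$ curl -sI http://<head-node>/ | head -1
```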