Update fabric documentation to include setting up head node NAT gateway
Update vSG configuration doc

Change-Id: I1b64a5338a5ffe39164d31f80bca8be883e5799c
(cherry picked from commit a67aaa1e45502449ea30c0285159437b79f6edc1)
diff --git a/docs/appendix_basic_config.md b/docs/appendix_basic_config.md
index 8737c57..ff9a00c 100644
--- a/docs/appendix_basic_config.md
+++ b/docs/appendix_basic_config.md
@@ -1,14 +1,26 @@
 #  Basic Configuration
 
-The following provides instructions on how to configure an installed POD.
+The following provides instructions on how to configure the fabric on an installed full POD with two leaf and two spine switches.
+The fabric needs to be configured to forward traffic between the different components of the POD. More info about how to configure the fabric can be found [here](https://wiki.opencord.org/pages/viewpage.action?pageId=3014916).
 
-##Fabric
+Each leaf switch on the fabric corresponds to a separate IP subnet.  The recommended configuration is a POD with two leaves; the leaf1 subnet is `10.6.1.0/24` and the leaf2 subnet is `10.6.2.0/24`.
 
-This section describes how to apply a basic configuration to a freshly installed fabric. The fabric needs to be configured to forward traffic between the different components of the POD. More info about how to configure the fabric can be found here.
+##Configure the Compute Nodes
 
-##Configure Routes on the Compute Nodes
+The compute nodes must be configured with data plane IP addresses appropriate to their subnet.  The POD build process assigns data plane IP addresses to nodes, but it is not subnet-aware and so IP addresses must be changed for compute nodes on the leaf2 switch.
 
-Each leaf switch on the fabric corresponds to a separate IP subnet.
+###Assign IP addresses
+
+Log into the XOS GUI and click on `Core`, then `Tags`.  Each compute node is tagged with its data plane IP address.  For nodes connected to leaf2, change the `dataPlaneIp` tag to a unique IP address on the `10.6.2.0/24` subnet and click `Save`.
+
+XOS will communicate the new IP address to the ONOS VTN app, which will change it on the nodes.  Log into each compute node and verify that `br-int` has the new IP address:
+
+```
+ip addr list br-int
+```
+
+###Add Routes to Fabric Subnets
+
 Routes must be manually configured on the compute nodes so that traffic between nodes on different leaves will be forwarded via the local spine switch.
 
 Run commands of this form on each compute node:
@@ -17,7 +29,7 @@
 sudo ip route add <remote-leaf-subnet> via <local-spine-ip>
 ```
 
-The recommended configuration is a POD with two leaves; the leaf1 subnet is `10.6.1.0/24` and the leaf2 subnet is `10.6.2.0/24`.  In this configuration, on the compute nodes attached to leaf1, run:
+In this configuration, on the nodes attached to leaf1 (including the head node), run:
 
 ```
 sudo ip route add 10.6.2.0/24 via 10.6.1.254
@@ -29,13 +41,36 @@
 sudo ip route add 10.6.1.0/24 via 10.6.2.254
 ```
 
->NOTE: it’s strongly suggested to add it as a permanent route to the compute node, so the route will still be there after a reboot
+>NOTE: it is strongly suggested to add these as permanent routes on the nodes, so the routes will still be there after a reboot.
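+
+One way to make the route permanent (a sketch, assuming the nodes run Ubuntu with ifupdown and that the fabric interface is named `fabric`, as in the default POD build) is to add a `post-up` line to the interface stanza in `/etc/network/interfaces`, e.g. on a leaf1 node:
+
+```
+auto fabric
+iface fabric inet static
+    address 10.6.1.2
+    netmask 255.255.255.0
+    post-up ip route add 10.6.2.0/24 via 10.6.1.254
+```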
+
+##Configure NAT Gateway on the Head Node (Optional)
+
+In a production POD, a vRouter is responsible for providing connectivity between the fabric and the Internet, but this requires configuring BGP peering between the vRouter and an upstream router.  In environments where this is not feasible, it is possible to use the head node as a NAT gateway for the fabric by configuring some routes on the head node and in ONOS as described below.
+
+###Add Routes for Fabric Subnets
+
+The default POD configuration uses the `10.7.1.0/24` subnet for vSG traffic to the Internet, and `10.8.1.0/24` for other Internet traffic.  Add routes on the head node to forward traffic to these subnets into the fabric:
+
+```
+sudo route add -net 10.7.1.0/24 gw 10.6.1.254
+sudo route add -net 10.8.1.0/24 gw 10.6.1.254
+```
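+
+If the POD build has not already enabled forwarding and NAT on the head node, they can be enabled with standard Linux commands.  The following is a minimal sketch, assuming `eth0` is the head node's upstream (Internet-facing) interface; substitute the actual interface name for your environment:
+
+```
+# Allow the head node to forward packets between the fabric and the Internet
+sudo sysctl -w net.ipv4.ip_forward=1
+# Masquerade fabric traffic leaving via the upstream interface
+sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
+```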
+
+###Add Default Route to Head Node from Fabric
+
+ONOS must be configured to forward all outgoing Internet traffic to the head node's fabric interface, which by default has IP address `10.6.1.1`:
+
+```
+ssh -p 8101 onos@onos-fabric route-add 0.0.0.0/0 10.6.1.1
+```
+
+>NOTE: When prompted, use password "rocks".
 
 ##Configure the Fabric:  Overview
 
-On the head node there is a service able to generate an ONOS network configuration to control the leaf and spine network fabric. This configuration is generated querying ONOS for the known switches and compute nodes and producing a JSON structure that can be posted to ONOS to implement the fabric.
+A service running on the head node can produce an ONOS network configuration to control the leaf and spine network fabric. This configuration is generated by querying ONOS for the known switches and compute nodes and producing a JSON structure that can be posted to ONOS to implement the fabric.
 
-The configuration generator can be invoked using the CORD generate command, which print the configuration at screen (standard output).
+The configuration generator can be invoked using the `cord generate` command, which prints the configuration to standard output.
 
 ##Remove Stale ONOS Data
 
@@ -60,7 +95,7 @@
 Wiping regions
 ```
 
->NOTE: When prompt, use password "rocks".
+>NOTE: When prompted, use password "rocks".
 
 To ensure ONOS is aware of all the switches and the compute nodes, you must have each switch "connected" to the controller and let each compute node ping over its fabric interface to the controller.
 
@@ -92,6 +127,8 @@
 
->NOTE: When prompt, use password "rocks".
+>NOTE: When prompted, use password "rocks".
 
+>NOTE: It may take a few seconds for the switches to initialize and connect to ONOS.
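+
+One way to check that the switches have connected is the standard ONOS `devices` CLI command, which lists the switches known to the controller:
+
+```
+ssh -p 8101 onos@onos-fabric devices
+```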
+
 ##Connect Compute Nodes to ONOS
 
 To make sure that ONOS is aware of the compute nodes, the following commands will send a ping over the fabric interface on the head node and each compute node.
@@ -103,7 +140,9 @@
 done
 ```
 
-It is fine if the `ping` command fails; the purpose is to register the node with ONOS.  You can verify ONOS has recognized the nodes using the following command:
+> NOTE: It is fine if the `ping` command fails; the purpose is to register the node with ONOS.
+
+You can verify ONOS has recognized the nodes using the following command:
 
 ```
 ssh -p 8101 onos@onos-fabric hosts
diff --git a/docs/appendix_vsg.md b/docs/appendix_vsg.md
index e38756c..dd17cd8 100644
--- a/docs/appendix_vsg.md
+++ b/docs/appendix_vsg.md
@@ -1,5 +1,7 @@
 # vSG Configuration
 
+>NOTE: This section is only relevant if you wish to change the default IP address block or gateway MAC address associated with the vSG subnet.  One reason you might want to do this is to associate more IP addresses with the vSG network (the default is a /24).
+
 First, login to the CORD head node (`ssh head1` in *CiaB*) and go to the
 `/opt/cord_profile` directory. To configure the fabric gateway, you will need
 to edit the file `cord-services.yaml`. You will see a section that looks like
@@ -9,14 +11,14 @@
 addresses_vsg:
   type: tosca.nodes.AddressPool
     properties:
-      addresses: 10.6.1.128/26
-      gateway_ip: 10.6.1.129
-      gateway_mac: 02:42:0a:06:01:01
+      addresses: 10.7.1.0/24
+      gateway_ip: 10.7.1.1
+      gateway_mac: a4:23:05:06:01:01
 ```
 
-Edit this section so that it reflects the fabric address block assigned to the
+Edit this section so that it reflects the fabric address block that you wish to assign to the
 vSGs, as well as the gateway IP and the MAC address that the vSG should use to
-reach the Internet.
+reach the Internet (e.g., for the vRouter).
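+
+For example, to associate more IP addresses with the vSG network than the default /24 provides, the address block could be widened (the values below are illustrative only):
+
+```
+addresses_vsg:
+  type: tosca.nodes.AddressPool
+    properties:
+      addresses: 10.7.0.0/23
+      gateway_ip: 10.7.0.1
+      gateway_mac: a4:23:05:06:01:01
+```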
 
 Once the `cord-services.yaml` TOSCA file has been edited as described above,
 push it to XOS by running the following:
@@ -28,56 +30,40 @@
 ```
 
 This step is complete once you see the correct information in the VTN app
-configuration in XOS and ONOS.
-
-To check that the VTN configuration maintained by XOS:
-
- 1. Go to the "ONOS apps" page in the CORD GUI:
-   * URL: `http://<head-node>/xos#/onos/onosapp/`
-   * Username: `xosadmin@opencord.org`
-   * Password: <content of
-     /opt/cord/build/platform-install/credentials/xosadmin@opencord.org>
-
- 2. Select VTN_ONOS_app in the table
-
- 3. Verify that the `Backend status` is `1`
-
-To check that the network configuration has been successfully pushed to the
-ONOS VTN app and processed by it:
+configuration in ONOS.  To check that XOS has successfully pushed the network configuration to the ONOS VTN app:
 
  1.  Log into ONOS from the head node
 
     * Command: `ssh -p 8102 onos@onos-cord`
     * Password: `rocks`
 
- 2. Run the `cordvtn-nodes` command
-
- 3. Verify that the information for all nodes is correct
-
- 4.  Verify that the initialization status of all nodes is `COMPLETE`.
-
-This will look like the following:
-
-```
-onos> cordvtn-nodes
-	Hostname                      Management IP       Data IP             Data Iface     Br-int                  State
-	sturdy-baseball               10.1.0.14/24        10.6.1.2/24         fabric         of:0000525400d7cf3c     COMPLETE
-	Total 1 nodes
-```
-
-Run the `netcfg` command. Verify that the updated gateway information is
+ 2. Run the `netcfg` command. Verify that the updated gateway information is
 present under publicGateways:
 
 ```json
 onos> netcfg
 "publicGateways" : [
   {
-    "gatewayIp" : "10.6.1.193",
-    "gatewayMac" : "02:42:0a:06:01:01"
+    "gatewayIp" : "10.7.1.1",
+    "gatewayMac" : "a4:23:05:06:01:01"
   }, {
-    "gatewayIp" : "10.6.1.129",
-    "gatewayMac" : "02:42:0a:06:01:01"
+    "gatewayIp" : "10.8.1.1",
+    "gatewayMac" : "a4:23:05:06:01:01"
   }
 ],
 ```
 
+> NOTE: The above output is just a sample; you should see the values you configured.
+
+ 3. Run the `cordvtn-nodes` command.  This will look like the following:
+
+```
+onos> cordvtn-nodes
+  Hostname                      Management IP       Data IP             Data Iface     Br-int                  State
+  sturdy-baseball               10.1.0.14/24        10.6.1.2/24         fabric         of:0000525400d7cf3c     COMPLETE
+  Total 1 nodes
+```
+
+ 4. Verify that the information for all nodes is correct.
+
+ 5. Verify that the initialization status of all nodes is `COMPLETE`.