Add example of configuring kube-dns upstream servers

Change-Id: I586f49fe57c7de8372d5a1d2848086f8f41b722c
diff --git a/prereqs/openstack-helm.md b/prereqs/openstack-helm.md
index 1d9f651..74dc8cd 100644
--- a/prereqs/openstack-helm.md
+++ b/prereqs/openstack-helm.md
@@ -148,6 +148,6 @@
 * Install software like Kubernetes and Helm
 * Build the Helm charts and install them in a local Helm repository
 * Install required packages
-* Configure DNS on the nodes
+* Configure DNS on the nodes (_NOTE: The `openstack-helm` install overwrites `/etc/resolv.conf` on the compute hosts and points the upstream nameservers to Google DNS.  If a local upstream is required, [see this note](https://docs.openstack.org/openstack-helm/latest/install/developer/kubernetes-and-common-setup.html#clone-the-openstack-helm-repos) or the illustrative example after this list_.)
 * Generate `values.yaml` files based on the environment and install Helm charts using these files
 * Run post-install tests on the OpenStack services
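+
+As a rough illustration of pointing `kube-dns` at a local upstream resolver
+(a minimal sketch; the linked note above is authoritative for `openstack-helm`),
+the `kube-dns` ConfigMap in the `kube-system` namespace accepts an
+`upstreamNameservers` entry.  Here `10.0.0.1` is only a placeholder for the
+local resolver:
+
+```shell
+# Example only: substitute the address of your local upstream resolver
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kube-dns
+  namespace: kube-system
+data:
+  upstreamNameservers: |
+    ["10.0.0.1"]
+EOF
+```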
diff --git a/prereqs/vtn-setup.md b/prereqs/vtn-setup.md
index e5ff99b..1d41e2e 100644
--- a/prereqs/vtn-setup.md
+++ b/prereqs/vtn-setup.md
@@ -2,7 +2,12 @@
 
 The ONOS VTN app provides virtual networking between VMs on an OpenStack cluster.  Prior to installing the [base-openstack](../charts/base-openstack.md) chart that installs and configures VTN, make sure that the following requirements are satisfied.
 
-First, VTN requires the ability to SSH to each compute node _using an account with passwordless `sudo` capability_.  Before installing this chart, first create an SSH keypair and copy it to the `authorized_keys` files of all nodes in the cluster:
+## SSH access to hosts
+
+VTN requires the ability to SSH to each compute node _using an account with
+passwordless `sudo` capability_.  Before installing this chart, first create
+an SSH keypair and copy it to the `authorized_keys` files of all nodes in the
+cluster:
 
 Generate a keypair:
 
@@ -22,7 +27,38 @@
 cp ~/.ssh/id_rsa xos-profiles/base-openstack/files/node_key
 ```
 
-Second, the VTN app requires a fabric interface on the compute nodes.  VTN will not successfully initialize if this interface is not present. By default the name of this interface is expected to be named `fabric`. If there is not an actual fabric interface on the compute node, create a dummy interface as follows:
+## Fabric interface
+
+The VTN app requires a fabric interface on the compute nodes.  VTN will not
+successfully initialize if this interface is not present. By default the name
+of this interface is expected to be `fabric`.
+
+### Interface not named 'fabric'
+
+If you have a fabric interface on the compute node but it is not named
+`fabric`, create a bridge named `fabric` and add the interface to it.
+Assuming the fabric interface is named `eth2`:
+
+```shell
+sudo brctl addbr fabric
+sudo brctl addif fabric eth2
+sudo ifconfig fabric up
+sudo ifconfig eth2 up
+```
+
+To make this configuration persistent, add the following to
+`/etc/network/interfaces`:
+
+```text
+auto fabric
+iface fabric inet manual
+  bridge_ports eth2
+```
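+
+With this stanza in place the bridge is recreated at boot; it can also be
+brought up immediately with `sudo ifup fabric`.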
+
+### Dummy interface
+
+If there is no actual fabric interface on the compute node, create a dummy
+interface as follows:
 
 ```shell
 sudo modprobe dummy
@@ -30,7 +66,9 @@
 sudo ifconfig fabric up
 ```
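+
+To keep the dummy interface across reboots, one option (a sketch using the
+standard `ifupdown` configuration, not part of the original instructions) is
+to add a stanza like the following to `/etc/network/interfaces`:
+
+```text
+auto fabric
+iface fabric inet manual
+  pre-up ip link add fabric type dummy || true
+  up ip link set fabric up
+```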
 
-Finally, in order to be added to the VTN configuration, each compute node must
+## DNS setup
+
+In order to be added to the VTN configuration, each compute node must
 be resolvable in DNS.  If a server's hostname is not resolvable, it can be
 added to the local `kube-dns` server (substitute _HOSTNAME_ with the output of
 the `hostname` command, and _HOST-IP-ADDRESS_ with the node's primary IP
diff --git a/profiles/mcord/install.md b/profiles/mcord/install.md
index 3fda193..27159ad 100644
--- a/profiles/mcord/install.md
+++ b/profiles/mcord/install.md
@@ -6,16 +6,36 @@
 node, suitable for evaluation or testing.  Requirements:
 
 - An _Ubuntu 16.04.4 LTS_ server with at least 64GB of RAM and 32 virtual CPUs
+- Latest versions of released software installed on the server: `sudo apt update; sudo apt -y upgrade`
 - User invoking the script has passwordless `sudo` capability
+- Open access to the Internet (not behind a proxy)
+- Google DNS servers (e.g., 8.8.8.8) are accessible
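+
+For the passwordless `sudo` requirement, one way to grant it (an example
+only; the username `ubuntu` is just a placeholder) is a drop-in file under
+`/etc/sudoers.d`:
+
+```bash
+# Example only: allow the "ubuntu" user to run sudo without a password
+echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ubuntu
+sudo chmod 0440 /etc/sudoers.d/ubuntu
+```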
+
+### Target server on CloudLab (optional)
+
+If you do not have a target server available that meets the above
+requirements, you can borrow one on [CloudLab](https://www.cloudlab.us). Sign
+up for an account using your organization's email address and choose "Join
+Existing Project"; for "Project Name" enter `cord-testdrive`.
+
+> NOTE: CloudLab is supporting CORD as a courtesy. It is expected that you will not use CloudLab resources for purposes other than evaluating CORD. If, after a week or two, you wish to continue using CloudLab to experiment with or develop CORD, then you must apply for your own separate CloudLab project.
+
+Once your account is approved, start an experiment using the
+`OnePC-Ubuntu16.04-HWE` profile on the Wisconsin cluster. This will provide
+you with a temporary target server meeting the above requirements.
+
+Refer to the [CloudLab documentation](http://docs.cloudlab.us/) for more information.
+
+### Convenience Script
+
+The following script takes about an hour to complete.  If you run it, you can
+skip directly to [Validating the Installation](#validating-the-installation)
+below.
 
 ```bash
 git clone https://gerrit.opencord.org/automation-tools
 automation-tools/mcord/mcord-in-a-box.sh
 ```
 
-This script takes about an hour to complete.  If you run it, you can skip
-directly to [Validating the Installation](#validating-the-installation) below.
-
 ## Prerequisites
 
 M-CORD requires OpenStack to run VNFs.  The OpenStack installation